- Meta, Facebook’s parent company, held a media event in San Francisco this week to mark the 10th anniversary of its Fundamental AI Research team.
- Society is more likely to acquire “cat-level” or “dog-level” AI years before human-level AI, said Yann LeCun, Meta’s chief scientist.
- Unlike Google, Microsoft and other tech giants, Meta isn’t betting heavily on quantum computing.
Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.
Chesnot | Getty Images News | Getty Images
Yann LeCun, Meta’s chief scientist and a deep learning pioneer, said he believes current AI systems are decades away from achieving any semblance of sentience, meaning the kind of common sense that could push their abilities beyond simply synthesizing mountains of text in creative ways.
His point of view contrasts with that of Nvidia CEO Jensen Huang, who recently said that AI would be “quite competitive” with humans in less than five years, outperforming them in a multitude of mentally intensive tasks.
“I know Jensen,” LeCun said at a recent event marking the 10th anniversary of the company’s Fundamental AI Research team. LeCun said the Nvidia CEO has a lot to gain from the AI craze. “There is an AI war, and he’s supplying the weapons.”
“[If] you think AGI is there, the more GPUs you need to buy,” LeCun said of technologists trying to develop artificial general intelligence, the type of AI comparable to human intelligence. As long as researchers at companies like OpenAI continue their AGI research, they’ll need more of Nvidia’s computer chips.
Society is more likely to acquire “cat-level” or “dog-level” AI years before human-level AI, LeCun said. And the tech industry’s current focus on language models and text data won’t be enough to create the kind of advanced human-like AI systems that researchers have dreamed of for decades.
“Text is a very poor source of information,” LeCun said, explaining that it would probably take a human 20,000 years to read the amount of text used to train modern language models. “Train a system on 20,000 years’ worth of reading material, and they still don’t understand that if A is the same as B, then B is the same as A.”
“There are a lot of really fundamental things in the world that they just don’t learn through this kind of training,” LeCun said.
As a result, LeCun and other Meta AI executives have been researching extensively how the so-called transformer models used to build applications like ChatGPT could be adapted to work with a variety of data, including audio, image and video. The more these AI systems can uncover the likely billions of hidden correlations between these different kinds of data, the more fantastic feats they could potentially achieve, the thinking goes.
Some of Meta’s research includes software that can help teach people to play tennis better while wearing the company’s Project Aria augmented reality glasses, which blend digital graphics into the real world. Executives showed a demonstration in which a person wearing AR glasses while playing tennis was able to see visual cues teaching them how to hold their tennis rackets correctly and swing their arms in perfect form. The types of AI models needed to power this type of tennis digital assistant require a mix of three-dimensional visual data in addition to text and audio, in case the digital assistant needs to speak.
These so-called multimodal AI systems represent the next frontier, but their development will not be cheap. And as more companies such as Meta and Google parent Alphabet pursue more advanced AI models, Nvidia could gain even more of an advantage, especially if no other competitors emerge.
Nvidia has been the biggest beneficiary of generative AI, with its expensive graphics processing units becoming the standard tool used to train massive language models. Meta relied on 16,000 Nvidia A100 GPUs to train its Llama AI software.
CNBC asked LeCun whether the tech industry will need more hardware providers as Meta and other researchers continue their work developing these types of sophisticated AI models.
“It’s not necessary, but it would be nice,” LeCun said, adding that GPU technology remains the gold standard in AI.
Still, the computer chips of the future may not be called GPUs, he said.
“What we hope to see emerge are new chips that are not graphics processing units, they are simply deep learning neural accelerators,” LeCun said.
LeCun is also somewhat skeptical of quantum computing, into which tech giants such as Microsoft, IBM and Google have all poured resources. Many researchers outside Meta believe quantum computing machines could supercharge advances in data-intensive fields such as drug discovery, because they are able to perform multiple calculations with so-called quantum bits, as opposed to the conventional binary bits used in modern computing.
But LeCun has doubts.
“The number of problems you can solve with quantum computing, you can solve them much more efficiently with classical computers,” LeCun said.
“Quantum computing is a fascinating scientific topic,” LeCun said. The “practical relevance and possibility of making actually useful quantum computers” is less clear.
Mike Schroepfer, a senior fellow at Meta and its former chief technology officer, agrees, saying he evaluates quantum technology every few years and believes that useful quantum machines “may appear at some point, but their time horizon is so long that it has no relevance to what we do.”
“The reason we started an AI lab ten years ago was because it was very clear that this technology would be commercializable in the coming years,” Schroepfer said.