
What we learned about AI and deep learning in 2022




This is as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw exciting progress in deep learning, especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them.

On the one hand, advanced models such as ChatGPT and DALL-E are producing fascinating and impressive results. On the other hand, they often make mistakes that show they lack some of the basic elements of intelligence that humans have.

The scientific community is divided over what to make of these advances. At one end of the spectrum, some scientists have gone as far as saying that sophisticated models are sentient and should be attributed personhood. Others have suggested that current deep learning approaches will lead to artificial general intelligence (AGI). Meanwhile, a number of scientists have studied the failures of current models and have shown that, while useful, even the most advanced deep learning systems suffer from the same kinds of flaws that earlier models had.

It was against this background that the online AGI Debate #3 was held on Friday, hosted by Montreal.AI president Vincent Boucher and AI researcher Gary Marcus. The conference featured talks by scientists from different backgrounds on lessons from cognitive science and neuroscience, the path to commonsense reasoning in AI, and architectural proposals that can help take the next step in AI.


What is missing in current AI systems?

“Deep learning methods can provide useful tools in many fields,” says linguist and cognitive scientist Noam Chomsky. Some of these applications, such as automatic transcription and text autofill, have become tools that we use every day.

“But beyond utility, what do we learn from these approaches about cognition, thinking, in particular language?” Chomsky said. “[Deep learning] systems make no distinction between possible and impossible languages. The more the systems are improved, the deeper the failure becomes. They will do even better with impossible languages and other systems.”

This flaw is evident in systems like ChatGPT, which can produce text that is grammatically correct and coherent but logically and factually flawed. Speakers at the conference provided numerous examples of such failures, such as large language models being unable to sort sentences by length, making grave errors on simple logic problems, and making false and inconsistent statements.

According to Chomsky, the current approaches to advancing deep learning systems, which rely on adding more training data, creating larger models, and using “smart programming,” will only exacerbate the mistakes these systems make.

“In short, they tell us nothing about language and thought, about cognition generally, or about what it is to be human, or any other flights of fancy in contemporary discussion,” Chomsky said.

Marcus says that a decade after the deep learning revolution of 2012, significant progress has been made, “but there are still some problems.”

He pointed out four key aspects of cognition that are missing from deep learning systems:

  1. Abstraction: Deep learning systems such as ChatGPT struggle with basic concepts such as counting and sorting items.
  2. Reasoning: Large language models fail to reason about basic things, such as fitting objects into containers. “The great thing about ChatGPT is that it can answer questions, but unfortunately you can’t trust the answers,” said Marcus.
  3. Compositionality: Humans understand language in terms of wholes composed of parts. Current AI continues to struggle with this, as can be seen when models like DALL-E are asked to draw images with hierarchical structure.
  4. Factuality: “Humans actively maintain imperfect but reliable world models. Large language models don’t, and that has consequences,” said Marcus. “They can’t be updated incrementally by giving them new facts. They typically need to be retrained to incorporate new knowledge. They hallucinate.”

Artificial intelligence and commonsense reasoning

Yejin Choi, a professor of computer science at the University of Washington, said that deep neural networks will continue to make mistakes in adversarial and corner cases.

“The real problem we face today is that we simply do not know the depth or breadth of these adversarial or corner cases,” Choi said. “My hunch is that this will be a real challenge that a lot of people may be underestimating. The true difference between human intelligence and current AI is still vast.”

Choi said that the gap between humans and artificial intelligence comes down to a lack of common sense, which she described as “the dark matter of language and intelligence” and “the unspoken rules of how the world works” that influence the way people use and interpret language.

According to Choi, common sense is trivial for humans and hard for machines because the obvious things are never spoken, there are endless exceptions to every rule, and there is no universal truth in commonsense matters. “It’s ambiguous, messy stuff,” she said.

AI researcher and neuroscientist Dileep George emphasized the importance of mental simulation for commonsense reasoning through language. Commonsense knowledge is acquired through sensory experience, George said, and this knowledge is stored in the perceptual and motor systems. We use language to probe this model and trigger simulations in the mind.

“You can think of our perceptual and conceptual system as a simulator acquired through our sensorimotor experience. Language is what controls the simulation,” he said.

George also questioned some of the current ideas for creating world models for AI systems. In most blueprints for these world models, perception is a preprocessor that creates a representation on which the world model is built.

“That’s hard to do because a lot of the perceptual details need to be accessed quickly for you to run simulations,” he said. “Perception must be bidirectional and must use feedback connections to access simulations.”

Architecture for the next generation of AI systems

Although many scientists agree on the shortcomings of current AI systems, they differ on the way forward.

David Ferrucci, founder of Elemental Cognition and a former member of the IBM Watson team, said that we cannot fulfill our vision for AI if we cannot get machines to “explain why they are producing the output they’re producing.”

Ferrucci’s company is working on an AI system that integrates different modules. Machine learning models generate hypotheses based on their observations and project them onto an explicit knowledge module that ranks them. The best hypotheses are then processed by an automated reasoning module. This architecture can explain its inferences and causal models, two features that are missing in current AI systems. The system develops its knowledge and causal models from classic deep learning approaches and interactions with humans.
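To make that flow concrete, here is a minimal Python sketch of such a hypothesize-rank-reason pipeline. Every name and the toy scoring logic are hypothetical illustrations of the described module layout, not Elemental Cognition’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str
    score: float = 0.0

# Toy stand-in for an explicit knowledge base.
KNOWN_FACTS = {"it rained last night"}

def generate_hypotheses(observation: str) -> list[Hypothesis]:
    """Stand-in for the machine learning models that propose
    candidate explanations for an observation."""
    return [
        Hypothesis("the sidewalk is wet because it rained last night"),
        Hypothesis("the sidewalk is wet because a pipe burst"),
    ]

def rank_against_knowledge(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    """Stand-in for the explicit knowledge module: score each
    hypothesis by how many stored facts support it."""
    for h in hypotheses:
        h.score = sum(1.0 for fact in KNOWN_FACTS if fact in h.claim)
    return sorted(hypotheses, key=lambda h: h.score, reverse=True)

def explain(best: Hypothesis) -> str:
    """Stand-in for the automated reasoning module, which can say
    why the winning hypothesis was selected."""
    return f"Chose {best.claim!r}: supported by {int(best.score)} known fact(s)."

ranked = rank_against_knowledge(generate_hypotheses("the sidewalk is wet"))
print(explain(ranked[0]))
```

The point of this layout is that the explanation step can trace an answer back to explicit facts, which a monolithic neural model cannot do.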

AI scientist Ben Goertzel stressed that “the deep neural network systems that currently dominate the commercial AI landscape will not make much progress toward building true AGI systems.”

Goertzel, who is best known for coining the term AGI, said that augmenting current models such as GPT-3 with fact-checkers will not fix the problems deep learning faces and will not make them as generalizable as the human mind.

“Designing real, open-ended general intelligence is completely doable, and there are several routes to achieving it,” said Goertzel.

He proposed three approaches: simulating a real brain; creating a complex, self-organizing system that is quite different from the brain; or creating a cognitive architecture that embeds self-organizing knowledge in a self-reprogramming, self-rewriting knowledge graph controlling an embodied agent. His current initiative, the OpenCog Hyperon project, is exploring the latter approach.

Francesca Rossi, IBM fellow and Global Leader for AI Ethics at the Thomas J. Watson Research Center, proposed an AI architecture that takes inspiration from cognitive science and Daniel Kahneman’s framework from “Thinking, Fast and Slow.”

The architecture, named Slow and Fast AI (SOFAI), uses a multi-agent approach composed of fast and slow solvers. Fast solvers rely on machine learning to solve problems. Slow solvers are symbolic, deliberate, and computationally more complex. There is also a metacognition module that acts as an arbiter, deciding which agent will solve the problem. Like the human brain, if the fast solver cannot address a novel situation, the metacognition module passes it on to the slow solver. This loop then retrains the fast solver to gradually learn to handle those situations.
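As a rough illustration of that control flow, here is a toy Python sketch. The solver internals and the confidence heuristic are invented for illustration; the real SOFAI work defines these components quite differently.

```python
# Toy cache standing in for what the fast, learned solver already knows.
LEARNED_ANSWERS: dict[str, str] = {"2+2": "4"}

def fast_solver(problem: str) -> tuple[str | None, float]:
    """Stand-in for a learned model: cheap, but only confident on
    situations resembling its training data."""
    answer = LEARNED_ANSWERS.get(problem)
    confidence = 1.0 if answer is not None else 0.0
    return answer, confidence

def slow_solver(problem: str) -> str:
    """Stand-in for a symbolic, deliberate solver: expensive but
    dependable, e.g. search or constraint solving."""
    return f"deliberate solution to {problem!r}"

def metacognition(problem: str, threshold: float = 0.8) -> str:
    """Arbiter: use the fast solver when it is confident enough;
    otherwise fall back to the slow solver, then feed the result
    back so the fast solver gradually learns to handle the case."""
    answer, confidence = fast_solver(problem)
    if confidence >= threshold and answer is not None:
        return answer
    answer = slow_solver(problem)
    LEARNED_ANSWERS[problem] = answer  # crude model of incremental retraining
    return answer

print(metacognition("2+2"))                     # handled by the fast path
print(metacognition("route a delivery truck"))  # falls back to the slow path
print(metacognition("route a delivery truck"))  # now answered by the fast path
```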

“This is an architecture that is supposed to work for both autonomous systems and for supporting human decisions,” Rossi said.

Jürgen Schmidhuber, scientific director of the Swiss AI Lab IDSIA and one of the pioneers of modern deep learning techniques, said that many of the problems raised about current AI systems have already been addressed in systems and architectures introduced over the past decades. Schmidhuber suggested that solving these problems is a matter of computational cost, and that in the future we will be able to create deep learning systems that can do meta-learning and discover new and better learning algorithms.

Standing on the shoulders of giant datasets

Jeff Clune, an associate professor of computer science at the University of British Columbia, presented the idea of ​​an “AI-generating algorithm”.

“The idea is to learn as much as possible, starting from very simple beginnings all the way to AGI,” says Clune.

Such a system includes an outer loop that searches through the space of possible AI agents and ends up producing something that is very sample-efficient and very general. The evidence that this is possible is the “very expensive and inefficient algorithm of Darwinian evolution that ultimately produced the human mind,” said Clune.

Clune has been discussing AI-generating algorithms since 2019; he believes they rest on three key pillars: meta-learning architectures, meta-learning algorithms, and effective means of generating environments and data. In essence, this is a system that can continuously create, evaluate, and upgrade new learning environments and algorithms.
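Schematically, the outer loop might look like the following Python sketch. All functions here are illustrative placeholders for the three pillars, not any published implementation.

```python
import random

def mutate(agent: dict) -> dict:
    """Pillars 1-2 stand-in: propose a variant of an agent's
    architecture or learning algorithm."""
    child = dict(agent)
    child["learning_rate"] = agent["learning_rate"] * random.choice([0.5, 1.0, 2.0])
    return child

def generate_environment(generation: int) -> str:
    """Pillar 3 stand-in: produce progressively richer training
    environments and data."""
    return f"environment-level-{generation}"

def evaluate(agent: dict, env: str) -> float:
    """Stand-in for training the agent in the environment and
    measuring its generality and sample efficiency."""
    return random.random()  # placeholder fitness signal

# The outer loop: keep whatever variant performs best, akin to the
# evolutionary search Clune points to as an existence proof.
best, best_fitness = {"learning_rate": 1e-3}, 0.0
for generation in range(10):
    env = generate_environment(generation)
    candidate = mutate(best)
    fitness = evaluate(candidate, env)
    if fitness > best_fitness:
        best, best_fitness = candidate, fitness
print(best, best_fitness)
```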

At the AGI debate, Clune added a fourth pillar he describes as “leveraging human data.”

“If you watch years of video of agents performing a task and pretrain on that data, then you can go on to learn very hard tasks,” said Clune. “It’s been a really big impetus for these efforts to try to learn as much as possible.”

Learning from human-generated data is what has enabled GPT, CLIP, and DALL-E to find effective shortcuts to producing impressive results. “AI sees further by standing on the shoulders of giant datasets,” said Clune.

Clune finished by predicting a 30% chance of achieving AGI by 2030. He also said that current deep learning paradigms, with some key improvements, will be enough to achieve AGI.

“I don’t think we as a scientific community and as a society are ready for AGI to arrive that soon, and we need to start planning for this as soon as possible,” Clune warned.

