Overcoming the limits of AI

Whether we realize it or not, most of us come face to face with artificial intelligence (AI) on a daily basis. Every time you search Google or ask Siri a question, you’re using AI. The catch, however, is that these tools aren’t really intelligent. They don’t think or understand the way humans do. Instead, they analyze massive datasets, looking for patterns and correlations.

That takes nothing away from AI. As Google, Siri, and hundreds of other tools demonstrate daily, today’s AI is incredibly useful. But at the end of the day, there isn’t much intelligence going on. Today’s AI merely gives the appearance of intelligence; it lacks any true understanding or awareness.

For today’s AI to overcome its inherent limitations and evolve into its next phase – defined as artificial general intelligence (AGI) – it must be able to understand or learn any intellectual task that a human can. Only then can it continually develop its intelligence and abilities, the way a three-year-old grows to possess the intelligence of a four-year-old, then a 10-year-old, then a 20-year-old, and so on.

The real future of AI

AGI represents the true future of AI technology, a fact that has not gone unnoticed by companies such as Google, Microsoft, Facebook, Elon Musk’s OpenAI, and the Kurzweil-inspired SingularityNET. All of these companies are pursuing research built on intelligence models with varying degrees of specificity and reliance on today’s AI algorithms. Somewhat surprisingly, however, none of them has focused on developing a core, underlying AGI technology that replicates humans’ contextual understanding.

What will it take to get to AGI? How will we give computers an understanding of time and space?

The fundamental limitation of all current research is that the resulting systems cannot understand that words and images represent physical things that exist and interact in a physical universe. Today’s AI cannot grasp the concept of time, or that causes have effects. These fundamental underlying problems have yet to be solved, perhaps because it is difficult to get significant funding to solve problems that any three-year-old can solve. We humans are good at merging information from multiple senses. A three-year-old will use all of her senses to learn to stack blocks, and she learns about time by experiencing it, by interacting with toys and the real world in which she lives.

Similarly, an AGI will need sensory pods to learn similar things, at least initially. The computers need not reside in the pods; they can connect remotely, because electronic signals are much faster than those of the human nervous system. But the pods provide the opportunity to learn firsthand about stacking blocks, moving objects, performing sequences of actions over time, and learning from the consequences of those actions. With vision, hearing, touch, manipulators, and so on, an AGI can learn to understand in a way that is simply not possible for a purely text-based or purely image-based system. Once the AGI has gained this understanding, the sensory pods may no longer be needed.
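To make that loop concrete, here is a minimal, purely illustrative sketch of the embodied learning cycle the pods enable – act, sense the consequences, and record timestamped cause-and-effect experiences. Every class, method, and sensor name below is a hypothetical stand-in, not part of any real AGI framework:

```python
import time
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp: float   # when the senses were sampled
    vision: str        # stand-ins for raw sensory payloads
    touch: str

@dataclass
class Experience:
    action: str
    before: Observation
    after: Observation

class SensoryPod:
    """The remote body: samples its senses and executes motor commands."""
    def sense(self) -> Observation:
        # A real pod would return camera frames, force readings, etc.
        return Observation(time.time(), vision="two blocks, unstacked", touch="gripper empty")

    def act(self, command: str) -> None:
        print(f"pod executing: {command}")

class Agent:
    """The compute side: need not live in the pod, only connect to it."""
    def __init__(self, pod: SensoryPod):
        self.pod = pod
        self.memory: list[Experience] = []

    def try_action(self, command: str) -> None:
        before = self.pod.sense()
        self.pod.act(command)
        after = self.pod.sense()
        # Timestamped before/after pairs give the agent a first-hand
        # notion of time, sequence, and consequence.
        self.memory.append(Experience(command, before, after))

agent = Agent(SensoryPod())
agent.try_action("stack block A on block B")
print(f"experiences recorded: {len(agent.memory)}")
```

The point of the sketch is architectural: the agent can run anywhere, while the pod supplies the grounded, time-ordered experience that purely textual or visual training cannot.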

The costs and risks of AGI

At this point, we cannot quantify the amount of data needed to represent true understanding. We can only look at the human brain and assume that some reasonable percentage of it must be devoted to understanding. We humans interpret everything in the context of everything we have previously learned, which means that as adults we interpret everything in the context of the true understanding we acquired in our first years of life. Only when the AI community takes the unprofitable step of recognizing this fact and conquering the fundamental basics of intelligence will AGI emerge.

The AI community must also consider the potential risks that could accompany the arrival of AGI. AGIs are necessarily goal-directed systems that will inevitably exceed any goals we set for them. At least initially, those goals can be set for the benefit of humanity, and AGIs will deliver enormous benefits. If AGIs are weaponized, however, they are likely to be effective in that arena as well. The concern here is not so much individual Terminator-style robots as an AGI mind capable of devising ever more destructive strategies to control humanity.

Banning AGI outright would only shift development to countries and organizations that refuse to recognize the ban. Accepting an AGI free-for-all, meanwhile, would likely let bad actors exploit AGI for nefarious purposes.

How soon could all of this happen? Although there is no consensus, AGI could arrive soon. Consider that a very small percentage of the human genome (which totals about 750MB of information) defines the brain’s entire structure. That means a program containing less than 75MB of information could fully represent a newborn’s brain, with all of its human potential. Considering that the seemingly more complex Human Genome Project was completed well ahead of schedule, emulating the brain in software in the not-too-distant future should be well within the reach of a development team.
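As a rough sanity check on those figures (assuming the commonly cited ~3.1 billion base pairs at 2 bits per base, and reading “a very small percentage” as on the order of 10 percent – both assumptions, not numbers from the article):

$$3.1 \times 10^{9}\ \text{base pairs} \times 2\ \text{bits} \approx 6.2 \times 10^{9}\ \text{bits} \approx 775\ \text{MB}$$
$$0.10 \times 750\ \text{MB} = 75\ \text{MB}$$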

Similarly, a breakthrough in neuroscience could at any time yield a complete map of the human connectome. There is, after all, a Human Connectome Project already underway. If that project progresses as quickly as the Human Genome Project did, it is fair to conclude that AGI could emerge in the very near future.

Although the timing may be uncertain, it is fairly safe to assume that AGI will emerge gradually. That means Alexa, Siri, or Google Assistant, all of which are already better at answering questions than the average three-year-old, will eventually be better than a 10-year-old, then an average adult, then a genius. With the benefits of each advance outweighing the perceived risks, we may disagree about when such a system crosses the threshold of human equivalence, but we will continue to appreciate – and anticipate – each new level of progress.

The massive technological effort invested in AGI, combined with rapid advances in computing power and continued breakthroughs in neuroscience and brain mapping, suggests that AGI will emerge over the next decade. This means that systems with unimaginable mental power are inevitable in the decades to come, whether we are ready or not. Given this, we need a frank discussion about AGI and the goals we would like to achieve in order to derive maximum benefit from it and avoid any possible risk.

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will Computers Revolt? Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II, an AGI research software platform. For more information, visit https://futureai.guru/Founder.aspx.

The New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2022 IDG Communications, Inc.
