Artificial general intelligence

[5][6] Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect.

Although physical capabilities can be desirable for some intelligent systems,[30] they are not strictly required for an entity to qualify as AGI, particularly under the thesis that large language models (LLMs) may already be, or may become, AGI.

Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses.

[37] A problem is informally called "AI-complete" or "AI-hard" if solving it is believed to require implementing AGI, because the solution is beyond the capabilities of a purpose-specific algorithm.

Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.

"[52] Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001.

AI pioneer Marvin Minsky was a consultant[53] on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time.

They became reluctant to make predictions at all[d] and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".

[66] The term "artificial general intelligence" was used as early as 1997 by Mark Gubrud[67] in a discussion of the implications of fully automated military production and operations.

Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".

They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

[89][90] Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with frontier models.

He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying.

These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains.

Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with Microsoft, prompting speculation about the company’s strategic intentions.

[82][98][99] For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of GPUs (graphics processing units).

[110] In 2023, Microsoft Research published a study on an early version of OpenAI's GPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law.

This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.

[113] In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.

[115] While the development of transformer models such as those used in ChatGPT is considered the most promising path to AGI,[116][117] whole brain emulation can serve as an alternative approach.

Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil, in the book The Singularity Is Near,[102] predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.

[120] An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS).
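
As a rough sketch of how an order-of-magnitude figure like this can be composed (the specific parameter values here are illustrative assumptions, not the cited derivation): about 10^11 neurons, each with on the order of 10^3 synapses, each updated roughly once per second, gives 10^11 × 10^3 × 1 ≈ 10^14 synaptic updates per second.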

The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate.

[128] He proposed a distinction between two hypotheses about artificial intelligence:[f] The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test.

Consciousness can have various meanings, and some of its aspects play significant roles in science fiction and the ethics of artificial intelligence; these aspects have a moral dimension.

If machines that are sentient or otherwise worthy of moral consideration are mass-created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe.

He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards".

[156] Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?

[161] Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear.

So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality. Elon Musk considers that the automation of society will require governments to adopt a universal basic income.

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior and may incentivize artificial stupidity.[35]
Surveys about when experts expect artificial general intelligence.[5]
AI has surpassed humans on a variety of language understanding and visual understanding benchmarks.[96] As of 2023, foundation models still lack advanced reasoning and planning capabilities, but rapid progress is expected.[97]
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at the level of neural simulation, while the Sandberg–Bostrom report is less certain about where consciousness arises.[119]
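
As a simple reading of the doubling assumption in that trendline: capacity after t years grows by a factor of 2^(t/1.1), so a decade of such growth corresponds to roughly 2^(10/1.1) ≈ 550, i.e. about a 500-fold increase in computational capacity.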