Strong AI

Strong AI is artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as "artificial general intelligence" or as the ability to perform "general intelligent action". Science fiction associates strong AI with such human traits as consciousness, sentience, sapience and self-awareness.

Some references emphasize a distinction between strong AI and "applied AI" (also called "narrow AI" or "weak AI"): the use of software to study or accomplish specific problem solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities.


Many different definitions of intelligence have been proposed (such as being able to pass the Turing test) but there is to date no definition that satisfies everyone. However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:
* reason, use strategy, solve puzzles, and make judgments under uncertainty;
* represent knowledge, including commonsense knowledge;
* plan;
* learn;
* communicate in natural language;
* and integrate all these skills towards common goals.
Work is underway to design machines that have these abilities and it is expected that strong AI would have most if not all of these capabilities.

There are other aspects of the human mind besides intelligence that also bear on the concept of strong AI:
* consciousness: To have subjective experience and thought.
* self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.
* sentience: The ability to "feel."
* sapience: The capacity for wisdom.
* innovation: The capacity for originality.
It remains to be shown whether any of these traits are necessary for strong AI—for example, it is not clear if consciousness is necessary for a machine to reason as well as human beings can. It is also not clear whether any of these traits are sufficient for intelligence: if a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have the ability to represent knowledge or use natural language? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.

Research approaches

History of mainstream AI research

Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that strong AI was possible and that it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions inspired Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. Notably, AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; he himself said on the subject in 1967, "Within a generation...the problem of creating 'artificial intelligence' will substantially be solved."

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. The agencies that funded AI became skeptical of strong AI and put researchers under increasing pressure to produce useful technology, or "applied AI". As the eighties began, Japan's fifth-generation computer project revived interest in strong AI, setting out a ten-year timeline that included strong AI goals like "carry on a casual conversation". In response to this and to the success of expert systems, both industry and government pumped money back into the field. However, the market for AI spectacularly collapsed in the late 1980s, and the goals of the fifth-generation computer project were never fulfilled. For the second time in twenty years, AI researchers who had predicted the imminent arrival of strong AI had been shown to be fundamentally mistaken about what they could accomplish.

By the 1990s, AI researchers had gained a reputation for making promises they could not keep. They became reluctant to make predictions at all and avoided any mention of "human-level" artificial intelligence, for fear of being labeled "wild-eyed dreamers." Confidence in the field arguably saw a resurgence with the likes of Deep Blue's 1997 chess-match victory.

Mainstream AI research

For the most part, researchers today choose to focus on specific sub-problems where they can produce verifiable results and commercial applications, such as neural nets, computer vision or data mining.

Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various subproblems using an integrated agent architecture, cognitive architecture or subsumption architecture. Hans Moravec wrote in 1988 "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
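The "bottom-up" integration of sub-solutions that Moravec describes can be illustrated with a toy sketch in the spirit of a subsumption architecture, one of the agent architectures mentioned above. All module names and behaviours here are hypothetical, invented for the example:

```python
# Toy sketch of a subsumption-style agent: independent behaviour modules
# each propose an action, and a fixed priority order arbitrates between
# them. Module names are illustrative, not from any real system.

def avoid_obstacle(percept):
    # Highest priority: react to immediate danger.
    return "turn_away" if percept.get("obstacle") else None

def seek_goal(percept):
    # Lower priority: head toward the goal when nothing urgent is happening.
    return "move_toward_goal" if percept.get("goal_visible") else None

def wander(percept):
    # Default behaviour when no other module fires.
    return "wander"

# Modules listed from highest to lowest priority; the first one that
# returns an action suppresses the rest.
LAYERS = [avoid_obstacle, seek_goal, wander]

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action

print(act({"obstacle": True, "goal_visible": True}))  # turn_away
print(act({"goal_visible": True}))                    # move_toward_goal
print(act({}))                                        # wander
```

The point of the sketch is only the integration pattern: each module solves one narrow subproblem, and a simple architecture combines them into a single agent.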

Artificial general intelligence

Artificial general intelligence research aims to create AI that can replicate human-level intelligence completely, often called an artificial general intelligence (AGI) to distinguish it from less ambitious AI projects. (The concept is derived from the psychometric notion of natural general intelligence (often denoted "g")[62943], though no adherence to any particular theory of g is implied.) As yet, researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. Some small groups of computer scientists are doing AGI research, however. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute, the Singularity Institute for Artificial Intelligence with the open-source OpenCog project, and TexAI. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the Palm Pilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective. AND Corporation has been active in this field since 1990 and has developed machine-intelligence processes based on phase-coherence principles, with strong similarities to digital holography and to quantum mechanics with respect to collapse of the wave function.

Simulated human brain model

A simulated human brain model could be one of the quickest means of achieving strong AI, as it doesn't require a complete understanding of how intelligence works. Basically, a very powerful computer would simulate a human brain, often in the form of a network of neurons. For example, given a map of all (or most) of the neurons in a functional human brain, and a good understanding of how a single neuron works, it is theoretically possible for a computer program to simulate the working brain over time. Given some method of communication, this simulated brain might then be shown to be fully intelligent. The exact form of the simulation varies: instead of neurons, a simulation might use groups of neurons, or alternatively, individual molecules might be simulated. It is also unclear which portions of the human brain would need to be modeled: humans can still function while missing portions of their brains, and some areas of the brain are associated with activities (such as breathing) that might not be necessary for thinking.
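To make the "network of neurons" idea concrete, here is a purely illustrative sketch of stepping such a simulation forward in time, using leaky integrate-and-fire units. All parameters are arbitrary toy values, not calibrated to biology, and the connectivity is random rather than taken from any brain map:

```python
import random

# Toy simulation of a brain as a network of spiking neurons.
# Parameters are illustrative, not biologically calibrated.

N = 100          # number of neurons
THRESHOLD = 1.0  # membrane potential at which a neuron spikes
LEAK = 0.9       # fraction of potential retained per time step
WEIGHT = 0.1     # synaptic weight for every connection
DRIVE = 0.2      # constant background input so the toy network spikes

random.seed(0)
# Random sparse connectivity: each neuron projects to 10 others.
targets = [random.sample(range(N), 10) for _ in range(N)]
potential = [random.random() for _ in range(N)]

def step(potential):
    """Advance the network by one time step, returning new potentials."""
    spiked = [i for i, v in enumerate(potential) if v >= THRESHOLD]
    new = [v * LEAK + DRIVE for v in potential]
    for i in spiked:
        new[i] = 0.0              # reset the spiking neuron
        for j in targets[i]:
            new[j] += WEIGHT      # deliver the spike to its targets
    return new

for t in range(50):
    potential = step(potential)
```

A real simulation of the kind discussed above would replace the random wiring with a measured connectome and the simple update rule with a far more detailed neuron model; the structure of the loop, however, is the same.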

Speculation: human brains have developed to accommodate certain necessities, such as breathing and interpreting sensory input from a variety of sources. Without adequate simulations of these necessities (such as, for example, input that simulates the sensation of sufficient oxygen levels in the body), it is possible that an artificial brain could have difficulty functioning. In addition, human brains are reliant for stability on a number of mediating factors, including stages of development and external training. An artificial duplicate of the human brain, without such mediating input, could conceivably suffer from a number of cognitive and functional difficulties. In addition, the construction and sustenance of an artificial brain raises moral questions, namely regarding personhood, freedom, and death. Does a "brain in a box" constitute a person? What rights would such an entity have, under law or otherwise? Once activated, would human beings have an obligation to continue its operation? Would the shutdown of an artificial brain constitute death, sleep, unconsciousness, or some other state for which no human description exists? After all, an artificial brain is not subject to the post-mortem cellular decay (and associated loss of function) that human brains are, so an artificial brain could, theoretically, resume functioning exactly as it was before it was shut down.

This approach would require three things:

  • Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil, in the book The Singularity Is Near (2005), looks at various estimates for the hardware required to equal the human brain and writes: "These estimates all result in comparable orders of magnitude (10^14 to 10^15 cps). Given the early stage of human-brain reverse engineering, I will use a more conservative figure of 10^16 cps for our subsequent discussions." 10^16 cps is equivalent to 10 petaflops. Using Top500 projections, such levels of computing power might be reached by the top-performing CPU-based supercomputers around 2015 (for 100 petaflops), up to a more conservative estimate of around 2025 (for 100,000 petaflops). However, since GPU and stream-processing power appears to double every year, these estimates may be reached much sooner using GPGPU processing: high-end GPUs such as the AMD FireStream can already process over one teraflop. Note, however, that the overhead introduced by modeling the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) might require a simulator with access to computational power much greater than that of the brain itself, and that current simulations and estimates do not account for glial cells, which outnumber neurons 10:1.
  • Software. Software to simulate the function of a brain would be required. This assumes, as is the consensus in neuroscience, that the human mind arises from the central nervous system and is governed by currently known and understood physical laws. Constructing the simulation would require a great deal of knowledge about the physical and functional operation of the human brain, and might require detailed information about a particular human brain's structure. Information would be required both about the function of different types of neurons and about how they are connected. Note that the particular form of the software dictates the hardware necessary to run it. For example, an extremely detailed simulation including molecules or small groups of molecules would require enormously more processing power than a simulation that models neurons using a simple equation, and a more accurate model of a neuron would be expected to be much more computationally expensive than a simple model. The more neurons in the simulation, the more processing power it would require.
  • Understanding. Finally, sufficient understanding of the brain is required to model it mathematically. This could be gained either by understanding the central nervous system or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power. However, the simulation would also have to capture the detailed cellular behaviour of neurons and glial cells, presently understood only in the broadest outline.
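The hardware arithmetic above can be made explicit with a back-of-the-envelope calculation. The starting capability and doubling period below are assumptions chosen for the example, not Top500 data; the point is only how the number of doublings determines the timescale:

```python
import math

def years_until(target_flops, current_flops, doubling_years):
    """Years to grow from current to target capability at a fixed doubling rate."""
    doublings = math.log2(target_flops / current_flops)
    return doublings * doubling_years

# Assume 1 petaflop (10^15 FLOPS) is available today and performance
# doubles every 1.5 years; Kurzweil's figure is 10^16 cps = 10 petaflops.
print(round(years_until(1e16, 1e15, 1.5), 1))  # -> 5.0
```

One order of magnitude is about 3.3 doublings, so each further factor of ten costs the same fixed number of years; this is why the "conservative" estimates in the text, which target thousands of times more capability, land a decade or more later.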

Once such a model is built, it will be easy to alter and thus open to trial-and-error experimentation. This is likely to lead to large advances in understanding, allowing the model's intelligence to be improved or its motivations altered.

The Blue Brain project aims to use one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5 km of interconnecting synapses. The eventual goal of the project is to use supercomputers to replicate an entire brain, announced Henry Markram, director of the Blue Brain project, at the TED conference in 2009; he believes this could be achievable in as little as ten years.

The brain gets its power from performing many operations in parallel; a standard computer, from performing operations very quickly. Note, however, that supercomputers also perform many operations in parallel: good examples are the Cray and NEC vector computers, which operate as a single machine but perform many calculations at once, and any form of cluster computing, where multiple individual computers operate as one. The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses. By comparison, a modern computer microprocessor uses only 1.7 billion transistors.[62944] Although estimates of the brain's processing power put it at around 10^14 (100 trillion) neuron updates per second, it is expected that the first unoptimized simulations of a human brain in real time will require a computer capable of 10^18 FLOPS. Non-real-time simulations of a human-scale brain (10^11 neurons) were performed in 2005,[62945] and it took 50 days on a cluster of 27 processors to simulate 1 second of the model (see also [62946]). By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS (10^9 FLOPS), and each FLOP may require as many as 20,000 logic operations.

However, a neuron is estimated to spike at most 200 times per second (giving an upper limit on the number of operations), and signals between neurons are transmitted at a maximum speed of 150 meters per second. A modern 2 GHz processor operates at 2 billion cycles per second, or 10,000,000 times faster than a human neuron, and signals in electronic computers travel at roughly half the speed of light, faster than signals in humans by a factor of 1,000,000. On the other hand, the brain consumes about 20 W of power, whereas a supercomputer may use as much as 1 MW, about 100,000 times more (note: the Landauer limit is 3.5×10^20 op/sec/watt at room temperature).
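The comparisons in the preceding two paragraphs can be restated as a quick calculation, using only the figures quoted in the text:

```python
# Restating the brain-vs-computer comparisons above as arithmetic.
cpu_hz = 2e9          # a modern 2 GHz processor, cycles per second
spike_rate = 200      # maximum neuron spikes per second
signal_brain = 150    # m/s, maximum nerve conduction speed
signal_chip = 1.5e8   # m/s, roughly half the speed of light

print(cpu_hz / spike_rate)          # -> 10000000.0 (10,000,000x faster cycles)
print(signal_chip / signal_brain)   # -> 1000000.0  (1,000,000x faster signals)

# The 2005 non-real-time simulation took 50 days of computing per
# simulated second, a slowdown factor of:
print(50 * 24 * 3600)               # -> 4320000 (about 4.3 million)
```

The slowdown factor makes concrete why real-time simulation is estimated to need on the order of 10^18 FLOPS rather than the few GFLOPS of a circa-2006 CPU.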

Neuro-silicon interfaces have also been proposed.[62947][62948]

Critics of this approach believe that it is possible to achieve AI directly, without imitating nature, often citing the analogy that early attempts to construct flying machines modeled them after birds, yet modern aircraft do not look like birds. The direct approach is taken in AI - What is this, where it is argued that, given a formal definition of AI, it could be found by enumerating all possible programs and testing each to see whether it produces artificial intelligence.
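The enumerate-and-test idea can be illustrated with a toy program search over a tiny invented language. The hard part, glossed over entirely here, is having a formal behavioural test; for strong AI that test would have to be a formal definition of intelligence:

```python
from itertools import product

# Toy "enumerate all programs and test each" search. The language has
# three instructions and programs are tested against a specification.
OPS = ["inc", "dec", "double"]

def run(program, x):
    """Execute a program (a sequence of instructions) on input x."""
    for op in program:
        if op == "inc":
            x += 1
        elif op == "dec":
            x -= 1
        else:
            x *= 2
    return x

def passes(program):
    # Specification standing in for a "test of intelligence":
    # the program must compute 2x + 1 on a few sample inputs.
    return all(run(program, x) == 2 * x + 1 for x in range(5))

def search(max_len):
    """Enumerate programs by increasing length; return the first that passes."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if passes(program):
                return program

print(search(3))  # -> ('double', 'inc')
```

The search space grows exponentially with program length, which is why enumeration is a theoretical argument about possibility rather than a practical research method.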

Artificial consciousness research

Artificial consciousness research aims to create and study artificially conscious systems. Igor Aleksander argues that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.

Franklin’s Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness identified by Bernard Baars' Global Workspace Theory (GWT). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural-language email dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996-2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled. While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to [his] own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
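The codelet idea, "a small piece of code running as a separate thread", can be sketched as follows. This is a conceptual illustration in Python, not IDA's actual quarter-million-line Java implementation, and the codelet names and messages are invented:

```python
import threading
import queue

# Sketch of codelets: small independent workers, each in its own thread,
# posting results to a shared workspace (standing in for GWT's global
# workspace). Names and payloads are invented for illustration.

workspace = queue.Queue()

def make_codelet(name, work):
    """Wrap a narrow job as a thread that reports into the workspace."""
    def codelet():
        workspace.put((name, work()))
    return threading.Thread(target=codelet)

codelets = [
    make_codelet("perception", lambda: "sailor prefers sea duty"),
    make_codelet("constraint", lambda: "policy 12 applies"),
]
for t in codelets:
    t.start()
for t in codelets:
    t.join()

results = []
while not workspace.empty():
    results.append(workspace.get())
print(sorted(results))
```

In GWT terms, the workspace is where codelet outputs compete for "broadcast" to the rest of the system; this sketch only shows the posting side of that process.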

Haikonen’s cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; these views are shared by many others. A low-complexity implementation of the proposed architecture was reportedly not capable of AC, but did exhibit emotions as expected.

Ben Goertzel's OpenCog

Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, being done at the robotics lab of Hugo de Garis at Xiamen University.

Origin of the term: John Searle's strong AI

The term "strong AI" was adopted from the name of an argument in the philosophy of artificial intelligence first identified by John Searle as part of his Chinese room argument in 1980. He wanted to distinguish between two different hypotheses about artificial intelligence:
  • An artificial intelligence system can think and have a mind.
  • An artificial intelligence system can (only) act like it thinks and has a mind.
The first is called "the strong AI hypothesis" and the second "the weak AI hypothesis", because the first makes the stronger statement: it assumes something special has happened to the machine that goes beyond all the abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage, which is fundamentally different from the subject of this article, is common in academic AI research and textbooks.

The term "strong AI" is now used to describe any artificial intelligence system that acts like it has a mind, regardless of whether a philosopher would be able to determine if it actually has a mind or not. Dijkstra has been quoted as saying, "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

As Russell and Norvig write: "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." AI researchers are instead interested in a related statement (one that some sources confusingly call "the strong AI hypothesis"):
  • An artificial intelligence system can think (or act like it thinks) as well as or better than people do.
This assertion, which hinges on the breadth and power of machine intelligence, is the subject of this article.

Notes

  1. or see Advanced Human Intelligence where he defines strong AI as "machine intelligence with the full range of human intelligence."
  2. This is the term used for "human-level" intelligence in the physical symbol system hypothesis.
  3. Encyclopedia Britannica Strong AI, applied AI, and cognitive simulation or Jack Copeland What is artificial intelligence? on
  4. The Open University on Strong and Weak AI
  5. AI founder John McCarthy writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." Basic Questions. For a discussion of some definitions of intelligence used by artificial intelligence researchers, see philosophy of artificial intelligence.
  6. This list of intelligent traits is based on the topics covered by major AI textbooks.
  7. Note that consciousness is difficult to define. A popular definition, due to Thomas Nagel, is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See
  9. Scientist on the Set: An Interview with Marvin Minsky
  11. The Lighthill report specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England. In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic undirected research". See under "Shift to Applied Research Increases Investment".
  13. Under "Shift to Applied Research Increases Investment".
  14. As AI founder John McCarthy wrote in his Reply to Lighthill, "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."
  15. "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."
  16. Sutherland, J.G. (1990). "Holographic Model of Memory, Learning, and Expression". International Journal of Neural Systems, Vol. 1-3, pp. 256-267.
  18. "nervous system, human." Encyclopædia Britannica. 9 Jan. 2007
  19. As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
  20. The word "mind" has a specific meaning for philosophers, as used in the mind-body problem or the philosophy of mind.
  21. Among the many sources that use the term in this way are: , Oxford University Press Dictionary of Psychology (quoted in "High Beam Encyclopedia"), MIT Encyclopedia of Cognitive Science (quoted in "AITopics"), Planet Math, Arguments against Strong AI (Raymond J. Mooney, University of Texas), Artificial Intelligence (Rob Kremer, University of Calgary), Minds, Math, and Machines: Penrose's thesis on consciousness (Rob Craigen, University of Manitoba), The Science and Philosophy of Consciousness Alex Green, Philosophy & AI Bernard, Will Biological Computers Enable Artificially Intelligent Machines to Become Persons? Anthony Tongen, and the Usenet FAQ on Strong AI
  22. A few sources where "strong AI hypothesis" is used this way: Strong AI Thesis, Neuroscience and the Soul

