The Artificial Intelligence community failed to grasp the power of the mind, the most powerful intelligence in the universe, because they relied on computational models. They wrongly believed that intelligence was the achievement of life goals through computation. The study of AI was set in motion by the arrival of computers in the 1940s, on the basic premise that the brain performed some kind of computation. Alan Turing was among the first to pursue intelligent machines by programming computers. Algorithmic procedures did enable programs to achieve striking results: computers could solve complex mathematical and engineering problems. A few scientists even believed that a large enough assembly of programs and collated knowledge could achieve human-level intelligence.
While there could be other approaches, computer programs were the best available means of attempting to simulate human-level intelligence. But in the 1930s, mathematical logicians, including Turing and Gödel, had already established that algorithms could not be guaranteed to solve problems in certain mathematical domains. Both the theory of computational complexity, which defined the difficulty of general classes of problems, and the AI community failed to identify the properties of problems and problem-solving methods that enabled humans to solve them. Every direction of search seemed to lead only to dead ends.
The AI community could not design a machine that could learn and become significantly intelligent. No program could learn much by reading. Computers could use vast computational capabilities to play chess at the grandmaster level, but their intelligence was limited. Parallel processing computers looked promising but proved difficult to program. Computer programs could solve only domain-specific problems. They could not distinguish between problems, or act as a "General Problem Solver." Since humans could solve problems across unique domains, Roger Penrose argued that computers were intrinsically incapable of achieving human intelligence. The philosopher Hubert Dreyfus also suggested that AI was impossible. Yet the AI community continued its search, even though most researchers felt the need for fundamentally new ideas. In the end, the general consensus was that computers were only "somewhat intelligent." So, was the basic definition of "intelligence" itself wrong?
Since much of human intelligence was little understood, it was impossible to define a particular computational procedure as being intelligent. Intelligence was clearly an ability to solve problems. In nature, it was a matured intelligence that empowered the "homeostasis" of animals in the survival process. Homeostasis was the ability of an entity to function normally, maintaining a relatively constant internal state in a changeable, or even hostile, environment. It was an intelligent process, maintained internally by animals at many levels, through various sensing, feedback, and control systems, supervised by a hierarchy of control centers. This process, achieved by even the lowest animals, was the ultimate "General Problem Solver." The process was not domain specific. It recognized problems and responded with effective motor activity. It applied to every aspect of survival.
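The sense-feedback-control cycle described above can be illustrated as a negative-feedback loop, the classic control-systems view of homeostasis. This is a minimal sketch; the set point, gain, and starting temperature are invented for illustration and do not come from the text.

```python
# A toy model of homeostasis as a negative-feedback loop:
# sense the deviation from a set point, respond in proportion.
# All numeric values here are illustrative assumptions.

def regulate(state, set_point, gain=0.5):
    """One corrective cycle: sense the error, apply a proportional response."""
    error = set_point - state          # sensing: how far from normal?
    return state + gain * error        # control: corrective response

state = 35.0                           # disturbed body temperature (deg C)
for _ in range(10):                    # repeated sense-and-correct cycles
    state = regulate(state, set_point=37.0)

print(round(state, 3))                 # state has converged close to 37.0
```

Each cycle halves the remaining deviation, so the system settles toward its set point regardless of the initial disturbance, which is the essence of the "relatively constant state" the paragraph describes.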
The nervous system received a kaleidoscopic combination of trillions of sensory inputs. A phenomenal memory enabled it to remember and identify patterns. Intuition, an algorithmic process, enabled it to isolate the context of a single pattern from a galactic memory. The system could identify objects from millions of received sensory inputs. That pattern recognition ability was not limited to the identification of static objects; it could identify problems. It recognized and interpreted dynamic events to generate patterns of emotions, and emotions clearly defined problems. Animals recognized the difference between a friendly nudge and a deadly slither and responded. Fear, anger, or jealousy motivated them. Each motor response had a particular sequence of problem-solving steps, which were, again, remembered patterns of activities.
The environment presented the system with millions of enigmatic phenomena. Many of these were caused by other phenomena. Most problems were patterns of events with contextual links to remembered successful problem-solving strategies. Pattern recognition enabled identification. The process was not domain specific; it straddled the entire problem-solving domain. Pattern recognition merely identified the link between one phenomenon and another. Intuition instantly identified the contextual link, without tracing the complex reasoning links between the two. It did not use incremental logical steps to solve problems. When primitive man took shelter as the storm clouds advanced, he was merely responding to a perceived pattern.
Across thousands of years, mankind responded adequately to much of nature without understanding underlying causes. That intelligence was not computation, which reasoned its way through life by analyzing the logical and mathematically precise links between particular causes and their effects. The reasons behind causes were discovered only later, with advanced study and research. Such analysis benefited only a minor segment of the world's problem solving. A group of symptoms related to a disease: physicians identified illnesses without always knowing the logical or reasoned links between symptom and disease. Software code was logical, but many quirks of complex code were patterns of effects, related to particular programming events, which could only be recognized by a pattern recognition intelligence. Complex problem solving was achieved through sensitive pattern recognition. True intelligence was this powerful pattern recognition capability, which also, incidentally, discovered logic, reasoning, and mathematics.