Building Machines in our Image
How do you define intelligence? We each have our own notions of what it means to be intelligent: perhaps someone who is skilled in math or adept in social situations. But providing a general definition is surprisingly difficult. Herein lies the challenge for artificial intelligence (AI): how do we structure scientific study around a term that is typically reserved for humans? While progress has been made in mimicking aspects of human intelligence, the human-focused origins of the field of AI may be limiting the scope of our scientific pursuit. As we move forward, perhaps drawing from areas like biology and engineering will provide a more comprehensive definition of intelligence, or a new concept altogether.
The concept of ‘intelligence’ comes from human psychology, where it is measured using IQ tests. Although imperfect, these tests evaluate a range of capabilities, from reasoning to memory and verbal comprehension. Not surprisingly, when AI adopted the term ‘intelligence’ over half a century ago, along came a focus on performing similar cognitive tasks. Famously, the Turing test pits a machine against a human, with the machine using written conversation to attempt to convince the human that it, too, is human. Similar motivations fuel modern-day efforts to master games like Go and Atari, as well as tasks like processing language or identifying objects in images. In the absence of a clear definition of intelligence, these approaches implicitly treat human tasks as a proxy for human intelligence; a machine that masters these tasks and more would thus attain ‘artificial general intelligence,’ the ability to perform any task.
While such generally intelligent machines do not yet exist, advances have been made in many of the aforementioned tasks, impacting society through applications like self-driving cars, facial recognition, and language translation. This progress has largely been the result of deep neural networks, mathematical models loosely inspired by the networks of biological neurons in brains. Through a process of learning to map input data to corresponding outputs, machines are now becoming capable of recognizing, reasoning about, and manipulating objects. Many of these basic human cognitive capabilities now seem on the horizon, but will these achievements bring us any closer to understanding intelligence? It remains unclear whether such machines would unlock the mysteries of our own range of capabilities, let alone those of other organisms. Rather than exploring broader, fundamental principles underlying intelligent systems, the field of AI has been teaching to the test of our own capabilities.
Stepping back to survey AI, I’m reminded of the Copernican revolution. For centuries, astronomers placed the Earth at the center of the universe, with our desire for significance guiding our conception of reality. However, when observations discredited this theory, we came to understand that the Earth orbits the Sun. In a similar way, I feel that we have placed humans at the center of our definition of intelligence. Clearly, we have unique capabilities, just as our planet is unique among its neighbors. Yet, any comprehensive definition of intelligence should account not only for our own capabilities, but those of other entities as well. Looking to other biological and non-biological entities will also help us see ourselves within a broader scope of intelligence, like studying the Earth in the context of other planets.
When we look at biology, we see systems that sense and respond to their surroundings. One such system is the cell, where we see sensors for chemicals, as well as corresponding cellular responses, like protein production. Entire organisms can also be considered as systems. Animals, from the smallest insect to the largest whale, interpret and interact with their environments in a multitude of ways. Even plants are attuned to sensory inputs like sunlight, moisture, and temperature, prompting responses like orienting leaves, extending roots, and releasing seeds. And groups of organisms, from forests of trees to colonies of ants, collectively sense and respond to their environments in ways that we are still just beginning to understand. Ultimately, all of these processes share a common form: they convert energy into work, biasing their environments toward particular states, in this case toward the survival of their genes.
Our technological inventions can also be viewed from this systems perspective. The tools of our early ancestors, like spears and boats, expanded the ways in which they could respond to their environments. More recent inventions, like radios and cameras, have similarly expanded the ways in which we sense our environments. Modern advances in computing and now AI have taken this trend further, creating systems that can sense and respond to their environments largely independently of human input. This is a world in which power grids can automatically sense and respond to supply and demand, and vehicles can automatically sense and respond to obstacles.
The progress we have made in AI certainly has the power to positively impact society, and interacting with humans clearly requires human capabilities, such as speech and vision. However, there is a growing sentiment that focusing too narrowly on human tasks is ultimately limiting. There is a place for studying human capabilities, but they should not define the field. Recasting AI from a broader systems perspective requires integrating knowledge across many existing areas, from biology and control theory to computer science and physics. Doing so would create a scientific discipline that studies systems at multiple levels, from different forms of computation at the molecular level up to the large-scale, collective behavior of networked systems spanning the globe and beyond. Indeed, this process has already started, as machine learning algorithms become more pervasive in engineering and science. As we continue to explore, my hope is that we will arrive at a more complete view of intelligence, one that, in turn, helps us understand ourselves in relation to it.