Last year was one of the best years for artificial intelligence since the "AI winter" of the 1980s and early 1990s. The most notable achievement was Adam, an AI with robotic arms and lab equipment that can formulate hypotheses and run its own scientific experiments. In one example, Adam investigated the genetic expression of baker's yeast. Adam was named the 4th top scientific discovery of the year by Time magazine. Another major AI breakthrough in 2009 was Hod Lipson's program, which independently discovered laws of physics by observing a swinging pendulum. The program is now available for anyone to download. Just search for "Eureqa."
It's time for the world to get optimistic about artificial intelligence again. Instead of viewing the mind as a mystery, many of today's cognitive scientists view it as fertile ground for the scientific method, producing thousands of papers each year that further elucidate the operations of the brain. MIT professor Ed Boyden is working on a technology that allows investigators to fire individual neurons on demand using light signals, an approach that could soon lead to "high-throughput circuit screening" of neural circuits, a tool that has long been needed to untangle the complexities of human intelligence.
Scientists studying the activation patterns of neurons have even discovered that cognitive systems seem to be laid out as approximations of Bayesian reasoning, a statistical method that has been a strong focus of artificial intelligence over the last decade. This line of research has given rise to a new subfield of cognitive science, known as Bayesian cognitive science. The Bayesian methods used by Gmail to filter spam are comparable to the Bayesian processing used by our brains to identify faces or distinguish objects from the background in a visual scene. This surprising parallel suggests that we may have more of the basic toolset necessary for human-level AI than many people assume.
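To make the spam-filtering parallel concrete, here is a minimal sketch of the kind of Bayesian reasoning such a filter performs: start with a prior belief that a message is spam, update it word by word using how often each word appeared in past spam versus legitimate mail, and normalize with Bayes' rule. The tiny training messages and the resulting probabilities are invented purely for illustration; they are not Gmail's actual model or data.

```python
# Minimal naive Bayes spam filter sketch. Training messages are invented
# for illustration only; a real filter would learn from millions of emails.
from collections import Counter

spam_msgs = ["win money now", "free money offer", "claim your prize now"]
ham_msgs  = ["meeting at noon", "project update attached", "lunch tomorrow"]

def word_counts(messages):
    counts = Counter()
    for msg in messages:
        counts.update(msg.split())
    return counts

spam_counts, ham_counts = word_counts(spam_msgs), word_counts(ham_msgs)
vocab = set(spam_counts) | set(ham_counts)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())

def p_spam(message):
    """Posterior P(spam | words), equal priors, Laplace smoothing."""
    p_s, p_h = 0.5, 0.5  # prior belief before seeing any words
    for word in message.split():
        # Likelihoods P(word | spam) and P(word | ham); the +1 smoothing
        # keeps unseen words from forcing either probability to zero.
        p_s *= (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_h *= (ham_counts[word] + 1) / (ham_total + len(vocab))
    return p_s / (p_s + p_h)  # Bayes' rule: normalize over both hypotheses

print(p_spam("free prize money"))  # close to 1: likely spam
print(p_spam("project meeting"))   # close to 0: likely legitimate
```

The same update-your-beliefs-with-evidence structure is what Bayesian cognitive scientists argue the brain approximates when it decides whether a pattern of light on the retina is a face or background clutter.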
Futurist and inventor Ray Kurzweil has predicted the arrival of roughly human-level AI in 2029, based on what appears to be exponential growth in the availability of computing power and in the resolution of brain-scanning devices. Many scientists agree with Kurzweil that brain scanning and simulation would allow us to build human-level artificial intelligence even if we don't understand intelligence on an abstract level. There are already scientists working on simulating the hippocampus, a part of the brain responsible for memory formation, simply by scanning it, interpreting the scans, and translating the pieces into code. The end result would be a hippocampal implant that could restore memory-formation abilities to victims of brain damage and age-related decline.
If Kurzweil is right, then the human race could be confronting the next intelligent species on the planet within 20 years. As we've discussed elsewhere in the Singularity 101 series, intelligence is the most powerful known force on the planet. Even if human-level AI is more than 20 years away, it could still be developed within the lifetimes of people alive today. Our continued survival and prosperity will depend on cooperation with it. Instead of adopting a confrontational attitude, which comes naturally to a species that evolved in mutually distrustful tribal societies, we must realize that AI is ours to make. We should be careful to create it with our best qualities, such as compassion and moral complexity. This will not be easy.
As Bill Gates recently pointed out, society has a strong bias towards short-term solutions that offer incremental improvements at best. To solve the big problems, such as poverty, war, and resource depletion, we need more innovation than the planet's scientists and researchers can provide; only 1 percent of the population goes into research. By creating a new intelligent species as our allies, we can blend the best of human and machine intelligence to create an "Intelligence Explosion," in which intelligence improves itself in an open-ended fashion instead of being kept essentially static by our current Homo sapiens brains. If we design AI properly and carefully, we will be gifted with explosions of wisdom and compassion as well.
Ensuring that future AIs act as our allies and not our competitors will require careful investigation and design that needs to begin today. The uncertainty is massive, but so are the potential benefits. If you're interested in creating a globally beneficial, technologically enabled future for humanity, get in contact with the Singularity Institute today, and help us in our quest for a positive singularity. Thank you for reading!
Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.