A new nonprofit group prioritizes ‘good outcomes for all’ over ‘self-interest’ in the emerging field of A.I.
Still image from A.I. Artificial Intelligence. Warner Bros. / DreamWorks
Artificial intelligence research has typically been the domain of universities like MIT’s Computer Science and Artificial Intelligence Laboratory or, more recently, Silicon Valley corporations like Google, which purchased DeepMind (an A.I. startup) in 2014. Whichever path the field takes, chances are the future of A.I. systems will in some way be monetized.
But last Friday, Elon Musk announced via tweet the launch of OpenAI, an open-source, nonprofit artificial intelligence platform.
This is not to say there isn’t money behind OpenAI, because there is plenty. Musk’s financial contributions to the research company, along with those made by Y Combinator president Sam Altman, Palantir co-founder and angel investor Peter Thiel, and LinkedIn co-founder Reid Hoffman, total $1 billion. What makes this project different is that OpenAI’s team, led by research director Ilya Sutskever, a world expert in machine learning, is not bound by the profit motive of private businesses or publicly traded corporations.
The launch of OpenAI is therefore, without a doubt, a significant development in artificial intelligence research and development. A more democratic approach to R&D could potentially ensure that breakthroughs in advanced A.I. meet the best interests of the people and the planet, not just the bank accounts of creators or investors. OpenAI says as much on its website:
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” the OpenAI team announced. “Since our research is free from financial obligations, we can better focus on a positive human impact. We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”
“Because of A.I.’s surprising history, it’s hard to predict when human-level A.I. might come within reach,” OpenAI’s statement added. “When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”
OpenAI aims to move far beyond the early days of A.I. research, which they characterize as trying to tackle tasks like chess to produce “human-level intelligence algorithms.” Instead, they plan to pursue “deep learning” A.I. architectures that can “twist” into a wide array of algorithms based on the data fed into them.
“This approach has yielded outstanding results on pattern recognition problems, such as recognizing objects in images, machine translation, and speech recognition,” OpenAI stated. “But we’ve also started to see what it might be like for computers to be creative, to dream, and to experience the world.”
Just as important as OpenAI’s nonprofit approach is its open-source, public-domain nature. Yes, the researchers will be strongly encouraged to publish their work in scientific papers and on blogs, as well as share code and patents with the world. But even better, OpenAI is fostering an environment of free collaboration with other individuals, institutions, and companies. This means that people, wherever they might be across the world (particularly those who might not have the privilege of working for Google DeepMind or researching at MIT’s Computer Science and Artificial Intelligence Laboratory), can contribute to present and future A.I. research.
Interestingly, OpenAI’s open-source mission dovetails with what technology reporter Paul Mason, author of Postcapitalism: A Guide to Our Future, told me in a recent interview. Looking 50 years into the future at what an automated society might look like, Mason envisioned, among other things, open-source A.I. and robotics platforms working for the social benefit of humankind.
“There will be an open-source robotics provider, there will be an open-source A.I. platform that is like TCP/IP nodes [the internet’s communications protocol], almost like a tool or standard,” Mason said. “It’s so invisible that people don’t even think who owns it because who owns TCP/IP, who owns HTML?”
We’re not there yet. But with OpenAI, we might just be on the way.