[Note: these comments assume some familiarity with OpenAI and the motivations behind it – see e.g. their website, this article, and this detailed interview].
The announcement of OpenAI has justifiably gotten a lot of attention, both in the media and in the relevant expert community. I am currently flying home from NIPS 2015, a major machine learning conference, and right after OpenAI was announced, I noticed several people around me checking out the website on their phones and computers to learn more, and I expect that OpenAI’s information/recruiting session at NIPS later today will be extremely popular. Here, I will summarize some preliminary, sleep-deprived thoughts on why this is getting so much attention and why that is appropriate.
The AI Arms Race
Over the past few years, many billions of dollars have been poured into AI and robotics research and development. OpenAI is in fact the second billion-dollar research center announced very recently (the other being Toyota’s). Stuart Russell, lead author of the main textbook in the field, has claimed that corporations have invested more money in AI in the past few years than governments invested over the field’s entire prior history. This is largely a result, as pointed out in the open letter on AI signed by thousands of researchers and other interested parties, of AI methods (especially but not limited to deep learning) having only recently reached levels of performance that make them commercially lucrative, which in turn leads to more funding to reach higher levels of performance, and so forth.
This has led to an extremely competitive race between technology companies to snatch up talent in especially “hot” sub-fields of AI. Walking around at NIPS, one would see badges listing the names of not just stereotypical “tech” companies like Google and Facebook but also miscellaneous hedge funds and other large companies seeking to cash in on the AI bonanza. Tellingly, at NIPS I saw a flyer for an academic job that started with “If you’re still interested in academic jobs…” Leaving aside the question of whether all this investment constitutes a bubble (I don’t think so), things are, superficially at least, extremely “hot” in areas like deep learning. For example, the NIPS workshop on deep reinforcement learning (a combination of deep learning and reinforcement learning that has recently led to impressive results in various domains) was so popular that people were sitting on the floor, standing along the walls, and getting shut out entirely over fire hazard concerns (including Rich Sutton, author of “the book” on reinforcement learning; I gather he eventually got in, though).
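For readers less familiar with the term, here is a minimal sketch of what that combination amounts to: ordinary Q-learning (the reinforcement learning part) with a small neural network standing in for the usual Q-table (the “deep” part). The toy chain environment, network size, and hyperparameters below are all invented for illustration and have nothing to do with the workshop’s actual contents.

    # A toy sketch (not anyone's actual system) of what "deep reinforcement learning"
    # combines: Q-learning (the reinforcement learning part) with a small neural
    # network standing in for the Q-table (the "deep" part). The chain environment,
    # network size, and hyperparameters are invented for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    N_STATES, N_ACTIONS, HIDDEN = 5, 2, 16   # a 5-state chain; actions: 0 = left, 1 = right
    GAMMA, LR, EPSILON = 0.95, 0.1, 0.2

    # Q-network parameters: one hidden tanh layer mapping a one-hot state to Q-values.
    W1 = rng.normal(scale=0.1, size=(N_STATES, HIDDEN))
    b1 = np.zeros(HIDDEN)
    W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))
    b2 = np.zeros(N_ACTIONS)

    def q_values(state):
        """Forward pass: one-hot state -> hidden layer -> one Q-value per action."""
        x = np.eye(N_STATES)[state]
        h = np.tanh(x @ W1 + b1)
        return x, h, h @ W2 + b2

    def env_step(state, action):
        """Reward of 1 only for reaching the rightmost state, which ends the episode."""
        next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, reward, next_state == N_STATES - 1

    for episode in range(2000):
        state = int(rng.integers(N_STATES - 1))  # random non-terminal start state
        for _ in range(50):                      # cap episode length
            x, h, q = q_values(state)
            # Epsilon-greedy action selection: the usual exploration/exploitation trade-off.
            action = int(rng.integers(N_ACTIONS)) if rng.random() < EPSILON else int(np.argmax(q))
            next_state, reward, done = env_step(state, action)

            # Q-learning target: r + gamma * max_a' Q(s', a'), with no bootstrapping past terminal states.
            _, _, q_next = q_values(next_state)
            target = reward if done else reward + GAMMA * np.max(q_next)

            # Backpropagate the TD error through the little network by hand
            # (a deep learning library would normally do this part).
            td_error = q[action] - target
            grad_q = np.zeros(N_ACTIONS)
            grad_q[action] = td_error
            grad_h = (grad_q @ W2.T) * (1.0 - h ** 2)
            W2 -= LR * np.outer(h, grad_q)
            b2 -= LR * grad_q
            W1 -= LR * np.outer(x, grad_h)
            b1 -= LR * grad_h

            state = next_state
            if done:
                break

    # The greedy policy should now choose "right" (1) from every non-terminal state.
    print([int(np.argmax(q_values(s)[2])) for s in range(N_STATES)])

Real deep RL systems differ mainly in scale and stability: much larger networks, much harder environments, and a collection of tricks (replay buffers, target networks, and so on) to keep training from diverging.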
This arms race is not just a matter of accelerated deployment of AI technologies, or massive spending by tech giants, though that is happening. It is also a battle for the hearts and minds of AI researchers, the most accomplished of whom are being sought after like star athletes. Already, several years ago, AI researchers at the likes of Google were making plenty of money for a reasonable person’s purposes. Now, with talk of salaries of six or more figures being offered to researchers, we may be at a point of diminishing returns for fat paychecks: many researchers would also ideally like the ability to publish (and talk openly about) all their research and to work towards a mission that resonates with them, such as OpenAI’s stated aim of benefiting humanity through AI. In this competitive context, OpenAI is stepping in with a third alternative beyond partially secretive industry and often bureaucratic, under-resourced academia. It also comes at a critical time in discussions of the future of AI and the prospects for benefiting all of humanity through it. Speaking of arms races…
AI Safety and Ethics
In recent years, concerns about the short- and long-term implications of AI, and safety concerns in particular, have gone from being a fringe topic mostly researched outside the AI community to a mainstream topic of discussion. On Thursday at NIPS, there was a major symposium on the social impacts of machine learning, attended by hundreds of researchers. Speakers such as Nick Bostrom (whose book Superintelligence helped catalyze the current form of concerns about the future of AI) and Erik Brynjolfsson (whose co-authored book The Second Machine Age helped spark discussions of AI’s impact on the economy), along with panelists from industry and academia, discussed short- and long-term issues with AI and its social implications, as well as what the AI community can do about them.
While a range of opinions was expressed on the urgency of different issues, there was no dispute in the symposium about the enormous consequences that AI is likely to have. Even Andrew Ng, a top deep learning researcher who leads AI research for Chinese search engine Baidu and who has compared concerns about existential risk from AI (of the sort Bostrom writes about, and Musk has echoed) to concerns about overpopulation on Mars, backpedaled to some extent, saying that he thinks it makes sense for at least some people to be working on understanding potential long-term AI safety risks. The last panel of the symposium went into some detail about research priorities in that area, one in which Musk has previously invested $10 million through the Future of Life Institute. There is certainly still much disagreement on the topic of long-term AI risks, as evinced by the diverse audience reactions at the symposium: some clapped when Ng downplayed long-term risks, while others responded favorably when DeepMind’s Shane Legg compared AI safety research to engineers on the Apollo Project investigating ways the mission could go wrong as a matter of common sense and simple responsibility. However, there has been a noticeable change in the tenor of these discussions, and in the number of people involved in them, in recent years.
In this context, OpenAI represents not only a big investment in AI safety (which is among the areas Musk et al. have indicated the institution will study) but also a sign of the mainstreaming of the issue, given the talent and prestige of those affiliated with the new organization, including Ilya Sutskever of Google deep learning fame. Along with safety research in particular, the funders and chairs of OpenAI have indicated that the organization will be concerned with shorter-term issues such as the economic implications of AI, the need for AI to be designed in a way that is usable and complementary to human skills, and, of course, as the name implies, openness. This last factor is particularly notable, not just for recruiting purposes as discussed above, but also because it represents a significant decision on the part of Musk and the other funders. As Musk’s recommended book Superintelligence attests, diffusing AI technologies as widely as possible is not straightforwardly preferable to scenarios in which, say, the technology is more concentrated in the hands of governments or corporations with particular sets of safety constraints. Indeed, Musk’s latest comments (for example, in the Medium interview linked to above) suggest a shift in his thinking from when he compared AI to nuclear weapons and implied favoring a less open approach to AI safety. It will be interesting to see how the growing AI safety research community reacts to this move and to the specific arguments put forth by Musk et al., and how OpenAI will interpret the noteworthy last word of the mission statement on its website: ensuring that AI technology is “as broadly and evenly distributed as possible safely” (emphasis added). OpenAI thus has a lot of symbolic significance, but how, if at all, will its mission be realized?
Institutional Innovation
It’s not yet clear how exactly OpenAI will operate, but its status as a non-profit is of particular significance. This means there will be no pesky shareholders questioning the return on investment of its research, or the particular balance struck between emphasis on AI performance progress and safety (a delicate issue again dealt with at some length in Superintelligence), in the event that OpenAI begins to focus more on the latter. Additionally, standing outside of traditional corporations will give OpenAI the ability to think more outside the box about potential applications of its AI approaches, instead of working around existing value streams as, e.g., Google and Facebook do to an extent. This, among other factors, will likely lead to a big influx of talented researchers interested in applying their talent to grand challenges, potentially leading OpenAI to grow to hundreds of people (or, someday, even more). It will also be interesting to see how OpenAI differentiates itself from a comparable organization (at least in stated ambition, if not yet in committed funds), the Allen Institute for Artificial Intelligence, a non-profit funded by Paul Allen that seems to have focused so far on applying AI to improving scientific productivity and on particular methods like natural language processing. I hope that OpenAI will differentiate itself by tackling truly difficult challenges, like applying AI to addressing poverty, mental health, or sustainability, or, if it focuses more on research, by giving very deep thought to what human-complementary AI means and what sorts of research won’t otherwise be funded by industry.
Scale is one of the ways in which OpenAI may be distinct from academia, though it is not obvious in what other ways it is intended to be different. Mark Riedl noted on Twitter that academia has many of the characteristics OpenAI is purported to have (though, I would add, academic research too often fails to have maximum positive impact and is too often bogged down in grant applications and other paperwork). Relative to industry, I’d expect OpenAI salaries to be on the order of 50-90% or more lower than what some of the best researchers could make, so the organization will need to stay true to its stated principles if it is to continue recruiting successfully over the long term. To continue to attract the best talent, I suspect, as was the case with another instance of institutional innovation I’m familiar with (ARPA-E), that the top leadership of OpenAI will want not only to pore over the hundreds or thousands of applications they will soon receive, but also to aggressively recruit top talent in the field and bring in the likes of Musk to help with such recruitment.
The role of data at OpenAI also raises interesting questions from an institutional perspective, as Beau Cronin and others have noted on Twitter. With Amazon’s investment and Tesla’s indirect involvement through Musk, it would seem that OpenAI will potentially have access to a lot of the data needed to make deep learning and other AI approaches work well. But these are different sorts of datasets from those Google and Facebook have, and they may lend themselves to different technical approaches. They also raise the question of proprietary data: how will OpenAI balance its push for openness with this proprietary access? Will it release code but not data? How will its experiments be replicable if they are tailored to particular data streams? How will privacy be addressed? Some of these questions are unique to OpenAI and others aren’t, but they’re the sorts of questions OpenAI will need to answer.
Conclusion/Open Questions
As I and others have written elsewhere, there are many open questions about the future of AI: how to innovate responsibly in this area, what AI means for the future of work, how AI progress may proceed, and so on. These questions precede OpenAI, but its launch makes them particularly concrete. Here I have suggested some preliminary formulations of these questions in the context of a particular organization that will ostensibly address many of them. I look forward to hearing people’s thoughts on whether I have formulated these issues correctly, how OpenAI could realize its goals, and what ripple effects its launch could have on the broader conversation about the AI arms race, safety, and ethics.