But even as GPT-3's fluency has dazzled many observers, the large-language-model approach has also attracted considerable criticism over the past few years. Some skeptics argue that the software is capable only of blind mimicry – that it imitates the syntactic patterns of human language but cannot generate ideas of its own or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in the long history of AI hype, channeling research dollars and attention toward what will ultimately prove a dead end and keeping other promising approaches from maturing. Still other critics believe that software like GPT-3 will forever be compromised by the biases, propaganda and misinformation in the data on which it is trained, which means that using it for anything more than parlor tricks will always be irresponsible.
Wherever you come down in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won't be deployed commercially in the coming years. And that raises the question of exactly how they – and, for that matter, the other major advances of AI – should be released into the world. With the rise of Facebook and Google, we have seen how dominance in a new technological field can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with so much promise and so much potential for abuse?
Or should we be building it at all?
The origins of OpenAI date back to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power – and some new breakthroughs in the design of neural networks – had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long "AI winter," the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural network had achieved before. Google quickly swooped in to hire the AlexNet creators, while also acquiring DeepMind and launching an initiative of its own called Google Brain. The mainstream adoption of smart assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.
But during that same period, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google and Facebook criticized for their near-monopoly power, their amplification of conspiracy theories and their relentless siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing on op-ed pages and the TED stage. Nick Bostrom of Oxford University published his book Superintelligence, laying out a range of scenarios in which advanced AI might deviate from humanity's interests, with potentially catastrophic consequences. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." The fear was that this time the algorithms might not just sow polarization or sell our attention to the highest bidder – they might destroy humanity itself. And once again, all the evidence suggested that this power would be controlled by a handful of Silicon Valley megacorporations.
The agenda for the Sand Hill Road dinner that July night was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that had bedeviled the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape – one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as organizational: if AI was to be released into the world in a safe and beneficial way, it would require innovation at the level of governance, incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that AI's attainment of human-level intelligence would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control it.
In December 2015, the group announced the formation of a new entity called OpenAI. Altman signed on as the organization's chief executive, with Brockman overseeing the technology; another attendee at the dinner, the AlexNet co-creator Ilya Sutskever, was recruited from Google to be head of research. (Elon Musk, who also attended the dinner, joined the board of directors but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: "OpenAI is a nonprofit artificial-intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They added: "We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."
Three years later, OpenAI's founders would issue a public charter spelling out the core principles behind the new organization. The document was easy to read as a not-so-subtle dig at Google's "Don't be evil" slogan from its early days, an acknowledgment that maximizing the social benefits of new technology – and minimizing its harms – was not always such a simple calculation. While Google and Facebook had achieved global dominance through closed-source algorithms and proprietary networks, OpenAI's founders promised to go in the other direction, sharing new research and code freely with the world.