

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter says. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The authors, coordinated by the “longtermist” thinktank the Future of Life Institute, cite OpenAI’s own co-founder Sam Altman in justifying their calls. In a post from February, Altman wrote: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

If researchers will not voluntarily pause their work on AI models more powerful than GPT-4, the letter’s benchmark for “giant” models, then “governments should step in”, the authors say. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” they add.

Since the release of GPT-4, OpenAI has been adding capabilities to the AI system with “plugins”, giving it the ability to look up data on the open web, plan holidays, and even order groceries. As researchers experiment with GPT-4 over the coming weeks and months, they are likely to uncover new ways of “prompting” the system that improve its ability to solve difficult problems. But the company has to deal with “capability overhang”: the issue that its own systems are more powerful than it knows at release.
