October 24, 2019
Artificial Intelligence Research Needs Responsible Publication Norms
After nearly a year of suspense and controversy, any day now the team of artificial intelligence (AI) researchers at OpenAI will release the full and final version of GPT-2, a language model that can “generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.” When OpenAI first unveiled the program in February, it was capable of impressive feats: Given a two-sentence prompt about unicorns living in the Andes Mountains, for example, the program produced a coherent nine-paragraph news article. At the time, the technical achievement was newsworthy—but it was how OpenAI chose to release the new technology that really caused a firestorm.
There is a prevailing norm of openness in the machine learning research community, consciously created by early giants in the field: Advances are expected to be shared, so that they can be evaluated and so that the entire field advances. In February, however, OpenAI opted for a more limited release due to concerns that the program could be used to generate misleading news articles; impersonate people online; or automate the production of abusive, fake, or spam content. Accordingly, the company shared a smaller, 117-million-parameter version of the model along with sampling code, but announced that it would not share key elements of the dataset, the training code, or the model weights.
Read the full article from Lawfare.