Commentary
Wall Street Journal

IAEA for AI? That Model Has Already Failed

Mike Watson
Associate Director, Center for the Future of Liberal Society
Samuel Altman testifies before the Senate on May 16, 2023, in Washington, DC. (Win McNamee via Getty Images)

Long confined to academic discussions and tech-conference gab-fests, the question of artificial intelligence has finally caught the public’s attention. ChatGPT, DALL-E 2, and other new programs have demonstrated that computers can generate sentences and images that closely resemble man-made creations.

AI researchers are sounding the alarm. In March, some of the field’s leading lights joined with tech pioneers such as Elon Musk and Steve Wozniak to call for a six-month pause on training AI systems more powerful than GPT-4 because “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Last month the leadership of OpenAI, which created ChatGPT, called for “something like an IAEA” (the International Atomic Energy Agency) “for superintelligence efforts,” which would “inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.” If the alarmists are right about the dangers of AI and policymakers try to contain them with an IAEA-style solution, then we’re already doomed.

Read more in the Wall Street Journal.