December 04, 2024

Trump Must Rebalance America’s AI Strategy

Imagine if the U.S. Federal Reserve based its monetary policy on cryptocurrency’s speculative hype—or the Defense Department bet its manufacturing future on the 2010s hype around 3D printing that never panned out. As detailed in a memorandum on artificial intelligence released on Oct. 24, President Joe Biden’s administration was beginning to run a similar risk by staking the lion’s share of the United States’ AI strategy on uncertain projections about the progress of large-scale frontier models, like those that power ChatGPT.

As President-elect Donald Trump’s incoming tech czars craft a new AI agenda, they have the opportunity to be both more ambitious and more risk averse: turbocharging the progress of frontier models and accelerating alternative uses of the technology, specifically for national security, in equal measure. Such a diversified approach would better account for the inherent uncertainty in AI development. It would also put the United States on firmer footing to expand its lead over China in the most transformative technology in a generation.

The disagreements about AI progress are so fundamental and held with such conviction that they have evoked comparisons to a “religious schism” among technologists.

Investors, technologists, and policymakers are divided into two camps about the future of the technology. One camp believes that the future of AI is frontier models—general-purpose AI systems like the ones that power ChatGPT, able to solve problems across a variety of fields. Proponents of frontier models, often from industry giants like OpenAI and Anthropic, believe that these models could surpass human intellect given sufficient computing power, revolutionizing science and technology. Opinions vary on the ultimate potential of “superintelligent” frontier models, but frontier lab CEOs have predicted that their chatbots will soon transcend the intellect of Nobel Prize winners and deliver “unimaginable” prosperity.

Critics believe that while frontier models may prove useful for a variety of tasks, current methods for building these models intrinsically lack the sophistication needed to match human intelligence, let alone supersede it. In this view, general-purpose frontier models are only one avenue among many for AI growth. This camp believes narrow models—which solve problems in specific domains but do not aim to “think” in a general sense—will play an equal or greater role in the AI revolution. An example is the specialized AlphaFold model, which slashed the time it takes researchers to predict protein structures from months to minutes.

Read the full article on Foreign Policy.
