Who, What, When: A sharp rift over AI diplomacy
National Security Advisor Jake Sullivan has publicly and privately warned that recent statements and policy proposals from former President Donald Trump risk undoing a year‑long push by the Biden administration to build an international architecture for safe, secure and trustworthy artificial intelligence. The tension has been especially acute since late 2023, as Washington has sought to codify AI norms through executive action, allied coalitions and industry engagement.
Background: What the Biden team built
The Biden administration used multiple levers in 2023–24 to establish U.S. leadership on AI governance. On October 30, 2023, President Biden signed an executive order aimed at promoting safe, secure and trustworthy development and use of AI. U.S. negotiators also participated in the November 2023 AI Safety Summit hosted by the U.K., where countries and major tech firms discussed common principles for “frontier AI” oversight and the Bletchley Declaration’s commitments on transparency and testing.
At the same time, the White House and the Commerce Department advanced export controls and licensing regimes to keep the most dangerous dual‑use AI capabilities from reaching adversaries. Senior administration officials cultivated cooperative relationships with major technology firms, including OpenAI, Microsoft, Google DeepMind and Anthropic, to establish voluntary guardrails such as red‑teaming, external audits and model reporting for high‑risk systems.
What Trump’s posture would change
Trump has campaigned on a deregulatory stance and repeatedly questioned multilateral institutions and alliance commitments, positions that Sullivan and other officials say could weaken ongoing AI diplomacy. A rollback of executive directives, a loosening of export controls, or a pivot away from coordinated transatlantic and Indo‑Pacific engagement could hamper the policy interoperability that underpins model governance and risk mitigation.
That interoperability matters: export controls, joint threat assessments, coordinated disclosure standards and shared incident-response protocols all rely on alignment among allies. Without that alignment, companies would face divergent regulatory regimes, adversaries could exploit the gaps, and the United States would cede leverage in negotiating safety standards for large language models and other foundation models.
Policy levers at stake
Specific tools Sullivan has pushed include export controls on advanced chips and cloud services used to train frontier models; licensing frameworks for the transfer of model weights; mandatory vulnerability and incident reporting for high‑risk AI deployments; and multilateral mechanisms for red‑teaming and third‑party audits. Each is sensitive to executive action and regulatory continuity: abrupt reversals would not only slow implementation but could set back trust with allies and industry by years.
Expert perspectives and industry reaction
Policy analysts and former officials caution that a fragmented approach increases systemic risk. One former national security official described the stakes as similar to arms‑control erosion: when regimes fray, incentives to race — rather than restrain — become stronger. Tech industry leaders have publicly signaled a preference for predictable, harmonized rules because heterogeneous national policies drive compliance complexity and increase costs.
Academic researchers and think tanks working on AI safety note that “frontier AI” systems are advancing rapidly, which makes multilateral coordination all the more urgent. Without agreed‑upon standards for model verification, provenance and incident disclosure, adversaries could exploit training datasets, model weights or compute supply chains in ways that undermine both commercial integrity and national security.
Analysis: Why Sullivan is alarmed
Sullivan’s frustration reflects both strategic and practical concerns. Strategically, U.S. influence on global AI norms depends on demonstrating a stable, rules‑based alternative to laissez‑faire or adversarial models. Practically, industry cooperation on transparency and safety has been easiest when Washington could credibly promise consistent regulation and a multilateral negotiating posture.
If U.S. policy instead becomes a patchwork of rapid reversals and transactional deals, allies may pursue their own regimes, or worse, align with authoritarian competitors that offer looser constraints in exchange for preferential access. That outcome would complicate export controls, intelligence sharing and the ability to mount unified responses to misuse.
Conclusion: Outlook and takeaways
The debate over AI foreign policy is now as much about diplomacy and alliance management as it is about technology. Whether Sullivan’s warnings translate into durable policy depends on election outcomes, legislative processes and industry choices. For now, the risk is clear: undermining a nascent multilateral framework could set back efforts to manage frontier AI for years, increasing geopolitical tension and operational risk for both governments and private sector developers.
Stakeholders should watch three things closely: any executive or regulatory rollbacks that affect export controls and reporting requirements; shifts in allied coordination forums on AI governance; and corporate commitments to transparency standards. Those moving parts will determine whether the U.S. remains a standard‑setter or merely a participant in a fractured global marketplace of AI rules.