🇺🇸 United States' National AI Strategy

The U.S. national AI strategy prioritizes global leadership through a decentralized, innovation-driven ecosystem aligned with democratic values and economic resilience.
Federal efforts focus on enabling R&D, scaling talent, and securing infrastructure, while coordinating with industry, allies, and states to guide ethical AI development.
From 2015 to 2025, AI policy evolved from foundational planning to active governance, emphasizing competitiveness, trust, safety, and global standard-setting.
Contents
This report was prepared by GINC in mid-2025 to provide a comprehensive analysis of the United States' national AI strategy, drawing on the latest policy developments, regulatory frameworks, and global positioning. The table of contents below outlines the structure of the report, organized into five thematic parts covering strategic vision, innovation systems, sectoral deployment, social impact, and policy evolution over time.
Part I: National Vision and Strategic Foundations
Strategic Vision & Objectives
Governance Architecture
Policy Instruments & Incentives
Part II: Innovation System and Talent Development
R&D and Innovation Ecosystem
Talent, Education & Mobility
Data, Compute & Digital Infrastructure
Part III: Sectoral Integration and Global Influence
Industrial Deployment & Tech Diffusion
Regulatory, Ethical & Safety Frameworks
International Engagement & Standards
Defence, Security & Dual-Use Considerations
Part IV: Performance, Resilience and Social Impact
Performance Metrics & Monitoring
Strategic Foresight & Resilience
Foundational AI Capabilities
Public Trust, Inclusion & Social Equity
Part V: Evolution of US National AI Strategy (2015–2025)
Strategic Vision & Objectives
The United States’ AI strategy prioritizes maintaining technological leadership, fostering innovation through a market-driven ecosystem, and aligning AI development with democratic values and economic prosperity. Initiatives like the 2019 American AI Initiative and the 2023 Executive Order on Safe, Secure, and Trustworthy AI underscore the dual objectives of global competitiveness and equitable benefit distribution. This vision emphasizes AI’s critical role in national security, economic growth, and upholding civil liberties.
Leadership and Global Competition
U.S. policymakers view AI as central to geopolitical competition, especially against China. The National Security Commission on AI warned in 2021 of America’s unpreparedness for AI-era competition, urging a comprehensive strategy to sustain technological leadership. In response to China’s AI ambitions, the U.S. aims to sustain global leadership through continuous innovation rather than fixed milestones, leveraging its robust market-driven innovation ecosystem and elite research institutions.
Innovation Ecosystem Approach
In contrast to state-driven models, the U.S. strategy promotes a dynamic public-private innovation system. The 2019 American AI Initiative focuses on increasing R&D investment, releasing federal resources, setting standards, developing the workforce, and collaborating internationally. This decentralized, market-oriented approach involves tech giants like Google, Microsoft, and Amazon, as well as labs like OpenAI, supported by strategic federal investments and collaborative initiatives to drive transformative AI advancements reflective of democratic norms.
Economic & Societal Benefits
The U.S. frames AI as crucial for economic growth, societal well-being, and job creation. Documents like the National AI R&D Strategic Plan highlight AI’s role in solving healthcare, environmental, and educational challenges. President Biden’s 2023 AI Executive Order stresses AI’s potential and risks, emphasizing economic competitiveness and societal benefits, including addressing inequalities and promoting inclusion. The U.S. approach balances technological leadership with societal welfare, reinforcing national prosperity and democratic values.
Governance Architecture
The U.S. AI governance model is decentralized yet coordinated, with leadership primarily from the White House’s Office of Science and Technology Policy (OSTP) and distributed implementation across federal agencies. The National AI Initiative Office coordinates strategy implementation and stakeholder engagement, ensuring cohesive cross-government efforts and decentralized experimentation at state and local levels.
Federal Leadership and Coordination
The OSTP, through the Select Committee on Artificial Intelligence, directs interagency coordination of AI investments and policies. Key agencies—NSF, DoD, NIST, DOE, and HHS—lead sector-specific AI initiatives. NIST provides technical governance, establishing AI standards and guidelines, while national security integration occurs via the National Security Council.
Public-Private Partnerships and Advisory Bodies
The U.S. actively incorporates industry and academia through advisory boards like the National AI Advisory Committee and partnerships such as the Partnership on AI. Collaborative frameworks ensure alignment of government oversight with private-sector innovation, maintaining trust and regulatory responsiveness. Standards bodies such as the IEEE and ISO committees play additional quasi-governance roles, collaboratively shaping global AI standards.
State and Local Initiatives
U.S. states and cities pursue diverse AI initiatives, fostering decentralized experimentation. Examples include California’s Future of Work Commission and local regulations on facial recognition technology in cities like San Francisco. Federal programs like the Smart Cities Initiative support these local innovations, creating dynamic feedback loops between local experiments and federal strategies.
Policy Instruments & Incentives
The U.S. employs diverse policy tools—including R&D funding, tax incentives, infrastructure investment, and procurement policies—to foster AI development. Legislation such as the CHIPS and Science Act of 2022 provides substantial support, aiming to bolster competitiveness and innovation through extensive public and private investments.
Public Funding for R&D
Federal AI funding, managed through agencies like NSF, DARPA, and DOE, supports fundamental and applied research. Initiatives like the National AI Research Institutes and DARPA’s AI Next campaign exemplify strategic investments in cutting-edge research, fostering collaboration between academia and industry to sustain innovation pipelines.
Tax Incentives and Private Investment Climate
Broad-based federal R&D tax credits and targeted state incentives enhance the investment climate, significantly benefiting AI firms. Legislation like the CHIPS Act further stimulates domestic hardware supply chains, encouraging substantial private-sector investment and reinforcing the U.S. as the global hub for AI startup financing.
Government Procurement and Market Shaping
Federal agencies strategically use procurement to accelerate AI adoption. Programs like the DoD’s Joint AI Center and GSA’s AI Services schedule exemplify how procurement supports domestic AI solutions, fostering market validation and driving innovation through public-sector adoption.
Regulatory Flexibility and Sandboxes
Regulators adopt flexible, innovation-friendly approaches to AI, evident in initiatives like the FDA’s Digital Health Precertification Pilot and FAA’s UAS Integration Pilot Program. These pilots offer controlled environments for AI experimentation, balancing innovation with consumer protection and allowing responsive regulatory adaptation.
R&D and Innovation Ecosystem
The United States hosts a vibrant and diverse AI R&D ecosystem that spans elite universities, federally funded laboratories, corporate research centers, and open-source communities. This ecosystem is fueled by substantial public and private investment and is characterized by tight integration between academia and industry. U.S. institutions have historically led in core AI research – from pioneering machine learning algorithms to creating benchmark datasets – and continue to produce many of the field’s breakthroughs. An emphasis on open scientific inquiry, competition (e.g. conferences, challenge contests), and entrepreneurship has enabled rapid translation of research into real-world applications. The federal government’s long-term support for basic research (e.g. through NSF and DARPA) provides the substrate for innovation, while tech companies contribute massive computing resources and data access that accelerate progress. As a result, the U.S. accounts for a large share of top-tier AI publications, patents, and advanced model developments, although global competition is intensifying.
Academic and National Research Centers: American universities are at the forefront of AI science, housing renowned labs that have driven the field for decades. Institutions like Stanford, MIT, Carnegie Mellon, Berkeley, and the University of Washington consistently rank among the top in AI research output and citations. They, along with others (e.g. University of Illinois, Georgia Tech, University of Michigan), produce influential work in areas such as deep learning, computer vision, natural language processing (NLP), and robotics. Academic conferences in AI (NeurIPS, ICML, CVPR, AAAI, etc.) were originally U.S.-centric and still see heavy participation from U.S.-based researchers, although attendance has globalized. In addition to traditional departments, universities have created interdisciplinary AI institutes (e.g. Stanford’s Human-Centered AI Institute, MIT’s CSAIL, Berkeley’s BAIR) that bring together computer scientists with experts in ethics, law, and domain sciences. These centers often collaborate on national initiatives; for instance, several universities jointly lead the NSF AI Research Institutes, focusing on themes such as AI for agriculture (led by the University of Illinois) or AI for molecular discovery (led by MIT). Beyond campuses, the U.S. boasts federal research hubs: national laboratories such as Oak Ridge, Lawrence Berkeley, and Los Alamos run AI programs targeting energy, climate modeling, and national security. The Department of Energy’s Argonne National Lab has an AI for Science initiative, while NIH’s National Library of Medicine supports AI-driven biomedical research. A distinctive feature of the U.S. landscape is the Allen Institute for AI (AI2) in Seattle, founded by Microsoft co-founder Paul Allen, which operates as a nonprofit research institute producing open AI research (e.g. the Semantic Scholar academic search engine and cutting-edge NLP models). The synergy among these academic and national centers is facilitated by open publication and frequent personnel exchange (professors taking industry sabbaticals, grad students interning at companies, etc.), creating a rich knowledge network.
National R&D Programs and Strategy: The U.S. formalized a national AI R&D Strategic Plan as early as 2016 (under the Obama Administration), outlining key priorities for federal investment. This plan – updated in 2019 and undergoing revision for 2025 – emphasizes long-term, transformative research in areas like reasoning and abstraction, robust AI, human-AI collaboration, and ethical, trustworthy AI. It also highlights the need for benchmarks and shared resources, which the government has facilitated (for example, NIST maintains datasets and runs evaluation challenges for facial recognition, and DARPA has organized grand challenges in self-driving cars, robotics, and cyber defense). The closest U.S. analog to mission-oriented programs such as China’s Science and Technology Innovation 2030 is the “Industries of the Future” initiative, which under the Trump Administration bundled AI with quantum computing, 5G, and advanced manufacturing as priority areas for federal science agencies. As part of this, an explicit goal to double non-defense AI R&D funding over five years was set around 2019, a target that has been substantially met according to OSTP’s budget reports. Defense-related R&D is also significant: the DoD’s Artificial Intelligence and Data Accelerator (ADA) initiative and DARPA’s aforementioned AI Next program inject billions into more applied or mission-driven R&D, including autonomy for drones, AI for electronic warfare, and AI-assisted intelligence analysis. Notably, the National AI Initiative Act (2020) not only created coordinating offices but also authorized the National AI Research Resource (NAIRR) – envisioned as a shared infrastructure of cloud computing and data for researchers. In 2023, a NAIRR Task Force recommended establishing this resource to broaden access to computing beyond the tech giants, so university groups can experiment with large models and big data. While the NAIRR has so far advanced only to a pilot phase, it reflects strategic foresight to democratize AI R&D capabilities. In summary, the federal R&D strategy balances fundamental research (to ensure paradigm-shifting discoveries) with challenge-focused efforts (to solve specific problems and keep ahead in strategic tech competition).
Academia-Industry Collaboration: A hallmark of the U.S. AI ecosystem is the porous boundary between academia and industry. Virtually all major tech companies have active collaborations with universities: they endow faculty chairs, sponsor labs, and run joint research projects. For example, Google partnered with Stanford on the Stanford Center for AI Safety and with Cornell on natural language understanding research; IBM has AI collaboratories with MIT (the MIT-IBM Watson AI Lab) focusing on physics-informed AI and healthcare, and with the University of Illinois for cognitive computing. Companies often recruit top academics as advisors or part-time researchers (Google’s Brain team and Microsoft Research have included prominent professors who split time between campus and company). The result is a high rate of co-authored papers between university and industry researchers, cross-pollinating ideas. Competitions and benchmarks also foster collaboration: the ImageNet challenge – pioneered by Stanford professor Fei-Fei Li – involved both academic teams and corporate labs and led to the deep learning revolution in 2012. Since then, industry labs like DeepMind (owned by Alphabet) and OpenAI have published extensively in academic forums and even open-sourced significant software (e.g. OpenAI’s Gym for reinforcement learning). The government encourages these ties; DARPA projects often require an academic and industry partner in each team, and NSF’s grants in AI sometimes come with industry cost-sharing. Additionally, mobility of talent is high: students frequently take internships at AI companies (which can lead to sharing of non-public datasets or tools for academic research use), and many faculty have founded startups or joined companies full-time after breakthroughs (e.g. the creators of the ImageNet dataset went on to work at Facebook and Princeton; machine-learning professor Andrew Ng co-founded Google Brain and later Coursera). This close academia-industry nexus ensures that theoretical advances (like novel neural network architectures) swiftly find their way into practical deployment, and that industrial challenges inform academic research directions. It also means the U.S. benefits from a robust innovation pipeline – fundamental ideas often born in universities are scaled up by companies with massive computational resources, a synergy that has been pivotal in areas like large language models.
Open Source Platforms and Shared Infrastructure: The spread of open-source culture in AI, strongly embraced in the U.S., serves as a de facto innovation platform enabling widespread experimentation. American companies and researchers have created most of the dominant AI frameworks and tools: TensorFlow (released by Google) and PyTorch (originating from Facebook’s AI Research) are globally used frameworks that significantly lowered the barrier to entry for AI development. These frameworks are continually improved via contributions from both corporate engineers and academics, and they are provided free of charge, which has helped them become international standards. The U.S. open-source ecosystem also includes libraries like scikit-learn, Apache MXNet, and newer ones such as JAX, many of which come out of U.S. institutions or companies. Additionally, the culture of open benchmark datasets started early in the U.S.: ImageNet (Stanford), COCO for vision (Microsoft), GLUE and SuperGLUE for language (an academia/industry coalition) – all these enable researchers anywhere to evaluate models and compete in an apples-to-apples way, driving rapid progress. The federal government supports this openness: NSF’s Open Knowledge Network program, for instance, seeks to create open datasets and knowledge graphs for AI research. More recently, the U.S.-based startup Hugging Face launched its Transformers library, which became a hub for sharing pre-trained models – by 2023 it hosted thousands of models, including those from Google, Meta, and EleutherAI (a grassroots collective releasing open models). OpenAI, despite its name, moved towards closed-source for its latest models, but in response other groups (e.g. EleutherAI with GPT-Neo and Stability AI with Stable Diffusion) open-sourced powerful models, sparking community-driven innovation. The U.S. government and philanthropies have also set up shared compute resources: the COVID-19 High Performance Computing Consortium in 2020 granted researchers access to supercomputers for AI work on pandemic solutions, illustrating the ability to pool infrastructure in crises. In summary, the American innovation ecosystem is underpinned by a philosophy of openness and collaboration – from shared code and data to cross-sector partnerships – which accelerates advancement and widens the circle of AI contributors domestically and internationally. This approach has helped keep the U.S. at the cutting edge, although it also means foreign researchers benefit from American open resources, a trade-off U.S. policymakers are increasingly mindful of in the strategic competition context.
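The practical effect of this open tooling is easy to see in code. The minimal sketch below assumes the Hugging Face transformers package is installed and shows how a researcher or hobbyist can apply a pre-trained model in a few lines, with no training infrastructure of their own; the checkpoint named is the library's widely used default for sentiment analysis.

```python
# A minimal sketch of the low barrier to entry created by open AI tooling.
# Assumes `pip install transformers` plus a backend such as PyTorch.
from transformers import pipeline

# Downloads a pre-trained sentiment model from the Hugging Face Hub on
# first use; no local training or labeled data is required.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open-source frameworks made this a one-afternoon experiment."))
# Expected shape of output: [{'label': 'POSITIVE', 'score': 0.99...}]
```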
Talent, Education & Mobility
Building and attracting a skilled AI workforce is a central pillar of the U.S. strategy, as human capital is seen as the lifeblood of innovation. The United States has undertaken numerous initiatives to expand AI education, from K-12 through graduate levels, and to retrain the current workforce for the AI-driven economy. American universities annually graduate thousands of AI specialists, and the country remains a magnet for global talent – though recent immigration hurdles and international competition are notable challenges. Private sector efforts complement public programs: Big Tech companies fund AI training programs and advocate for more liberal STEM immigration policies. By the mid-2020s, the U.S. is producing more AI PhDs than any other nation and is host to a large share of the world’s top AI researchers, but ensuring broad AI literacy and inclusion in the workforce remains an ongoing effort.
Higher Education and Degree Expansion: In the last decade, U.S. universities have dramatically ramped up AI-related offerings. Over 300 institutions now offer specialized programs or degrees in AI or machine learning, often through computer science departments or newly formed interdisciplinary schools. Flagship programs exist at universities like Carnegie Mellon (which offers undergraduate and PhD degrees specifically in AI), Stanford (with its AI-focused tracks and the Human-Centered AI Institute), and MIT (which in 2019 founded the College of Computing largely to integrate AI across disciplines). Enrollment in computer science and data science programs has surged – many universities report computer science is now the most popular undergraduate major, fueled by interest in AI and machine learning. Graduate education has also expanded: the number of annual AI-related PhDs in the U.S. has more than doubled since 2015, topping well over 1,000 per year by 2020. According to the Stanford AI Index, U.S. universities confer a plurality of global AI doctorates, and these graduates are highly sought by industry and academia worldwide. Curriculum development has kept pace: universities offer not only technical courses (deep learning, NLP, computer vision) but increasingly courses on AI ethics, policy, and societal impact – aligning with the “AI with American Values” emphasis. The U.S. Department of Education has encouraged integration of AI into diverse fields (“AI + X” programs), supporting partnerships like AI in Medicine or AI for Business curricula. Federal grants (via NSF) help develop open courseware for AI and fund traineeship programs that give graduate students AI research experience in government or industry labs. Overall, the higher ed system is scaling up to meet demand, though industry’s voracious hiring sometimes means top students or faculty leave academia early for lucrative tech jobs, posing its own challenge for universities to retain teaching talent.
STEM in K-12 and Pre-professional Training: There is a growing movement in the U.S. to introduce computational thinking and AI concepts at earlier education stages to cultivate a domestic talent pipeline. Programs like CSforAll (promoted during the Obama administration) aimed to give every K-12 student exposure to coding; many school districts have since added computer science courses, and some high schools now offer introductory AI modules or clubs (e.g. using basic machine learning tools appropriate for teens). Nonprofits and companies sponsor AI competitions for youth – for example, FIRST Robotics competitions incorporate AI in their challenges, and the AI4K12 Initiative (backed by NSF and private partners) has developed guidelines for what AI knowledge should be taught in K-12. Additionally, educational technology organizations such as Khan Academy (which is experimenting with AI tutors) are bringing AI-driven personalized learning into classrooms, doubling as both a teaching tool and a way to spark student interest in AI itself. At the workforce level, community colleges and technical bootcamps have launched AI certificate programs targeting working adults looking to transition into data science or AI roles. For instance, Coursera and Udacity – American online learning platforms – offer popular nano-degrees in AI engineering and machine learning, often developed in collaboration with industry (Google’s TensorFlow developer certification, etc.). The government has recognized AI training as part of its broader workforce development – the Department of Labor’s Employment and Training Administration has funded apprenticeship programs in data analytics and AI, and included AI skills in its definitions for high-demand job training. By treating AI literacy as a basic skill for the 21st century, these efforts aim to prevent a divide where only a small elite can work with AI, instead preparing a wide swath of Americans for jobs augmented or created by AI.
Reskilling and Upskilling the Current Workforce: With automation poised to transform many jobs, U.S. policy emphasizes reskilling workers so they can complement AI tools rather than be displaced by them. The National Skills Coalition and other workforce groups have worked with the government on programs to teach digital skills to workers in industries like manufacturing, retail, and transportation that are adopting AI. For example, there are initiatives to train manufacturing workers in operating AI-enabled machinery and robotics, often through community college courses backed by the Department of Commerce’s Manufacturing Extension Partnership. The Internet of Things/AI job training program under the Workforce Innovation and Opportunity Act is one instance that subsidizes courses for mid-career workers to learn data analysis and AI basics. Tech companies have also made large-scale training pledges: IBM’s “SkillsBuild” and Microsoft’s Global Skills Initiative claim to have trained millions worldwide in AI and cloud skills, including significant numbers in the U.S. These typically offer free online modules in areas like AI fundamentals or specific tools (Azure ML, etc.), some culminating in certificates that employers recognize. Another avenue is the use of AI itself in training – personalized learning systems that can help workers acquire new skills more efficiently. The Department of Veterans Affairs, for instance, uses an AI-driven platform to help veterans learn new tech skills and match them with jobs. Still, challenges persist in reaching those with lower education levels or those in small businesses. To incentivize companies, a few states offer tax credits for employers that invest in approved worker training programs in emerging technologies (AI included). Moreover, labor unions in sectors like automotive and healthcare are negotiating for employer-funded retraining programs as part of contracts, acknowledging the impact AI will have on job roles (for example, training radiology technicians to work with AI diagnostic tools). These efforts reflect a social priority: to ensure the AI revolution creates opportunity broadly and mitigates unemployment or inequality.
Immigration and Global Talent Attraction: The United States has historically relied on international talent to sustain its leadership in advanced technology, and AI is no exception. A significant portion of AI researchers in American universities and companies are foreign-born, many arriving first as students. Policies to attract and retain this talent are therefore crucial. Programs like the H-1B visa for highly skilled workers have been instrumental – major tech firms each employ thousands of AI specialists on H-1B visas. However, the annual cap and lottery for H-1Bs, as well as more restrictive immigration policies around 2017–2020, have created uncertainty. In response, industry and academia lobbied for reforms: there are proposals for a new “STEM visa” that would automatically provide green cards to graduates with advanced STEM degrees from U.S. universities. While not yet law, the intent is to remove barriers for AI PhDs and engineers to remain in the U.S. permanently. The Biden Administration has taken some executive actions, for instance expanding the National Interest Waiver criteria to make it easier for those with AI expertise to get green cards without employer sponsorship. Additionally, the State Department launched the “Einstein Visa” (O-1) promotion campaign to encourage top AI scientists to apply as individuals of extraordinary ability. The U.S. also engages in talent outreach abroad: initiatives like the U.S.-India Initiative on Critical and Emerging Technology (iCET) include measures for smoother movement of experts between the countries (important as India is a major source of AI talent). Despite geopolitical tensions, attracting talent from China – which historically sent many top students to U.S. grad schools – remains a factor; although Chinese student visas in AI fields faced tighter scrutiny post-2018 due to espionage concerns, many Chinese AI scientists trained in the U.S. still choose to stay and contribute to American research. China’s Thousand Talents Program is something the U.S. watches warily; conversely, the U.S. seeks to out-compete it by being a more attractive environment for innovation and free inquiry. Indeed, surveys show a majority of international AI graduates would prefer to work in the U.S. if possible, drawn by the rich ecosystem of companies and research (and often by higher salaries). Maintaining this edge in talent attraction is seen as vital: the NSCAI’s 2021 report explicitly recommended easing immigration for AI experts as a national security imperative.
Brain Circulation and Retention: Beyond one-way immigration, the U.S. benefits from global “brain circulation” – many AI luminaries engage in cross-border collaboration yet maintain ties to U.S. institutions. The U.S. hosts numerous international conferences and visiting scholar programs, ensuring a continuous exchange of ideas. For example, American conferences often feature workshops sponsored by foreign tech companies or governments, but the intellectual milieu remains U.S.-centric. The Fulbright scholarship and newer programs like the Schmidt Science Fellowships bring young researchers from abroad to U.S. labs to work on AI projects. Retaining top talent is a growing concern as other countries build their own opportunities; in recent years, a few notable AI researchers have left the U.S. for openings in Europe or Canada due to immigration issues or better research funding. To counter this, U.S. universities and companies have offered creative arrangements such as remote roles or satellite offices. Google Research, for instance, established AI centers in Canada (Montreal, Toronto) largely to tap talent that might not get U.S. visas, while still integrating that work into its global efforts. Meanwhile, many foreign-born scientists and engineers in the U.S. eventually start companies, adding to the economy – examples include Google Brain co-founder Andrew Ng (originally from the UK and Hong Kong) and Zoom founder Eric Yuan (from China). The American approach thus far has been to emphasize openness: by being the place where anyone can come to do world-class AI research, the U.S. gains a disproportionate share of the best minds. Ensuring policies continue to reflect that openness (e.g. avoiding overly onerous export controls on researchers and maintaining a welcoming culture in universities) is seen as key to long-term talent dominance in AI.
Data, Compute & Digital Infrastructure
The foundations of AI competitiveness – large-scale data, computing power, and the underlying digital infrastructure – are areas where the U.S. has significant strengths but also faces strategic vulnerabilities. American technology companies have built vast data ecosystems and cloud computing networks that fuel AI development, while the government has promoted open data and invested in cutting-edge compute resources. At the same time, increasing concern about data privacy and security is shaping new governance frameworks, and reliance on foreign semiconductor manufacturing has prompted major federal intervention to localize the supply chain. The U.S. approach focuses on leveraging its rich data environment and advanced infrastructure, guided by laws and norms that protect individual rights, and on ensuring the hardware and software stack for AI remains under its control or influence.
Data Resources and Governance: U.S. companies like Google, Facebook (Meta), Amazon, and Microsoft hold some of the world’s largest troves of user data, which have been a competitive advantage in training AI models (e.g. vast text corpora for language models, images for vision models from platforms like Instagram and YouTube). Unlike China’s state-led data pooling, the American data landscape is largely private-sector driven, with tech firms accumulating data through consumer services and enterprise operations. However, the U.S. government has worked to make certain data more accessible for innovation. The OPEN Government Data Act (2018) requires federal agencies to publish their data in machine-readable formats, leading to portals like Data.gov which host thousands of public datasets (from census demographics to weather records) that AI developers can use freely. Additionally, agencies have curated domain-specific data for AI research: NIH created a database of anonymized medical images for AI diagnostics, DOT publishes detailed traffic datasets for autonomous vehicle development, etc. These efforts treat data as a public good to spur progress. The U.S. has not declared data a national property or instituted a national data exchange; instead, it relies on market transactions and open releases. On the governance front, personal data protection in the U.S. is addressed by a patchwork of sectoral laws (HIPAA for health, FERPA for education, etc.) and state statutes rather than a single national privacy law. California’s Consumer Privacy Act (CCPA), effective 2020, gives residents rights to access and delete personal data and opt out of sale, indirectly shaping AI practices by compelling companies to be more transparent. In the absence of comprehensive federal privacy legislation, the Federal Trade Commission (FTC) uses its authority to punish “unfair or deceptive” data practices, which has implications for AI – for instance, the FTC fined companies for misusing facial recognition data and has warned that selling biased AI algorithms trained on ill-gotten data could be illegal. Meanwhile, recognizing that data is fuel for AI, the U.S. is cautious about cross-border data flows. While generally promoting a free flow of information, it has restricted government use of foreign-owned platforms (e.g. bans on TikTok on federal devices due to data security concerns) and is negotiating frameworks with allies (like the EU-U.S. Data Privacy Framework) to ensure continued data exchange under privacy guarantees. Compared to China’s centralized approach (e.g. China’s National Data Administration), U.S. data governance is more decentralized and pluralistic, but a consensus is emerging that secure and privacy-preserving access to large datasets is crucial for AI innovation – leading to proposals for federated data networks or privacy-enhancing technologies to reconcile data needs with rights.
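As a concrete illustration of the open-data posture, the sketch below queries Data.gov's catalog programmatically. It assumes the catalog's standard CKAN search endpoint (Data.gov is built on CKAN) and uses an illustrative query term; it is a sketch of the access pattern, not an official client.

```python
# A minimal sketch of programmatic access to open federal data via the
# CKAN API that backs catalog.data.gov (endpoint and query shown here are
# assumptions for illustration). Requires `pip install requests`.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "traffic counts", "rows": 5},  # illustrative query
    timeout=30,
)
resp.raise_for_status()

for dataset in resp.json()["result"]["results"]:
    # Each CKAN "package" lists downloadable resources (CSV, JSON, APIs)
    # that can feed an AI training or analytics pipeline directly.
    print(dataset["title"], "-", len(dataset.get("resources", [])), "resources")
```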
Compute Power and Cloud Infrastructure: The United States leads in many aspects of AI compute infrastructure, thanks largely to its dominant cloud providers and supercomputing capabilities. Companies such as NVIDIA, AMD, and Intel (all U.S.-based or with major U.S. operations) design the majority of high-end AI chips, like GPUs and specialized AI accelerators, which are then used in data centers worldwide. The fastest supercomputers have often been U.S. machines: as of 2022, the Frontier supercomputer at Oak Ridge National Lab (using AMD CPUs and GPUs) achieved the world’s first exascale performance, a milestone for large-scale AI and simulation. U.S. cloud giants – Amazon Web Services (AWS), Microsoft Azure, Google Cloud – operate a massive global network of data centers that collectively offer exaflops of on-demand computing for AI training. These clouds have enabled American AI labs to train frontier models like GPT-3 and GPT-4 by providing virtually unlimited scaling. They also host government workloads; e.g. the Pentagon’s Joint Warfighting Cloud Capability (JWCC) contracts with these providers to ensure the military has access to top-tier cloud AI capabilities. Recognizing the strategic importance of compute, the U.S. launched initiatives to coordinate and expand resources. NSF-funded supercomputing centers grant academic researchers access to high-performance clusters (often at national labs or universities) specifically for AI projects, addressing the gap that many universities cannot afford the tens of millions of dollars for state-of-the-art GPUs on their own. Regionally, the U.S. does not direct computing tasks the way China’s “Eastern Data, Western Compute” program does, but private-sector trends have naturally led to major cloud regions in areas with cheap power and land (e.g. Microsoft and Google operate huge data centers in Iowa, Oregon, and Arizona – an organic parallel to China’s westward siting of compute). To improve access equity, the National AI Research Resource (NAIRR), now in a pilot phase, aims to federate existing academic supercomputers and cloud credits into a shared pool for researchers not affiliated with big industry labs. Security of compute is also a focus: the 2018 CLOUD Act asserts U.S. law enforcement rights over data in U.S.-controlled clouds even if stored abroad, and the 2023 Biden AI Executive Order called for the Commerce Department to ensure that leading AI compute power (especially cloud infrastructure) isn’t misused by adversaries – including possibly requiring cloud providers to verify foreign customers’ identities for national security reasons. On balance, the U.S. enjoys a robust compute advantage: by one estimate, in 2023 the U.S. had over 50% of global AI computing capacity when combining cloud and HPC facilities. The challenge is keeping it: there are concerns about energy consumption, supply chain resilience for data center hardware, and the need to continue innovating (e.g. in AI-specialized chips, where Google’s TPU and startups like Cerebras provide home-grown solutions). The CHIPS Act’s funding for semiconductor fabs (e.g. new plants in Arizona and Texas) is partly aimed at ensuring the U.S. can physically produce the advanced chips its AI computing relies on.
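To give a sense of the scale these facilities enable, the following back-of-envelope calculation uses the common rule of thumb that training a dense transformer costs roughly 6 × parameters × tokens floating-point operations. The GPT-3 figures are from the published paper; the 30% sustained-utilization factor is an illustrative assumption.

```python
# Back-of-envelope training cost, using the ~6 * params * tokens FLOP rule
# of thumb for dense transformers. GPT-3's sizes (175B parameters, ~300B
# training tokens) are published; the utilization factor is assumed.
params = 175e9
tokens = 300e9
train_flops = 6 * params * tokens        # ~3.15e23 FLOPs

machine_peak = 1e18                      # FLOP/s, an exascale-class system
utilization = 0.30                       # assumption: sustained fraction of peak

seconds = train_flops / (machine_peak * utilization)
print(f"{train_flops:.2e} FLOPs -> about {seconds / 86_400:.1f} days "
      f"on one exaflop machine at {utilization:.0%} utilization")
```

At these assumptions the run takes on the order of twelve days on a single exascale system, which is why on-demand cloud scale matters so much to frontier labs.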
Semiconductor Supply Chain and Sovereignty: The AI hardware stack – particularly cutting-edge chips – has been an area of strategic vulnerability acknowledged by U.S. policymakers. While American firms design the most advanced AI chips (NVIDIA’s GPUs are the industry gold standard), manufacturing of those advanced chips has long been outsourced primarily to TSMC in Taiwan and Samsung in South Korea. This concentration raised alarms, given geopolitical risks and the U.S.-China tech rivalry. In response, the CHIPS and Science Act (2022) earmarked $52 billion explicitly for domestic semiconductor manufacturing subsidies and R&D, aiming to boost onshore production of the sub-10nm chips crucial for AI. By 2024, ground was broken on new fab projects: TSMC is building plants in Arizona, Samsung in Texas, Intel expanding in Ohio and Arizona – all supported by federal and state incentives. This marks a significant shift towards “sovereign” compute capability, ensuring the U.S. and allies can fabricate the AI chips they design without potential chokepoints abroad. Concurrently, U.S. export controls introduced in late 2022 (and updated in 2023) restricted China’s access to top-tier AI chips and the equipment to make them. The rules specifically targeted GPUs like NVIDIA’s A100/H100, requiring licenses for export to China, which has slowed Chinese training of large models and underscored the U.S.’s intent to maintain a hardware edge. On software, the U.S. approach is more about open leadership than restriction: as mentioned, most foundational AI software platforms come from U.S. entities and are open source. However, there is a growing push for “secure” and “trustworthy” AI infrastructure. NIST, for example, promotes development of standards for robust AI systems and is examining the supply chain for AI components for vulnerabilities. The notion of “digital sovereignty” in the U.S. context means ensuring that from chips (physical) to cloud services to critical AI algorithms, the U.S. and its allies are not dependent on strategic competitors. This has led to increased R&D in alternative chip designs (neuromorphic computing research at Sandia Lab, photonic AI chips by DARPA programs) and to collaborations like the Quad (U.S.-Japan-Australia-India) working group on critical technology, which includes semiconductors and AI as areas for joint capacity building. While the U.S. is not replicating a centrally planned stack, it is aligning policies to secure each layer: financing fabs, supporting open-source software ecosystems (thus keeping global developers tied into U.S.-originated platforms), and safeguarding data centers against cyber threats.
Digital Infrastructure and Broadband: Underlying all AI progress is the general digital infrastructure – broadband networks, 5G, sensor networks – where the U.S. aims to stay competitive. The 2021 Infrastructure Investment and Jobs Act allocated significant funding ($65 billion) to expand high-speed internet access across the country, recognizing that widespread connectivity is needed to both generate data for AI and ensure all communities can benefit from AI services. For instance, rural broadband expansion allows farmers to use AI-driven precision agriculture tools (like sensor data analytics) and enables telemedicine AI solutions in remote areas. In 5G, U.S. carriers (Verizon, AT&T, T-Mobile) rolled out networks that, while behind some Asian countries in user adoption, provide the platform for IoT and edge-AI applications such as smart traffic systems or AR/VR. The U.S. government’s stance on telecom has also been security-focused – effectively banning Chinese 5G vendors like Huawei from U.S. networks – which by extension is aimed at ensuring the integrity of the infrastructure on which AI data travels. This shows a convergence of infrastructure policy with AI strategy: secure, modern networks are needed so AI can be deployed confidently in critical sectors (energy grid, transportation, defense). Additionally, the U.S. has invested in cloud infrastructure for government (e.g. FedRAMP authorized clouds) making sure agencies have modern environments to host AI solutions – for example, the Department of Veterans Affairs set up a cloud platform to host AI models that analyze medical records. In sum, the U.S. sees its rich data and compute environment as a key advantage, and is taking steps to fortify and broaden that foundation. By promoting open data, maintaining leadership in computing tech, and shoring up supply chains and networks, the U.S. is striving to secure the “fuel and engine” of the AI revolution on home turf.
Industrial Deployment & Tech Diffusion
The integration of AI into industry and government services is a priority of U.S. strategy, though pursued through enabling policies rather than central directives. AI adoption in the United States is largely driven by private sector innovation and market demand, with the government acting as facilitator and early adopter in specific domains. Across sectors – from manufacturing to healthcare to transportation – AI technologies are being deployed to improve efficiency, quality, and decision-making. Federal initiatives such as Manufacturing USA and sector-specific AI programs aim to diffuse advanced AI techniques (like robotics, predictive analytics) into traditional industries. The U.S. approach emphasizes creating the right conditions (R&D support, standards, regulatory clarity) for AI to flourish in each sector, while leveraging public procurement and pilot projects to demonstrate capabilities. By the mid-2020s, AI is increasingly embedded in American industrial processes and consumer services: factories use computer vision for quality control, farms use AI for precision spraying, banks deploy AI for fraud detection, and retailers rely on AI forecasting for inventory – all indicating broad tech diffusion.
Priority Sectors and Economic Modernization: The U.S. has identified certain sectors as critical for AI-driven transformation due to their economic importance or national security implications. Manufacturing is a key focus, aligning with the push to revitalize domestic industry. Through the Made in America initiatives and partnerships with industry, advanced AI-powered robotics and process controls are being introduced especially in automotive, aerospace, electronics, and chemical production. Factories increasingly use AI for predictive maintenance (anticipating equipment failures), generative design of components, and vision systems that inspect products for defects faster than human inspectors (a minimal sketch of the predictive-maintenance pattern follows this paragraph). The Manufacturing USA institutes (like ARM – Advanced Robotics for Manufacturing in Pittsburgh) bring together companies and universities to help small and mid-size manufacturers adopt AI and robotics, often with funding from the Department of Defense or Commerce. Healthcare is another priority: the FDA has fast-tracked approvals of AI-based medical devices such as imaging diagnostics for radiology and cardiology, and the Centers for Medicare & Medicaid Services are piloting AI for predictive patient care (e.g. identifying high-risk patients for early intervention). U.S. hospitals have begun deploying FDA-cleared AI tools for detecting cancers in scans or guiding surgery, improving outcomes and potentially reducing costs. Transportation sees heavy investment in AI, led by the private sector in autonomous vehicles (Waymo, Cruise, Tesla’s Autopilot) and by government in smart infrastructure. Over 40 U.S. states have authorized testing or operation of self-driving cars in some form, encouraged by federal DOT guidance that sets a permissive tone. In cities like Phoenix and San Francisco, autonomous taxis and delivery bots are moving from pilot to commercial stages. Simultaneously, AI is used in traffic management – e.g. Los Angeles’ ATSAC system optimizes traffic lights using real-time sensor data, and the Department of Transportation’s Integrated Corridor Management projects employ AI to smooth traffic across highways and arterials. Agriculture has seen AI-driven precision farming: American farms use AI-based analytics of satellite imagery and IoT sensor data to guide irrigation and fertilizer use, supported by USDA grants and startups like Climate Corp. And energy: utilities employ AI for grid management, forecasting demand, and integrating renewable energy (with DoE national labs helping develop these algorithms). These examples illustrate how AI is woven into the modernization of legacy industries, with government often playing a convening role through workshops, grants, and challenges to spur adoption. The 2021 National AI Initiative stopped short of China-style numerical targets, but the ethos is similar: a conviction that AI will be a major productivity driver across the economy.
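The predictive-maintenance pattern referenced above is straightforward to sketch: train an anomaly detector on sensor readings from healthy equipment, then flag deviations before failure. The snippet below is a minimal illustration with synthetic vibration data and illustrative parameters, not a production pipeline.

```python
# A minimal sketch of AI-based predictive maintenance: learn what "healthy"
# sensor readings look like, then flag anomalies for inspection. The data
# is synthetic and all parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(loc=1.0, scale=0.1, size=(500, 3))   # vibration x/y/z
incoming = rng.normal(loc=1.6, scale=0.3, size=(10, 3))   # a drifting bearing

detector = IsolationForest(contamination=0.02, random_state=0).fit(healthy)
flags = detector.predict(incoming)  # -1 marks a reading worth inspecting

print(f"{(flags == -1).sum()} of {len(incoming)} new readings flagged for maintenance")
```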
Government as Early Adopter and Catalyst: The U.S. federal government has increasingly turned to AI to improve its own operations, which in turn helps validate technologies for wider use. Agencies are deploying AI in areas ranging from citizen services to defense operations. For instance, the Internal Revenue Service (IRS) uses AI models to detect tax fraud patterns in filings, and the Social Security Administration is testing AI assistance to handle the huge volume of disability claims and appeals more quickly. These applications aim to increase efficiency and reduce backlogs in public services. The General Services Administration (GSA) established an AI Center of Excellence to help other agencies pilot AI solutions (like using chatbots for answering public inquiries or machine learning for procurement analytics). On the defense side, DoD’s Project Maven was an early flagship (started 2017) using AI to analyze drone surveillance footage, thereby augmenting intelligence analysts; despite initial controversy (Google’s involvement and subsequent withdrawal due to employee protests), Project Maven has continued and expanded with other contractors. The U.S. Army’s Project Convergence exercises have integrated AI for multi-domain operations, demonstrating things like real-time sensor fusion and targeting recommendations on the battlefield. Such military deployments act as proving grounds for AI under high-stakes conditions, potentially yielding spin-off benefits to commercial tech (for example, advances in computer vision robustness from Maven could translate to better civilian autonomous systems). The government also uses “prize challenges” to catalyze solutions – NASA’s Space Robotics Challenge invites teams to develop AI for autonomous rovers, and DOE’s Grid Optimization Competition pushes AI methods for power grid resilience. By participating as a customer, the government can accelerate the maturity of AI products: some U.S. cities, working with local integrators, have procured urban AI platforms for traffic and emergency-response management – loosely analogous to Alibaba’s City Brain in China, with the key difference that these are voluntary procurements rather than mandated programs. When the U.S. federal government or a big city chooses an AI solution, it often encourages similar adoption in other municipalities or by contractors. Thus, public sector demand (even if smaller than the private market) serves as a strategic catalyst, especially in areas where market incentives alone might under-invest (like AI for social services or environmental monitoring).
SME Inclusion and Democratizing AI: Recognizing that the vast majority of American businesses are small or medium-sized enterprises (SMEs), there’s an emphasis on not leaving them behind in the AI revolution. Many SMEs lack the expertise or resources to develop AI in-house. To address this, the Commerce Department’s NIST and Small Business Administration (SBA) have run outreach programs on how AI can benefit small business operations (from automating back-office tasks to enhancing e-commerce recommendations). The SBA has partnered with tech companies to offer AI training workshops for small business owners. Moreover, a network of Manufacturing Extension Partnership (MEP) centers across all 50 states provides hands-on assistance to small manufacturers, including guidance on adopting Industry 4.0 technologies like AI-driven quality control. The U.S. also promotes open-source AI tools as a leveling force: freely available models and libraries mean even startups or SMEs can implement AI without huge R&D budgets. A good example is Meta’s open release of its LLaMA 2 language model in 2023 (under a community license permitting most commercial use), which many small developers have used to build custom chatbots and business automation without needing to train a massive model from scratch. The government indirectly supports this through funding for open datasets and computing credits for startups via NSF’s small business programs. On the workforce side, training grants help SMEs upskill their employees to use AI software (for instance, an auto parts factory training its technicians to operate AI-based visual inspection systems). There are signs these efforts are working: surveys by McKinsey and others indicate AI adoption in at least one function has grown not only among large firms but also mid-sized ones (with global surveys showing ~25% of smaller firms using some AI by 2022, up from single digits a few years prior). However, disparities remain, often due to cost and talent gaps. The U.S. strategy thus includes lowering cost barriers – encouraging cloud providers to offer pay-as-you-go AI services and encouraging tech consultancies to package affordable AI solutions for different sectors (e.g. pre-trained models for retail inventory optimization that a small retailer can plug into their system). By spreading AI beyond the tech giants and unicorn startups to the long tail of businesses, the aim is to boost overall economic productivity and maintain broad-based competitiveness.
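Open weights matter for small firms because a working prototype requires only modest glue code. The sketch below, assuming access to a LLaMA-2-style chat checkpoint on the Hugging Face Hub (the model id is illustrative, and the official weights require accepting Meta's license), shows how a small team might wrap an open model into a simple question-answering assistant.

```python
# A minimal sketch of building on an openly released chat model. Assumes
# `pip install transformers accelerate` plus PyTorch, and that access to
# the gated LLaMA 2 weights has been granted; the model id is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # assumption: license accepted

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def answer(question: str, max_new_tokens: int = 200) -> str:
    # LLaMA-2 chat checkpoints expect an [INST] ... [/INST] prompt wrapper.
    prompt = f"[INST] {question} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(answer("Summarize our store's return policy in two sentences."))
```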
Demonstration Projects and Standards: Similar to China’s pilot zones, the U.S. has relied on demonstration projects and consortia to validate AI technologies in context and develop standards. For example, the Autonomous Vehicle Proving Grounds initiative (launched by U.S. DOT in 2017) selected 10 sites, including North Carolina and Michigan, to pilot autonomous car tech in varied conditions and share lessons learned with regulators. On the healthcare side, the AI Healthcare Incubator at HHS identifies promising AI tools and arranges trials in Medicare/Medicaid settings to test efficacy and safety on real patient populations – results from these trials then inform FDA approvals and insurance coverage decisions. The National Institute of Standards and Technology (NIST) plays a key role in developing technical standards and benchmarks that facilitate deployment. NIST’s Face Recognition Vendor Test (FRVT) is a global benchmark that annually measures the accuracy of facial recognition algorithms, including assessing demographic bias. These evaluations have pushed vendors to improve and have guided agencies like DHS in procuring the most accurate systems. In another example, NIST in 2019 released a plan for AI technical standards that emphasized industry-led development of benchmarks for accuracy, interpretability, and security of AI – supporting creation of standards so that, say, autonomous vehicles or medical AI devices meet consistent performance criteria. Such standards reduce uncertainty for adopters. The Institute of Electrical and Electronics Engineers (IEEE), largely driven by U.S. experts, has published recommended practice documents (like IEEE 7001-2021 on transparency of autonomous systems) which help industries self-regulate and assure customers/government of safety. All these measures – demonstration, support for SMEs, standards – aim to ensure AI technologies don’t remain confined to tech sector enclaves but diffuse widely and responsibly through the fabric of the U.S. economy. The prevailing philosophy is that the market, if properly nudged, will integrate AI wherever it yields value, thus the role of policy is to provide those nudges and guardrails rather than direct control.
Regulatory, Ethical & Safety Frameworks
The United States has been developing a multifaceted approach to AI governance that combines non-binding guidelines, sector-specific regulations, and emerging federal rules to address the ethical and safety implications of AI. In contrast to the more centralized and prescriptive regulations seen in the EU or China, the U.S. so far has favored a principles-based, light-touch regulatory stance – at least at the federal level – to avoid stifling innovation. However, the past few years saw significant progress: the issuance of the Blueprint for an AI Bill of Rights (a White House framework for protecting citizens in the AI age), the development of the NIST AI Risk Management Framework, and an expansive Executive Order on AI Safety in late 2023. These, alongside existing laws and agency actions, form one of the world’s most comprehensive albeit decentralized AI governance regimes. Emphasis is placed on transparency, fairness, accountability, and security in AI systems, aligning with democratic values. Additionally, professional norms and ethical principles – from both government and industry – play a major role in shaping AI development and deployment in the U.S.
Principles and Soft Law – AI Bill of Rights and Agency Guidance: In October 2022, the White House OSTP released the Blueprint for an AI Bill of Rights, which articulates five core protections Americans should have in the context of automated systems. These principles are: (1) Safe and Effective Systems (AI should be tested for safety); (2) Algorithmic Discrimination Protections (AI should not exacerbate bias); (3) Data Privacy (users should have agency over how data is used); (4) Notice and Explanation (people should know when AI is being used and understand outcomes); and (5) Human Alternatives, Consideration, and Fallback (the option to opt out to a human where appropriate). While this Bill of Rights is not a binding law, it serves as guidance for federal agencies and a statement of values for industry. Around the same time, federal agencies were directed (via OMB memo M-21-06 in late 2020 and an update in 2023) to ensure their own use of AI upholds these principles – for instance, the DOJ must ensure AI used in law enforcement doesn’t violate civil rights, and HUD must watch that AI in housing finance doesn’t discriminate. Agencies like the FTC, CFPB, and EEOC have jointly affirmed that existing laws (consumer protection, fair lending, equal employment) apply to AI outcomes. The FTC has been particularly vocal, warning companies against selling “biased or ineffective AI” and asserting it will prosecute deceptive claims about AI performance or uses that lead to harm (e.g. biased hiring algorithms violating employment laws). In January 2023, the NIST AI Risk Management Framework (RMF) 1.0 was published after extensive consultation. It provides a process for organizations to identify and mitigate risks in design, development, use, and evaluation of AI, emphasizing robustness, bias mitigation, and transparency. Though voluntary, it quickly gained traction as a de facto standard among U.S. companies and government agencies for internal AI governance. Collectively, these soft-law instruments push U.S. industry toward responsible AI by outlining clear expectations without rigid rules, reflecting a regulatory culture that prefers guidance and oversight to statutory mandates in these early days of AI.
Emerging Federal Regulations – Executive Order 2023 and Legislative Proposals: Recognizing the rapid advancements (especially generative AI’s rise in 2022–23), the Biden Administration moved to formalize some requirements. In October 2023, President Biden signed a sweeping Executive Order on Safe, Secure, and Trustworthy AI, the most significant U.S. action on AI governance to date. This EO, citing urgency due to AI’s fast pace, lays out new mandates: developers of the most powerful AI models (e.g. foundation models beyond a certain capability threshold) must share the results of red-team safety tests with the government before public release. It also instructs NIST to establish guidelines for AI watermarking to identify AI-generated content – an effort to combat deepfakes and misinformation. The order directs the Department of Commerce to advance a framework for certifying AI model safety and to work on rules ensuring cloud computing providers verify identities of foreign users (to prevent illicit AI use, as part of national security controls). Moreover, it tasks agencies across the board with new responsibilities: the FDA for evaluating AI in health, the DOT for autonomous vehicle safety standards, the Education Department for protecting against AI-enabled cheating and ensuring equitable use in schools. While an EO isn’t legislation, it effectively compels the federal bureaucracy to incorporate these AI governance measures. On Capitol Hill, there’s growing bipartisan interest in AI regulation. In 2023, Senate Majority Leader Chuck Schumer proposed a legislative framework contemplating licensing of highly capable AI models, liability for harms, and perhaps a new federal AI safety authority. Committees held hearings (notably Sam Altman of OpenAI testifying in May 2023) to explore regulatory ideas, ranging from transparency requirements for AI-generated content to mandating impact assessments for high-risk AI systems. The Algorithmic Accountability Act, a bill originally introduced in 2019 and updated in 2022, would direct the FTC to create regulations forcing companies to audit AI systems for bias and fairness; though not passed yet, it has influenced discourse and state-level laws. Indeed, some U.S. states have started to fill gaps: e.g. Illinois’ AI Video Interview Act (2020) requires employers to inform and get consent from job candidates for AI analysis in video interviews, and New York City’s Local Law 144 (effective 2023) mandates bias audits of automated hiring tools and candidate notification. These state actions could become templates for federal law or push companies to adopt practices nationwide to comply with the strictest locales. Overall, the U.S. regulatory approach is evolving from purely voluntary principles toward a mix of mandates for transparency, risk assessment, and safe development practices – especially for advanced AI capabilities – albeit in a piecemeal fashion reflecting the complexity of the regulatory system.
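Of the technical measures in the order, watermarking is the most concrete. One published approach (the "green list" scheme of Kirchenbauer et al., 2023) biases generation toward a pseudorandom subset of the vocabulary at each step, letting a detector test for that bias statistically. The toy sketch below illustrates only the detection side, with illustrative token handling and thresholds rather than any official NIST design.

```python
# A toy sketch of statistical watermark *detection*, loosely following the
# "green list" idea of Kirchenbauer et al. (2023). Vocabulary handling and
# the decision threshold are illustrative assumptions.
import hashlib
import math

GREEN_FRACTION = 0.5  # assumption: half the vocabulary is "green" per step

def is_green(prev_token: int, token: int) -> bool:
    # Pseudorandomly assign each token to a green list seeded by the
    # previous token, so a detector can recompute it without the model.
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[int]) -> float:
    # Unwatermarked text lands on the green list ~GREEN_FRACTION of the
    # time by chance; a watermarking generator pushes that rate up.
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)

# A z-score of roughly 4 or more over a few hundred tokens would be strong
# evidence that the text came from a watermarked generator.
```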
Ethical Frameworks and Industry Self-Regulation: Long before any government action, American tech companies and research organizations established their own AI ethics principles, which have contributed to an industry norm of considering issues like bias, privacy, and human oversight. Google’s AI Principles (announced in 2018 after internal backlash over a military contract) famously declare the company will not design AI for weapons or surveillance that violates human rights, and emphasize principles of social benefit, avoidance of unfair bias, and accountability. Likewise, Microsoft, IBM, Facebook, and others published ethical guidelines and set up internal review processes for sensitive AI projects. Many firms created AI ethics boards (though some, like Google’s, have seen controversy and turnover) and employed bias bounties or red teams to find flaws in AI products. The Partnership on AI, co-founded in 2016 by companies including Amazon, Google, Facebook, IBM, and Microsoft alongside NGOs like the ACLU, has produced best practice reports on topics like AI explainability and workers’ rights in automated workplaces. It serves as a multi-stakeholder forum to discuss AI impacts openly. Professional societies contribute as well: the Association for Computing Machinery (ACM) and IEEE have codes of ethics for computing professionals that cover AI (e.g., an obligation to avoid unjust impacts). The U.S. government has encouraged this self-regulation culture – for instance, the 2020 White House guidance on AI regulation explicitly said agencies should avoid overreach and trust voluntary frameworks unless there’s clear evidence of harm. However, with rising public concern, voluntary efforts are increasingly supplemented by public commitments. In July 2023, the White House announced it had secured voluntary commitments from seven leading AI companies (OpenAI, Google, Meta, Amazon, Microsoft, Anthropic, and Inflection) to implement measures like external security testing of AI models, sharing information on AI risks with governments and academia, and developing watermarking for AI-generated content. These commitments, though not legally binding, were made public to hold companies accountable and were a prelude to more formal regulation. They align with the administration’s eight AI Governance Principles outlined in the 2023 executive order – such as ensuring AI is safe, secure, fosters competition, protects workers, respects civil rights, and advances equity. In effect, the U.S. is crafting a hybrid governance model: corporate self-governance and ethical design, guided and reinforced by government standards and oversight. This reflects a belief that those developing AI have a responsibility to mitigate harms from the outset, while external checks ensure they do so.
Addressing High-Risk AI: Content, Safety, and Liability: As AI systems become more powerful and ubiquitous, specific concerns have drawn regulatory attention. Deepfakes and AI-generated content are one area: besides the planned watermarks via the EO, some states have outlawed malicious deepfakes (e.g., California and Texas ban deepfake videos intended to influence elections within a certain period of voting). At the federal level, Congress is also considering criminal penalties for using AI in fraud or election interference – for instance, a bipartisan bill introduced in 2023 would penalize distribution of fake AI media impersonating federal candidates. Autonomous vehicles and drones: The U.S. has taken a flexible approach, issuing guidelines rather than strict rules for AVs, but requiring safety assessments. The National Highway Traffic Safety Administration (NHTSA) updated its policies to clarify that automated driving systems still fall under its recall authority if defective, and in 2022 it issued a standing order mandating that companies report any crashes of vehicles with automated driving systems – an accountability measure that is building a safety database. For lethal AI systems (like autonomous weapons), while the U.S. military presses forward (as discussed in section 10), on the civilian regulatory side the U.S. has no outright bans but is engaging in international talks. Liability for AI-caused harms in the U.S. currently falls back on existing tort law – e.g. if an AI-powered medical device causes injury, the manufacturer can be sued for product liability. But there is active debate over whether new frameworks are needed, especially if AI systems act in less predictable ways. Several proposals suggest establishing clear developer or deployer liability for AI systems above a certain risk threshold, to ensure companies can’t evade responsibility by saying “the AI made the decision.” The EU’s approach in its AI Act and updated Product Liability Directive is being studied in the U.S., but any similar move would likely require congressional legislation, which may come after further high-profile incidents that galvanize consensus. As of mid-2025, the U.S. regulatory, ethical, and safety framework for AI is an evolving mosaic: guiding principles and voluntary commitments set the tone, agencies are beginning to assert regulatory authority where they can, and new rules are emerging for areas where self-regulation proves insufficient. The trajectory is toward greater accountability and transparency in AI, executed in a typically American way – balancing innovation and regulation, involving multi-stakeholder input, and iterating on policies as technology advances.
9. International Engagement & Standards
As a global leader in AI, the United States actively engages in shaping international norms, partnerships, and technical standards for artificial intelligence. U.S. diplomacy on AI operates on two main fronts: upholding democratic values in AI governance globally (often in concert with allies) and maintaining strategic leadership in setting the technical rules of the road. Unlike China’s centralized export of AI infrastructure or the EU’s regulatory power, the U.S. leverages its influence through multilateral forums, alliances, and its preeminent technology industry. In recent years, the U.S. has championed initiatives to promote trustworthy AI worldwide, counter authoritarian uses of AI, and ensure an open innovation environment. Simultaneously, American experts and companies play an outsized role in international standardization bodies, which determine interoperability and safety benchmarks for AI. Through the Global Partnership on AI, the OECD, the G7, and other venues, the United States pushes for an AI future that aligns with liberal democratic values and market-driven innovation, while also addressing shared risks.
Multilateral Leadership and Alliance Coordination: The United States has positioned itself as a leading voice for international cooperation on AI grounded in human rights and democratic principles. In 2019, it joined over 40 countries in adopting the OECD AI Principles, which were the first intergovernmental standards on AI – emphasizing inclusive growth, human-centered values, transparency, robustness, and accountability. These principles later underpinned the G20 AI Principles, signaling broad consensus. To operationalize such cooperation, the U.S. became a founding member of the Global Partnership on Artificial Intelligence (GPAI) in 2020 (initially a French-Canadian initiative). GPAI is a multinational forum that evaluates AI opportunities and challenges, from bias to COVID-19 response, and the U.S. co-chairs some working groups, lending its experts from NIST, NSF, and academia. In 2023, during its G7 presidency, Japan (with strong U.S. support) launched the “Hiroshima AI Process,” a framework for the G7 to lead on setting voluntary codes of conduct for advanced AI systems. The U.S. has been closely involved, advocating that industry implement immediate safeguards ahead of formal regulation. U.S.-EU collaboration is particularly significant: through the Trade and Technology Council (TTC), launched in 2021, the transatlantic allies are aligning on AI terminology, risk classification, and standards development to ensure regulatory compatibility across the Atlantic. In late 2023, the U.S. and EU announced they were jointly drafting a voluntary Code of Conduct for AI to bridge the gap until the EU’s AI Act takes effect, focusing on issues like testing and transparency for generative AI. This move is partly to set a global benchmark that can be taken to the OECD or UN. The U.S. has also integrated AI into security alliances: NATO’s AI Strategy (adopted 2021) was strongly influenced by U.S. input, embedding principles of lawful use and interoperability among allies’ AI systems. At the United Nations, the U.S. has cautiously engaged in discussions on global AI governance – supporting the idea of a UN focal point on AI (as suggested by the UN Secretary-General) but pushing back against any notion of broad bans or restrictive treaties that might hinder innovation. Instead, the U.S. endorses “responsible AI use” declarations, such as the political declaration on responsible military AI launched at the REAIM (Responsible AI in the Military Domain) summit in February 2023 and endorsed by some 60 countries including the U.S., which outlines best practices for military AI and autonomy. Through these channels, the U.S. seeks to shape a global environment where AI develops in line with open society values and where its own companies can compete on a level playing field internationally.
Technical Standards and Rule-Making: American entities – both government agencies like NIST and private companies – are deeply involved in the development of international AI standards that will govern interoperability, safety, and quality. The U.S. approach to standards is private-sector led but with government support. NIST has a mandate (per the AI Initiative Act) to engage in and lead international standards efforts. As such, NIST personnel and U.S. industry experts chair or co-chair important committees in bodies like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) Joint Technical Committee 1, Subcommittee 42 on Artificial Intelligence (ISO/IEC JTC 1/SC 42). This committee is developing foundational standards on AI terminology, reference architecture, trustworthiness, bias in AI systems, etc. U.S. contributions have been key in drafts on risk management and bias mitigation, ensuring they align with frameworks like NIST’s RMF. Additionally, the Institute of Electrical and Electronics Engineers (IEEE), based in the U.S., has launched multiple standards projects (IEEE 7000-series) on AI ethics and governance – U.S. academics and tech companies are heavily represented in those working groups. In the International Telecommunication Union (ITU), which has seen proposals from China on facial recognition and surveillance standards, the U.S. often pushes back through its delegations, arguing for caution and privacy safeguards. For instance, when a Chinese proposal for a traffic surveillance AI standard was debated, U.S. participants raised concerns about human rights implications, contributing to those standards being revisited. The U.S. also promotes pro-innovation standards – essentially making sure that technical requirements don’t create unnecessary trade barriers for its companies. One example: the U.S. advocated within ISO for a standard on AI lifecycle governance that is flexible and principle-based rather than a prescriptive checklist, aligning with how U.S. firms implement AI ethics. Coordination with allies is strong here as well: the U.S., EU, Japan, and others have formed ad hoc groups (like under the TTC) to present united fronts in standard meetings, often to counteract standards proposals that might encode authoritarian practices or give advantage to non-market economies. It’s a subtle but important aspect of tech competition – who writes the rules can shape future market dynamics. With its wealth of technical talent, the U.S. still wields great influence: a Stanford study in 2022 found that about 55% of contributions to AI standards globally came from U.S.-affiliated experts, far more than any other country. Maintaining this influence is a strategic aim, which is why the U.S. government actively encourages companies and consortia (like the Open Source ML Consortium) to participate internationally. Ultimately, by steering the technical standards, the U.S. helps ensure that AI systems worldwide embody interoperability, safety, and respect for users in ways consistent with Western norms – and that U.S. products and services can seamlessly integrate and be trusted in global markets.
Bilateral AI Cooperation and Global South Engagement: The United States also pursues AI diplomacy through bilateral relationships and support for AI capacity in developing nations. With close allies like the UK, Canada, Australia, and Japan, the U.S. has deepened AI cooperation via information sharing agreements and joint research. A notable example is the U.S.-UK Declaration on Cooperation in AI R&D signed in 2021, which facilitates collaborative research projects and reciprocal access to testbeds (e.g., sharing computing resources between national labs). The U.S. and Canada, both members of the Five Eyes intelligence alliance, regularly coordinate on AI-enabled defense and cybersecurity tools, aligning ethics as well (Canada co-founded GPAI and mirrors many U.S. principles). Israel is another partner with which the U.S. has MOUs on AI in defense and health. Beyond allies, the U.S. is increasing engagement with the Global South to counterbalance China’s Digital Silk Road influence. Through USAID and State Department programs, the U.S. funds AI for social good projects in Africa, Latin America, and Southeast Asia – such as AI systems for agricultural yield prediction in Ethiopia or wildlife conservation in Nepal. These projects often involve American tech companies providing technology at low cost and training local data scientists. In 2023, the State Department launched an “AI Capacity-Building Initiative” offering scholarships and training for students from developing countries to study AI in the U.S., aiming to foster a network of AI professionals with ties to the U.S. (a softer mirror to China’s talent programs). Moreover, the U.S. Commerce Department’s Commercial Service helps promote U.S. AI firms abroad and advises other governments on creating pro-innovation regulatory environments – effectively exporting the U.S. approach to AI governance through advocacy in trade talks and development forums. For instance, the U.S. has encouraged Indo-Pacific countries to sign onto the Blue Dot Network (a high-standards infrastructure initiative), which includes digital infrastructure elements; implicitly this nudges them towards trusted vendors (often American or allied) for AI-related infrastructure like smart city tech, rather than lower-cost Chinese surveillance solutions. In multilateral development banks (World Bank, etc.), the U.S. pushes for AI investments that come with strong governance components. This strategy is about soft power in AI: ensuring that as countries adopt AI for development, they do so with tools and values compatible with the open, rights-respecting vision the U.S. supports, rather than a techno-authoritarian model.
In summary, internationally the U.S. acts to write the rules and norms of AI before others do: through principles like those at the OECD and GPAI, through technical standards at ISO/IEC and ITU, and through alliance coordination that amplifies its vision. It is a complex effort requiring diplomatic nuance, but it is seen as critical for both moral and competitive reasons – as one NSC official put it in 2023, “The AI competition is also a values competition.” By rallying like-minded nations and engaging global institutions, the U.S. aims to ensure AI strengthens free societies and global stability, while keeping the field favorable to American innovation and enterprise.
10. Defence, Security & Dual-Use Considerations
AI is a transformative technology for national defense and security, and the United States has moved aggressively to integrate AI into its military and intelligence operations. The U.S. defense strategy views AI as essential to maintaining a competitive edge against near-peer adversaries (chiefly China and Russia) in what officials often term the coming era of “algorithmic warfare.” Efforts are underway to apply AI in areas such as intelligence analysis, autonomous vehicles and weapons, cyber operations, logistics, and command decision-support. The Pentagon’s approach, however, is tempered by an explicit commitment to ethics and lawful use – for example, requiring human judgment in the use of lethal force – reflecting both operational trust needs and democratic values. At the same time, the U.S. is enacting controls to prevent its advanced AI technologies from enhancing adversary militaries, using export regulations and investment screening. Internationally, the U.S. participates in discussions on norms for military AI but has opposed outright bans on autonomous weapons, signaling it will pursue AI-enabled capabilities while advocating for responsible use. In the broader security landscape, AI is also central to countering cyber threats, securing critical infrastructure, and filtering misinformation, all of which are priorities for U.S. national security agencies.
Military Integration and the JAIC/CDAO: The U.S. Department of Defense (DoD) outlined its vision in the 2018 DoD AI Strategy, aiming to accelerate AI adoption to transform the military over the next decade. A key institutional step was the creation of the Joint Artificial Intelligence Center (JAIC) in 2018, a dedicated body to coordinate AI efforts across services. Headquartered under the DoD CIO, the JAIC funded and developed AI prototypes in areas like predictive maintenance (identifying when military aircraft need repairs via AI), humanitarian aid/disaster relief (AI for mapping damage), and business process automation in the Pentagon. In 2022, the DoD elevated and consolidated these functions into the new Chief Digital and AI Office (CDAO), reflecting the importance of AI at top leadership levels. The CDAO now oversees data, analytics, and AI, breaking down service-by-service silos to adopt AI at scale. Under the concept of “Joint All-Domain Command and Control (JADC2)”, AI algorithms fuse sensor data from Air Force, Army, Navy, etc., to create a real-time battlefield picture and recommend actions – early versions of this were demonstrated in the Project Convergence and Global Information Dominance Experiments. For instance, in a Project Convergence test, an AI system cut the time to target identification and fire authorization from minutes to seconds by automating threat recognition and suggesting optimal responses (with a human still approving the shot). The Navy has trialed AI for autonomous ships and decision aids for fleet operations (the Sea Hunter drone ship being a prototype). The Air Force is working on “loyal wingman” drones that use AI to fly alongside manned jets to extend sensing and strike capabilities. The Army’s recent experiments include AI-driven battle management systems that coordinate artillery, drones, and troops far faster than manual methods. Notably, AI is also permeating logistics and personnel: predictive supply algorithms ensure troops get parts and food more efficiently; recruiting services use AI to find candidates; and AI tutors help train personnel at scale in technical tasks. The underlying aim is that by 2025, the foundations for widespread AI integration in DoD should be in place – a goal set by the NSCAI – and indeed the DoD has begun fielding some systems while planning many more.
Ethical Guardrails and Doctrine: The U.S. military’s pursuit of AI is accompanied by formal ethical guidelines to ensure alignment with international law and American values. In February 2020, the DoD adopted AI Ethical Principles (recommended by its Defense Innovation Board) which include: Responsible (humans have responsibility), Equitable (avoid bias), Traceable (AI decisions should be understandable), Reliable (safety, security, robustness), and Governable (ability to disengage or deactivate if unintended behavior occurs). These principles were among the first of their kind and signal to operators and developers what constraints to uphold. A practical outgrowth is that any AI project in DoD must undergo an Algorithmic Warfare Cross-Functional Team review or similar process to assess these factors. Particularly sensitive is the use of AI in lethal force. The U.S. has stated that autonomous weapons will still have human oversight: DoD Directive 3000.09 (updated as of 2023) requires a human operator or commander to exercise appropriate levels of judgment over any autonomous weapon system that can select and engage targets. This falls under the concept of “meaningful human control”, which the U.S. asserts is necessary even as it experiments with high levels of autonomy. In wargames and exercises, the U.S. is testing how to maintain human command authority when AI systems offer recommendations at machine speed – e.g., a human may set parameters for an AI defense system but allow it to act within those until the human intervenes (as in some missile defense setups). The U.S. also trained its military lawyers (JAGs) to evaluate AI under the Law of Armed Conflict, ensuring things like distinction and proportionality are not violated by AI-supported operations. By publicly committing to ethical use, the DoD aims to build trust internally (e.g., so personnel are comfortable using AI) and externally (to assure allies and deter adversaries that the U.S. will use AI responsibly). Deputy Secretary of Defense Kathleen Hicks reiterated in 2024 that “our AI is more resilient and effective than ever” due to these safeguards, and that values “set us apart from our strategic competitors”. Nonetheless, the U.S. is keenly aware that rivals may not observe similar restraints, which influences the need to push forward with military AI so as not to be outpaced.
AI in Intelligence and Cybersecurity: Beyond the battlefield, AI is revolutionizing intelligence gathering and cybersecurity – domains where the U.S. has significant investments. The intelligence community (IC) uses AI to process the deluge of data from satellites, signals, and open sources. The Augmenting Intelligence with Machines (AIM) Initiative of the Office of the Director of National Intelligence (ODNI) set goals for all 17 IC agencies to incorporate AI tools. For example, the National Geospatial-Intelligence Agency employs computer vision algorithms to flag changes in satellite imagery (like identifying new missile sites or natural disaster impacts) far faster than human analysts scanning images. The CIA and NSA use natural language processing to sift through foreign communications and social media in multiple languages, finding relevant patterns or threats. AI-based anomaly detection helps identify cyber-intrusions and insider threats amidst huge network activity logs – an approach the NSA has openly discussed to protect defense networks. In cybersecurity, the U.S. Cyber Command and Department of Homeland Security rely on AI for both defense and offense: AI models can recognize novel malware or predict likely cyberattacks to preempt them, and conversely can probe adversary networks for vulnerabilities more efficiently. As noted in the NSCAI report, “We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities” – this drives initiatives like the DoD’s Project Thor (an AI to hunt threats inside its networks) and DHS’s implementation of AI in monitoring critical infrastructure (power grids, pipelines) for sabotage indicators. Another security use is misinformation monitoring: the State Department’s Global Engagement Center employs AI to analyze social media for malign influence campaigns, seeking to counter propaganda bots and deepfake content, especially around elections. Given the 2024 U.S. election concerns, agencies are on high alert, deploying AI tools to detect fake personas or doctored videos being spread by foreign actors. These intelligence and security uses of AI are dual-edged – they protect national security, but if misused, could infringe on civil liberties. As such, oversight mechanisms like inspector general reviews and congressional committees keep an eye on domestic-facing AI deployments (for example, ensuring the FBI’s use of facial recognition or predictive policing is legally compliant – the FBI was admonished by GAO in 2021 to implement more oversight for its face recognition systems). The U.S. thus attempts to balance leveraging AI’s power in intel and security with adherence to privacy and constitutional rights at home.
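As a generic illustration of the anomaly-detection approach described above (not any agency’s actual tooling), the sketch below flags unusual network-activity records using scikit-learn’s IsolationForest over synthetic log features.

```python
# Minimal, generic sketch of AI-based anomaly detection over network-activity
# features. Synthetic data and scikit-learn's IsolationForest stand in for any
# agency's actual (non-public) systems.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline traffic: ~500 bytes/s and ~40 connections/min per host.
normal = rng.normal(loc=[500, 40], scale=[50, 5], size=(1000, 2))
# Two abnormal bursts resembling exfiltration or scanning behavior.
intrusion = np.array([[5000, 400], [4800, 350]])
X = np.vstack([normal, intrusion])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)            # +1 = looks normal, -1 = anomalous
print(f"{(labels == -1).sum()} flagged events out of {len(X)}")
```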
Export Controls and Tech Protection: To prevent U.S. AI advantages from aiding rival militaries or oppressive regimes, the government has tightened export controls on critical AI technologies. A landmark action came in October 2022, when the Commerce Department’s Bureau of Industry and Security (BIS) issued rules broadly restricting the export of high-end AI chips (like NVIDIA’s A100/H100 and similarly capable GPUs) and semiconductor manufacturing equipment to China. This was explicitly to slow China’s progress in training advanced AI models that could have military applications (e.g. AI for autonomous drones or cyber warfare). BIS also added Chinese AI firms such as SenseTime, Megvii, iFlytek, and drone-maker DJI to the Entity List starting in 2019, meaning U.S. companies need a license to sell them technology due to human rights or security concerns. The result: Chinese firms have been cut off from some U.S. software and have struggled to access state-of-the-art chips and tools. Export limits have even shaped chip design – for instance, NVIDIA introduced the downgraded A800 specifically to comply with the export thresholds. Additionally, the U.S. expanded the definition of “emerging technologies” under export law to include certain AI software: a notable case was geospatial imagery AI software, which was briefly controlled in 2020 after it was realized such software could enhance military targeting. In 2023, discussions arose about possibly controlling frontier AI models themselves (if weights are exported, etc.), but currently, the focus is on hardware and key enterprise software. On the investment side, the U.S. is finalizing a mechanism to screen outbound investments in sensitive tech to China (often called “reverse CFIUS”), likely covering advanced AI startups – to stop U.S. capital and expertise from accelerating Chinese AI in military-relevant areas. Conversely, inbound restrictions exist too: CFIUS (Committee on Foreign Investment in the U.S.) has intervened in Chinese attempts to acquire U.S. AI or semiconductor firms on national security grounds. For example, a Chinese-backed fund was blocked from buying a stake in a U.S. AI chipmaker, and broader rules now mandate disclosure of any foreign (particularly Chinese) participation in critical tech companies. These actions, while sometimes contentious with industry due to market impacts, underscore that the U.S. sees cutting-edge AI hardware and algorithms as strategic assets – not to be freely shared when national security is at stake. That being said, the U.S. does share a lot with allies under agreements – it has technology sharing arrangements through NATO and bilateral deals like AUKUS (with the UK and Australia, which includes cooperation on AI for defense). Ultimately, tech protection measures aim to ensure the U.S. and its allies retain superiority in the AI tools that shape military power.
International Norms and Arms Control: On the global stage, the U.S. takes a nuanced position on AI in warfare. It has opposed a binding ban on Lethal Autonomous Weapon Systems (LAWS) at the United Nations, reasoning that a broad prohibition would be premature and potentially unenforceable. The U.S. argues that well-employed AI can reduce collateral damage (by improving precision) and that human oversight – rather than a ban – is the appropriate safeguard. U.S. diplomats in UN CCW (Convention on Certain Conventional Weapons) meetings have supported non-binding norms, like a political declaration on responsible military AI use (which, as mentioned, it endorsed in The Hague in 2023 with dozens of countries). This declaration, while voluntary, establishes that signatories will maintain human responsibility for AI-based decisions in warfare and ensure such systems have safety mechanisms. The U.S. likely sees this as balancing moral concerns with preserving flexibility to develop systems – in contrast, some nations and NGOs pushing for a ban see it as insufficient. Meanwhile, to reduce escalation risks, the U.S. has begun strategic stability talks with other major powers focusing on AI. For example, the U.S. and Russia held preliminary discussions in 2020–21 about the impact of AI on nuclear deterrence (given concerns that AI could upset second-strike capabilities or early warning systems). With China, direct military AI dialogues are limited due to broader tensions, but academics and former officials have engaged in Track 2 dialogues on avoiding accidental conflict triggered by autonomous systems. The Pentagon also maintains an internal policy prohibiting AI from nuclear command and control – humans remain firmly in control of nuclear decisions, a point U.S. officials often emphasize publicly to avoid miscalculation. Additionally, the U.S. supports confidence-building measures like sharing information on AI military exercises with others to foster transparency. Looking forward, the U.S. position might evolve as AI tech and global pressure develop. But as of mid-2025, the stance is essentially: lead in military AI, set the ethical example, block adversaries from exploiting U.S. tech, and engage internationally to manage risks without hampering U.S. capabilities. It’s a delicate balance, reflective of AI’s promise and peril in national security.
11. Performance Metrics & Monitoring
The United States closely monitors various indicators of AI progress and impact to inform policy and measure success. Unlike centrally planned economies with formal AI targets, the U.S. does not have a single unified dashboard of AI metrics tied to a national plan; however, a combination of government reports, independent indexes, and agency evaluations provide a comprehensive picture of AI performance. Key metrics include R&D outputs (publications, patents), talent development (degrees granted, skilled immigration), commercial activity (startup funding, market adoption), and federal implementation milestones. These metrics serve multiple purposes: assessing U.S. competitiveness globally, identifying gaps or bottlenecks (e.g. talent shortages, R&D underinvestment), and ensuring accountability for public initiatives. The culture of transparency means much of this data is publicly accessible, often compiled by research institutions like Stanford University’s AI Index or think-tanks, which policymakers reference. Congress and the Executive Branch rely on such monitoring to adjust funding and strategy – for instance, using metrics on AI patent leadership or private investment to justify increased federal R&D, or using studies on algorithmic bias to justify regulatory interventions. Overall, the U.S. employs a data-driven, iterative approach to its AI strategy, tracking performance and outcomes to remain agile in a fast-changing field.
Innovation Output and Industry Health: A primary set of metrics revolves around the quantity and quality of AI innovation. The Stanford AI Index Report, produced annually, is a key resource that U.S. officials and industry leaders consult. For example, the 2025 AI Index notes that in 2024, U.S.-based institutions produced 40 out of the top 100 notable AI models, compared to China’s 15 and Europe’s 3. It also highlights that while China leads in sheer number of AI publications, the U.S. retains an edge in highly cited conference papers and repository citations, though the gap is narrowing. This suggests U.S. research is still slightly more influential on average – a metric that influences funding decisions for fundamental research. Patent filings in AI-related fields are tracked via the U.S. Patent and Trademark Office and WIPO; as of the latest counts, U.S. entities hold among the highest numbers of AI patent grants globally, second to, or possibly overtaking, China in some categories. These numbers are seen as proxies for innovation vigor and intellectual property generation. The government also tracks economic output metrics: the Department of Commerce has started estimating the contribution of AI technology to GDP growth and productivity. By 2023, the Bureau of Economic Analysis had experimental stats showing AI-intensive industries (like software, finance, and advanced manufacturing) growing faster than the economy average, indicating an AI-driven productivity uptick. Another critical metric is private sector investment: as noted before, U.S. private AI investment was about $67 billion in 2023, dwarfing other countries. Monitoring this informs whether policy is encouraging a healthy AI startup ecosystem or if more incentives are needed. Indeed, venture capital and startup formation rates in AI are reported by groups like the National Venture Capital Association and feed into White House economic analyses. If, for instance, VC investment were to dip significantly, it might prompt inquiries or policy responses (like boosting SBIR funding or easing regulations) to ensure the U.S. remains the most attractive place to build AI companies.
Talent and Workforce Metrics: The U.S. measures its talent pipeline through educational and immigration data. The Department of Education reports on degrees awarded in AI-related fields (often using proxies like computer science, data science, and statistics degrees). A significant metric celebrated in the Stanford AI Index is that the U.S. awarded more than twice as many AI/CS doctoral degrees in 2022 as a decade prior and that around 60% of top-tier AI researchers (by citation) work in the U.S. Retention of PhD graduates is also tracked: surveys indicate a large majority of foreign AI PhDs from U.S. universities choose to stay in the country after graduation, a sign of success for immigration retention policies. The National AI Initiative Office publishes an annual report to Congress (per the NAII Act) which in 2023 noted the number of AI-related workers in the U.S. economy and shortages in certain sectors (for example, an estimate of 250,000 unfilled data science jobs). This spurs funding for training programs to close gaps. The Global AI Talent Ranking by Tortoise Media (another independent index) has consistently placed the U.S. at or near the top in terms of concentration of top talent and talent attractiveness, something American officials cite to argue that despite competition, the U.S. still draws the best minds. That said, metrics have revealed concerns: for instance, the AI Index 2023 showed the proportion of U.S. AI PhDs going to industry directly has grown to over 65%, potentially reducing the pool for academia. This could impact long-term fundamental research, prompting NSF to create more fellowships to lure PhDs into teaching and research careers. Diversity in AI is another monitored metric: reports often show underrepresentation of women and certain minorities in the AI workforce (e.g., <20% of AI professors are female). These figures are used to bolster calls for inclusive educational initiatives and bias mitigation in hiring.
Adoption and Societal Impact Measures: To gauge how widely AI is being adopted and its effects, multiple surveys and studies are used. McKinsey’s Global AI Adoption Survey is one industry standard; in 2022 it found that about 50% of U.S. companies had adopted AI in at least one business function, up from 20% in 2017. The government doesn’t run this exact survey, but agencies like the Census Bureau have started including AI-related questions in their business pulse surveys. Early results show usage especially in larger firms; this guides SME outreach policies. Public opinion metrics are also crucial: Pew Research in late 2022 found 52% of Americans are more concerned than excited about AI in daily life, and Gallup in 2023 found 77% of Americans don’t trust companies to use AI responsibly. These stats are watched by the White House Office of Science & Tech Policy and have directly influenced their communications strategy and policy priorities (for example, those concerns helped justify the AI Bill of Rights to address bias and transparency). The government also monitors incidents and failures – there’s no formal national incident database yet (though NIST is researching the idea of an AI incident repository), but high-profile cases like accidents with self-driving cars or discriminatory algorithm outcomes are tracked to see if interventions are needed. For example, after a widely reported incident where an AI recruitment tool was biased against women, the EEOC launched an initiative in 2021 to issue guidance on AI in hiring and has since collected data on algorithmic bias complaints. If such complaints rise, it’s a metric of potential harm requiring more oversight. On the positive side, performance metrics for AI solving societal challenges are noted: how many FDA-approved AI diagnostic tools are now in use, how much faster FEMA can respond to disasters using AI damage assessment, etc. During the COVID-19 pandemic, metrics like time saved in vaccine development due to AI (for example, deep learning models that helped identify vaccine candidates) were celebrated and used to justify continued AI funding in biomedical research.
Federal Implementation and Accountability: Internally, each federal agency that undertakes AI projects keeps metrics on their performance. The OMB has required agencies to inventory their AI use cases and report on outcomes by the end of 2023. This inventory process has revealed, for instance, that nearly 70 federal agencies (out of ~100) reported using AI in some capacity – from customer service chatbots at the Social Security Administration to predictive models for maintenance at the General Services Administration. OMB will use this to identify best practices and areas where agencies lag. Program evaluations also include metrics: The JAIC (now CDAO) measured how many workflows it automated or how many hours of analysts’ time were saved by AI deployments. If a project underperforms, oversight committees (like the GAO – Government Accountability Office) investigate. GAO has released audits like a 2021 report on “AI in Healthcare” examining if HHS has metrics for patient outcomes with AI tools, recommending better data collection. Performance metrics thus feed a cycle of improvement. Congress, through the National Defense Authorization Act (NDAA) and other laws, sometimes writes in reporting requirements – e.g., the 2021 NDAA (which contained the NAII Act) mandated an annual report on the national AI initiative including progress on R&D funding, educational programs started, international cooperation steps, etc. The first of these reports (delivered 2022) showed that the government had met or exceeded targets for establishing AI institutes and increasing R&D budgets. Where gaps were seen (like the NAIRR still being in planning), the report flagged them as priorities.
Global Benchmarking: Lastly, the U.S. benchmarks itself against other countries. The National Security Commission on AI’s final report in 2021 explicitly measured U.S. vs China in various dimensions (talent, hardware, research) and concluded the U.S. was ahead in most, but China was rapidly closing. Follow-up by think tanks like CSET track China’s progress – e.g., noting when China surpassed the U.S. in journal publications or when Chinese AI startups received record funding. The Global AI Vibrancy Tool by Stanford HAI (2024) gave the U.S. a score of 70.06 vs China’s 40.17, reflecting strength in investment, talent, and research (the sketch after this paragraph shows how composite scores of this kind are typically constructed). Metrics like these are cited in speeches and used to rally support for policies like the CHIPS Act (the argument being: to keep ahead, we must invest in X). They also help avoid complacency – if a future index shows the U.S. slipping in a key area, it will likely trigger congressional hearings or executive actions.
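Mechanically, composite scores like these are weighted aggregates of normalized indicators. The sketch below shows that generic construction with invented countries, values, and weights; it is not Stanford HAI’s actual methodology or data.

```python
# Generic construction of a composite "AI vibrancy"-style score: min-max
# normalize each indicator across countries, then take a weighted sum.
# Countries, values, and weights are invented for illustration only.
indicators = {   # country -> (investment $B, notable models, cited papers in thousands)
    "country_x": (67.0, 40, 120),
    "country_y": (8.0, 15, 150),
    "country_z": (11.0, 3, 60),
}
weights = (0.4, 0.3, 0.3)   # assumed relative importance of each pillar

cols = list(zip(*indicators.values()))          # values grouped per indicator
lo, hi = [min(c) for c in cols], [max(c) for c in cols]

def score(vals):
    # Min-max normalize each indicator to [0, 1], then weight and scale to 100.
    normed = [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(vals, lo, hi)]
    return 100 * sum(w * n for w, n in zip(weights, normed))

for country, vals in indicators.items():
    print(f"{country}: {score(vals):.1f}")
```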
In summary, the U.S. employs a rich array of metrics and monitoring mechanisms, from independent academic indices to formal government reports, to keep a pulse on the AI ecosystem’s performance. This data-driven oversight helps the decentralized U.S. strategy adapt and ensures that lofty goals (like maintaining leadership, spreading benefits widely, mitigating harms) are backed by empirical progress and course corrections when needed.
12. Strategic Foresight & Resilience
Anticipating future challenges and ensuring long-term resilience are integral to the United States’ AI strategy. American policymakers are acutely aware that the AI landscape can shift rapidly – due to technological breakthroughs, geopolitical developments, or emergent risks – and thus they emphasize agility and preparedness. Several forward-looking initiatives are in place: efforts to strengthen supply chains and reduce strategic dependencies (especially in semiconductors), investments in next-generation AI research (to avoid being blindsided by paradigm shifts), and scenario planning for potential societal disruptions from AI (like job displacement or malicious use). The U.S. approach blends optimism about AI’s benefits with sober planning for worst-case scenarios. Government bodies like the National Security Council, OSTP, and even the intelligence community perform horizon-scanning to forecast AI trends and their implications for economic and national security. This foresight informs “future-proofing” policies such as the CHIPS Act and emerging proposals for global coordination on AI safety. The underlying goal is to ensure that the U.S. not only maintains its leadership position in AI but can also withstand or quickly adapt to shocks – whether that’s a sudden cutoff of critical tech imports, a dangerous capability emerging (like advanced bio-AI or autonomous weapon swarms), or rapid changes in the labor market caused by AI automation.
Supply Chain Security and Technological Bottlenecks: One of the most emphasized resilience issues is the semiconductor supply chain – the “silicon backbone” of AI. As noted, the U.S. historically depended on East Asia for cutting-edge chip fabrication, a vulnerability laid bare by trade tensions and the 2020–2021 global chip shortage. Strategic foresight analyses (including those by NSCAI) warned that if the U.S. were cut off from top-tier chips or the tools to make them, its AI progress could stall. The U.S. responded with the CHIPS and Science Act (2022) precisely to remove this bottleneck by onshoring production capacity. Concurrently, export controls on chip tech to adversaries are as much about preserving a relative advantage as they are about denial – by slowing China’s access to 7nm-and-below fabrication, the U.S. buys time to sprint ahead on domestic production and R&D. Another bottleneck is talent: high-skilled AI experts are scarce globally, so the U.S. has to ensure it remains the top destination and producer of talent. Foresight here meant recognizing early that restrictive immigration could drive talent elsewhere; indeed, when immigration hurdles appeared in 2017–2019, countries like Canada and the UK capitalized, prompting U.S. adjustments (like National Interest Exceptions for STEM visas). The NSCAI bluntly stated that “the U.S. needs to win the talent competition” and recommended stapling green cards to STEM PhDs. While Congress hasn’t done that outright, the spirit is being followed via executive tweaks and more funding for domestic STEM education. Dependency on foreign data hasn’t been a major bottleneck (since U.S. companies gather plenty), but dependency on foreign-made software (if, hypothetically, the dominant AI frameworks were Chinese) is also monitored. To avoid any such scenario, the U.S. has supported open-source dominance of American frameworks and is wary of foreign apps that vacuum up data from Americans (hence the scrutiny on TikTok, which is not directly an AI tech dependency but seen as a data exposure risk). In summary, the American strategy actively identifies single points of failure or choke points and seeks to eliminate them: diversify supply, build domestic alternatives, or collaborate with allies to create redundant paths (like possibly a U.S.-EU-Japan alliance on chip materials and equipment sharing). The CHIPS Act’s large R&D funding portion ($13 billion) is partially for exploring new materials (so future chips might not rely on rare earths largely mined in China, for example) – another nod to resilience.
Scenario Planning and Strategic Forecasts: Government and affiliated think tanks regularly produce AI foresight studies. The National Intelligence Council (NIC) included AI prominently in its quadrennial Global Trends 2040 report, foreseeing AI as a central force reshaping economies and power balances, and warning of AI’s potential to amplify authoritarian surveillance if democratic norms don’t prevail. At the national security level, the Pentagon runs war-gaming scenarios involving AI – for instance, to explore outcomes if an adversary uses AI to jam communications or launch deepfake propaganda in a crisis. These exercises have led to the development of counter-AI strategies, such as hardening systems against adversarial AI attacks or building “explainability” into command systems so human officers trust AI advice in the heat of battle (lack of which could cause hesitation or errors). On the societal front, scenario planning is seen in the Department of Labor’s future of work forecasts: they run models of job automation impacts under different AI advancement assumptions. If, say, generative AI becomes capable of replacing large portions of white-collar jobs, how quickly could the workforce re-skill? To that end, the Biden Administration in 2023 convened a task force on AI’s impact on the workforce, instructing them to deliver policy options (like strengthening unemployment insurance, education reforms, portable benefits for gig work, etc.) in case AI-driven displacement accelerates. Similarly, Treasury and others are studying AI’s impact on economic inequality – one scenario is that without intervention, AI might concentrate more wealth in highly automated firms. These analyses haven’t yet resulted in big new legislation, but they lay groundwork for rapid responses if needed (for example, if labor force participation drops, one might see a quick move to expand social safety nets or AI taxes). Another scenario being actively monitored is the race to artificial general intelligence (AGI). While many experts doubt near-term AGI, the sudden emergence of extremely capable models like GPT-4 forced policymakers to consider: what if a much more powerful AI arrives unpredictably? In early 2023, OSTP ran an internal “red team” scenario exercise imagining an advanced AI that could, for instance, design bioweapons or hack critical systems autonomously. This fed into the October 2023 Executive Order requiring developers of very large models (above certain compute thresholds – the back-of-envelope sketch after this paragraph shows how such thresholds relate to model scale) to share risk information with the government and into increased funding for AI alignment research (via NSF and DARPA). The AI Safety Institute established at NIST in late 2023 – a testing center for pre-deployment evaluation of high-risk AI – is a very direct measure for handling AGI-like scenarios. All these reflect how seriously the U.S. takes strategic foresight: by entertaining even low-probability, high-impact AI futures, the government aims to be less caught off guard.
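For a sense of what a compute threshold means in practice, the sketch below applies the standard ~6 × parameters × training-tokens approximation of transformer training FLOPs to two hypothetical models and checks them against the EO’s 10^26-operation reporting line; both example model sizes are invented.

```python
# Back-of-envelope check of a training run against a compute reporting
# threshold. Uses the standard ~6 * params * tokens approximation of
# transformer training FLOPs. The 1e26 figure is the reporting threshold set
# in the October 2023 EO; the example model sizes below are invented.
THRESHOLD_OPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * n_params * n_tokens

runs = {
    "mid-size model": (70e9, 2e12),     # 70B parameters, 2T training tokens
    "frontier model": (1.8e12, 15e12),  # hypothetical 1.8T params, 15T tokens
}

for name, (params, tokens) in runs.items():
    ops = training_flops(params, tokens)
    print(f"{name}: ~{ops:.1e} ops -> reportable: {ops > THRESHOLD_OPS}")
```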
AI Safety and Alignment Research: Part of resilience is ensuring AI systems themselves are safe and aligned with human values, so they don’t pose an inadvertent threat. The U.S. has ramped up support for AI safety research – historically a niche in academia – to better understand and guard against issues like unintended behavior, bias, and “AI accidents.” In 2023, the White House announced $140 million for new NSF-led National AI Research Institutes with focus areas including AI safety, security, and evaluation. One such institute is dedicated to studying how to make machine learning models robust against adversarial attacks (like slight image perturbations that fool a classifier; a minimal sketch of one such attack follows this paragraph) and how to verify AI system outputs. DARPA programs such as Explainable AI (XAI) and Assured Autonomy develop techniques for AI that can explain its decisions and gracefully handle inputs outside its training distribution (to avoid unpredictable actions). Another DARPA project, SAFE-Life (Science of AI Framework for Electronics), is trying to create formal proofs of AI system reliability, starting with simpler autonomous vehicles, as a template for broader use. The NIST AI Risk Management Framework mentioned earlier provides best practices, but there’s recognition more R&D is needed on technical alignment – things like training models to adhere to ethical constraints or detect when they might be failing. The U.S. research community, including organizations like OpenAI and academic labs, is actively publishing on alignment (for example, techniques like reinforcement learning from human feedback, which OpenAI used for ChatGPT, were pioneered in U.S. labs). The federal government’s role is increasingly to encourage these efforts and ensure they are shared widely (consistent with openness, unless there are clear national security sensitivities). This extends to considering standards or certification for AI. For instance, the 2023 Executive Order tasks NIST and others to develop guidelines for red-teaming AI models and to evaluate the emergent capabilities of frontier models. The idea is to institutionalize safety checks as a normal part of AI development – similar to how drug trials are mandatory for pharmaceuticals. Such moves are preventative resilience: they might slightly slow deployment in exchange for reducing the chance of catastrophic failures or misuse.
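The “slight image perturbations” mentioned above are exemplified by the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch of the attack using a toy stand-in classifier; it is purely illustrative, not any institute’s actual evaluation harness.

```python
# Minimal sketch of the fast gradient sign method (FGSM): nudge every input
# feature slightly in the direction that increases the model's loss. The toy
# model and data below are invented stand-ins for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of the input batch x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step each pixel by epsilon along the sign of its loss gradient.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Toy demonstration with an untrained stand-in classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)          # batch of 4 fake 28x28 "images"
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())        # perturbation is bounded by epsilon
```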
Adaptive and Agile Governance: Resilience in governance means the U.S. is willing to continuously update its policies as AI evolves. We see this agility in how quickly the U.S. moved on generative AI – within a year of ChatGPT’s release, there was an Executive Order and voluntary commitments addressing it, whereas normally regulations lag years behind tech. The U.S. regulatory philosophy now includes test-and-learn: issuing guidance or interim rules, gathering feedback and data, then refining. OMB’s iterative memos on AI in government (M-21-06 in late 2020, updated M-22-** and again in early 2025) show this approach. They started with light encouragement, then added more concrete requirements (like bias training for government AI procurement). Also, the FTC and DOJ have signaled they will use existing laws creatively to address AI issues (like treating biased algorithms as employment discrimination, or unsecured AI systems as violations of consumer data protection) – this flexible enforcement can quickly adapt to novel AI harms without waiting for new statutes. On the legislative front, Congress as of 2025 is in exploratory mode: instead of rushing a broad AI law (like the EU’s AI Act), they have held numerous hearings, formed bipartisan working groups, and even run AI demonstrations for lawmakers (Sam Altman’s Senate session included a closed demo of GPT-4’s capabilities). This educative approach aims to make lawmakers agile and informed, so when they do legislate, it’s precise and up-to-date. Meanwhile, states and cities act as “laboratories” for policy (e.g., some cities banning face recognition, some states mandating AI transparency in hiring), and the federal government watches these experiments to learn what works or what unintended consequences arise. Then it can scale effective measures nationally. All told, the U.S. is trying to craft a governance ecosystem that’s resilient to AI’s rapid change: proactive in reducing vulnerabilities, rich in safety nets for potential downsides, and nimble in rule-making. This dynamic approach is seen as necessary to “future-proof” the American society and economy against AI-related shocks, while still riding the waves of innovation that AI brings.
13. Foundational AI Capabilities
The United States is committed to sustaining leadership across the foundational building blocks of AI: advanced hardware, critical software frameworks, and cutting-edge AI models themselves. Ensuring superiority (or at least parity) in these foundational elements is seen as key to long-term strategic advantage. U.S. policy and industry activity reflect a drive to develop an independent, full-stack AI ecosystem – one not reliant on strategic competitors – and to push the frontiers of what AI technology can do. This encompasses semiconductor design and manufacturing breakthroughs, nurturing widely-used AI development platforms (which often originate from U.S. companies), and a vibrant environment for creating and deploying large-scale AI models (like GPT-4 or image generation models) that set global benchmarks. In the 2020s, the U.S. has indeed led the world in many of these foundational aspects: NVIDIA’s GPUs dominate deep learning computing, American frameworks like TensorFlow/PyTorch are standard, and most of the famous “frontier models” have come from U.S. organizations. However, the competition is intensifying, and the U.S. is making concerted efforts to guard these advantages – for example, by incentivizing domestic chip fabs (as discussed) and investing in next-gen computing paradigms.
Compute Hardware and Chips: The U.S. strategy on hardware is two-fold: push the envelope of performance and secure the supply chain. On performance, American companies regularly release state-of-the-art AI chips. NVIDIA, headquartered in California, has iteratively produced the world’s most capable GPU accelerators for AI (the A100 in 2020 and H100 in 2022 were leaps that significantly improved training times for large models). These chips are so critical that they’ve been called the “workhorses” of the AI revolution. Google has developed its own Tensor Processing Units (TPUs) for its data centers, achieving similar top-tier performance in specialized tasks; TPUs have been offered via Google Cloud to external researchers, broadening impact. Intel and AMD also innovate AI features into CPUs and GPUs (Intel’s Habana accelerators, AMD’s MI series GPUs), ensuring multiple U.S. firms at the cutting edge. Beyond conventional architectures, startups in Silicon Valley have explored novel chip designs: Cerebras makes wafer-scale chips for AI, Groq (founded by ex-Google engineers) works on ultra-low-latency processors – some of these have seen adoption in niches requiring massive speed. The DoD, through DARPA’s Electronics Resurgence Initiative, has funded research into specialized AI hardware like neuromorphic chips (which mimic brain neurons) and analog computing, aiming for breakthroughs that might leapfrog current silicon limits. As a result, U.S. labs hold top ranks in supercomputing – the Frontier supercomputer uses U.S.-made chips to reach over 1 exaflop on AI tasks, and upcoming exascale systems (like Aurora at Argonne) will further push boundaries using Intel and AMD tech. On security of supply, as elaborated in section 12, the CHIPS Act investments and export controls are critical. They represent an understanding that designing the best chip means little if you cannot fabricate it; thus TSMC and Samsung being lured to U.S. soil is as strategic as any missile defense system. The synergy between public and private is clear: the government provides incentives and basic R&D funding, companies execute with engineering prowess, and together they try to maintain a 1–2 generation lead in capability over others. This is also why the U.S. is deeply concerned with quantum computing. Though not directly AI, quantum computers could eventually solve optimization and machine learning problems beyond classical reach. The U.S. leads in quantum research (IBM and Google have record-breaking quantum processors), so policy ensures robust quantum R&D funding (the National Quantum Initiative). The mindset is to prepare for any paradigm shift that could alter AI hardware foundations. If quantum or optical computing for AI becomes viable, the U.S. intends to already be at the forefront.
Software Ecosystem and Frameworks: The prevalence of U.S.-origin software in AI development is a significant strength that the country seeks to maintain. Machine learning frameworks like TensorFlow (Google) and PyTorch (Facebook/Meta) are essentially the operating systems of AI research – and both were created and open-sourced by American tech giants. PyTorch, in particular, has become the dominant framework in research and industry by the mid-2020s; Meta moved its governance to an open foundation (under the Linux Foundation), but core development still happens largely in California. This gives the U.S. a soft influence: improvements, plugins, and tools around these frameworks often come from the vibrant U.S. developer community, and global users depend on updates from these U.S.-led projects. The U.S. supports open source in part to counter proprietary ecosystems from rivals (e.g., if China tried to popularize its own closed AI framework globally, it would face an uphill battle because TensorFlow and PyTorch are entrenched and free). Similarly, platforms like Hugging Face (founded by French entrepreneurs but heavily U.S.-based) host model libraries that proliferate American-developed architectures and practices. Even the popular programming languages and tools for AI (Python, Jupyter notebooks, etc.) emanate from the U.S./Western open-source culture. That said, the U.S. is careful to remain ahead in critical software algorithms. For instance, the reinforcement learning algorithms that underlie many advanced AI systems were pioneered by U.S. and UK groups (DeepMind, OpenAI). The U.S. funds academic research in core machine learning theory (through NSF and DARPA’s fundamental AI research grants) to keep generating new ideas that often feed open-source implementations. There is a conscious effort not to be complacent: if a better programming paradigm than today’s frameworks emerged, the U.S. would likely invest in it early. Indeed, DARPA software programs explore automating parts of software development itself using AI (AI coding assistants already exist in the form of GitHub’s Copilot, powered largely by an OpenAI model trained on open code). Maintaining a strong software ecosystem also involves cybersecurity for AI software – the 2023 Executive Order instructs NIST to develop standards for AI cybersecurity, because if someone could compromise widely used frameworks, they could sabotage many models. Resilience in the software layer means ensuring these tools remain freely accessible, secure, and continue to evolve with contributions from top talent (which loops back to keeping talent in the U.S.).
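To make concrete what these frameworks actually provide – automatic differentiation, built-in optimizers, and transparent dispatch to an NVIDIA GPU when one is present – the following is a minimal, purely illustrative PyTorch sketch on synthetic data, not tied to any program or system named above:

```python
# Minimal PyTorch sketch: train a tiny classifier on synthetic data.
# Illustrative only -- it demonstrates the framework services discussed
# above (autograd, optimizers, GPU dispatch), not any specific system.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # use a GPU if available

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 16, device=device)          # synthetic features
y = torch.randint(0, 2, (256,), device=device)   # synthetic labels

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass
    loss.backward()               # autograd computes all gradients
    optimizer.step()              # built-in optimizer updates the weights
```

The same dozen lines run unchanged on a laptop CPU or an H100 cluster node; that portability is precisely what makes these U.S.-maintained frameworks the default substrate for AI work worldwide.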
Leadership in Foundation Models (LLMs and Beyond): Over the last few years, the U.S. has led in creating large-scale AI models that garner global attention and set technical milestones. OpenAI’s GPT-3 (2020) and GPT-4 (2023), Google’s PaLM and BERT models, Meta’s LLaMA series (open-sourced in 2023 with LLaMA 2) – all of these originated from U.S.-based entities. These “foundation models,” often trained on massive datasets, are general-purpose and can be adapted for countless tasks. The U.S. strategy supports this leadership indirectly by fostering the environment (massive compute availability, investment, and talent concentration) and directly through some funding (OpenAI started as a nonprofit with donations from U.S. tech billionaires; government contracts have since flowed for specific AI models like GPT-4 via Microsoft’s Azure for federal use). By mid-2024, over 20 large language models developed in the U.S. were available, including those above and others like Anthropic’s Claude, NVIDIA’s Megatron-Turing NLG, and smaller open models (EleutherAI’s GPT-NeoX, etc.). These often outperformed, or arrived earlier than, similar efforts in other countries (China’s largest models, e.g. WuDao or Zhipu’s models, have been significant but generally follow the architectural trends set by these U.S. models). The U.S. intends to keep this edge. For example, sensing the importance of generative AI, the White House in 2023 convened AI companies to discuss safely accelerating deployment – essentially to ensure the public benefits from U.S. models (like widespread use of ChatGPT) while managing risks, thereby reinforcing the preference for American AI products globally. The National AI Research Resource idea, once implemented, would allow academic researchers to train fairly large models without needing industry’s scale, which could maintain diversity and innovation in foundation models beyond corporate labs. Another consideration is where open versus closed models make sense. The U.S. environment supports both: OpenAI took a closed approach with GPT-4, citing safety and competition; Meta released LLaMA openly, arguing this spurs innovation. This mix means the foundational technology is not monolithic – many approaches are tried. From a strategic view, open-source foundation models (particularly ones that can run on smaller compute) can diffuse U.S. influence, since developers worldwide then build on models that have, for instance, English and other Western language strengths and alignments with Western norms. However, open models can also be used by adversaries, so the U.S. carefully weighs releasing certain capabilities (it has, at times, restricted open publication of some defense-related AI advances). The 2023 Executive Order on AI even tasks an interagency group with evaluating the compute and capability thresholds that should trigger more oversight – implicitly about foundation models too powerful to release unchecked. Future foundational capabilities like multimodal AI (which GPT-4 and others began to offer, handling images and text together) and advanced decision-making AI are in active development. DARPA’s AI Forward initiative is expected to focus on more general AI capabilities, ensuring the DoD and U.S. researchers push toward AI that can adapt and learn more like humans. If something akin to AGI is on the horizon, U.S. stakeholders want to be leading that development, not reacting to someone else’s. To sum up, the U.S. is not resting on the laurels of having produced GPT-4 or Stable Diffusion – it is investing in what’s next (be it GPT-5 or novel architectures) while establishing governance to handle their impact.
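As a rough illustration of how such compute thresholds can be reasoned about, the sketch below applies the commonly used approximation of total training FLOPs ≈ 6 × parameters × training tokens; the model sizes and token counts are hypothetical examples, and 10^26 operations is the reporting threshold named in the 2023 Executive Order:

```python
# Back-of-envelope training-compute estimate using the common
# approximation FLOPs ~= 6 * parameters * training tokens.
# Model sizes and token counts below are hypothetical illustrations;
# 1e26 is the reporting threshold cited in the 2023 Executive Order.
EO_THRESHOLD_OPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

runs = [
    ("7B-parameter model, 2T tokens", 7e9, 2e12),
    ("70B-parameter model, 2T tokens", 70e9, 2e12),
    ("1.8T-parameter model, 15T tokens", 1.8e12, 15e12),
]

for name, params, tokens in runs:
    flops = training_flops(params, tokens)
    flag = "ABOVE" if flops > EO_THRESHOLD_OPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs -> {flag} the 1e26 reporting threshold")
```

Under this crude estimate, widely deployed open models fall well below the line, while only the largest frontier-scale training runs approach or cross it – consistent with the EO’s intent to scope oversight narrowly to the most capable systems.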
Digital Infrastructure and Platforms: Foundational capability isn’t just about chips and models in isolation, but also about the ecosystem connecting them. The U.S. sees cloud and data infrastructure as part of foundational AI strength. The largest cloud platforms are American (AWS, Azure, Google Cloud), and they are building specialized AI services (like AWS’s Trainium and Inferentia chips, or Google’s Vertex AI platform). Being home to these platforms means global AI startups often build on U.S. clouds, creating an interdependency that both benefits the U.S. economy and extends its influence. The U.S. government itself uses these clouds now – the Joint Warfighting Cloud Capability (JWCC) contract, awarded to AWS, Microsoft, Google, and Oracle, effectively ensures the DoD can tap the best commercial infrastructure. Conversely, by being their customer, the U.S. can press these providers to meet high standards (security, compliance) and even certain policy goals (like requiring model audits for any AI services sold to government). The government is also cognizant that connecting models to the real world (data pipelines, IoT sensors) is crucial – hence initiatives to upgrade broadband, 5G, and even early 6G R&D (so that, for example, autonomous vehicles and drones, which rely on connectivity for AI decision updates, can function reliably). In short, foundational AI capability in U.S. strategy is about owning the “ground truth” of AI technology: from the electrons in a chip to the abstractions of a neural network architecture. By solidifying each layer – hardware, software, and models – and how they interlink, the U.S. aims to secure its position as the primary innovation hub and supplier of advanced AI, well-positioned to set terms in the global AI landscape for years to come.
14. Public Trust, Inclusion & Social Equity
Public trust and broad societal inclusion are recognized as essential for the sustainable advancement of AI in the United States. American policymakers and industry leaders understand that if AI is perceived as a threat – to privacy, fairness, or jobs – public backlash could slow its deployment and diminish its benefits. Therefore, building and maintaining trust is a deliberate facet of the U.S. AI strategy. This involves transparency with the public about AI use, responsive governance that addresses legitimate concerns, and proactive efforts to ensure AI’s benefits reach diverse communities rather than exacerbating inequalities. The U.S. approach to trust is inherently linked to its democratic system: debates in the media, civil society advocacy, and grassroots feedback all shape how AI initiatives are implemented. Inclusion initiatives are aimed at closing the digital divide and giving all Americans access to AI-enhanced services and education, so that no group is left behind as the economy transforms. Through education campaigns, stakeholder engagement, and targeted programs for underrepresented groups, the U.S. is trying to foster a social context in which AI is seen as an opportunity, not merely a risk. At the same time, the government is wary of misuse of AI (like deepfakes or discriminatory algorithms) undermining public confidence, and thus is ramping up oversight in those areas to demonstrate that the technology will be managed for the public good.
Public Communication and Transparency: The U.S. government has taken steps to communicate openly about AI policy and listen to public input. For instance, when OSTP was formulating the AI Bill of Rights blueprint in 2021–2022, it held public listening sessions and put out a Request for Information that garnered feedback from civil rights groups, tech companies, academics, and everyday citizens. The final Blueprint document is written in plain language, aiming to educate people about their rights regarding AI decisions. Such outreach is intended to empower the public and signal that policymakers are aware of issues like algorithmic discrimination and data misuse. President Biden himself has spoken about AI in major addresses (mentioning it in his 2023 State of the Union in the context of holding Big Tech accountable). These high-profile mentions help frame AI in terms people care about – “making sure tech doesn’t harm kids, or bias doesn’t deny someone a job,” etc. Agencies also strive for transparency in their own AI uses: many now list on their websites what AI tools they use. DHS, for example, published information on where it uses facial recognition at airports and how it is minimizing privacy impact, responding to prior criticism. The Department of Defense released an unclassified summary of its AI ethical principles and examples of non-lethal AI projects to demystify “AI in the military” for the public. To engage the skeptical, agencies like NIST host workshops, open to the public via webcast, on topics like AI bias and explainability, showing the standard-setting process to all interested parties. Additionally, members of Congress from both parties have made AI a talking point in town halls where constituents worry about automation or privacy, reflecting that the dialogue is happening at local levels too. A noteworthy recent effort is the AI.gov portal (initially set up in the Trump era and maintained by the Biden administration), which centralizes information on U.S. AI initiatives, resources for workers and businesses, and FAQs about AI. It is essentially a transparency tool, letting the public see what their government is doing on AI. The philosophy is that sunlight and clarity can preempt misunderstanding or fear.
Addressing Bias, Fairness, and Civil Rights: Public trust heavily depends on AI being fair and unbiased. The U.S. has confronted several controversies – from biased facial recognition leading to wrongful arrests of Black individuals to algorithms that set higher bail for minorities. In response, the government and private sector have ramped up efforts to mitigate bias. The EEOC launched an initiative in 2021 to ensure AI hiring tools comply with equal opportunity laws, issuing guidance in 2022 that algorithms which disproportionately screen out protected groups could violate the Americans with Disabilities Act or Title VII. It set up a hotline for job applicants to report suspected AI-driven discrimination. Such visible enforcement helps reassure the public that civil rights carry into the AI era. On the industry side, companies like IBM stopped selling general-purpose facial recognition in 2020, citing bias concerns, and Microsoft limited sales to law enforcement until federal regulation is in place – moves publicized as taking responsibility (and which indeed came after public pressure following the George Floyd protests). Academic and NGO pressure also plays a role: organizations like the Algorithmic Justice League (founded by MIT researcher Joy Buolamwini) have raised awareness with projects like the film “Coded Bias.” Policymakers have engaged these activists (Buolamwini testified to Congress, and her research was cited in state legislation banning biased facial recognition). The adoption of ethical AI principles by big tech (Google, Microsoft, etc.) came partly as a trust-building measure after internal and external uproar over uses like Project Maven or potential censorship. There is also an emphasis on algorithmic transparency to boost trust: New York City’s law requiring that hiring algorithms be audited and the results made available to the public is a pioneering step. If an algorithm is certified as bias-audited, the public might trust it more. At the federal level, the AI Bill of Rights calls for explanations when AI significantly impacts a person, and even though it is not law, agencies are starting to implement it (for instance, if a USDA farm-loan application receives an AI-determined risk score, the agency must notify the farmer and allow an appeal). These fairness and transparency measures aim to show that AI can be managed under the rule of law just like any other decision system.
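To make the disparate-impact screening concrete, here is a small, hypothetical sketch of the four-fifths (80%) rule – the long-standing heuristic in federal employee-selection guidelines – applied to invented selection counts from an automated resume screen; it is a red-flag heuristic prompting further review, not a legal determination:

```python
# Four-fifths (80%) rule: a screening heuristic for adverse impact in
# selection tools. All counts below are invented for illustration only.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes of an automated resume screen, by group.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

rates = {group: selection_rate(*counts) for group, counts in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    if impact_ratio < 0.8:  # selected at less than four-fifths the top rate
        print(f"{group}: impact ratio {impact_ratio:.2f} -> potential adverse impact; review the tool")
    else:
        print(f"{group}: impact ratio {impact_ratio:.2f} -> passes the four-fifths screen")
```

Audit regimes like New York City’s hiring-algorithm law effectively institutionalize this kind of check by requiring the ratios to be computed and published.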
Inclusion and Bridging the Digital Divide: To ensure AI doesn’t widen existing social gaps, the U.S. has a variety of programs targeting inclusion. One component is geographic: tech innovation is concentrated in Silicon Valley, Seattle, Boston, etc., which leaves other regions behind. The NSF’s Expanding AI Innovation through Capacity Building program (2023) gave grants to universities in EPSCoR states (those that receive less federal R&D funding) to build AI curricula and research hubs, thereby spreading expertise beyond the coasts. Another aspect is rural versus urban: the Department of Agriculture’s AI for Rural Health initiative, for instance, pilots telemedicine AI in remote clinics to bring specialist diagnostics (like AI reading of X-rays) to places lacking doctors. Success stories, like an AI that helps detect diabetic retinopathy in rural Alabama clinics, are highlighted to show AI benefiting underserved communities. For marginalized urban communities, there are city-led programs: e.g., New York City and Washington, D.C. partnering with Google’s Machine Learning for Good program to optimize public transit routes in low-income neighborhoods using AI analysis of ridership, thereby improving services for those residents. On education and workforce, inclusion means retraining workers whose jobs might be impacted (discussed in section 5) – e.g., a factory worker displaced by robotics might get government-sponsored AI literacy courses to transition to a maintenance or programming role. There are also specific programs for underrepresented demographics: the AI4ALL summer camps (supported by industry and NSF), which encourage high school girls and minority students to learn AI basics, have expanded nationwide. These aim to diversify the next generation of AI professionals, addressing trust as well – a more diverse AI workforce is likely to consider a wider array of societal contexts in design, reducing the blind spots that lead to mistrust in some communities. The government’s Broadening Participation in Computing grants support many historically Black colleges and universities (HBCUs) and Hispanic-serving institutions in developing AI programs, ensuring talent development is inclusive. An example outcome is that Howard University now has a well-regarded AI research center in partnership with Google. The Biden administration also often frames its tech initiatives in equity language: the Infrastructure Law’s broadband funding was promoted as bringing connectivity (and by extension, access to AI-powered services) to all Americans, likening it to rural electrification in importance. This narrative helps rural and poorer Americans see AI not just as something Big Tech or elites play with, but as something that will tangibly improve their lives (faster internet for their kids’ schooling, telehealth AI that may save a life, etc.).
Civil Society and Multi-stakeholder Engagement: Public trust is reinforced by having independent watchdogs and including their voices in policy. U.S. civil society – the ACLU, the Electronic Frontier Foundation, etc. – has been very active on AI issues. Rather than viewing them as adversaries, the current strategy is to engage them. For instance, the Partnership on AI includes several civil society groups alongside companies and academics, creating a forum where concerns like facial recognition bans or gig-worker algorithmic accountability can be discussed and guidelines developed collaboratively. This multi-stakeholder approach was visible when NIST released draft versions of its AI Risk Management Framework – hundreds of comments came in from diverse stakeholders and were integrated. Such processes help produce guidance that has buy-in, which means that when companies follow it, advocacy groups trust it more (for example, if a company says “we adhere to NIST’s AI RMF,” NGOs know their voices shaped that framework to some extent). Additionally, journalism has a role in trust: investigative reports that expose AI pitfalls (like ProPublica’s 2016 piece on bias in criminal risk scores) have sparked reforms (several jurisdictions dropped or revised such tools after the outcry). Government officials now proactively include journalists in briefings about AI initiatives to ensure accurate public understanding. For instance, when the Census Bureau used “differential privacy” techniques to protect data in the 2020 census, it held press calls and released plain-language explanations to pre-empt fears about data accuracy. The idea is that transparency up front avoids conspiracy theories or erosion of trust later. Lastly, trust is built by accountability when things go wrong: a notable example is the first known wrongful arrest due to facial recognition (Detroit, 2020). In response, the city’s police changed their policies (facial recognition matches may no longer serve as the sole basis for probable cause), and the incident fed into national calls for at least moratoriums on the technology until it achieves better accuracy across all groups. The fact that there was a public reckoning and a policy adjustment can restore some trust – it shows the system can correct itself. This dynamic accountability, combined with proactive measures, contributes to an environment where the public can see that AI is being integrated carefully and with their values in mind, even if skepticism naturally remains in some quarters.
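To illustrate the “differential privacy” technique mentioned above in its simplest form, the sketch below implements the Laplace mechanism for a counting query, with invented numbers; the Census Bureau’s production system (its TopDown algorithm) is far more elaborate, but it rests on the same principle of adding calibrated noise:

```python
# Laplace mechanism for a differentially private count -- the simplest
# form of the technique referenced above. The population figure is
# invented; the Census Bureau's TopDown algorithm is far more complex.
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_population = 1234  # hypothetical census-block population
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count ~ {dp_count(true_population, eps):.1f}")
```

Smaller epsilon means more noise and stronger privacy; the public debate the paragraph alludes to is precisely about where to set that dial between data accuracy and individual protection.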
In conclusion, the U.S. views public trust and social inclusion not as soft, secondary goals but as critical enablers of its AI strategy. Without public buy-in, government AI projects would fail, and corporate innovations could face consumer rejection or harsh regulation. By actively engaging the public, addressing ethical concerns, and striving to democratize AI’s benefits, the U.S. aims to cultivate a social license for AI – a shared belief that this technology, under vigilant oversight, will serve the common good and uphold American ideals of equality and justice.
15. Evolution of the U.S. National AI Strategy (2015–2025)
This section traces the development of the United States’ national AI strategy over the past decade, highlighting key milestones, policy shifts, and emerging themes year by year. From early awareness under the Obama administration to a flurry of initiatives in the late 2010s, followed by an intensified focus on competition and safety in the mid-2020s, the U.S. approach to AI has evolved significantly. The timeline underscores how bipartisan consensus on AI’s importance drove continuity, even as each administration put its own stamp on priorities – whether an emphasis on deregulation and industry-led growth or on ethics and global leadership. Major external events (the COVID-19 pandemic, geopolitical tech rivalry) and breakthrough AI advancements (the rise of deep learning and the advent of generative AI) each spurred adaptations in U.S. strategy. The cumulative trajectory shows an initial period of laying foundations, a middle period of scaling up investment and coordination, and a recent period of grappling with societal implications and international norms. By mid-2025, the U.S. AI strategy is more comprehensive and institutionalized than ever, though poised to adjust further in light of rapid technological change and the new presidential term.
- 2015: Laying the Groundwork & Private Sector Boom – AI burst into mainstream awareness as deep learning achievements captured headlines (e.g., image recognition surpassing human-level benchmarks). The Obama White House recognized AI’s potential and risks, launching an initial wave of fact-finding. The Executive Office of the President would release a pivotal report, “Preparing for the Future of Artificial Intelligence,” in October 2016, but the groundwork began in 2015, as OSTP started organizing the first public workshops on AI’s impacts (held in 2016). These workshops – on topics like safety, regulation, and economic effects – indicated a proactive stance. Meanwhile, 2015 saw the founding of OpenAI as a nonprofit research lab in California, backed by tech luminaries, marking a novel model to drive AI forward safely outside of Big Tech. In industry, Google open-sourced its TensorFlow AI library (November 2015), accelerating global AI development on U.S. software. The federal government’s AI activity in 2015 was relatively nascent, but seeds were planted: DARPA ramped up funding for AI research, and the DoD’s “Third Offset Strategy” identified autonomy and AI as core to maintaining military superiority. This year is often seen as the inflection point when U.S. AI moved from academic niche to strategic priority, catalyzed by private-sector breakthroughs and quiet government conceptual work.
- 2016: First Strategic Plans and Public Engagement – The U.S. government released its first formal strategic documents on AI. In addition to the Preparing for the Future report, OSTP published the National AI R&D Strategic Plan (October 2016), outlining seven focus areas for research investment. Key priorities included long-term investments in AI, developing effective methods for human-AI collaboration, and understanding ethical, legal, and societal implications. These documents, though late in the Obama presidency, set a baseline. OSTP also established the Machine Learning and AI Subcommittee under the National Science and Technology Council (NSTC) to coordinate interagency efforts. Public engagement ramped up: OSTP held a series of well-attended workshops in academic venues (Seattle, Pittsburgh, etc.), directly involving over 1,000 stakeholders in discussions that year. The high-profile victory in March 2016 of AlphaGo – built by DeepMind, a London-based Google subsidiary – over a human Go champion vividly illustrated AI’s advancing capabilities and underscored the competitive stakes: it spurred China to increase investment, which in turn got U.S. attention. On the legislative side, Congress passed the American Innovation and Competitiveness Act in late 2016, touching on computing and STEM but not yet specifically targeting AI – still, it signaled bipartisan support for science that would benefit AI. By year’s end, the U.S. had in place a vision and initial framework for AI policy, although it was largely advisory. The impending administration change left questions about how these plans would be implemented.
- 2017: Transition and Defense Acceleration – The incoming Trump Administration initially took a less visible approach to AI – OSTP’s staff was downsized and there was a pause in public AI initiatives in early 2017. However, momentum in defense and intelligence picked up: the Department of Defense launched Project Maven in April 2017, an initiative to deploy AI algorithms to analyze drone surveillance footage and relieve human analysts. Maven was the first large-scale combat-zone AI deployment and became a wake-up call in Silicon Valley after Google’s involvement became public, leading to employee protests by 2018. Nonetheless, Maven achieved its Phase I goals by year’s end (identifying objects in Iraq/Syria drone video), showcasing AI’s military value. Recognizing adversaries’ strides, senior U.S. defense officials made speeches calling AI a “game-changer” and established the Algorithmic Warfare Cross-Functional Team to coordinate AI across the Pentagon. Meanwhile, Congress drafted language – ultimately enacted in the FY2019 NDAA in August 2018 – creating the National Security Commission on AI (NSCAI), a temporary independent commission to study how the U.S. should foster AI for defense, a sign of legislative concern about strategic competition. In the civilian space, 2017 was relatively quiet federally, but the private sector and academia kept advancing: tech giants expanded AI research labs (e.g., Google Brain and Facebook AI Research grew significantly) and new startups flourished in autonomous driving, fintech, and healthcare AI, supported by record VC funding. The early administration’s public quiet on AI began to end in late 2017, when OSTP started planning what became the May 2018 “AI for American Industry” summit – an early indication that the administration would pursue AI through an economic competitiveness lens.
- 2018: Institutional Coordination and AI Goes Mainstream – This year saw the U.S. government formally organize for AI leadership. In May 2018, the White House held a Summit on AI with over 100 industry executives, academic leaders, and government officials, reaffirming a commitment to “maintain the United States’ leadership in AI” and soliciting input. Following the summit, the Trump administration created the Select Committee on Artificial Intelligence under NSTC to coordinate federal R&D and strategy. Led by OSTP’s Dr. Lynne Parker, it brought together senior R&D officials from across agencies – effectively rebooting and elevating the Obama-era subcommittee. In parallel, Defense Secretary Mattis wrote to President Trump urging a national AI strategy, citing China’s rising investments; this influenced the administration’s thinking and was later revealed publicly. DARPA announced its ambitious “AI Next” campaign in September 2018, committing $2 billion to a portfolio of new programs (e.g., in contextual reasoning and common-sense AI). On the legislative front, bipartisan support grew: Congress established the Joint AI Center (JAIC) in the DoD through the FY2019 NDAA (mid-2018) to accelerate AI adoption in defense, with initial funding of about $1.7 billion over several years. By late 2018, agencies started releasing AI strategies (e.g., the DoD’s Summary of its 2018 AI Strategy emphasized rapid fielding and ethics; DHS published an AI strategic plan for using AI in border security and disaster response). The idea of AI as a national priority had fully taken hold. Culturally, 2018 was the year AI firmly entered public discourse – self-driving car tests were expanding (with high-profile incidents like an Uber autonomous-car fatality in March leading to discussions about safety), and tools like Amazon’s Alexa and Apple’s Siri brought rudimentary AI into daily life. This normalization increased public expectations and anxieties, prompting policymakers to consider not just promoting AI, but also managing its societal implications.
- 2019: Launch of the American AI Initiative – The U.S. federal government’s first comprehensive AI strategy was unveiled via executive action. In February 2019, President Trump signed Executive Order 13859, Maintaining American Leadership in AI, officially creating the American AI Initiative. This EO articulated priority actions: (1) invest in AI R&D (doubling funding was a stated goal), (2) unleash federal data and resources for AI (e.g., making data available on Data.gov, providing AI compute via the cloud), (3) set standards for AI safety and interoperability (tasking NIST to lead), (4) build the AI workforce (through education grants and apprenticeships), and (5) engage internationally to promote a supportive environment. Importantly, it was an unfunded mandate, but it directed agencies to prioritize existing funds toward AI – which they did in subsequent budgets. Following the EO, agencies released AI plans: e.g., DOE launched an AI Technology Office, and USDA applied AI to crop monitoring. In May 2019, the U.S. joined fellow OECD members in adopting the OECD AI Principles, aligning with G7 allies on a shared policy baseline. NIST quickly responded to the EO by publishing a Plan for Federal Engagement in AI Standards (August 2019), urging U.S. leadership in global standards bodies. Another major milestone had its roots in this period: the National AI Initiative Act – largely drafted in 2019, passed as part of the FY2021 NDAA in December 2020 – legislated many aspects of the AI Initiative, including the formation of the National AI Initiative Office at OSTP, a National AI Advisory Committee, and an expansion of the NSF AI Institutes. 2019 also saw public scrutiny: facial recognition’s accuracy disparities were spotlighted by NIST’s study in December (which found higher false-positive rates for African Americans in some algorithms), leading lawmakers to propose bills to curb government use pending improvements. Overall, 2019 established the structural framework for U.S. AI policy and signaled to the world that the U.S. was serious about leadership, even as it grappled with ethical debates.
- 2020: Pandemic, AI Applications in Crisis, and Continuing Momentum – The COVID-19 pandemic upended the world and demonstrated AI’s utility in a crisis. The U.S. government and companies leveraged AI for vaccine and drug discovery, epidemiological modeling, and telemedicine. In March 2020, the White House OSTP organized the COVID-19 Open Research Dataset (CORD-19), a public dataset of scholarly articles on coronaviruses, and challenged AI researchers to develop text-mining tools to glean insights – within weeks, AI systems were helping scientists parse tens of thousands of papers. The pandemic also accelerated the adoption of AI in supply-chain logistics and medical imaging (the FDA authorized AI systems for detecting COVID pneumonia in lung scans). Meanwhile, policy didn’t slow: in January 2020, OMB released its draft Guidance for Regulation of Artificial Intelligence Applications (finalized that November as Memo M-21-06), advising agencies to avoid over-regulation and adhere to ten principles including fairness, transparency, and public participation in rulemaking. This guidance essentially set a light-touch, innovation-friendly posture at the federal regulatory level. The U.S. also joined G7 science ministers in endorsing principles for AI in pandemic response, marrying AI policy with emergency response. Geopolitically, U.S.-China tech tensions grew: the Trump administration expanded export controls (adding more Chinese AI firms to the Entity List) and in August 2020 announced its intention to ban TikTok and WeChat (citing data and influence risks) – not directly AI issues, but part of a tech decoupling that affects AI data flows and business. The NSCAI released interim reports through 2020 warning that the U.S. was still not fully prepared to compete with China in AI. Partly in response, Congress overwhelmingly passed the William M. (Mac) Thornberry NDAA for FY2021 in December 2020, which included the full National AI Initiative Act (mentioned above) and the AI in Government Act to boost federal agency AI capacity. By the end of 2020, the U.S. had weathered a trial by fire, with AI aiding pandemic management, and had positioned itself with new laws and an incoming administration likely to further elevate AI (Biden’s campaign hinted at major tech and R&D investments).
- 2021: New Administration, Global Alignment, and Societal Concerns – The Biden Administration took office with AI as a priority within a broader science and tech agenda. Eric Lander, as OSTP Director (elevated to Cabinet rank), spoke early on of a “bill of rights” for an automated society – signaling more attention to AI ethics. Internationally, the U.S. re-engaged in multilateral AI efforts: it deepened its participation in the Global Partnership on AI (GPAI), which it had joined as a founding member in mid-2020, and took part in UN discussions on autonomous weapons at the Convention on Certain Conventional Weapons review conference (December 2021), which focused on AI in warfare and stability. Domestically, in October 2021 OSTP initiated the process for the AI Bill of Rights by issuing an RFI for public input. On the R&D front, the Infrastructure Investment and Jobs Act (November 2021) authorized large investments in broadband and electrification that indirectly support AI deployment, and the administration proposed the CHIPS and Science Act (though it wouldn’t pass until 2022) with tens of billions for AI-related R&D. The Pentagon in late 2021 created the Chief Digital and AI Officer (CDAO) position (stood up in early 2022) to unify its data, analytics, and AI functions, showing continuity and expansion of JAIC’s mission. There was also a significant public-private milestone: in July, DeepMind (UK-based but closely tied to Google US) published and open-sourced AlphaFold 2, effectively solving the 50-year grand challenge of protein structure prediction – demonstrating AI’s scientific prowess and leading OSTP to host discussions on how AI can accelerate biomedical research. However, 2021 was also marked by high-profile AI incidents fueling public concern: Facebook whistleblower Frances Haugen testified to Congress in October that its algorithms harmed teens and sowed discord, reinforcing bipartisan calls to rein in AI-driven social media harms. This fed into FTC and congressional exploration of regulations on recommendation algorithms and transparency. Thus, 2021 was a year of integrating AI into broader policy (tech competition with China, Big Tech accountability, infrastructure) and setting the stage for ethical guardrails.
- 2022: AI Legislation and First Steps on Governance – This year saw landmark funding and initial regulatory frameworks. In August, after long negotiation, Congress passed the CHIPS and Science Act of 2022, which authorized roughly $200 billion for science R&D over five years and $52 billion for semiconductor manufacturing, explicitly highlighting AI as a major beneficiary of the science funds. It included expansion of the NSF AI Institutes, new DOE AI research programs, and STEM workforce programs – effectively supercharging the National AI Initiative with resources. On governance, the White House unveiled the Blueprint for an AI Bill of Rights in October. Though not binding, it was a White House policy document defining principles like Safe and Effective AI and Algorithmic Discrimination Protections, intended to guide federal agencies and perhaps inspire future regulation. Earlier, in March, NIST released the first public draft of its AI Risk Management Framework; throughout 2022 NIST worked with industry and civil society to shape this voluntary but influential framework, signaling a move to operationalize ethics and risk reduction in AI development. Agencies also exercised existing authorities: the EEOC launched investigations into AI hiring tools, and the Consumer Financial Protection Bureau warned lenders that using biased AI underwriting could violate fair lending laws. The year was also big for the generative AI revolution: OpenAI released DALL·E 2 (image generation) in April and ChatGPT in November, captivating the public and policymakers alike with AI’s creative and conversational abilities. This led OSTP and others to start grappling with generative AI’s implications (misinformation, IP, education cheating, etc.), foreshadowing the flurry of 2023 actions. Internationally, the U.S. in late 2022 opposed the EU’s push to advance negotiations on a global AI treaty at the UN, preferring its multi-stakeholder approach; instead, it worked through channels like the U.S.-EU Trade and Technology Council, which in December issued a joint roadmap on trustworthy AI and risk management. By the end of 2022, with the CHIPS Act passed and early AI governance frameworks introduced, the U.S. had made significant strides in both strengthening AI capabilities and addressing its risks.
- 2023: Generative AI Hype and First Comprehensive Executive Order – The emergence of generative AI as a mass-market phenomenon (ChatGPT reached 100 million users by January, the fastest-growing consumer application ever) spurred urgent policy discussions. In May 2023, OpenAI CEO Sam Altman testified in Congress, acknowledging AI’s risks and, notably, endorsing regulation, which added momentum to legislative efforts. While Congress held multiple hearings and Sen. Schumer convened AI Insight Forums (with tech CEOs and experts) to shape legislation, the Biden Administration acted with existing tools. In July, the White House negotiated Voluntary Commitments with seven leading AI firms (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) on safety, security, and transparency measures for AI models – a stopgap governance measure. Then, on October 30, 2023, President Biden issued a sweeping Executive Order on AI – the first of its kind globally. This EO instituted new requirements: companies must notify and share test results with the government when training models above a certain compute threshold, federal agencies were tasked to set standards for watermarking AI-generated content, and a plethora of actions around privacy, job impact, and innovation were ordered across government. Essentially, it operationalized many ideas from the AI Bill of Rights and NSCAI within the executive branch. It also addressed national security, e.g., directing Commerce to curb exports of AI compute to foreign adversaries and DHS to manage risks of AI in critical infrastructure. On the Hill, while comprehensive AI legislation wasn’t passed in 2023, momentum was building; some narrow bills did advance (like the National AI Commission Act to create a new study commission, and aspects of generative AI in political ads being tackled in election legislation). Internationally, the U.S. participated in the UK’s Global AI Safety Summit (November 2023) and took leadership with the EU and G7 partners in launching the Code of Conduct for Advanced AI under the Hiroshima Process, aiming for voluntary global standards ahead of formal law. By the end of 2023, AI was a visible part of everyday discourse, and the U.S. had moved from planning and capacity-building to active governance and norm-setting, without losing sight of competitiveness (the EO also addressed attracting talent and promoting innovation, not just regulation).
- 2024: Implementation, Global Leadership, and Election Spotlight – In 2024, the focus turned to implementing 2023’s policies and shaping global governance, all under the watch of a presidential election in which AI itself became a subject. The Commerce Department’s Bureau of Industry and Security finalized updated export controls on AI chips (expanding the 2022 rules to cover newer GPUs, with other countries mirroring them), reinforcing hardware resilience. Federal agencies began complying with the Executive Order: NIST launched a pilot AI safety test center (as mandated), OMB issued guidelines for agencies to evaluate AI in procurement and use (building on the EO’s principles), and DHS stood up a “red team” unit to probe AI models for misuse potential. President Biden, at a March 2024 international conference, proposed a Global Advisory Body on AI under the UN framework, attempting to coordinate AI governance among nations – an idea that gained traction especially among G7 and G20 partners. Meanwhile, Congress continued bipartisan meetings on AI legislation, with draft bills circulating that would create a federal AI licensing regime for advanced models and establish liability for AI-caused harms. AI became a campaign topic: “deepfake” campaign ads appeared (the RNC released an AI-generated hypothetical ad against Biden in April), prompting several candidates of both parties to pledge not to use deepfakes and increasing calls for an election deepfake ban (California had passed one at the state level). The FEC by mid-2024 was considering rules on AI in political ads. The campaign also elevated discussions of automation’s impact on jobs, with candidates pressured to articulate plans for managing AI-driven changes in the workforce. On the innovation front, U.S. companies rolled out even more powerful models (rumors of OpenAI’s GPT-5 training, Google’s Gemini model combining language and vision, etc.), keeping the U.S. at the cutting edge as competitors emerged (a notable open-source model from the UAE’s Technology Innovation Institute made waves, reflecting the globalization of capability – which U.S. strategists noted as a reason to double down on talent and research funding). The OECD’s AI Policy Observatory, heavily influenced by U.S. data and case studies, continued to expand in 2024, showing the U.S. as a reference model for others. Summits like the “Summit for Democracy” included sessions on AI governance, led by the U.S., linking AI ethics to democratic values globally. By late 2024, the U.S. national AI effort was firing on all cylinders: industry still leading in tech, government guiding with a heavier yet collaborative hand, and societal discourse actively shaping both opportunities and guardrails for AI’s future.
- 2025: Consolidation and Strategic Self-Sufficiency – By mid-2025, the United States’ AI strategy emphasized consolidating gains and pursuing self-sufficiency amid global competition. Early in 2025, the National AI Advisory Committee delivered its Year 2 report, recommending the establishment of a dedicated AI Safety and Standards Agency to continually oversee high-risk AI – a proposal under consideration by the administration. The change of administration after the January inauguration shifted emphasis toward deregulation and innovation: on January 23, 2025, the new administration signed an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence,” rolling back several earlier guidelines (see the year-by-year listing below). Regardless, the core investment strategy persists: the CHIPS Act fabs are coming online (TSMC Arizona producing 4nm chips by 2025, Intel’s new fabs starting trial runs), moving the U.S. toward a secure hardware supply. The U.S. is also pushing the frontier: a national-lab-led project on quantum-enabled AI launched in 2025 with multi-agency funding to explore AI algorithms on quantum computers, reflecting foresight into post-Moore’s Law technologies. Globally, the U.S. convened the first-ever Global Summit on Generative AI in June 2025, inviting not just allies but also China (which attended warily) to discuss norms for responsible development and incident sharing – a diplomatic win that kept the U.S. at the center of rule-making. On the domestic front, early 2025 brought some legislative action: Congress, noting public pressure, passed a bipartisan bill banning AI-generated deceptive content in political ads (with criminal penalties), narrowly in time for the 2026 midterms. Implementation of federal AI use saw tangible results: a GAO audit in April found that 85% of federal agencies now have an inventory of AI applications and 68% have conducted at least one algorithmic impact assessment, up from near zero in 2020 – indicating the institutionalization of thoughtful AI deployment. Economically, AI contributions were evident as inflation remained moderate partly due to AI-driven productivity gains in logistics and manufacturing, a point touted in administration economic reports. However, concerns remain: tech layoffs in late 2024, attributed to generative AI automating certain white-collar tasks, led to renewed calls for retraining programs – which the Department of Labor scaled up with a $500 million AI Resilience Workforce Fund in early 2025, supported by some of the Big Tech firms as an initiative to maintain public goodwill. Thus, in 2025 the U.S. AI strategy is characterized by bolstering resilience (secure chips, diversified talent), fine-tuning governance (from voluntary to more mandatory where needed), and doubling down on innovation to ensure the U.S. stays not just a leader but the leader in the transformative AI era, shaping it in accordance with American interests and ideals.
Strategy Evolution
2025. Policy reset & compute alliances
White House replaces the 2023 AI Executive Order with “Removing Barriers to American Leadership in Artificial Intelligence”; the National AI Research Resource pilot continues; U.S. deepens AI alignment with G7 and Quad partners while tightening chip export controls to constrain adversaries.
2024. Risk frameworks & AI diplomacy
NIST’s AI Risk Management Framework goes global; U.S. co-leads Hiroshima AI Process on generative AI standards; NSF expands AI Institutes to 27 across 40 states; voluntary safety pledges signed by leading U.S. labs.
2023. Generative AI governance & commercial dominance
ChatGPT, Bard, and Claude reshape public perception; OSTP’s Blueprint for an AI Bill of Rights gains traction; executive roundtables accelerate commitments to watermarking, red-teaming, and open safety R&D.
2022. CHIPS Act & ethical guardrails
CHIPS and Science Act unlocks $280 bn for tech R&D; AI Bill of Rights introduces five principles; White House task force proposes National AI Research Resource to democratize compute access.
2021. Institutionalization & whole-of-nation strategy
National AI Initiative Office opens at OSTP; AI Advisory Committee convenes; DOD elevates JAIC into Chief Digital and AI Office (CDAO) to accelerate military integration and data readiness.
2020. Strategic alignment & regulatory scaffolding
OMB releases principles for AI regulation—transparency, fairness, non-discrimination; Congress passes the National AI Initiative Act with bipartisan support, formalizing national coordination.
2019. American AI Initiative & JAIC scaling
Executive Order 13859 lays out five pillars for AI leadership; DOD funds early AI programs including Project Maven and predictive maintenance; NIST tasked with leading global AI standards work.
2018. Defense-first pivot
Pentagon issues first AI Strategy; Joint AI Center (JAIC) is established; DARPA launches $2 bn “AI Next” campaign targeting explainability, resilience, and adversarial robustness.
2016. Foundations & foresight
OSTP releases first National AI R&D Strategic Plan; “Preparing for the Future of AI” report frames AI as general-purpose and cross-sectoral, urging ethical foresight and workforce adaptation.
Pre-2015. Basic research & quiet incubation
DARPA and NSF bankroll foundational work in NLP, robotics, and vision; AI powers early intelligence systems post-9/11; strategy remains fragmented but rooted in Cold War R&D traditions.
2025: Executive Order on Removing Barriers to American AI Leadership
- Event: On January 23, 2025, President Donald Trump signs an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence.” The directive aims to reinforce the United States’ position as a global leader in AI by promoting innovation free from ideological bias. It revokes previous AI policies deemed obstructive to progress and mandates the development of an action plan within 180 days to enhance AI’s role in promoting human flourishing, economic competitiveness, and national security. It also instructs relevant agencies to review and amend existing directives to align with the new policy objectives.
- Document/Link: Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” (2025).
2024: AI Risk Management and National Security Enhancement
- Event: In 2024, the U.S. government focuses on enhancing AI governance and addressing the risks posed by advanced AI systems. The federal government emphasizes public-private partnerships to strengthen AI safety and security in critical infrastructure, defense, and financial systems. AI standards for ethical use and transparency become more robust, with continued efforts to ensure AI applications align with U.S. democratic values.
- Document/Link: Pending release.
2023: Executive Order on Safe, Secure, and Trustworthy AI
- Event: In October 2023, the White House issues a sweeping Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI. Building on the 2022 Blueprint for an AI Bill of Rights, it emphasizes principles such as privacy, fairness, and transparency, directing agencies to ensure that AI is developed in ways that protect civil liberties and do not exacerbate societal biases.
- Document/Link: Executive Order on Safe, Secure, and Trustworthy AI (2023).
2022: National AI Research Resource (NAIRR) Roadmap
- Event: The congressionally chartered NAIRR Task Force advances its implementation plan for a shared platform designed to democratize access to AI resources (compute, data, and tools) for researchers across academic institutions, industry, and government agencies, particularly in underserved regions and sectors. The final roadmap was delivered in January 2023, and a pilot followed in 2024.
- Document/Link: NAIRR Roadmap (2022).
2021: National Artificial Intelligence Initiative Act
- Event: The National Artificial Intelligence Initiative Act becomes law in January 2021. This landmark legislation establishes a coordinated federal strategy for AI development across government agencies, research institutions, and private companies. The initiative focuses on advancing U.S. leadership in AI, promoting AI education and workforce development, and ensuring ethical and responsible AI use.
- Document/Link: National AI Initiative Act (2021).
2020: Executive Order on Promoting the Use of Trustworthy AI in Government
- Event: The Trump administration issues an Executive Order on Promoting the Use of Trustworthy AI in Government, which mandates that federal agencies adopt AI systems that are ethical, accountable, and aligned with American values. This order highlights the government's commitment to AI innovation while ensuring that public trust is maintained.
- Document/Link: Executive Order on Trustworthy AI (2020).
2019: American AI Initiative
- Event: President Trump signs the American AI Initiative through an executive order. The initiative sets out a strategy for federal agencies to prioritize AI research and development, ethical AI use, workforce development, and international collaboration. The initiative is the first comprehensive national strategy on AI, and it includes a call to action for federal agencies to improve access to AI resources and data for researchers.
- Document/Link: American AI Initiative (2019).
2018: Department of Defense AI Strategy
- Event: The U.S. Department of Defense (DoD) releases its first AI Strategy, recognizing AI as a critical tool for future military capabilities. The strategy focuses on accelerating the adoption of AI within defense systems, ensuring the ethical use of AI in military operations, and collaborating with allied nations to develop AI for defense applications.
- Document/Link: Summary of the 2018 Department of Defense Artificial Intelligence Strategy – “Harnessing AI to Advance Our Security and Prosperity” (2018).
2017: Project Maven Accelerates Defense AI
- Event: The Department of Defense establishes the Algorithmic Warfare Cross-Functional Team (Project Maven), its first large-scale effort to deploy AI in military operations, using machine learning to analyze drone surveillance footage. The project signals AI’s growing role in national security and foreshadows later debates over the ethics of military AI.
- Document/Link: Algorithmic Warfare Cross-Functional Team memorandum (2017).
2016: Preparing for the Future of Artificial Intelligence
- Event: The White House releases Preparing for the Future of Artificial Intelligence, a comprehensive report outlining AI’s potential benefits and challenges for the U.S. economy, national security, and society, with recommendations for government action in AI research, education, and policy. Together with the companion National AI R&D Strategic Plan, it provides a vision for AI development in sectors such as manufacturing, healthcare, and national security, with a focus on workforce readiness and ethical AI use.
- Document/Link: Preparing for the Future of Artificial Intelligence (2016).
2015: Broadening AI Research and Development
- Event: The U.S. government increases its focus on AI research, with a particular emphasis on funding AI initiatives through agencies such as the National Science Foundation (NSF) and DARPA. Several AI research programs are expanded, and public-private partnerships are encouraged to accelerate AI innovation. AI applications in healthcare, defense, and education begin to take root in government priorities.
- Document/Link: NSF AI Research Initiatives (2015).
2014: DARPA’s AI and Machine Learning Investment Surge
- Event: The Defense Advanced Research Projects Agency (DARPA) significantly increases its investments in AI and machine learning, focusing on next-generation technologies for defense applications, including natural language processing, autonomous systems, and AI for cybersecurity. This groundwork laid the foundation for the agency’s later, larger campaigns (such as the 2018 “AI Next” campaign) and for sustained U.S. strength in AI research.
- Document/Link: DARPA AI research programs (2014).
2023: National Cybersecurity Strategy and AI Security
In March 2023, the Biden administration released its National Cybersecurity Strategy, which emphasized strong cybersecurity measures for the digital systems – increasingly including AI – used in critical infrastructure sectors such as energy, healthcare, and transportation. The strategy aimed to ensure that AI systems remain safe from cyber threats as they become more deeply integrated into the fabric of U.S. national infrastructure.
- Event: National Cybersecurity Strategy release.
- Date: March 2023
- Details: The strategy promoted collaboration between government agencies, private companies, and international partners to strengthen the security of software and AI systems in critical industries.
- Link: National Cybersecurity Strategy
2022: Blueprint for an AI Bill of Rights
In October 2022, the White House’s Office of Science and Technology Policy (OSTP) introduced the Blueprint for an AI Bill of Rights. This non-binding framework was designed to protect citizens from the potential risks posed by AI technologies, particularly around issues of privacy, bias, and fairness. It outlined five key principles to ensure that AI systems operate in a way that respects human rights.
- Event: Release of the Blueprint for an AI Bill of Rights.
- Date: October 2022
- Details: The blueprint established protections against biased decision-making, a right to privacy and transparency, and recommendations for the responsible use of AI systems.
- Link: Blueprint for an AI Bill of Rights
2021: NSCAI Final Report on AI Competitiveness
The National Security Commission on Artificial Intelligence (NSCAI) delivered its final report to Congress in March 2021, warning that the United States was not yet prepared to compete in the AI era. It recommended major increases in federal AI R&D, the development of trustworthy AI systems, and greater diversity and inclusion in the AI workforce, reaffirming AI’s central role in national security and economic competition.
- Event: NSCAI Final Report delivered.
- Date: March 2021
- Details: The report called for building ethical, trustworthy AI systems and emphasized equitable participation in AI research and development alongside large-scale investment.
- Link: NSCAI Final Report (2021)
2020: National AI Initiative Act of 2020
In December 2020, Congress passed the National AI Initiative Act as part of the FY2021 NDAA; it became law on January 1, 2021. This legislation established a formal framework for AI development in the U.S., promoting coordination between federal agencies, supporting AI research, and focusing on AI ethics and standards. The Act also encouraged international collaboration on AI governance.
- Event: National AI Initiative Act signed into law.
- Date: December 2020
- Details: The Act called for a federal strategy to ensure the U.S. remains a leader in AI, fostering R&D, workforce development, and standards.
- Link: National AI Initiative Act of 2020
2019: Executive Order on Maintaining American Leadership in AI
In February 2019, President Donald Trump issued an Executive Order on Maintaining American Leadership in Artificial Intelligence, marking the launch of the American AI Initiative. This was a pivotal moment in the U.S. AI strategy, directing federal agencies to increase investments in AI research and development, support AI education, and foster public-private collaboration.
- Event: Executive Order on AI leadership.
- Date: February 11, 2019
- Details: The Executive Order outlined the key pillars for maintaining U.S. leadership in AI, including R&D, governance, and workforce development.
- Link: Executive Order on AI (2019)
2018: Establishment of the Joint Artificial Intelligence Center (JAIC)
In June 2018, the U.S. Department of Defense created the Joint Artificial Intelligence Center (JAIC) to accelerate AI integration into military systems. The center was tasked with managing AI initiatives across the DoD, highlighting the growing importance of AI in national security and defense modernization.
- Event: Establishment of the Joint AI Center.
- Date: June 2018
- Details: The JAIC aimed to enhance U.S. defense capabilities through AI technologies, focusing on applications such as autonomous systems and cyber defense.
- Link: Joint AI Center Announcement
2017: AI and the National Security Strategy
In December 2017, the Trump administration’s National Security Strategy explicitly recognized AI as a critical component of national defense. This marked a shift toward integrating AI into U.S. military and national security policy, signaling the technology’s growing importance for defense and intelligence operations.
- Event: AI included in the National Security Strategy.
- Date: December 2017
- Details: The National Security Strategy highlighted AI’s potential to modernize U.S. defense capabilities and maintain strategic competitiveness.
- Link: 2017 National Security Strategy
2016: National AI Research and Development Strategic Plan
In October 2016, the U.S. government published the National Artificial Intelligence Research and Development Strategic Plan. This was the first comprehensive federal effort to coordinate AI research across government agencies, with a focus on areas like human-AI collaboration, long-term AI investments, and ethical AI development. The plan served as a roadmap for future U.S. AI initiatives.
- Event: Release of the National AI R&D Strategic Plan.
- Date: October 2016
- Details: The strategic plan outlined seven key priorities for AI development, including advancing human-AI collaboration and ethical AI research.
- Link: National AI R&D Strategic Plan (2016)
2013: Initiation of AI Discussions in Federal R&D
As early as 2013, the U.S. government began emphasizing the potential of AI in its broader science and technology agendas. The National Science Foundation (NSF) began prioritizing AI-related research projects, laying the foundation for future AI developments.
- Event: AI research highlighted by NSF.
- Date: 2013
- Details: The NSF started directing increased funding towards AI-related initiatives, acknowledging the importance of AI in maintaining U.S. global competitiveness.
- Link: NSF AI Research Focus