The Race Is on to Shape AI Governance and Security

Author: Andrea Little Limbago, PhD, SVP, Applied AI  

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) was released one year ago. The recent Memorandum on AI builds upon the executive order and focuses on the national security implications of AI, including innovation and leadership within a secure AI framework. At Interos, we take AI very seriously: from building a secure AI framework to launching new AI products, AI is the centerpiece of everything we do. 

Artificial Intelligence: The Stakes Could Not Be Higher 

As the Memorandum details, the timing is critical: the world is undergoing a massive paradigm shift, with technological transitions accompanied by global geopolitical realignments. 

In the race to integrate AI, organizations must understand that, alongside the enormous innovation potential, security and geopolitical considerations cannot be an afterthought. 

This Memorandum aims to catalyze change toward a Secure AI framework that supports innovation and leadership, while protecting against adversarial misuse and harm. The stakes could not be higher. 

What’s at Stake: Innovation, Economic Growth and Democracy or Authoritarianism and Suppression 

Amidst the ongoing AI hype cycle and trillions in investments, it may be easy to forget that AI – like most technologies – is dual-use in nature.  

That is, AI can foster innovation and significant breakthroughs while also enabling more nefarious aims. As the Memorandum articulates, AI is powering authoritarianism, including malicious cyber activity, censorship, and human rights violations. China is emerging as an ‘AI-tocracy’, using the technology to suppress dissent and entrench regime power. Russia’s notorious bot farms use AI to spread disinformation globally. Iran is similarly deploying AI for influence operations, as well as for domestic surveillance and human rights violations. 

But AI is also a tool to counter digital authoritarianism. Across the globe, AI is used to pursue democratic values, including empowering political communication, circumventing authoritarian censorship, and strengthening defenses against malicious cyber activity. These are just a few examples that underscore the national security imperative detailed in the Memorandum. 

The global leader in AI governance will play a critical role in tilting the balance of AI applications toward innovation, economic growth, and democracy, or toward authoritarianism and suppression. 

The AI First-Mover Advantage 

Strategic competition is front and center throughout the recent Memorandum. 

Technology does not exist in a vacuum; current geopolitical shifts and the spread of digital authoritarianism make it imperative for the United States to expand its edge in this era-defining technology. 

Implicit in the Memorandum is the recognition that the international order is at an inflection point; the future will not look like the past. 

In such moments, first-mover advantage is critical: countries that harness breakthrough general-purpose technologies gain hegemonic influence in shaping the global order to their advantage. 

While a technological edge in AI is essential, leadership in AI governance too often takes a backseat to it. Yet governance leadership is just as critical to gaining the first-mover advantage. 

Currently, the European Union’s (EU) AI Act is the first major initiative to introduce AI regulations and guardrails. China has also introduced several rules targeting AI, including rules on generative AI issued shortly after the release of ChatGPT, but it has yet to formulate a comprehensive AI regulation. 

While the US has non-binding AI governance guidance, such as EO 14110, a comprehensive federal AI regulation does not yet exist. To fill this void, 45 states introduced AI legislation in the 2024 legislative session, and 31 adopted resolutions or enacted legislation. 

Last week’s Memorandum clearly identifies the stakes at play and continues the drumbeat of AI guidance that includes the 2022 Blueprint for an AI Bill of Rights. 

The US private sector is moving ahead absent a federal framework, introducing AI governance policies at a faster pace than the public sector. The race is on to shape AI governance; the Memorandum outlines the national security implications of US leadership in this effort, and partnership across the public and private sectors is critical to solidifying this edge. 

Partnership and Collaboration: Protecting the AI Supply Chain 

The Memorandum details a whole-of-society approach toward AI. Specifically, the Memorandum contends, “If the United States Government does not act with responsible speed and in partnership with industry, civil society, and academia to make use of AI capabilities in service of the national security mission — and to ensure the safety, security, and trustworthiness of American AI innovation writ large — it risks losing ground to strategic competitors.” 

This partnership is critical. While the Memorandum aims to ‘catalyze change’ in how the US government addresses AI national security policy, a similar revolution is necessary in how industry, civil society, and academia approach AI.

Several critical components of the Memorandum directly impact the private sector, such as building and retaining top AI workforce talent, defending against foreign interference and cyber threats, and integrating secure AI in critical infrastructure. 

Interos similarly advocates for a Secure AI framework; supply chains and national security are intricately intertwined. This was made starkly clear by the attacks on Hezbollah’s communication devices, which marked an inflection point in modern warfare. 

According to Interos data, the average enterprise in the S&P 500 has 1,700 direct suppliers and 1.5 million relationships through its first three tiers of suppliers. Ninety-nine percent of those companies have ties to at-risk or restricted entities. While the Hezbollah device attacks did not involve a restricted company, technology companies on restricted lists represent a more probable pathway to hardware infiltration and warrant heightened vigilance, illustrating the widespread vulnerabilities that can lurk within an organization’s supply chain. 
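
To make the tier math concrete, here is a minimal sketch, under assumed data, of how exposure across the first three supplier tiers might be tallied: a breadth-first walk of a supplier relationship graph counts every unique supplier reached and flags any ties to a restricted-entity list. The graph, company names, and RESTRICTED set are hypothetical illustrations, not Interos data or methodology.

```python
from collections import deque

# Hypothetical supplier relationship graph (company -> its direct suppliers).
# Names, structure, and the restricted list are illustrative only.
SUPPLIER_GRAPH = {
    "enterprise": ["supplier_a", "supplier_b"],
    "supplier_a": ["supplier_c", "restricted_vendor_x"],
    "supplier_b": ["supplier_c"],
    "supplier_c": [],
    "restricted_vendor_x": [],
}
RESTRICTED = {"restricted_vendor_x"}

def tiered_exposure(root: str, max_tier: int = 3) -> dict:
    """Breadth-first walk of the supplier graph down to max_tier levels,
    counting unique suppliers reached and flagging restricted entities."""
    tier_of = {root: 0}
    flagged = set()
    queue = deque([(root, 0)])
    while queue:
        company, tier = queue.popleft()
        if tier == max_tier:
            continue  # stop expanding beyond the requested tier depth
        for supplier in SUPPLIER_GRAPH.get(company, []):
            if supplier in RESTRICTED:
                flagged.add(supplier)
            if supplier not in tier_of:
                tier_of[supplier] = tier + 1
                queue.append((supplier, tier + 1))
    del tier_of[root]  # exclude the enterprise itself from its supplier count
    return {"suppliers_within_tiers": len(tier_of), "restricted_ties": sorted(flagged)}

if __name__ == "__main__":
    print(tiered_exposure("enterprise"))
    # -> {'suppliers_within_tiers': 4, 'restricted_ties': ['restricted_vendor_x']}
```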

Interos works closely with our customers, supporting their AI governance frameworks and serving as a strategic partner in guiding AI governance decisions. Secure AI is front and center in our development decisions as well: we recognize that different forms of AI introduce different risks, and we take those risks into account to implement AI with security built in. 

From jailbreaking to data poisoning to algorithmic manipulation, just as traditional supply chains must be secured, so too must the AI supply chain be protected across its inputs, algorithms, and outputs. 
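
As an illustration of guarding that pipeline at its inputs and outputs, the sketch below wraps a model call with basic input screening and output validation. The patterns, thresholds, and function names (screen_input, validate_output, guarded_call) are assumptions for illustration only; they do not describe Interos’s implementation, and real defenses against jailbreaking and data poisoning require far more than simple pattern matching.

```python
import re

# Illustrative, assumed patterns only; production jailbreak and injection
# detection requires continuously updated, layered defenses.
SUSPECT_INPUT_PATTERNS = [
    r"ignore (all|any) previous instructions",
    r"disregard your (rules|guidelines)",
]

def screen_input(prompt: str) -> bool:
    """Return True if the prompt passes the basic input screen."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in SUSPECT_INPUT_PATTERNS)

def validate_output(response: str, max_len: int = 4000) -> bool:
    """Basic output check: reject empty or oversized responses before
    downstream systems consume them."""
    return 0 < len(response) <= max_len

def guarded_call(prompt: str, model_fn) -> str:
    """Wrap a model call (model_fn is any callable mapping prompt -> text)
    with input screening and output validation."""
    if not screen_input(prompt):
        raise ValueError("Prompt rejected by input screening")
    response = model_fn(prompt)
    if not validate_output(response):
        raise ValueError("Model output failed validation")
    return response

if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"  # stand-in for a real model call
    print(guarded_call("Summarize supplier risk for acme corp", echo_model))
```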

Innovation and security must go hand in hand to truly leverage the vast potential of AI, while protecting ourselves and our supply chains from the growing range of national security risks. 

Toward a Secure AI Framework 

AI is an era-defining technology. Authoritarian regimes and adversaries are adopting AI at a rapid pace, posing significant national security threats across military power, global influence, and technological advantage. US leadership is necessary to tip the AI balance toward scientific breakthroughs that support humanity, protect democracy, and empower innovation. 

In the race toward AI adoption, security must be at the forefront, not an afterthought.  

The world is changing fast; previous paradigms are ill-suited to ensuring the safety, security, and trustworthiness of our organizations and our supply chains. AI is both a means toward greater national security and a grave threat if we fail to prepare for its malicious use. 

Even without malicious intent, AI systems require greater protection. The latest Memorandum is another critical step toward advancing US leadership in AI, but more is needed.  

The public and private sectors alike must internalize the national security imperative at stake or risk ceding this once-in-a-generation technology to the competition.  

AI-Powered Supply Chain Risk Management  

As a global leader in AI-powered supply chain risk intelligence, Interos is leveraging the power of AI to revolutionize supply chain resilience at a time when global disruptions are at an all-time high. 

We recently launched our latest AI innovation, “Ask Interos,” which enables organizations to identify supplier threats in real time. It is our first step toward contextual AI. The launch comes at a crucial time, when organizations are inundated with data yet struggle to separate complex supply chain noise from actionable insights. 

Get in touch to see how we are using AI to secure supply chains in real time.