Last June, the US Department of Defense awarded OpenAI a $200 million contract to put generative artificial intelligence (AI) to work for the US military. The San Francisco-based company will "develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains," according to the Defense Department's posting of awarded contracts. The program is the first partnership under the startup's initiative to put its AI to work in governments, according to OpenAI. The company plans to show how cutting-edge AI can vastly improve administrative operations, such as how service members get health care, as well as cyber defenses, according to a blog post. The startup says that all military use of its AI will be consistent with OpenAI's usage guidelines, which are determined by OpenAI itself.
The Pentagon had previously explored the AI software for research purposes, but a procurement document points to the first purchase by a combatant command whose mission is one of killing. Less than a year after OpenAI quietly signaled it wanted to do business with the Pentagon, that document, obtained by The Intercept,[1] shows U.S. Africa Command, or AFRICOM, believes access to OpenAI's technology is "essential" for its mission. The September 30, 2024, document lays out AFRICOM's rationale for buying cloud computing services directly from Microsoft as part of its $9 billion Joint Warfighting Cloud Capability contract, rather than seeking another provider on the open market. "The USAFRICOM operates in a dynamic and evolving environment where IT plays a critical role in achieving mission objectives," the document reads, including "its vital mission in support of our African Mission Partners [and] USAFRICOM joint exercises."
The document, labeled Controlled Unclassified Information, is marked FEDCON, indicating it is not to be distributed beyond government employees or contractors. It shows that AFRICOM's request was approved by the Defense Information Systems Agency. While the price of the purchase is redacted, the approval document notes its value is less than $15 million. Like the rest of the Department of Defense, AFRICOM — which oversees the Pentagon's operations across Africa, including local military cooperation with U.S. allies there — has an increasing appetite for cloud computing. The Defense Department already purchases[2] cloud computing access from Microsoft via the Joint Warfighting Cloud Capability project. This new document reflects AFRICOM's desire to bypass contracting red tape and immediately buy Microsoft Azure cloud services, including OpenAI software, without considering other vendors. AFRICOM states that the "ability to support advanced AI/ML workloads is crucial. This includes services for search, natural language processing, [machine learning], and unified analytics for data processing." And according to AFRICOM, Microsoft's Azure cloud platform, which includes a suite of tools provided by OpenAI, is the only cloud provider capable of meeting its needs.
Microsoft began selling OpenAI’s GPT-4 large language model to defense customers in June 2023. Following the revelation early this year that OpenAI had reversed its position on military work, the company announced a cybersecurity collaboration with DARPA in January and said its tools would be used for an unspecified veteran suicide prevention initiative. In April, Microsoft pitched the Pentagon on using DALL-E, OpenAI’s image generation tool, for command and control software. But the AFRICOM document marks the first confirmed purchase of OpenAI’s products by a U.S. combatant command whose mission is one of killing. OpenAI’s stated corporate mission remains “to ensure that artificial general intelligence benefits all of humanity.”
The document states that “OpenAI tools” are among the “unique features” offered by Microsoft that are “essential to ensure the cloud services provided align with USAFRICOM’s mission and operational needs. … Without access to Microsoft’s integrated suite of AI tools and services, USAFRICOM would face significant challenges in analyzing and extracting actionable insights from vast amounts of data. … This could lead to delays in decision-making, compromised situational awareness, and decreased agility in responding to dynamic and evolving threats across the African continent.” Defense and intelligence agencies around the world have expressed keen interest in using large language models to sift through troves of intelligence or to rapidly transcribe and analyze interrogation audio.
Microsoft invested $10 billion in OpenAI last year and now exercises a great deal of influence over the company, in addition to reselling its technology. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit. An OpenAI spokesperson told The Intercept, “OpenAI does not have a partnership with US Africa Command” and referred questions to Microsoft. Microsoft did not immediately respond to a request for comment, nor did a spokesperson for AFRICOM. “It is extremely alarming that they’re explicit in OpenAI tool use for ‘unified analytics for data processing’ to align with USAFRICOM’s mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI. “Especially in stating that they believe these tools enhance efficiency, accuracy, and scalability, when in fact it has been demonstrated that these tools are highly inaccurate and consistently fabricate outputs. These claims show a concerning lack of awareness by those procuring for these technologies of the high risks these tools pose in mission-critical environments.” While the AFRICOM document contains little detail about how exactly the command might use OpenAI tools, its regular implication in African coups, civilian killings, torture, and covert warfare would seem incompatible with OpenAI’s professed national security framework. Last year, AFRICOM chief Gen. Michael Langley told the House Armed Services Committee that his command shares “core values” with Col. Mamady Doumbouya, an AFRICOM trainee who overthrew the government of Guinea and declared himself its leader in 2021.
Although U.S. military activity in Africa receives relatively little attention compared to U.S. Central Command, which oversees American forces in the Middle East, AFRICOM’s presence is both significant and the subject of frequent controversy. Despite claims of a “light footprint” on the continent, The Intercept reported in 2020 on a formerly secret AFRICOM map showing “a network of 29 U.S. military bases that stretch from one side of Africa to another.” Since its establishment in 2007, much of AFRICOM’s work has entailed training and advising African troops, conducting low-profile Special Operations missions, and operating drone bases to counter militant groups in the Sahel, Lake Chad Basin, and the Horn of Africa, in efforts to bring security and stability to the continent. The results have been dismal. Across all of Africa, the State Department counted a total of just nine terrorist attacks in 2002 and 2003, the first years of U.S. counterterrorism assistance on the continent. According to the Africa Center for Strategic Studies, a Pentagon research institution, the annual number of attacks by militant Islamist groups in Africa now tops 6,700 — a 74,344 percent increase.
As violence has spiraled, at least 15 officers who benefited from U.S. security assistance have been involved in 12 coups in West Africa and the greater Sahel during the war on terror, including in Niger last year. (At least five leaders of the July 2023 coup there received American assistance, according to a U.S. official.) U.S. allies have also been implicated in a raft of alleged human rights abuses. In 2017, The Intercept reported that a Cameroonian military base used by AFRICOM to stage surveillance drone flights had also been used to torture prisoners. Dealing with data has long been a challenge for AFRICOM, and its mismanagement of information has at times been lethal. Following a 2018 drone strike in Somalia, AFRICOM announced it had killed “five terrorists” and destroyed one vehicle, and that “no civilians were killed in this airstrike.” A secret U.S. military investigation, obtained by The Intercept via the Freedom of Information Act, showed that despite months of “target development,” the attack on a pickup truck killed at least three, and possibly five, civilians, including Luul Dahir Mohamed and her 4-year-old daughter, Mariam Shilow Muse.
The Pentagon’s embrace of OpenAI comes as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large language models, a type of software tool that can rapidly generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality, and they often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis, or at least the simulacrum of analysis, makes them a natural fit for the data-laden Defense Department.
While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as about the security risks of using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November 2023 address, Deputy Secretary of Defense Kathleen Hicks stated that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1,” though she cautioned that most current offerings “aren’t yet technically mature enough to comply with our ethical AI principles.” In 2023, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”
[1] Sam Biddle, “OpenAI Quietly Deletes Ban on Using ChatGPT for ‘Military and Warfare’,” The Intercept, January 12, 2024. https://theintercept.com/2024/01/12/open-ai-military-ban-chatgpt/
[2] “Militaries, Intelligence Agencies, and Law Enforcement Dominate U.S. and U.K. Government Purchasing from U.S. Tech Giants,” Tech Inquiry. https://techinquiry.org/docs/InternationalCloud.pdf