Militarisation of Artificial Intelligence: Implications of Civilian AI Firms Entering Defence Contracts

The rapid advancement of artificial intelligence (AI) has extended far beyond commercial applications, entering the domain of military affairs with increasing momentum. In recent years, major American and Israeli AI companies have secured substantial contracts with their respective defence establishments, raising profound ethical, legal, and security concerns. In early 2025, Alphabet, the parent company of Google, reversed its previous commitment not to employ AI for weapons or surveillance purposes, opening the door to the development of autonomous weapon systems and surveillance tools. This shift has sparked debate over whether civilian data, collected globally for commercial AI training, may now be repurposed for espionage or military operations, risking violations of privacy and human rights.

This analysis examines the historical roots of collaboration between technology companies and the U.S. Department of Defense, the various forms such partnerships have taken, the motivations driving them, and the implications for global security and civil liberties, as well as the risk of war crimes, particularly if AI applications increase civilian casualties rather than reduce them.

Military Roots of Silicon Valley

The origins of Silicon Valley are deeply intertwined with the U.S. Department of Defense. During the 1950s, in the midst of the Cold War, Washington invested heavily in emerging tech firms to counter Soviet scientific and technological advances. Santa Clara County, dubbed “Silicon Valley” in 1971, played a critical role in developing cutting-edge military innovations, including radar, the internet, intercontinental ballistic missiles, reconnaissance satellites, and microelectronics.

Advanced fighter aircraft such as the F-16 would not have been operational without transistors, integrated circuits, and microprocessors designed in Silicon Valley. These components enabled real-time data processing, seamless communications, and precision-guided munitions. By the Reagan era, the county was reaping nearly $5 billion annually in defence contracts, ranking third nationally among Pentagon contractors. By the end of the Cold War, nine of its largest firms reported over $11 billion in defence-related revenues, with AMD emerging as one of the Pentagon’s top semiconductor suppliers.

Partnerships in Multiple Forms

Agencies such as the Federal Bureau of Investigation (FBI), the Federal Bureau of Prisons, U.S. Immigration and Customs Enforcement (ICE), the Department of Defense (DoD), and the Drug Enforcement Administration (DEA) maintain thousands of contracts with Amazon, Dell, Facebook, Google, HP, and IBM. Since 2016, Microsoft alone has signed more than 5,000 subcontracts with the Department of Defense. Amazon and Google follow, with around 350 and 250 contracts respectively.

One illustrative example of cooperation between defence agencies and technology firms is the role of the Defense Advanced Research Projects Agency (DARPA), alongside other U.S. intelligence agencies, in funding the research of Stanford graduate students Sergey Brin and Larry Page, who went on to establish Google in 1998. This example highlights how Washington directly financed the creation of Silicon Valley companies.

At the time, the CIA and NSA hoped that the country’s leading computer scientists could use unclassified information and user data, combining them with what would become the internet, to launch profitable commercial ventures that also served the needs of both the business and intelligence communities. This initiative, known as the Massive Digital Data Systems (MDDS) project, aimed to lay the foundation for a comprehensive mass-surveillance system bridging the public and private sectors, enabling new ways to track individuals and groups online.

The MDDS project was presented to dozens of leading computer scientists at universities including Stanford, Caltech, MIT, Carnegie Mellon, and Harvard, through a “white paper” detailing what the CIA, NSA, DARPA, and related agencies hoped to achieve. Funding and management were to be channeled primarily through non-classified scientific bodies such as the National Science Foundation (NSF), creating the potential to expand the project into the private sector if it succeeded. Against this backdrop, accusations later emerged that Google had effectively created a “backdoor” for U.S. intelligence agencies to access its systems and user data, especially after Edward Snowden’s 2013 revelations that the NSA had breached Google’s user databases.

A second form of cooperation emerged through the direct creation of tech companies by intelligence agencies, later spun off into the Silicon Valley ecosystem. For example, the CIA established its own venture capital arm in Silicon Valley, In-Q-Tel, which funded a nominally private company, Keyhole Corporation. Keyhole developed the mapping technology that became Google Earth, enabling users to zoom down to street level with layered data on roads, bridges, nuclear sites, schools, businesses, and more. In-Q-Tel later sold Keyhole to Google, which launched Google Earth after the acquisition. More recently, Google negotiated an agreement with GeoEye Corporation for exclusive access to imagery from its $502 million satellite, co-funded by the National Geospatial-Intelligence Agency (NGA) and the National Reconnaissance Office (NRO). These arrangements highlight the depth of the ties between U.S. defence and intelligence agencies on one side and technology firms on the other.

A third dimension of the relationship has been joint investment. In one case, In-Q-Tel and Google Ventures, Google’s investment arm, each invested just under $10 million in Recorded Future, a company whose technology monitors the internet in real time to predict future events. Google leveraged this capability to improve its data-gathering and indexing for advertising and consumer use, while intelligence agencies used it to enhance open-source intelligence (OSINT). This marked the first publicly known instance of Google and an intelligence-linked entity investing in the same company at the same time.

Beyond providing military-support systems, technology companies have also entered direct military partnerships. Meta, the parent company of Facebook, partnered with U.S. defence startup Anduril to develop advanced augmented-reality and AI systems as part of a $22 billion Pentagon program. The initiative aims to enhance the combat and tactical capabilities of U.S. soldiers through wearable smart devices. Meta is responsible for advanced AI-driven software, while Anduril designs the hardware, including extended-reality helmets and goggles that enable soldiers to detect drones at long range, identify concealed targets in complex battlefields, and operate advanced autonomous weapons systems.

In December 2024, OpenAI also entered a partnership with Anduril to integrate its AI technology into the company’s counter-drone defence systems, its most significant step into the defence sector to date. Anduril will rely on OpenAI’s systems to boost its ability to detect “aerial threats” from drones, which have become central to modern warfare. At the same time, OpenAI intends to use Anduril’s data to further train its AI software for these defence applications.

Motives for Growing Partnerships

One of the key drivers behind the United States’ expanding reliance on technology companies specializing in artificial intelligence is its increasing focus on developing a variety of weapons systems that incorporate AI. A prime example is the F-35 fifth-generation stealth fighter. While the integration of electronics, sensors, and onboard communication systems into modern fighter jets is not new, the F-35 was designed with a single-seat cockpit. This means that the role traditionally assigned to a co-pilot is instead carried out by AI algorithms, which take on several tasks usually performed by a human operator.

Among these tasks is the fusion of disparate streams of intelligence from multiple sensors and other sources to create comprehensive situational awareness for the pilot. This ability of software algorithms to integrate data across multiple channels is known as sensor fusion.
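The statistical idea at the heart of sensor fusion can be illustrated with a minimal sketch: independent noisy estimates of the same quantity are combined by inverse-variance weighting, so that the more precise sensor contributes more to the fused result. The sensor names and noise figures below are illustrative assumptions, not details of any operational fusion engine.

    import numpy as np

    def fuse(estimates, variances):
        """Combine independent estimates of one quantity by inverse-variance weighting."""
        w = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
        return fused, 1.0 / np.sum(w)  # fused value and its (reduced) variance

    # Hypothetical example: two sensors report the same target bearing in degrees.
    bearing, var = fuse(estimates=[42.0, 40.5],  # radar reading, infrared reading
                        variances=[4.0, 1.0])    # radar assumed noisier
    print(f"fused bearing = {bearing:.2f} deg, variance = {var:.2f}")  # 40.80 deg, 0.80

Real fusion engines handle many heterogeneous sources, time alignment, and tracking filters such as the Kalman filter, but they rest on the same principle: weighting each input by its reliability.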

AI has already been successfully tested as a co-pilot on the U-2, one of the oldest aircraft in the U.S. Air Force. In this experiment, the AI handled tactical navigation and the operation of sensors, while the human pilot was able to focus on flying the aircraft, authorizing weapons deployment, approving adjustments to flight plans, and communicating with other personnel. In effect, one of the oldest aircraft in the fleet became one of the first to employ AI, illustrating that AI can be retrofitted onto even legacy platforms.

AI is also being employed to create new forms of human–machine teaming. For example, the F-35 can be accompanied by an autonomous drone known as a “loyal wingman”, such as the XQ-58A Valkyrie. Here, AI enables the F-35 to control the drone, which extends the jet’s sensor reach and combat engagement range. The drone’s missions may include scouting ahead to detect hostile radar, probing and confusing enemy air defences, drawing enemy fire away from the fighter, scanning the skies for threats, relaying targeting data to manned aircraft, conducting intelligence, surveillance, and reconnaissance (ISR) missions, and even carrying additional weapons. The Valkyrie itself is equipped with AI technologies.

This illustrates why defence companies worldwide are eager to forge partnerships with technology firms, particularly those in the AI sector. These companies possess advanced technologies that can help develop the systems modern militaries increasingly rely on, especially as unmanned weapons platforms grow more central to warfare.

At the same time, it is becoming evident that a new arms race is underway among the world’s major powers, one that now extends to the integration of AI into weapons systems. For instance, Russia is developing its fifth-generation fighter jets to operate with AI and to be networked with unmanned aerial vehicles. In 2018, a source in Russia’s aerospace industry confirmed that the Su-57 would be fitted with AI software that includes an automated command system and targeting mechanisms, effectively embedding an AI “co-pilot” to support the pilot during routine flights as well as in combat scenarios. Russia also plans to pair the Su-57 with the S-70 Okhotnik unmanned combat drone, envisioned as a wingman designed to enhance the capabilities of manned aircraft.

In this intensifying competition, the United States seeks to secure the lead through its broad partnerships with major technology firms.

Ethical and Human Rights Concerns

There are growing fears that the expanding cooperation between AI companies and security or defence agencies could have far-reaching implications for human rights. These include the erosion of personal privacy, the potential for war crimes if AI is involved in lethal decision-making on the battlefield, and the possibility that AI firms could influence electoral processes. These concerns can be outlined as follows:

1. Violations of Individual Privacy

As noted earlier, there was a preliminary agreement between the U.S. government and Google to develop mechanisms enabling the monitoring of online interactions for both security and commercial purposes.

China is often accused of employing AI to conduct large-scale surveillance, combining facial recognition, social-media monitoring, and camera networks to track dissidents and government critics in real time. A functioning infrastructure already exists, capable of integrating and analyzing vast amounts of data for state agencies.

Western democracies have also faced similar accusations. Reports revealed that the U.S. Department of Homeland Security monitors social media under contracts with private firms, which advertise their ability to scan millions of posts and use AI to generate summaries for government clients. The Department has acknowledged using such digital tools to analyze applicants for visas or permanent residency, searching for indicators of “extremist speech” or “antisemitic activity.” This has raised questions about how such terms are defined and whether open criticism of certain states could be misclassified as “terrorist sympathies.”

Despite widespread public awareness of U.S. mass-surveillance practices, no decisive legal reforms have curtailed them. Many Western thinkers now argue that the perceived divide between democracies and their rivals regarding surveillance is an illusion: under the banner of national security, Western governments also continue to impose broad restrictions on civil liberties.

2. Involvement in War Crimes

U.S. technology companies have entered into joint projects with the Israeli government to provide cloud computing and AI applications. One such project is “Project Nimbus”, which the Israeli government describes as a multi-year flagship initiative, the first of its kind within Israel’s public sector. It is led by the Accountant General of the Ministry of Finance through the Government Procurement Administration, in collaboration with the Israel National Digital Agency, the Israel National Cyber Directorate, the Ministry of Defence, the Israel Defense Forces (IDF), and other governmental partners. In other words, the project has an explicitly military dimension.

Despite this, Amazon and Google, the project’s two main contractors, claimed it was a purely civilian undertaking. However, an investigation by the U.S. technology magazine Wired revealed that the IDF had been a central actor in Project Nimbus from the very beginning, shaping its design and becoming one of its primary users.

As part of this initiative, Google provides AI and machine-learning technologies that enhance the Israeli military’s capabilities in facial recognition, automated image classification, and even sentiment analysis of photos, speech, and writing. These tools significantly increase the army’s ability to conduct strict surveillance of Palestinians.

Even before the controversial Nimbus contract was signed, Google was aware it could not control how Israel and its military would use advanced cloud technologies, according to a leaked internal report obtained by The Intercept. The report indicated not only that Google would be unable to monitor or prevent Israel from using its software to harm Palestinians, but also that the contract could obligate the company to obstruct foreign criminal investigations into Israel’s use of Google technologies, a stipulation unprecedented in Google’s other government contracts. The question of legal liability has become especially pressing as Israel enters the third year of what is widely described in Western media outlets as a genocide in Gaza, with shareholders pressuring Google to investigate whether its technologies are enabling human-rights violations.

There is mounting evidence that Israel has deployed AI tools developed by Microsoft, Google, and Meta in its military operations against Gaza. For instance, Israel integrated AI into facial recognition systems to identify partially disfigured or injured individuals and to select targets for airstrikes. It also developed an Arabic-language large language model to power a chatbot capable of analyzing text messages, social-media posts, and other data.

Reservists working at companies such as Google, Microsoft, and Meta also contributed to these initiatives in collaboration with Israel’s elite intelligence Unit 8200, in what became known as “the Studio”, an innovation hub for developing AI projects.

Reports further suggested that Microsoft AI models, as well as OpenAI’s GPT-4, were used within an Israeli military program to select bombing targets during the war on Gaza. The use of these tools at times resulted in misidentifications and wrongful detentions, as well as civilian casualties, according to Israeli and U.S. officials. This raised profound ethical questions about the role of such technologies and the accountability of the companies that produce them.

Investigative reporting in Western outlets, drawing on sources inside Unit 8200, revealed that the unit had used Microsoft’s Azure platform to store recordings of millions of phone calls made daily in Gaza and the West Bank. Intelligence derived from this vast cloud-hosted archive was then used to identify targets for airstrikes in Gaza. Notably, the Microsoft employees in Israel who managed the relationship with Unit 8200 were themselves veterans or reservists of the unit. According to leaked files obtained by The Guardian, Microsoft, including senior executives, was aware that Unit 8200 intended to transfer massive amounts of sensitive intelligence data to Azure. By July 2025, the volume of data stored on Azure cloud servers exceeded 11,000 terabytes, the equivalent of 200 million hours of audio recordings.
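The two reported figures are at least mutually consistent: a back-of-the-envelope check, assuming decimal terabytes, works out to roughly 55 MB per hour of audio, or about 122 kbit/s, which is in the range of standard compressed audio.

    # Back-of-the-envelope consistency check of the two reported figures
    terabytes = 11_000             # reported data volume on Azure by July 2025
    hours = 200_000_000            # reported equivalent hours of audio

    bytes_total = terabytes * 1e12              # assuming decimal terabytes
    mb_per_hour = bytes_total / hours / 1e6     # ~55 MB per hour of audio
    kbit_per_s = mb_per_hour * 8e6 / 3600 / 1e3 # ~122 kbit/s bitrate

    print(f"{mb_per_hour:.0f} MB/hour, about {kbit_per_s:.0f} kbit/s")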

Through Azure and similar AI applications, Israel reportedly monitored calls made by Ibrahim Biari, a senior Hamas commander in northern Gaza. His voice was identified using AI, which also provided an estimated location during the calls. On October 31, 2023, Israel launched airstrikes that killed Biari but also caused the deaths of more than 125 civilians. Critics have cited this incident as evidence of Israel’s disregard for civilian casualties and a violation of the principle of proportionality, which requires limiting harm to civilian populations and infrastructure during military operations.

AI was not only misused to carry out indiscriminate killings of Palestinians but was also deployed to suppress documentation of war crimes against them. During the ongoing war in Gaza, for example, Meta systematically discriminated against Palestinians by removing their content, even when it documented war crimes committed against them, and by reducing the visibility of their posts and accounts through a practice known as “shadow-banning.” At the same time, Meta violated its own policies by permitting content that incited violence and promoted hate speech against Palestinians on its platforms.

3. Influence on Voter Behaviour

Much has been written about how foreign actors use social media to influence voter preferences. However, the 2020 U.S. elections revealed another dimension: the involvement of domestic technology companies in promoting certain candidates.

A notable example was the decision by Google and Facebook to restrict the circulation of news surrounding the laptop scandal involving Hunter Biden, son of then-candidate Joe Biden, in the run-up to the 2020 elections. The move was aimed at preventing the story from undermining Biden’s electoral chances. Biden’s campaign even falsely claimed that the incident was part of a Russian disinformation effort, despite the authenticity of the laptop and its files, which implicated Hunter in corruption-related cases.

Mark Zuckerberg, the founder of Facebook, later admitted that he had received a request from the FBI to limit the spread of the story on his platform on the grounds that it was part of a disinformation campaign. This request came shortly before The New York Post broke the story in October 2020. Notably, the FBI had been in possession of Hunter’s laptop since December 2019, meaning the bureau knowingly dismissed a true story and pressured Facebook, which complied. This has fueled allegations that Facebook and Google are sympathetic to the Democratic Party.

Research has also shown that Google sent get-out-the-vote alerts to liberal users during the Georgia Senate runoffs that followed the 2020 election, likely boosting turnout among Democrats. Facebook has employed similar targeted voting alerts since at least 2008. One study found that such alerts increased total voter turnout in the 2010 midterm elections by roughly 340,000 ballots.

These findings suggest that technology companies now have the power to shift millions of votes toward candidates favored by Silicon Valley, a form of unlawful interference in the electoral process.

Conclusion

This analysis highlights how the United States is actively developing a range of AI-powered weapons systems and seeks to secure a leading position through extensive partnerships with the private sector in Silicon Valley, which has supported U.S. military projects since the Cold War. These efforts gain added importance in light of similar programs pursued by other major powers, most notably Russia, in developing AI-driven combat systems. At the same time, because AI companies now possess the ability to store and analyze vast amounts of data and share it with the U.S. government, there are growing concerns that such practices may undermine civil rights and individual freedoms. Moreover, the integration of AI into combat operations risks enabling serious human-rights violations, as illustrated by the ongoing Israeli war in Gaza.

By: Dr Shadi Abdulwahab Mansour
(Associate Professor at the National Defence College)
