Do AI decision support systems “assist” humans in military decision-making about the use of force?

(Dr. Anna Nadibaidze is a researcher for the European Research Council-funded AutoNorms and AutoPractices projects at the Center for War Studies, University of Southern Denmark.)

The United Nations Summit of the Future, which took place in New York in September 2024, led to the adoption of the Pact for the Future. Among many other topics, the pact includes a commitment to “advance with urgency” discussions on lethal autonomous weapons systems within the relevant UN Group of Governmental Experts (GGE). Autonomous weapon systems (AWS) – defined as weapon systems that, once activated, select and engage targets without further human intervention – have been discussed at the United Nations for more than ten years.

In recent years, the GGE has not made significant progress in agreeing on possible new tools or measures to address the challenges associated with weapon systems that integrate AI and autonomous technologies. Additionally, the prevailing focus on AWS has overshadowed numerous other applications of AI technologies in the military sector.

Armed forces around the world are developing AI technologies not only as part of weapon systems such as drones or loitering munitions, but are also integrating them into targeting decision-making processes in the form of AI-based decision support systems (AI DSS). These systems are used to identify patterns in large amounts of data, predict scenarios, or recommend possible courses of action to military commanders. The complex and multidimensional process of military targeting can integrate AI systems at several stages that directly or indirectly influence the use of force.

Reports of the use of AI DSS come from war zones around the world, including the Israel-Hamas war (2023–) and the Russia-Ukraine war (2022–). As a recent report from the AutoNorms project at the University of Southern Denmark highlights, these developments raise questions about the respective roles of humans and machines in decision-making about the use of force and require further consideration.

The use of AI DSS is not a formal topic in the UN debate on AWS because, in theory, humans still make targeting decisions, with AI-based systems merely “supporting” them in the complex and multi-layered decision-making process about the use of force.

However, the mere presence of a human does not guarantee a high level of human involvement and context-appropriate judgment in targeting decisions. The dynamics of human-machine interaction – the respective roles of humans and AI DSS in decision-making about the use of force – raise legal, ethical, humanitarian and security concerns that lead to the question: do AI systems “support” human decision-making in a positive sense?

Possible uses of AI in military decision-making

The topic of AI in military decision-making is not new. The U.S. Department of Defense has long been interested in using automated and AI technologies to make the intelligence analysis that informs targeting decisions more efficient. A prominent example of this trend is the Algorithmic Warfare Cross-Functional Team, also known as Project Maven. Launched in 2017, Project Maven aimed to analyze large amounts of video footage collected by U.S. drones using machine learning algorithms. Now operated by the National Geospatial-Intelligence Agency (NGA), it integrates various types of data into the Maven Smart System interface, which highlights potential targets based on Maven’s data analysis and extrapolation.

The Ukrainian Armed Forces employ multiple AI DSS in their battlefield decision-making to defend Ukraine against Russia’s illegal full-scale invasion. Some domestically developed systems, such as Kropyva or GIS Arta, have been nicknamed an “Uber for artillery” because they integrate data from radars, drone footage and other sources to calculate and share the positions of Russian forces with Ukrainian artillery units in real time. Other types of AI-based decision-making software are provided by foreign companies such as Palantir Technologies, whose software is “responsible for most of the targeting in Ukraine,” according to its CEO Alex Karp.

While the Israel Defense Forces (IDF) had deployed AI decision-making systems before Hamas’ attacks of October 7, 2023, the publication of investigative reports on the IDF’s use of several AI-based systems in Gaza attracted significant media and academic attention. The use of AI DSS such as Gospel and Lavender – often intertwined in a complex network of systems and sensors – has enabled the IDF to generate an unprecedented number of targets at unprecedented speed. While Gospel and Lavender may have been intended as repositories or databases for intelligence analysts, in practice they appear to have been used as target “validation tools” as part of Israeli military operations in Gaza. These trends raise concerns about the role of humans in vetting and verifying targets amid an already escalating humanitarian crisis.

The development of AI DSS appears to be a global and long-term trend, as militaries (in collaboration with some private companies) plan to integrate AI to process the vast amounts of data collected via surveillance, satellite imagery and other sources more quickly and at greater scale. NGA Director Frank Whitworth said at a conference organized by Palantir that the Maven Smart System is a “decision-making tool, not a decision-maker.” This “assistance” to humans in decision-making is, in some battlefield contexts, associated with efficiency and gaining a strategic advantage over adversaries. However, the perceived “need for speed” in military decision-making should be examined alongside various concerns regarding human interaction with AI DSS.

Exercising agency in human-machine interactions

Both humans and AI-based systems bring their own biases and assumptions to decision-making. It is important to consider how these interact as part of a socio-technical system, rather than as a dichotomy between the human and the supposedly cold, calculating machine. This includes considering the variety of cognitive, data and system biases involved; questions of trust, whether too much (automation bias) or too little (algorithm aversion); and the institutional and policy contexts surrounding the development and use of AI DSS, such as targeting doctrines and corresponding rules of engagement.

All of these aspects risk affecting the way people exercise their agency, understood as the ability to understand and reasonably foresee the effects of the AI system in the context in which decisions are made, and to act upon those decisions in a way that ensures responsibility and accountability.

The exercise of human agency in decisions about the use of force is important for compliance with international humanitarian law, which requires that conduct in warfare be attributable to humans. It is also needed to maintain humanity – not just humans – in the complex targeting decision-making process, especially in urban warfare contexts with a high risk of harming civilians. As Charli Carpenter writes, the issue is not AI technologies replacing humans in decision-making, but rather “roboticizing human decision-making, essentially turning human operators into ‘killer robots’ themselves.”

While the AI DSS being developed around the world are not inherently unlawful, the ways in which they are reportedly being used suggest that humans risk not having the opportunity to exercise the necessary level of agency. For example, reports that the IDF used AI DSS in ways that prioritized the quantity of targets, or in which human operators appeared to approve targets within seconds, indicate that in many contexts of use human decision-making is not being “supported” by AI systems in a positive sense.

The debate at the United Nations remains focused on AWS, but maintaining and strengthening the human role is a concern across all military applications of AI, even in cases where humans formally make the decision. The people involved in using AI DSS, including analysts and operators, need the time and space necessary to reflect on the effects of an AI system in a specific context. Practical ways to ensure this exercise of human agency in human-machine interaction should therefore be at the center of the global discussion about the “responsible” use of AI in the military sector.
