Munkh-Orgil Tuvdendarjaa
Deputy Director, Dean of the Institute for Defense Studies
Ulaanbaatar, Mongolia
Abstract: Artificial Intelligence (AI) is revolutionizing the domain of international peacekeeping by enabling more responsive, data-driven operations. This paper explores the technological capabilities AI offers, the ethical and legal challenges it raises, and the disparities in its implementation across regions. Drawing on recent practical examples such as MONUSCO’s use of AI in the Democratic Republic of the Congo and conflict zones such as Ukraine and Gaza, the analysis illustrates both the transformative potential and critical limitations of AI in maintaining peace and security.
Keywords: artificial intelligence, peacekeeping, United Nations
Introduction
The nature of contemporary conflict has evolved, requiring peacekeeping operations to adapt accordingly. Artificial Intelligence (AI), with its capacity for large-scale data analysis and predictive modeling, is emerging as a powerful enabler in these efforts. By facilitating real-time threat detection, early warning systems, and enhanced operational planning, AI offers a new paradigm in conflict prevention and resolution. This paper investigates the multifaceted implications of AI integration into peacekeeping missions, with a focus on its operational applications, ethical and legal considerations, and regional disparities. The 2021 Action for Peacekeeping “Plus” (A4P+) priorities, together with the Security Council and the General Assembly’s Special Committee on Peacekeeping Operations (C34), have acknowledged the need to better integrate new technologies to increase safety and security, improve situational awareness, enhance field support, and facilitate substantive mandate implementation.
The Final Report of the Expert Panel on Technology and Innovation in UN Peacekeeping (2015), the UN Secretary-General’s Strategy on New Technologies (2018), and the Strategy for the Digital Transformation of UN Peacekeeping (2021) offer valuable insight into the evolving approaches, priorities, and operational shifts within United Nations peacekeeping. These documents collectively illustrate how the UN is adapting to the rise of new technologies, particularly artificial intelligence (AI), and how these technologies are reshaping peace operations, presenting both opportunities and challenges.
To date, the United Nations has not adopted a unified, binding policy governing the use of AI in peacekeeping missions. However, a series of strategic frameworks and initiatives provides guidance for its emerging role. These include:
- The Secretary-General’s Roadmap for Digital Cooperation (2020): This roadmap promotes a human-centered approach to AI, especially in fragile and conflict-affected settings. It calls for the application of AI in ways that uphold international law, respect human rights, and foster global digital solidarity.
- The United Nations Innovation Network (UNIN): Through its ethical guidelines, UNIN advocates for the responsible development and deployment of AI technologies. Core principles include transparency, human oversight, accountability, and fairness, ensuring AI systems are aligned with the values and norms of the UN.
- The Department of Peace Operations (DPO): As part of its Digital Transformation Strategy, the DPO supports the testing and implementation of AI-driven tools in the field. A prominent example is the use of the SAGE platform in the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO), which aids in situational awareness and data analysis to enhance operational decision-making.
Despite the growing incorporation of AI tools in peacekeeping environments, the UN maintains a firm position against the development or deployment of fully autonomous lethal weapon systems. AI is currently restricted to non-lethal, supportive functions, such as data analysis, pattern recognition, predictive modeling, and logistical support.
Capabilities of AI in Peacekeeping
Artificial Intelligence (AI) technologies—encompassing machine learning, computer vision, natural language processing (NLP), and data fusion—have increasingly been integrated into peacekeeping and humanitarian operations. These technologies enable peacekeepers to analyze satellite imagery, open-source intelligence, and multisensory data to derive actionable insights and improve operational decision-making.
A critical challenge, however, lies in the growing disparity between the volume of insights generated and the capacity of human operators to interpret and apply them effectively. Automating the exploitation of these insights through AI is increasingly viewed as essential to closing this gap. Encouraged by positive feedback from contingents, some field missions plan to further enhance security functions by combining real-time threat detection with situational awareness systems across operational activities.
One prominent example is the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO), which employs the Situational Awareness Geospatial Enterprise (SAGE) platform. SAGE integrates geospatial data, social media feeds, and field sensor inputs to identify potential conflict zones and detect emerging threats. The system facilitates proactive deployments, targeted peacekeeper positioning, and more efficient delivery of humanitarian assistance. Drones, for instance, were the only feasible means of studying the inaccessible locations where the Hema victims were allegedly killed; with them, UNPOL was able to survey extensive areas, up to 12 hectares, in 15 minutes.
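SAGE’s internal architecture is not publicly documented, so the following is only a minimal sketch of the general data-fusion pattern described above: reports from sources of differing reliability are weighted and aggregated into per-area threat scores. All source names, reliability weights, and coordinates here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str        # e.g. "field_sensor", "social_media", "patrol"
    lat: float
    lon: float
    severity: float    # analyst-coded severity, 0..1

# Hypothetical per-source reliability weights; a real platform would
# calibrate these against verified incidents.
SOURCE_WEIGHT = {"field_sensor": 0.9, "patrol": 0.8, "social_media": 0.4}

def grid_cell(lat: float, lon: float, deg: float = 0.1) -> tuple:
    """Bucket coordinates into roughly 10 km grid cells."""
    return (round(lat / deg), round(lon / deg))

def threat_scores(reports: list[Report]) -> dict:
    """Aggregate reliability-weighted severities per grid cell."""
    scores: dict = {}
    for r in reports:
        cell = grid_cell(r.lat, r.lon)
        scores[cell] = scores.get(cell, 0.0) + SOURCE_WEIGHT.get(r.source, 0.2) * r.severity
    return scores

reports = [
    Report("field_sensor", -1.66, 29.22, 0.8),
    Report("social_media", -1.67, 29.23, 0.6),
    Report("patrol", 0.49, 29.47, 0.3),
]
# Rank cells by aggregated score to surface potential hotspots.
for cell, score in sorted(threat_scores(reports).items(), key=lambda kv: -kv[1]):
    print(cell, round(score, 2))
```

In practice, the weights would be learned from verified incidents and the coarse grid replaced by proper geospatial indexing, but the weighting-and-aggregation logic is the core of the fusion idea.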
Beyond UN-led missions, AI applications have emerged in active conflict zones through both military and humanitarian channels. In Ukraine, the government, supported by private sector partnerships, has implemented AI for satellite image analysis, drone navigation, and targeting assessments. Open-source intelligence platforms powered by AI are used to verify attacks on civilian infrastructure and track potential violations of international humanitarian law. These technologies have supported international humanitarian responses and accountability initiatives.
In the Gaza conflict, Israel deployed AI-powered target-generation systems such as “Lavender” and “The Gospel” (Habsora)[1] to automate the production of bombing and elimination targets. These systems, designed to increase operational efficiency, have raised significant ethical and legal concerns, particularly regarding proportionality, civilian protection, and transparency in targeting decisions. AI-powered facial recognition and decision-support tools have also been used to predict protest activities and identify individuals, further amplifying debates around surveillance, autonomy, and the boundaries of acceptable use in armed conflict.
In addition to security applications, AI technologies contribute significantly to logistics and operational support. Algorithms can optimize patrol routes, predict supply chain disruptions, and facilitate dynamic troop redeployment. In Syria, humanitarian organizations have employed AI-driven predictive models to anticipate displacement patterns and guide the allocation of humanitarian resources. These efforts have also served to test the performance of AI technologies in complex conflict environments, indirectly informing the development of AI-enabled weapons systems.
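Patrol-route optimization is a classic instance of the travelling salesman problem. As a hedged illustration, and not a description of any deployed UN system, the sketch below applies a greedy nearest-neighbor heuristic to hypothetical checkpoint coordinates; an operational planner would add road networks, threat levels, and fuel constraints.

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) waypoints, in km."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(base, waypoints):
    """Greedy heuristic: always visit the closest unvisited waypoint next."""
    route, remaining, current = [base], list(waypoints), base
    while remaining:
        nxt = min(remaining, key=lambda w: dist(current, w))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(base)  # return to base at the end of the patrol
    return route

base = (0.0, 0.0)                                   # hypothetical base camp
checkpoints = [(4.0, 3.0), (1.0, 7.0), (6.0, 1.0), (2.0, 2.0)]
route = nearest_neighbor_route(base, checkpoints)
total = sum(dist(a, b) for a, b in zip(route, route[1:]))
print(route, f"{total:.1f} km")
```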
The Tigray conflict, which lasted from November 2020 to November 2022, resulted in a devastating humanitarian toll, with estimated fatalities ranging from 162,000 to 600,000 due to direct violence, famine, and inadequate access to healthcare. During the conflict, AI was used to analyze satellite imagery and monitor land-use changes, offering valuable insight into humanitarian conditions on the ground. Social media also played a critical role in the conflict’s progression, as shown by data from a range of sources, including media reports, NGOs and international organizations, government reports and statements, social media and online platforms, academic literature, eyewitness accounts, and local sources.
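The specific analytics applied in Tigray are not documented in detail. A standard technique for monitoring land-use change from satellite imagery, however, is vegetation-index differencing; the minimal sketch below, run on simulated band data, flags pixels whose Normalized Difference Vegetation Index (NDVI) drops sharply between two passes.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def change_mask(before, after, threshold=0.2):
    """Flag pixels whose vegetation index dropped sharply between passes,
    a crude proxy for burning, destruction, or abandonment of farmland."""
    return (ndvi(*before) - ndvi(*after)) > threshold

# Simulated 64x64 band data standing in for two satellite passes.
rng = np.random.default_rng(0)
nir0, red0 = rng.uniform(0.4, 0.8, (64, 64)), rng.uniform(0.1, 0.3, (64, 64))
nir1, red1 = nir0 * 0.5, red0  # simulate widespread vegetation loss
mask = change_mask((nir0, red0), (nir1, red1))
print(f"{mask.mean():.0%} of pixels flagged as changed")
```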
However, the deployment and scalability of AI in peacekeeping operations are uneven due to significant digital divides. In Sub-Saharan Africa, limited digital infrastructure, insufficient connectivity, and a lack of technical capacity continue to hinder the widespread adoption of AI systems. Despite promising pilot initiatives, such as those in MONUSCO, the region still faces systemic barriers, including data scarcity and inadequate training resources.
Challenges
Despite its benefits, AI introduces a suite of ethical and legal dilemmas. Central among these is algorithmic bias, which can stem from non-representative training datasets. In peacekeeping contexts, such bias could misclassify groups as threats or fail to detect localized conflict dynamics, exacerbating tensions.
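One common way to surface such bias is to audit a classifier’s error rates across groups. The sketch below uses entirely hypothetical threat-classifier outputs; a large gap in false-positive rates between communities is the kind of signal that should trigger retraining on more representative data.

```python
# Hypothetical threat-classifier outputs, labeled by community of origin:
# (group, predicted_threat, actual_threat)
records = [
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 1, 0),
]

def false_positive_rate(rows):
    """Share of genuinely non-threatening cases flagged as threats."""
    negatives = [r for r in rows if r[2] == 0]
    return sum(r[1] for r in negatives) / len(negatives)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, f"FPR = {false_positive_rate(rows):.0%}")
# Here group B's false-positive rate is triple group A's: the model
# disproportionately misclassifies one community as a threat.
```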
The ineffective international response to the Syrian conflict also hampered efforts to universalize legal rules on the misuse of AI in conflict and on the protection of populations at risk of mass atrocities. Moreover, the opacity of many AI models undermines transparency and accountability. In situations where AI-generated insights inform critical operational decisions, the inability to audit or explain those decisions raises profound concerns.
From a legal standpoint, AI use in peacekeeping must adhere to international humanitarian law (IHL) and international human rights law (IHRL). Ensuring that AI-driven actions uphold principles such as proportionality, necessity, and distinction is essential. Yet, current regulatory frameworks remain underdeveloped, necessitating international cooperation to establish robust guidelines.
These concerns have come to the forefront in high-intensity conflict zones. In Ukraine, automated targeting systems have been scrutinized for potential violations of IHL. In Gaza, allegations of AI-assisted strikes lacking human oversight have raised alarms among human rights organizations. In Myanmar, AI tools used to monitor activist communications were reportedly leveraged by state actors to suppress dissent, illustrating the dangers of misuse.
Best practices identified include embedding human-in-the-loop protocols, ensuring diverse and context-sensitive training data, and incorporating ethical impact assessments during system development. The United Nations Innovation Network has proposed an ethical framework for AI use in peace operations, emphasizing transparency, fairness, and human rights protection.
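Neither the UN nor UNIN prescribes a concrete implementation of human-in-the-loop protocols. One minimal way to encode the principle, sketched below with illustrative names only, is a gate that refuses to act on any model recommendation without an explicit human decision, and logs that decision for later audit.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float  # model's self-reported confidence, 0..1

def execute(rec: Recommendation, approver=input) -> bool:
    """Human-in-the-loop gate: no recommendation is acted on
    without an explicit, logged human decision."""
    answer = approver(f"Approve '{rec.action}' "
                      f"(confidence {rec.confidence:.0%})? [y/N] ")
    approved = answer.strip().lower() == "y"
    # Minimal audit trail; a real system would write to durable storage.
    print(f"AUDIT: action={rec.action!r} approved={approved}")
    return approved

if __name__ == "__main__":
    execute(Recommendation("redeploy patrol to sector 4", 0.72))
```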
Troop-contributing countries (TCCs) face unique challenges in integrating AI into peacekeeping missions. Although some TCCs deploy technologically advanced tools such as drones, many peacekeepers are not trained to use them, and most TCCs lack the technological infrastructure, skilled personnel, and financial resources needed to develop and deploy AI tools effectively. This creates disparities in operational effectiveness and can contribute to asymmetric capabilities within joint missions.
Moreover, there are concerns related to training and interoperability. AI-enhanced peacekeeping operations require specialized knowledge in data analytics, cybersecurity, and digital ethics. TCCs may struggle to provide adequate training for their contingents, leading to uneven application of AI tools in the field. This also raises coordination issues when integrating forces from diverse technological backgrounds.
Security and data sovereignty concerns further complicate AI adoption. TCCs may be reluctant to share sensitive operational data with centralized UN systems or foreign technology providers, fearing espionage or misuse. This limits the scale and scope of AI systems that rely on comprehensive, shared datasets.
Political and institutional constraints also play a role. National policies regarding the use of AI, data protection laws, and civil-military relations can either support or hinder AI integration in peacekeeping contingents. In some cases, domestic skepticism about automation and surveillance impedes investment in emerging technologies. This persistent issue, worsened by complex and militarized missions, requires greater transparency on resource shortages, training gaps, and logistical challenges limiting effectiveness.
However, as of now, there is no publicly available information confirming that many United Nations missions have implemented artificial intelligence (AI) technologies in their operations.
Conclusion
Despite growing recognition of AI’s potential in enhancing United Nations peacekeeping operations, the development of a formal, unified policy framework remains fraught with several significant challenges. These challenges are both political and technical in nature and reflect the complexities of applying emerging technologies in highly sensitive and diverse operational environments.
- Divergent National Perspectives and Political Sensitivities. A major obstacle to establishing a comprehensive AI policy within the UN system is the lack of consensus among Member States regarding the acceptable scope and purpose of AI deployment. Nations differ considerably in their political priorities, ethical stances, and strategic interests concerning surveillance technologies, military applications of AI, and the degree of permissible autonomy in decision-making systems.
  - Surveillance and Privacy: While some states advocate for robust AI surveillance tools for enhancing situational awareness and early warning systems, others raise concerns about infringements on privacy, data protection, and potential misuse for political ends.
  - Militarization of AI: Countries with advanced military AI programs may push for more aggressive adoption of autonomous systems, while others, especially those in the Global South, may emphasize peace-oriented, humanitarian, or civilian-centric uses, fearing the escalation of AI-driven warfare.
  - Normative Disagreements: These differing views make it difficult to formulate a shared normative framework, especially concerning transparency, accountability, and control mechanisms.
- Legal Ambiguity in Relation to International Humanitarian Law (IHL). There remains considerable legal uncertainty regarding the integration of AI tools into peacekeeping mandates under international law, particularly in relation to IHL and International Human Rights Law (IHRL).
  - Compliance Questions: The deployment of AI systems—especially in functions like surveillance, targeting assistance, and decision support—raises concerns about how such systems can comply with core IHL principles, including distinction, proportionality, and military necessity.
  - Attribution of Responsibility: When AI systems are involved in operational decisions, it becomes legally and ethically ambiguous who holds accountability for unintended outcomes or rights violations—whether it is the software developer, the UN mission, the host state, or contributing Member States.
  - Precedent Gaps: As the use of AI in peacekeeping is relatively novel, there is a lack of legal precedent or jurisprudence to guide policy formulation, making risk-averse actors hesitant to endorse formal rules without clearer legal foundations.
- Risks of Misuse and Operational Vulnerabilities. AI tools deployed in peacekeeping missions may be vulnerable to misuse, manipulation, or repurposing, especially in environments characterized by weak governance, limited oversight, or active hostilities.
  - Weaponization Risks: In unstable contexts, there is a danger that AI-enabled systems (e.g., facial recognition, drone surveillance, or communication monitoring) could be co-opted or hijacked by hostile non-state actors or corrupt elements within host governments.
  - Data Exploitation: The collection and processing of large-scale operational data—particularly when unencrypted or poorly secured—could lead to data breaches or the weaponization of personal or community-level information, exacerbating tensions or triggering reprisals.
  - Trust Deficits: The potential misuse of AI in fragile settings undermines local trust in peacekeeping missions, potentially worsening relationships with affected populations and compromising mission legitimacy.
In sum, AI holds transformative potential for modern peacekeeping, offering tools that can improve operational foresight, resource allocation, and conflict mitigation. However, ethical pitfalls, legal ambiguities, implementation inequalities, and capacity challenges faced by troop-contributing countries must be proactively addressed. A concerted international effort is essential to ensure AI technologies are harnessed responsibly and equitably in the pursuit of global peace and security. Addressing these issues will require inclusive multilateral dialogue, adaptive regulatory frameworks, and robust accountability mechanisms that are sensitive to the diverse operational environments in which UN peacekeeping operates.
The most important concern is that AI technologies and increasingly autonomous tools in peacekeeping operations are inherently data-intensive, relying on the continuous collection, processing, and analysis of large and complex datasets. These datasets typically include satellite imagery, ground sensor inputs, drone footage, social media streams, and internal mission communications. The volume and velocity of this data surpass the capacity of conventional, localized infrastructure typically available in the field. As a result, cloud computing services have become essential to enabling real-time analytics, situational awareness, and decision support functions in modern peacekeeping operations.
However, many United Nations missions in Africa lack the digital infrastructure and cloud integration capacity required to fully leverage these technologies. Factors such as limited connectivity, weak data management systems, and constrained technical expertise pose significant barriers to the scalable use of AI and cloud-based surveillance platforms. Unlike missions with access to more advanced technological ecosystems—such as in the Middle East or Eastern Europe—African missions often operate in low-bandwidth environments where reliable internet access and secure data storage remain major challenges.
For example, while missions such as MONUSCO in the Democratic Republic of the Congo have piloted platforms like SAGE, their effectiveness has been limited by intermittent connectivity, insufficient local technical support, and dependence on external data processing partners. This digital divide not only constrains operational efficiency but also raises concerns about unequal technological deployment, potentially leading to inconsistencies in protection mandates and situational awareness across different regional contexts.
Based on the current state of affairs, it can be concluded that the institutionalization of artificial intelligence within United Nations peacekeeping operations is unlikely to be achieved in the immediate future. Persistent challenges—ranging from political divergence among Member States and legal uncertainties to infrastructural limitations in field missions—continue to hinder the development of a coherent and implementable AI framework. As such, while pilot initiatives demonstrate potential, comprehensive and standardized integration of AI into peacekeeping remains a long-term objective rather than an imminent reality.
[1] Abraham, Y., “‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza,” +972 Magazine, 30 November 2023.