Military AI: Harnessing the Benefits, Mitigating the Risks

In Maryland, USA, delegates from signatory countries of the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy met at a closed-door conference. The meeting, which brought together military officers and civilian officials, convened to discuss every military application of artificial intelligence – from unmanned drones to backend systems.

Following last year’s A.I. Safety Summit in the United Kingdom, Australia joined a list of 53 countries in signing the US-led declaration. Despite its non-binding status, the declaration seeks at least to build a consensus among states on best practices for A.I. in military applications.

The potential uses of A.I. in military settings are extensive. Navigation systems could employ A.I. to make operations less reliant on human input. Drone swarms could leverage swarm intelligence, with the swarm pursuing an overarching objective while individual drones act independently to ‘creatively’ achieve it. A.I. could analyse large volumes of data almost instantly, helping to inform strategic decision-making in high-stress situations. Generative A.I. can devise new training situations and scenarios. A.I. can also make target recognition more accurate in combat environments and heighten situational awareness of potential threats.

The use of A.I. in these contexts is not hypothetical. The Russo-Ukrainian war has already seen extensive use of A.I. on the battlefield. Indeed, a crucial aspect of this war has been the rapid evolution of combat technologies. For instance, Ukraine has been open about its cutting-edge geospatial intelligence techniques in target and object recognition. Computers combine and analyse publicly available information, such as social media content, with ground-level photos and footage from numerous drones to geolocate and identify Russian soldiers, their movements, and the locations of Russian weapons, systems, and more.

The US government wants this conference to be the first of an ongoing series. It is hoped that cross-pollination between nations on A.I. usage will help shape policies and form international norms on responsible use in war settings. Indeed, at the forefront of the discussions are commitments to international humanitarian law and to appropriate testing and evaluation of systems before use.

A.I. is evolving at a fast pace, and the ‘worst’ of A.I. in use on the battlefield may still be unknown. A.I. is not human. At best, A.I. might operate according to the cultural values of its host country (which may be very different from the ethical standards held by Australia); at worst, such systems might be a quasi-free-for-all, programmed to do whatever it takes to achieve their objective.

Source: U.S. Department of State, Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, Endorsing States
28 March 2024 | Authored by Connor Andreatidis, Consultant, Precision Public Affairs
