She calls it 'mind-blowing' how quickly military applications of artificial intelligence have progressed in recent years. When Guangyu Qiao-Franco began studying international efforts to regulate autonomous weapons in 2019, this type of warfare was mainly something for the future. 'Now it is being deployed on the battlefields in Ukraine and Gaza. Its use has already become more or less normal.'
A swarm of drones
What is the current state of affairs? A robot that decides on life and death, as in the Terminator films, does not yet exist. However, there are weapon systems that can already do a great deal on their own, explains the Chinese researcher. 'There is a munition that can track down a target, circle around it and decide when to strike. There is also a system that can select a target at lightning speed based on many different sources of information. Drones can then strike in a matter of seconds. Furthermore, systems using drone swarms have already been developed, with one "mothership" drone controlling the others. No humans are involved in this process.'
Divergent views
Since 2016, the United Nations has been discussing and negotiating the regulation of AI-controlled weapon systems. Qiao-Franco is following this closely, as well as similar meetings outside the UN, and sees that it is very difficult to bring countries closer together. 'Everyone wants to maintain human control over all weapons, but what exactly does that mean? For countries such as the United States, Japan, the United Kingdom and Canada, this means that AI is allowed to press the button to attack. According to them, control can reside in the weapon development and testing process. For other countries, human control means that only commanders are allowed to press the button. They cannot agree on the definition.'
There are many conflicting approaches to autonomous weapons, observes Qiao-Franco. Some countries and organisations consider their use unethical, arguing that machines should not be able to decide on human lives. Others consider them more ethical than human warfare because machines are more precise. Opinions also differ on whether AI weapons can prevent wars. Qiao-Franco: 'Some say that AI can prevent conflicts and also reduce casualties. Others fear it could encourage recklessness, lowering the threshold for attacks. With AI, operators are psychologically more removed from the act of killing, which may reduce accountability and moral hesitation in the decision to use lethal force.' The actual effects are unknown. 'We still know too little to be able to say anything about that.'
She notes that the different views can be roughly divided between the Global North and the Global South. Countries in the Global North generally support the further development of AI-controlled weapon systems. 'Their rationale emphasizes cost-efficiency, saving human lives, enhanced decision-making processes and the ability to conduct operations more quickly.' Countries in the Global South usually advocate a complete ban on the use and development of autonomous weapons. 'Wars are more common in these countries, making them potential testing grounds for new technologies. These countries themselves are not yet that far along in the development of these weapons. That renders them very vulnerable to the strategic advantages such systems confer.'
Cautiously positive
Her own view of her research topic has shifted over time, particularly as a result of the expert dialogues she attended and analysed. 'I have become cautiously positive, because it is very clear that no country wants machines to make life-and-death decisions. Last year, Biden and Xi also decided that AI should not be allowed to control nuclear weapons. All countries are working together on legislation, although the resulting agreements are likely to establish only minimal requirements.'
Qiao-Franco has also come to realise that competitive dynamics among states influence their approach to regulation. 'Overly stringent legislation can be counterproductive, as it may limit the exploration of AI's potential.' Ultimately, effective regulation depends on trust, she argues. 'When you establish basic principles, you can never be sure that other countries will follow suit. You don't know what they are keeping secret about their military applications, nor do you know how they will act in conflict situations. Without mutual trust, regulatory frameworks risk being meaningless.'
She is particularly concerned about the political tensions between the United States and China. 'We are not seeing a slowdown. Rather, the developments suggest a de facto arms race in advanced military technologies.'