Responsibly deploying and mythbusting AI: REAIM Summit
“AI has the potential to revolutionise the way wars are fought and won. But it also poses significant risks and challenges. To prevent abuses, we need to establish international guidelines. It is crucial that we take action, now.” This is how Wopke Hoekstra, the Dutch Minister of Foreign Affairs, opened the REAIM summit on AI in the defence domain… only to reveal that the introduction of his opening statement had been written by ChatGPT. AI is everywhere, changing the way we work, including in the military. The Netherlands (and Thales!) believes that the responsible development, deployment and use of artificial intelligence (AI) in the military domain must be given a higher priority on the international agenda. We sat down with two of our innovation experts to look back at the summit, the use of AI in the military domain, and our own expertise.
REAIM (Responsible AI in the Military Domain) took place at the World Forum in The Hague, and provided a platform for global stakeholders (governments, industry, civil society, academia and think tanks) to forge a common understanding of the opportunities, dilemmas and vulnerabilities associated with military AI. All workshops, sessions and talks were built around one of three major themes:
1. Mythbusting AI: breaking down the characteristics of AI – what do we need to know about the technical aspects of AI to understand how it can be applied responsibly in a military context?
2. Responsible deployment and use of AI: what do military applications of AI mean in practice? What are the main benefits and vulnerabilities?
3. Governance frameworks: which frameworks exist to ensure AI is applied responsibly in the military domain? What additional instruments and tools could strengthen governance frameworks, and how can stakeholders contribute?
Terminator frame
Thales Research and Technology director Dr. Michel Varkevisser and senior AI researcher Dr. Gregor Pavlin were quick to point out a pet peeve shared by almost every AI professional present: the ‘Terminator frame’ so often used by the press. Presenting AI as killer robots wears thin, since it covers only one extreme end of the AI spectrum. This particular frame hinders the general understanding of AI and of how you actually encounter it in day-to-day life: it is in your search engines, TikTok filters and self-driving cars, not in a sentient being. Gregor explains: “The disproportionate focus on fully autonomous fighting machines may divert the necessary attention and resources away from the proper treatment of decision support systems, where AI can have a huge impact on critical decisions with potentially detrimental effects.”
Many defence experts who presented at the summit emphasised the need to also look at the use of AI in decision support systems, as it is more relevant (and beneficial) in the foreseeable future. Gregor, also a panellist in a REAIM session on Decision-Support Systems and Human-Machine Interaction, continues: “In fact, AI is already used at different levels of decision-making processes in different military domains, proving its value in the automation of back-office tasks, logistics, military intelligence processing, and much more. This requires our focus, and multidisciplinary approaches that properly address the operational, technical, ethical and legal challenges. At Thales, we are proactively contributing methods and tools that facilitate responsible and efficient life-cycles for trustworthy and technically sound AI-based decision support solutions.”
Multi-disciplinary approach to AI
The most frequently heard call to action throughout the REAIM summit was for a multi-disciplinary, by-design approach to the development of AI (support) systems, one that considers the entire life-cycle of such systems. Michel agrees: “By-design means that all stakeholders (customer, end-users, legal officers, ethics officers, designers/developers) discuss what problem(s) will be solved in what context. Both a good problem definition and contextual embedding increase the likelihood that AI systems are developed and used for the reason(s) they are intended. On top of that, once an AI (support) system is properly trained and rolled out, the use and re-use of these systems needs to be monitored and, if need be, the systems may have to be re-evaluated on their performance.”
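To make that life-cycle monitoring step concrete, here is a minimal sketch in Python of what a post-deployment performance check could look like. The metric, window size and threshold below are illustrative assumptions on our part, not a description of any Thales system:

```python
from dataclasses import dataclass

@dataclass
class MonitoringPolicy:
    baseline_accuracy: float  # accuracy measured during validation
    max_drop: float           # tolerated absolute drop before review
    window_size: int          # number of recent labelled cases to check

def needs_reevaluation(recent_correct: list, policy: MonitoringPolicy) -> bool:
    """Flag a deployed model for re-evaluation when its accuracy on the
    most recent labelled cases falls too far below the validated baseline."""
    window = recent_correct[-policy.window_size:]
    if len(window) < policy.window_size:
        return False  # not enough evidence yet to judge drift
    live_accuracy = sum(window) / len(window)
    return (policy.baseline_accuracy - live_accuracy) > policy.max_drop

# Example: a model validated at 92% accuracy is flagged for review
# if live accuracy over the last 200 cases drops more than 5 points.
policy = MonitoringPolicy(baseline_accuracy=0.92, max_drop=0.05, window_size=200)
```

In a real life-cycle such a check would run continuously, and a flagged system would go back to the stakeholders named above rather than being silently retrained.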
We asked Gregor and Michel what, apart from REAIM, they feel should be discussed and researched more often when it comes to AI in defence. Both highlighted the need for input from all possible stakeholders, and for proper protocols to examine AI inventions from every angle. “Nearly all AI systems in the near future will have the operator in the loop and will only provide support in either the sense-making or the decision-making process. It was emphasised throughout the summit that in the foreseeable future, human operators will still make the final decisions, in particular during combat. Additionally, we obviously also need to think about the accountability and governability of fully autonomous systems. ‘Responsible AI in the Military Domain’ is an emerging field that is extremely important if we are to exploit the power of AI in the proper way.”
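As a rough sketch of what such operator-in-the-loop support can look like in code, consider the routing pattern below. The confidence threshold and the example labels are hypothetical; the point is only that the system recommends, and the operator decides:

```python
from typing import NamedTuple

class Recommendation(NamedTuple):
    label: str         # the system's suggested classification
    confidence: float  # model confidence in [0, 1]

def route(rec: Recommendation, threshold: float = 0.9) -> str:
    """Decision support, not decision making: low-confidence outputs are
    escalated, and even high-confidence ones remain suggestions that the
    human operator confirms or overrides."""
    if rec.confidence < threshold:
        return f"ESCALATE to operator: low confidence ({rec.confidence:.2f})"
    return f"SUGGEST '{rec.label}' to operator for confirmation"

print(route(Recommendation("fishing vessel", 0.97)))
print(route(Recommendation("fast inshore craft", 0.55)))
```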
Naturalistic decision-making process
Thales is fully aware of the legitimate questions being raised, and is continuously working to develop artificial intelligence that is responsible: capable of helping the right decisions to be made, while securing means of control and responsibility. Our focus lies on the TrUE AI approach, which stands for Transparent AI, where users can see the data used to arrive at a conclusion; Understandable AI, which can explain and justify its results; and, finally, Ethical AI, which follows objective standards, protocols, laws and human rights.
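A toy example can illustrate what “understandable” means in practice. For a simple linear scoring model, each feature’s contribution to the output can be reported next to the result, so a user sees why a conclusion was reached. The feature names and weights below are invented purely for illustration:

```python
import numpy as np

# Hypothetical linear risk model: the score is a weighted sum of
# features, so each feature's contribution can be shown explicitly.
feature_names = ["speed", "distance_to_lane", "transponder_gap"]
weights = np.array([0.4, -0.8, 1.2])
observation = np.array([0.9, 0.1, 0.7])

contributions = weights * observation
score = contributions.sum()

print(f"risk score: {score:+.2f}")
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda item: -abs(item[1])):
    print(f"  {name:>16}: {c:+.2f}")
```

Real models are rarely this simple, which is exactly why explanation techniques are an active part of responsible-AI research.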
At Thales Research and Technology we have recently completed a number of interesting AI projects for military purposes, and we continue to work on innovative uses of AI. Michel: “AI solutions have been delivered for mission planning, classification of objects, heterogeneous fusion of information, and cyber threat detection. Current innovations are being developed for track analysis, such as prediction and anomaly detection. In the near future, these types of developments will take on a more complex character, when vehicles or, for instance, ships from different countries need to collaborate by sharing information and by making use of each other’s assets. In this context, various unmanned systems will also be brought into the loop for surveillance and defence purposes.”
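To give a flavour of the kind of track anomaly detection Michel mentions, here is a small, self-contained sketch using an off-the-shelf unsupervised detector. The per-track features and parameters are our own simplification, not the delivered solution:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-track features: mean speed (knots), speed variance,
# mean turn rate (deg/s). Real systems would use far richer features.
rng = np.random.default_rng(0)
normal_tracks = np.column_stack([
    rng.normal(12, 2, 500),     # typical transit speeds
    rng.normal(1, 0.3, 500),    # low speed variance
    rng.normal(0.5, 0.2, 500),  # gentle course changes
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_tracks)

# A track with erratic speed and sharp turns is scored as anomalous.
suspicious = np.array([[35.0, 9.0, 4.5]])
print(detector.predict(suspicious))  # -1 = anomaly, +1 = normal
```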
Gregor adds that Thales’ Research and Technology teams focus increasingly on ethical and trustworthy AI-based solutions, and recognise that these require proper life-cycles that facilitate the development, onboarding and operation of such technologies. “The key to such solutions is to understand the naturalistic decision-making process in which AI is embedded to automate certain steps, while the majority of tasks are carried out by people. With this comprehensive approach, combining principles from human factors, AI and sound software development, Thales is in a better position to supply customers with powerful AI-based decision support functions that adhere to strict ethical, safety and legal principles.”