
7 March 2024 | 9 minute read

Building a smarter smartbomb: The Government responds to the House of Lords AI in Weapon Systems Committee

The sci-fi fascination with AI is well established, long in the tooth and a lazy way of looking at the opportunities (and, moreover, the threats) presented by AI. Yet these once-fantastical ideas are rapidly becoming technological reality. That legislators, including the UK’s House of Lords, are looking seriously at the implications of such technologies is a welcome preparatory step. On 21 February 2024, the Government published its Response to a report by the House of Lords AI in Weapon Systems Committee, which advised caution in any integration of artificial intelligence into the country’s autonomous weapon systems (AWS).

AI-enabled AWS have the potential to revolutionise defence technology, and in its June 2022 Defence AI Strategy the Ministry of Defence committed to adopting such technology in a way that is compliant with International Humanitarian Law (IHL). Readers will recall that this had the strapline “Ambitious: safe and responsible use of AI”, and the strategy included a set of ethical principles, developed in partnership with the Centre for Data Ethics and Innovation, to guide how the military and defence sector would adopt AI. It also included the ambition to “generate an informed community…enshrining an understanding of AI and Data as a fundamental component of professional development at all levels…we will mandate that all senior leaders across Defence must have foundational and strategic understanding of AI”.

The House of Lords Committee’s report, amongst other recommendations, offers four key proposals to ensure the Government is able to meet its ambition of delivering these defence capabilities in a way that is “ambitious, safe and responsible”.

While these four proposals are sensible standards for AWS as part of defence strategy, perhaps the bigger concern is how commodity technologies (consumer drones, 3D printing, mobile phones, single-board computers etc.) together with ‘unlocked’ large language models or computer vision models could place ‘home-made’ AWS within reach of those outside traditional military chains of command. The use of consumer technology in recent conflicts around the world illustrates how effective such repurposed tech can be in conflict zones, or potentially in the hands of terrorist organisations. Separate controls and safety systems would be required to reduce the risks in this area.

1. Lead on international engagement on autonomous weapons

First, the Committee called for the Government to work towards an international consensus on the criteria necessary for AI-enabled AWS to be deemed compliant with IHL, and to lead efforts to develop “an effective international instrument”. The Committee commended the creation of the Bletchley Declaration of November 2023, and encouraged the Government to apply its principles of human-centric, trustworthy, and responsible design and deployment of AI to the military domain.

Throughout its Response, the Government defended the steps it has taken to broker and support international consensus on AI use in AWS. For example, the UK joined 51 states in endorsing a US-led Political Declaration requiring use of AI in defence to “accord with States’ obligations under international humanitarian law, including its fundamental principles”, and the MoD is “actively supporting” NATO's Data and Artificial Intelligence Review Board in its development of an AI certification standard.

2. Prohibition of AI in nuclear systems

Second, the Committee called on the Government to show international leadership in prohibiting AI use in nuclear command, control and communications, despite acknowledging that AI could augment the “detection capabilities of early warning systems” and support human analysts in warding off cyberattacks on nuclear systems. The Committee warned that the use of AI in nuclear systems could ignite an arms race and an escalation to nuclear responses during crises. Such a risk would only be heightened by the potential for miscommunication and misunderstanding resulting from the compressed decision-making times which AI-enabled AWS would deliver. Further, hostile actors could seek to interfere with AI tools or their training data, with catastrophic nuclear consequences. Anyone familiar with the plotlines of Stanley Kubrick’s 1964 dark satire ‘Dr. Strangelove’, starring Peter Sellers, or the 1983 cold-war techno-thriller ‘WarGames’, starring a young Matthew Broderick, will immediately recognise the risks. The fact remains, however, that drones and other advanced technologies are already being used, in theatre, as reported recently in Ukraine, Israel and Syria.

The Government’s Response fell short of a commitment to an absolute prohibition of AI in nuclear systems, promising only to maintain “human political control of our nuclear weapons” and to encourage other nuclear states to adopt the same approach. Depending on your interpretation, this still leaves room for Terminator 2’s “Skynet moment”, when a machine launches nuclear weapons without any human-originated order to do so.

3. Adoption of an operational definition of an autonomous weapon

Third, the Committee encouraged the Government to adopt an operational definition of an ‘autonomous weapon’, given that no term is universally accepted, in order to facilitate the creation of meaningful policy and more effective discussion in international fora. The Committee argued that the creation of a future-proofed definition is possible, despite the rapid pace of technological development in AI. The Committee proposed defining ‘fully’ and ‘partially’ AWS as follows:

  • ‘Fully’ autonomous weapon systems: Systems that, once activated, can identify, select, and engage targets with lethal force without further intervention by an operator.
  • ‘Partially’ autonomous weapon systems: Systems featuring varying degrees of decision-making autonomy in critical functions such as identification, classification, interception and engagement.

In its Response, the Government refused to adopt an official or operative definition of AWS, despite conceding that doing so would aid policy making. The Government argued the concerns of the Committee are already captured in international law, so a further definition would not result in greater compliance with IHL. Additionally, the Government stressed its “priority is to maximise our military capability in the face of growing threats” and cited fears that adoption of a definition would form the basis of a legal instrument restricting the use of certain types of AWS. In the Government’s view, such an instrument runs contrary to national defence interests: adversarial nations would apply diplomatic pressure to constrain the UK’s legitimate research and development of AI-enabled AWS while independently pursuing dangerous and non-compliant use cases.

4. Maintaining human control

Fourth, the Committee stressed the importance of embedding human control across both the design and deployment of AWS to ensure compliance with IHL and to enshrine human moral agency. The level of human involvement required would depend on the system’s design, mission objective, and the operational context in which the AI system would be used.

The June 2022 Defence AI Strategy commits to “context-appropriate human involvement in weapons which identify, select and attack targets”, which could be achieved by real-time human supervision and human setting of the operational parameters of an AWS. In its Response, the Government reiterated this commitment and stated that the level of human involvement required would depend on factors such as “purpose of use, physical and digital environment, nature of possible threats, risks associated with system behaviour [and] regulatory environment”. By way of example, AI may be deployed with less human oversight during maritime engagements or in defensive air systems, while use in complex urban environments would require greater human engagement.

What comes next?

The Committee notes the growing sense that AI will come to shape the dynamics of warfare as nation states and the private sector race to develop AI-enabled capabilities. As yet, governments’ high-level commitment to compliance with IHL has not translated into concrete and specific norms or regulations governing the safe and responsible integration of AI in AWS, much to the frustration of lobbying organisations such as Human Rights Watch and Stop Killer Robots.

Lord Tim Clement-Jones commented: “Action in this area, and serious analysis leading to concrete measures, is essential. Pandora’s box is already open, but it is not too late to agree international treaties akin to those regarding nuclear non-proliferation, in order to provide safeguards. These issues will continue to be debated, in particular in the House of Lords in April, and we will argue for the report’s conclusions to be taken forward in full”.

The relationship between AI and defence systems may be revisited in upcoming safety summits hosted by the Republic of Korea and France, those summits being the agreed follow-on meetings from Bletchley in 2023. International and public pressure may in time force governments to develop legal frameworks on AWS which specify the types of systems that are prohibited, and which clearly define the mechanisms through which human control and oversight over these systems must be maintained.

There are also wider AI safety concerns in relation to adjacent domains involving weapons. The potential for AI systems to assist those looking to build weapons or cause public disorder is real and should be separately addressed. Just as easily as a chatbot might provide recipe ideas for dinner given a list of items in your fridge or cupboard, that same chatbot (absent any safety controls) could advise how to build weapons using common household, garage or garden shed chemicals and items. While the October 2023 US Executive Order on AI Safety and Security specifically addresses AI as a source of information, instruction or assistance for those looking to create weapons, UK controls in those areas will need separate and specific consideration. The House of Lords’ wider report into LLMs urged a more positive vision of AI to spur UK adoption, but this was one area where specific concerns regarding AI safety were targeted.

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors of this alert: Guy Mathews (with contributions from Mark O’Conor, Parman Dhillon, Gareth Stokes, Lord Tim Clement-Jones) or your usual DLA Piper AI team member.
