Bishop of Oxford asks about autonomous weapons

The Bishop of Oxford received the following written answers on 5th September 2022:

The Lord Bishop of Oxford asked Her Majesty’s Government, further to their policy paper Ambitious, Safe, Responsible: Our approach to the delivery of AI enabled capability in Defence, published on 15 June, which says that “We do not rule out incorporating AI within weapon systems” and that real-time human supervision of such systems “may act as an unnecessary and inappropriate constraint on operational performance”, when this would be seen as a constraint; and whether they can provide assurance that the UK’s weapon systems will remain under human supervision at the point when any decision to take a human life is made.

Baroness Goldie (Con): The ‘Ambitious, Safe, Responsible’ policy sets out that the Ministry of Defence opposes the creation and use of AI enabled weapon systems which operate without meaningful and context-appropriate human involvement throughout their lifecycle. This involvement could take the form of real-time human supervision, or control exercised through the setting of a system’s operational parameters.

We believe that Human-Machine teaming delivers the best outcomes in terms of overall effectiveness. However, in certain cases it may be appropriate to exert rigorous human control over AI-enabled systems through a range of safeguards, process and technical controls without always requiring some form of real-time human supervision. For example, in the context of defending a maritime platform against hypersonic weapons, defensive systems may need to be able to detect incoming threats and open fire faster than a human could react.

In all cases, human responsibility for the use of AI must be clearly established, and that responsibility underpinned by a clear and consistent articulation of the means by which human control is exercised across the system lifecycle, including the nature and limitations of that control.

Hansard


The Lord Bishop of Oxford asked Her Majesty’s Government, further to their policy paper Ambitious, Safe, Responsible: Our approach to the delivery of AI enabled capability in Defence, published on 15 June, what assessment they have made of the specific ethical problems raised by autonomous weapons that are used to target humans and which have been raised by the International Committee of the Red Cross.

Baroness Goldie (Con): We’re very aware of the ethical concerns raised by numerous stakeholders, including the ICRC, around the potential misuse of AI in Defence, including its impact on humans and the potential use of autonomous systems in ways which might violate international law. We published ‘Ambitious, Safe, Responsible’ specifically in order to ensure clarity and support ongoing conversations around the UK approach.

With respect to autonomous weapons systems: the UK’s focus is on setting clear international norms for the safe and responsible development and use of AI, to ensure compliance with International Humanitarian Law through meaningful and context-appropriate levels of human control. We propose the development of a compendium of good practice, mapped against a weapon system’s lifecycle, which would provide a clear framework for the operationalisation of the eleven guiding principles agreed by the UN Group of Governmental Experts on Certain Conventional Weapons (2017-19).

We are keen to continue extensive discussions on this issue with the international community and NGOs, including through discussions at the UN.

Hansard


The Lord Bishop of Oxford asked Her Majesty’s Government, further to their policy paper Ambitious, Safe, Responsible: Our approach to the delivery of AI enabled capability in Defence, published on 15 June, which states that weapons that identify, select and attack targets without context-appropriate human involvement “are not acceptable”, whether they will support the negotiation of a legally binding international instrument that both (1) prohibits autonomous weapons that identify, select and attack targets without context-appropriate human involvement, and (2) regulates other autonomous weapons systems to ensure meaningful human control over the use of force.

Baroness Goldie (Con): The UK does not support calls for further legally binding rules that would prohibit autonomous weapons that identify, select and attack targets without context-appropriate human involvement and regulate other autonomous systems. International Humanitarian Law already provides a robust, principle-based framework for the regulation of development and use of all weapons systems, including weapons that contain autonomous functions.

Without international consensus on the definitions or characteristics of weapons with levels of autonomy, a legal instrument would have to ban undefined systems, which would present difficulties in the application of any such ban and which could severely impact legitimate research and development of AI or autonomous technologies.

Hansard
