On 12th February 2020 the House of Lords debated a motion from Lord Clement-Jones, “To ask Her Majesty’s Government what steps they have taken to assess the full implications of decision-making and prediction by algorithm in the public sector.” The Bishop of Oxford, Rt Revd Steven Croft, asked a follow-up question:
The Lord Bishop of Oxford: My Lords, I declare an interest as a board member of the CDEI and a member of the Ada Lovelace Institute’s new Rethinking Data project. I am also a graduate of the AI Select Committee. I am grateful to the noble Lord, Lord Clement-Jones, for this important debate.
Almost all those involved in this sector are aware that there is an urgent need for creative regulation that realises the benefits of artificial intelligence while minimising the risks of harm. I was recently struck by a new book by Brad Smith, the president of Microsoft, entitled Tools and Weapons—that says it all in one phrase. His final sentence is a plea for exactly this kind of creative regulation. He writes:
“Technology innovation is not going to slow down. The work to manage it needs to speed up.”
Noble Lords are right to draw attention to the dangers of unregulated and untested algorithms in public sector decision-making. As we have heard, information on how and where algorithms are used in the public sector is relatively scant. We know that their use is being encouraged by government and that such use is increasing. Some practice is exemplary, while some sectors have the feel of the wild west about them: entrepreneurial, unregulated and unaccountable.
The CDEI is the Government’s own advisory body on AI and ethics, and is committed to addressing and advising on these questions. A significant first task has been to develop an approach founded on clear, high-level ethical principles to which we can all subscribe. The Select Committee called for this principle-centred approach in our call for an AI code, and at the time we suggested five clear principles. The Committee on Standards in Public Life has now affirmed the need for this high-level ethical work and has called for greater clarity on these core principles. I support this call. Only a principled approach can ensure consistency across a broad and diverse range of applications. The debate about those principles takes us to the heart of what it means to be human and of human flourishing in the machine age. But which principles should undergird our work?
Last May the UK Government signed up to the OECD principles on artificial intelligence, along with all other member countries. The CDEI has informally adopted these principles in our own work. They are very powerful and, I believe, need to become our reference point in every piece of work. They are: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being; AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity; AI should be transparent so that people understand AI-based outcomes and can challenge them; AI systems must function in a robust, secure and safe way; and organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning.
In our recent recommendations to the Government on online targeting, the CDEI used the OECD principles as a lens to identify the nature and scale of the ethical problems with how AI is used to shape people’s online experiences. The same principles will flow through our second major report on bias in algorithmic decision-making, as the noble Baroness, Lady Rock, described.
Different parts of the public sector have codes of ethics distinctive to them. Developing patterns of regulation for different sectors will demand the integration of these five central principles with existing ethical codes and statements in, for example, policing, social work or recruitment.
The application of algorithms in the public sector raises too wide a set of issues to be handled by a single regulator or to be left unregulated. We need core values to be translated into effective regulation, standards and codes of practice. I join others in urging the Government to work with the CDEI and others to clarify and deploy the crucial principles against which the public-centred use of AI is to be assessed, and to expand the efforts to hold public bodies and the Government themselves to account.
Lord Holmes of Richmond (Con): …I am neither a bishop nor a boffin but I believe this: if we can harness all the positivity and all the potential of algorithms, of all the elements of the fourth industrial revolution, not only will we be able to make an incredible impact on the public good but I truly believe that we will be able to unite sceptics and evangelists behind ethical AI…
The Parliamentary Under-Secretary of State, Department for Digital, Culture, Media and Sport (Baroness Barran) (Con):…The right reverend Prelate the Bishop of Oxford expressed the need for a set of principles and an ethical basis for all our work. Noble Lords will be aware of the development of the data ethics framework, which includes a number of those principles. We are currently working on refreshing that framework to make it as up to date as possible for public servants who work with data…
…In closing, I will go back to two points. One is on the potential of the use of artificial intelligence, which PricewaterhouseCoopers has estimated could contribute almost $16 trillion to the global economy; obviously the UK is one of the top three countries providing that, so that would be a huge boost to our economy. However, I also go back to what the right reverend Prelate the Bishop of Oxford said about what it means to be human. We can harness that potential in a way that enhances, rather than erodes, our humanity.