The House of Lords debated the AI in the UK (Liaison Committee Report) in Grand Committee on 25 May 2022. The Bishop of Oxford spoke in the debate:
The Lord Bishop of Oxford: My Lords, it is a pleasure to follow the noble Lord, Lord Evans, and thank him in this context for his report, which I found extremely helpful when it was published and subsequently. It has been a privilege to engage with the questions around AI over the last five years through the original AI Select Committee so ably chaired by the noble Lord, Lord Clement-Jones, in the Liaison Committee and as a founding board member for three years of the Centre for Data Ethics and Innovation. I thank the noble Lord for his masterly introduction today and other noble Lords for their contributions.
There has been a great deal of investment, thought and reflection regarding the ethics of artificial intelligence over the last five years in government, the National Health Service, the CDEI and elsewhere—in universities, with several new centres emerging, including in the universities of Oxford and Oxford Brookes, and by the Church and faith communities. Special mention should be made of the Rome Call for AI Ethics, signed by Pope Francis, Microsoft, IBM and others at the Vatican in February 2020, and its six principles of transparency, inclusion, accountability, impartiality, reliability and security. The most reverend Primate the Archbishop of Canterbury has led the formation of a new Anglican Communion Science Commission, drawing together senior scientists and Church leaders across the globe to explore, among other things, the impact of new technologies.
Despite all this endeavour, there is in this part of the AI landscape no room for complacency. The technology is developing rapidly and its use for the most part is ahead of public understanding. AI creates enormous imbalances of power with inherent risks, and the moral and ethical dilemmas are complex. We do not need to invent new ethics, but we need to develop and apply our common ethical frameworks to rapidly developing technologies and new contexts. The original AI report suggested five overarching principles for an AI code. It seems appropriate in the Moses Room to say that there were originally 10 commandments, but they were wisely whittled down by the committee. They are not perfect, in hindsight, but they are worth revisiting five years on as a frame for our debate.
The first is that artificial intelligence should be developed for the common good and benefit of humanity; as the noble Lord, Lord Holmes, eloquently said, the debate often slips straight into the harms and ignores the good. This principle is not self-evident and needs to be restated. AI brings enormous benefits in medicine, research, productivity and many other areas. The role of government must be to ensure that these benefits are to the common good—for the many, not the few. Government, not big tech, must lead. There must be a fair distribution of the wealth that is generated, a fair sharing of power through good governance and fair access to information. This simply will not happen without national and international regulation and investment.
The second principle is that artificial intelligence should operate on principles of intelligibility and fairness. This is much easier to say than to put into practice. AI is now being deployed, or could be, in deeply sensitive areas of our lives: decisions about probation, sentencing, employment, personal loans, social care—including of children—predictive policing, the outcomes of examinations and the distribution of resources. The algorithms deployed in the private and public sphere need to be tested against the criteria of bias and transparency. The governance needs to be robust. I am sure that an individualised, contextualised approach in each field is the right way forward, but government has a key co-ordinating role. As the noble Lord, Lord Clement-Jones, said, we do not yet have that robust co-ordinating body.
Thirdly, artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. As a society, we remain careless of our data. Professor Shoshana Zuboff has exposed the risks of surveillance capitalism and Frances Haugen, formerly of Meta, has exposed the way personal data is open to exploitation by big tech. Evidence was presented to the online safety scrutiny committee of the effects on children and adolescents of 24/7 exposure to social media. The Online Safety Bill is a very welcome and major step forward, but new regulation and continual vigilance will remain essential.
Fourthly, all citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. It seems to me that of these five areas, the Government have been weakest here. A much greater investment is needed by the Department for Education and across government to educate society on the nature and deployment of AI, and on its benefits and risks. Parents need help to support children growing up in a digital world. Workers need to know their rights in terms of the digital economy, while fresh legislation will be needed to promote good work. There needs to be even better access to new skills and training. We need to strive as a society for even greater inclusion. How do the Government propose to offer fresh leadership in this area?
Finally, the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence, as others have said. This final point highlights a major piece of unfinished business in both reports: engagement with the challenging and difficult questions of lethal autonomous weapons systems. The technology and capability to deploy AI in warfare is developing all the time. The time has come for a United Nations treaty to limit the deployment of killer robots of all kinds. This Government and Parliament, as the noble Lord, Lord Browne, eloquently said, urgently need to engage with this area and, I hope, take a leading role in the governance of research and development.
AI can and has brought many benefits, as well as many risks. There is great openness and willingness on the part of many working in the field to engage with the humanities, philosophers and the faith communities. There is a common understanding that the knowledge brought to us by science needs to be deployed with wisdom and humility for the common good. AI will continue to raise sharp questions of what it means to be human, and to build a society and a world where all can flourish. As many have pointed out, even the very best examples of AI as yet come nowhere near the complexity and wonder of the human mind and person. We have been given immense power to create but we are ourselves, in the words of the psalmist, fearfully and wonderfully created.
Extracts from the speeches that followed:
Lord Bilimoria (CB): My Lords, the report Growing the Artificial Intelligence Industry in the UK was published in October 2017. It started off by saying:
“We have a choice. The UK could stay among the world leaders in AI in the future, or allow other countries to dominate.”
It went on to say that the increased use of AI could
“bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information at higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.”
It estimated at that time that AI could add £630 billion to the UK economy by 2035.
Even at that stage, the UK had an exceptional record in key AI research. We should be proud of that, but the report also highlighted the importance of inward investment. We as a country need to be continually attractive to inward investment and be a magnet for it. We have traditionally been the second- or third-largest recipient of inward investment. But will that continue to be the case when we have, for example, the highest tax burden in 71 years?
AI of course has great potential for increasing productivity; it helps our firms and people use resources more efficiently and it can help familiar tasks to be done in a more efficient manner. It enables entirely new business models and new approaches to old problems. It can help companies and individual employees be more productive. We all know its benefits. It can reduce the burden of searching large datasets. I could give the Committee example after example of how artificial intelligence can complement or exceed our abilities, of course taking into account what the right reverend Prelate the Bishop of Oxford so sensibly just said. It can work alongside us and even teach us. It creates new opportunities for creativity and innovation and shows us new ways to think.
Lord McNally (LD): I come to this subject not with any of the recent experience that has been on show. This might send a shiver down the Committee’s spine but in 2010 I was appointed Minister for Data Protection in the coalition Government, and it was one of the first times when I had come across some of these challenges. We had an advisory board on which, although she was not then in the Lords, the noble Baroness, Lady Lane-Fox, made a great impression on me with her knowledge of these problems.
I remember the discussion when one of our advisers urged us to release NHS data as a valuable creator of new industries, possible new cures and so on. Even before we had had time to consider it, there was a campaign by the Daily Mail striking fear into everyone that we were about to release everyone’s private medical records, so that hit the buffers.
At that time, I was taken around one of the HM Government facilities to look at what we were doing with data. I remember seeing various things that had been done and having them explained to me. I said to the gentlemen showing me around, “This is all very interesting, but aren’t there some civil liberties aspects to what you are doing?” “Oh no, sir,” he said, “Tesco knows a lot more about you than we do.” However, that was 10 years ago.
I should probably also confess that another of my responsibilities related to the earlier discussion on GDPR. I also served before that, in 2003, on the Puttnam Committee on the Communications Act. It is very interesting in two respects. We did not try to advise on the internet, because we had no idea at that time what kind of impact the internet would have. I think the Online Safety Bill, nearly 20 years later, shows how there is sometimes a time lag—I am sure the same will apply with AI. One thing we did recommend was to give Ofcom special responsibility for digital education, and I have to say, although I think Ofcom has been a tremendous success as a regulator, it has lagged behind in picking up that particular ball. We still have a lot to do and I am glad that the right reverend Prelate the Bishop of Oxford and others placed such emphasis on this.
Lord Parkinson of Whitley Bay (Con): Key to promoting public trust in AI is having in place a clear, proportionate governance framework that addresses the unique challenges and opportunities of AI, which brings me to another of the key themes of this evening’s debate: ethics and regulation. The UK has a world-leading regulatory regime and a history of innovation-friendly approaches to regulation. We are committed to making sure that new and emerging technologies are regulated in a way that instils public confidence in them while supporting further innovation. We need to make sure that our regulatory approach keeps pace with new developments in this fast-moving field. That is why, later this year, the Government will publish a White Paper on AI governance, exploring how to govern AI technologies in an innovation-friendly way to deliver the opportunities that AI promises while taking a proportionate approach to risk so that we can protect the public.
We want to make sure that our approach is tailored to context and proportionate to the actual impact on individuals and groups in particular contexts. As noble Lords, including the right reverend Prelate the Bishop of Oxford, have rightly set out, those contexts can be many and varied. But we also want to make sure our approach is coherent so that we can reduce unnecessary complexity or confusion for businesses and the public. We are considering whether there is a need for a set of cross-cutting principles which guide how we approach common issues relating to AI, such as safety, and looking at how to make sure that there are effective mechanisms in place to ensure co-ordination across the regulatory landscape.