On 22nd March 2024, the Bishop of Worcester spoke in a debate on the Artificial Intelligence (Regulation) Bill (a private member's bill tabled by Lord Holmes of Richmond), supporting the aims of the bill and calling for a robust approach to AI regulation:
The Lord Bishop of Worcester: My Lords, I guarantee that this is not an AI-generated speech. Indeed, Members of the House might decide after five minutes that there is not much intelligence of any kind involved in its creation. Be that as it may, we on these Benches have engaged extensively with the impacts and implications of new technologies for years—from contributions to the Warnock committee in the 1980s through to the passage of the Online Safety Bill through this House last year. I am grateful to the noble Lord, Lord Holmes, for this timely and thoughtful Bill and for his brilliant introduction to it. Innovation must be enthusiastically encouraged, as the noble Baroness, Lady Moyo, has just reminded us. It is a pleasure to follow her.
That said, I will take us back to first principles for a moment: to Christian principles, which I hope all people of good will would want to support. From these principles arise two imperatives for regulation and governance, whatever breakthroughs new technologies enable. The first is that a flourishing society depends on respecting human dignity and agency. The more any new tool threatens such innate dignity, the more carefully it should be evaluated and regulated. The second imperative is a duty of government, and all of us, to defend and promote the needs of the nation's weak and marginalised, those who cannot always help themselves. I am not convinced that the current pro-innovation and "observe first, intervene later" approach to AI gets this perennial balance quite right. For that reason, I support the ambitions outlined in the Bill.
There are certainly aspects of last year’s AI White Paper that get things in the right order: I warmly commend the Government for including fairness, accountability and redress among the five guiding principles going forward. Establishing an AI authority would formalise the hub-and-spoke structure the Government are already putting in place, with the added benefit of shifting from a voluntary to a compulsory basis, and an industry-funded regulatory model of the kind the Online Safety Act is beginning to implement.
The voluntary code of practice on which the Government's approach currently depends is surely inadequate. The track record of the big tech companies that developed the AI economy and are now training the most powerful AI models shows that profit trumps users' safety and well-being time and again. "Move fast and break things" and "act first, apologise later" remain the lodestar. Sam Altman's qualities of character and conduct while at the helm of OpenAI have come under considerable scrutiny over the last few months. At Davos in January this year, the Secretary-General of the United Nations complained:
“Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact.”
How can it be right that the richest companies in history have no mandatory duties to financially support a robust safety framework? Surely, it should not be for the taxpayer alone to shoulder the costs of an AI digital hub to find and fix gaps that lead to risks or harm. Why should the taxpayer shoulder the cost of providing appropriate regulatory sandboxes for testing new product safety?
The Government’s five guiding principles are a good guide for AI, but they need legal powers underpinning them and the sharpened teeth of financial penalties for corporations that intentionally flout best practice, to the clear and obvious harm of consumers.
I commend the ambitions of the Bill. A whole-system, proportional and legally enforceable approach to regulating AI is urgently needed. Balancing industry’s need to innovate with its duty to respect human dignity and the vulnerable in society is vital if we are safely to navigate the many changes and challenges not just over the horizon but already in plain sight.
Extracts from the speeches that followed:
Lord Davies of Brixton (Lab): AI provides the opportunity to revolutionise industries, enhance our daily lives and solve some of the most pressing problems we face today—from healthcare to climate change—and solutions that are not available in other ways. However, with greater power comes greater responsibility. The rapid advance of AI technology has outpaced our regulatory frameworks, leading to innovation without adequate oversight, ethical consideration or accountability, so we undoubtedly need a regulator. I take the point that it has to be focused and simple. We need rigorous ethical standards and transparency in AI development to ensure that these technologies serve the good of all, not just commercial interests. We cannot wait for these forces to play out before deciding what needs to be done. I very much support the remarks of the previous speaker, the right reverend Prelate the Bishop of Worcester, who set out the position very clearly.
We need to have a full understanding of the implications of AI for employment and the workforce. These technologies will automate tasks previously performed by humans, and we face significant impacts on the labour market. The prevailing model for AI is to seek the advantage for the developers and not so much for the workers. This is an issue we will need to confront. We will have to debate the extent to which that is the job of the regulator.
Baroness Twycross (Lab): Around the world, countries and regions are already beginning to draft rules for AI. As the noble Lord, Lord Kirkhope, said, this does not need to stifle innovation. The Government’s White Paper on AI regulation adopted a cross-sector and outcome-based framework, underpinned by its five core principles. Unfortunately, there are no proposals in the current White Paper for introducing a new AI regulator to oversee the implementation of the framework. Existing regulators, such as the Information Commissioner’s Office, Ofcom and the FCA have instead been asked to implement the five principles from within their respective domains. As a number of noble Lords referred to, the Ada Lovelace Institute has expressed concern about the Government’s approach, which it has described as “all eyes, no hands”. The institute says that, despite
“significant horizon-scanning capabilities to anticipate and monitor AI risks … it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact”.
The Bill introduced by the noble Lord, Lord Holmes, seeks to address these shortcomings and, as he said in his opening remarks: if not now, when? Until such time as an independent AI regulator is established, the challenge lies in ensuring its effective implementation across various regulatory domains. This includes data protection, competition, communications and financial services. A number of noble Lords mentioned the multitude of regulatory bodies involved. This means that effective governance between them will be paramount. Regulatory clarity, which enables business to adopt and scale investment in AI, will bolster the UK’s competitive edge. The UK has so far been focusing on voluntary measures for general-purpose AI systems. As the right reverend Prelate the Bishop of Worcester said, this is not adequate: human rights and privacy must also be protected.