Moved by
Lord Holmes of Richmond
That the Bill be now read a second time.
Lord Holmes of Richmond (Con)
My Lords, I declare my technology interests as adviser to Boston
Ltd. I thank all noble Lords who have signed up to speak; I
eagerly anticipate all their contributions and, indeed, hearing
from my noble friend the Minister. I also thank all the
organisations that got in contact with me and other noble Lords
for their briefings, as well as those that took time to meet me
ahead of this Second Reading debate. Noble Lords and others who
would like to follow this on social media can use #AIBill
#AIFutures.
If we are to secure the opportunities and control the challenges
of artificial intelligence, it is time to legislate and to lead.
We need something that is principles-based and outcomes-focused,
with inputs transparent, permissioned and, wherever applicable,
paid for and understood.
There are at least three reasons why we should legislate on this:
social, democratic and economic. On reason one, the social
reason, some of the greatest benefits we could secure from AI
come in this space, including truly personalised education for
all, and healthcare. We saw only yesterday the exciting early
results from the NHS Grampian breast-screening AI programme. Then
there is mobility and net zero sustainability.
Reason two is about our democracy and jurisdiction. With 40% of
the world’s democracies going to the polls this year, with
deepfakes, cheap fakes, misinformation and disinformation, we are
in a high-threat environment for our democracy. As our 2020
Democracy and Digital Technologies Select Committee report put
it, with a proliferation of misinformation and disinformation,
trust will evaporate and, without trust, democracy as we know it
will simply disappear.
On our jurisdiction and system of law, the UK has a unique
opportunity at this moment in time. We do not have to fear being
in the first mover spotlight—the EU has taken that with its Act,
in all its 892 pages. The US has had the executive order but has
yet to commit fully to this phase. The UK, with our
common-law tradition, respected right around the world, has such
an opportunity to legislate in a way that will be adaptive,
versatile and able to develop through precedent and case law.
On reason three, our economy, PwC’s AI tracker says that by 2030,
there will be a 14% increase in global GDP worth $15.7 trillion.
The UK must act to ensure our share of that AI boom. To take just
one technology, the chatbot global market grew tenfold in just
four years. The Alan Turing Institute report on AI in the public
sector, which came out just this week, says that 84% of
government services could benefit from AI automation in over 200
different services. Regulated markets perform better. Right-sized
regulation is good for innovation and good for inward
investment.
Those are the three reasons. What about three individual impacts
of AI right now? What if we find ourselves on the wrong end of an
AI decision in a recruitment shortlisting, the wrong end of an AI
decision in being turned down for a loan, or, even worse, the
wrong end of an AI decision when awaiting a liver transplant? All
these are illustrations of AI impacting individuals, often when
they would not even know that AI was involved. We need to put
paid to the myth, the false dichotomy, that you must have heavy,
rules-based regulation or a free hand—that we have to pay tribute
to the cry of the frontierists in every epoch: “Don’t fence me
in”. Right-sized regulation is good socially, democratically and
economically. Here is the thing: AI is to human intellect what
steam was to human strength. You get the picture. Steam literally
changed time. It is our time to act, and that is why I bring this
Bill to your Lordships’ House today.
In constructing the Bill, I have sought to consult widely, to be
very cognisant of the Government’s pro-innovation White Paper, of
all the great work of BCS, technology, industry, civil society
and more. I wanted the Bill to be threaded through with the
principles of transparency and trustworthiness; inclusion and
innovation; interoperability and international focus;
accountability and assurance.
Turning to the clauses, Clause 1 sets up an AI authority. Lest
any noble Lord suddenly feels that I am proposing a do-it-all,
huge, cumbersome regulator, I am most certainly not. In many
ways, it would not be much bigger in scope than what the DSIT
unit is proposing: an agile, right-sized regulator, horizontally
focused to look across all existing regulators, not least the
economic regulators, to assess their competency to address the
opportunities and challenges presented by AI and to highlight the
gaps. And there are gaps, as rightly identified by the excellent
Ada Lovelace Institute report. For example, where do you go if
you are on the wrong end of that AI recruitment shortlisting
decision? It must have the authority, similarly, to look across
all relevant legislation—consumer protection and product safety,
to name but two—to assess its competency to address the
challenges and opportunities presented by AI.
The AI authority must have at its heart the principles set out in
Clause 2: it must be not just the custodian of those principles,
but a very lighthouse for them, and it must have an educational
function and a pro-innovation purpose. Many of those principles
will be very recognisable; they are taken from the Government’s
White Paper but put on a statutory footing. If they are good
enough to be in the White Paper, we should commit to them,
believe in them and know that they will be our greatest guides
for the positive path forward, when put in a statutory framework.
We must have everything inclusive by design, and with a
proportionality thread running through all the principles, so
none of them can be deployed in a burdensome way.
Clause 3 concerns sandboxes, so brilliantly developed in the UK
in 2016 with the fintech regulatory sandbox. If you want a
measure of its success, it is replicated in well over 50
jurisdictions around the world. It enables innovation in a safe,
regulated, supported environment: real customers, real market,
real innovations, but in a splendid sandbox concept.
Clause 4 sets up the AI responsible officer, to be conceived of
not as a person but as a role, to ensure the safe, ethical and
unbiased deployment of AI in her or his organisation. It does not
have to be burdensome, or a whole person in a start-up; but that
function needs to be performed, with reporting requirements under
the Companies Act that are well understood by any business.
Again, crucially, it must be subject to that proportionality
principle.
Clause 5 concerns labelling and IP, which is such a critical part
of how we will get this right with AI. Labelling: so that if
anybody is subject to a service or a good where AI is in the mix,
it will be clearly labelled. AI can be part of the solution to
providing this labelling approach. Where IP or third-party data
is used, that has to be reported to the AI authority. Again, this
can be done efficiently and effectively using the very technology
itself. On the critical question of IP, I met with 25
organisations representing tens of thousands of our great
creatives: the people that make us laugh, make us smile,
challenge us, push us to places we never even knew existed; those
who make music, such sweet music, where otherwise there may be
silence. It is critical to understand that they want to be part
of this AI transformation, but in a consented, negotiated,
paid-for manner. As Dan Guthrie, director-general of the Alliance
for Intellectual Property, put it, it is extraordinary that
businesses together worth trillions take creatives’ IP without
consent and without payment, while fiercely defending their own
intellectual property. This Bill will change that.
Clause 6 concerns public engagement. For me, this is probably the
most important clause in the Bill, because without public
engagement, how can we have trustworthiness? People need to be
able to ask, “What is in this for me? Why should I care? How is
this impacting my life? How can I get involved?” We need to look
at innovative ways to consult and engage. A good example, in
Taiwan, is the Alignment Assemblies, but there are hundreds of
novel approaches. Government consultations should have millions
of responses, because this is both desirable and now, with the
technology, analysable.
Clause 7 concerns interpretation. At this stage, I have drawn the
definitions of AI deliberately broadly. We should certainly
debate this, but as set out in Clause 7, much would and should be
included in those definitions.
Clause 8 sets out the potential for regulating for offences and
fines thereunder, to give teeth to so much of what I have already
set out and, rightly, to pay the correct respect to all the
devolved nations. So, such regulations would have to go through
the Scottish Parliament, Senedd Cymru and the Northern Ireland
Assembly.
That brings us to Clause 9, the final clause, which makes this a
UK-wide Bill.
So, that is the Bill. We know how to do this. Just last year, the
Electronic Trade Documents Act showed that we know how to
legislate for the possibilities of these new technologies; and,
my word, we know how to innovate in the UK—Turing, Lovelace,
Berners-Lee, Demis at DeepMind, and so many more.
If we know how to do this, why are we not legislating? What will
we know in, say, 12 months’ time that we do not know now about
citizens’ rights, consumer protection, IP rights, being
pro-innovation, labelling and the opportunity to transform public
engagement? We need to act now, because we know what we need to
know—if not now, when? The Bletchley summit last year was a
success. Understandably, it focused on safety, but having done
that it is imperative that we stand up all the other elements of
AI already impacting people’s lives in so many ways, often
without their knowledge.
Perhaps the greatest and finest learning from Bletchley is not so
much the safety summit but what happened there two generations
before, when a diverse team of talent gathered and deployed the
technology of their day to defeat the greatest threat to our
civilisation. Talent and technology brought forth light in one of
the darkest hours of human history. As it was in Bletchley in the
1940s, so it is in the United Kingdom in the 2020s. It is time
for human-led, human-in-the-loop, principle-based artificial
intelligence. It is time to legislate and to lead; for
transparency and trustworthiness, inclusion and innovation,
interoperability and international focus, accountability and
assurance; for AI developers, deployers and democracy itself; for
citizens, creatives and our country—our data, our decisions,
#ourAIfutures. That is what this Bill is all about. I beg to
move.
10.22am
Viscount Chandos (Lab)
My Lords, there can have been few Private Members’ Bills that
have set out to address such towering issues as this Bill from
the noble Lord, Lord Holmes of Richmond. He has been an
important voice on the opportunities and challenges arising from
generative AI in this House and outside it. This Bill and his
powerful introduction to it are only the latest contributions to
the vital issue of regulating AI to ensure that the social and
financial interests and security of consumers are protected as a
first priority.
The noble Lord also contributed to a wide-ranging discussion on
the regulation of AI in relation to misinformation and
disinformation convened by the Thomson Foundation, of which, as
recorded in the register, I am chair. Disinformation in news has
existed and grown as a problem since long before the emergence of
generative AI, but each iteration of AI makes the disinformation
that human bad actors promote even harder to detect and
counteract.
This year a record number of people in the world will go to the
polls in general elections, as the noble Lord said. The Thomson
Foundation has commissioned research into the incidence of
AI-fuelled disinformation in the Taiwanese presidential elections
in mid-January, conducted by Professor Chen-Ling Hung of National
Taiwan University. Your Lordships may not be surprised that the
preliminary conclusions of the work, which will be continued in
relation to other elections, confirm the concerns that the noble
Lord voiced in his introduction. Generative AI’s role in
exacerbating misinformation and disinformation in news and the
impact that can have on the democratic process are hugely
important, but this is only one of a large number of areas where
generative AI is at the same time an opportunity and a
threat.
I strongly support this well-judged and balanced Bill, which
recognises the fast-changing, dynamic nature of this
technology—Moore’s law on steroids, as I have previously
suggested—and sets out a logical and coherent role for the
proposed AI authority, bringing a transparency and clarity to the
regulation of AI for its developers and users that is currently
lacking.
I look forward to the Minister’s winding up, but with my
expectations firmly under control. The Prime Minister’s position
seems incoherent. On the one hand he says that generative AI
poses an existential threat and on the other that no new
regulatory body is needed and the technology is too fast-moving
for a comprehensive regulatory framework to be established. That
is a guarantee that we will be heaving to close a creaking stable
door as the thoroughbred horse disappears over the horizon. I
will not be surprised to hear the Minister extol the steps taken
in recent months, such as the establishment of the AI unit, as a
demonstration that everything is under control. Even if these
small initiatives are welcome, they fall well short of
establishing the transparency and clarity of regulation needed to
engender confidence in all parties—consumers, employers, workers
and civil society.
If evidence is needed to make the case for a transparent,
well-defined regulatory regime rather than ad hoc, fragmented
departmental action, the Industry and Regulators Committee, of
which I am privileged to be a member, today published a letter to
the Secretary of State for Levelling Up about the regulation of
property agents. Five years ago, a working group chaired by the
noble Lord, Lord Best, recommended that the sector
should be regulated, yet despite positive initial noises from the
Government, nothing has happened. Even making allowance for the
impact of the pandemic during this time, this does not engender
confidence in their willingness and ability to grasp regulatory
nettles in a low-tech industry, let alone in a high-tech one.
It is hard not to suspect that this reflects an ideological
suspicion within the Conservative Party that regulation is the
enemy of innovation and economic success rather than a necessary
condition, which I believe it is. Evidence to the Industry and
Regulators Committee from a representative of Merck confirmed
that the life sciences industry thrives in countries where there
is strong regulation.
I urge the Government to give parliamentary time to this Bill to
allow it to go forward. I look forward to addressing its detailed
issues then.
10.28am
Lord Young of Cookham (Con)
My Lords, one of the advantages of sitting every day between my
noble friends Lord Holmes and Lord Kirkhope is that their
enthusiasm for a subject on which they have a lot of knowledge
and I have very little passes by a process of osmosis along the
Bench. I commend my noble friend on his Bill and his speech. I
will add a footnote to it.
My noble friend’s Bill is timely, coming after the Government
published their consultation outcome last month, shortly after
the European Commission published its Artificial Intelligence Act
and as we see how other countries, such as the USA, are
responding to the AI challenge. Ideally, there should be some
global architecture to deal with a phenomenon that knows no
boundaries. The Prime Minister said as much in October:
“My vision, and our ultimate goal, should be to work towards a
more international approach to safety where we collaborate with
partners to ensure AI systems are safe”.
However, we only have to look at the pressures on existing
international organisations, like the United Nations and the WTO,
to see that that is a big ask. There is a headwind of
protectionism, and at times nationalism, making collaboration
difficult. It is not helped by the world being increasingly
divided between democracies and autocracies, with the latter
using AI as a substitute for conventional warfare.
The most pragmatic approach, therefore, is to go for some lowest
common denominators, building on the Bletchley Declaration which
talks about sharing responsibility and collaboration. We want to
avoid regulatory regimes that are incompatible, which would lead
to regulatory arbitrage and difficulties with compliance.
The response to the consultation refers to this in paragraphs 71
and 72, stating:
“the intense competition between companies to release
ever-more-capable systems means we will need to remain highly
vigilant to meaningful compliance, accountability, and effective
risk mitigation. It may be the case that commercial incentives
are not always aligned with the public good”.
It concludes:
“the challenges posed by AI technologies will ultimately require
legislative action in every country once understanding of risk
has matured”.
My noble friend’s Private Member’s Bill is a heroic first shot at
what that legislation might look like. To simplify, there is a
debate between top-down, as set out in the Bill, and bottom-up,
as set out in the Government’s response, delegating regulation to
individual regulators with a control function in DSIT. At some
point, there will have to be convergence between the two
approaches.
There is one particular clause in my noble friend’s Bill that I
think is important: Clause 1(2)(c), which states that the
function of the AI authority is to,
“undertake a gap analysis of regulatory responsibilities in
respect of AI”.
The White Paper and the consultation outcome have numerous
references to regulators. What I was looking for and never found
was a list of all our regulators, and what they regulate. I
confess I may have missed it, but without such a comprehensive
list of regulators and what they regulate, any strategy risks
being incomplete because we do not have a full picture.
My noble friend mentioned education. We have a shortage of
teachers in many disciplines, and many complain about paperwork
and are thinking of leaving. There is a huge contribution to be
made by AI. But who is in charge? If you put the question into
Google, it says,
“the DfE is responsible for children’s services and
education”.
Then there is Ofsted, which inspects schools; there is Ofqual,
which deals with exams; and then there is the Office for
Students. The Russell Group of universities has signed up to a
set of principles to ensure that students are taught to become
AI literate.
Who is looking at the huge volume of material which AI companies
are drowning schools and teachers with, as new and more
accessible chatbots are developed? Who is looking at AI for
marking homework? What about AI for adaptive testing? Who is
looking at AI being used for home tuition, as increasingly used
by parents? Who is looking at AI for marking papers? As my noble
friend said, what happens if they get it wrong?
The education sector is trying to get a handle on this
technological maelstrom and there may be some bad actors in
there. However, the same may be happening elsewhere because the
regulatory regimes lack clarity. Hence, should by any chance my
noble friend’s Bill not survive in full, Clause 1(2)(c)
should.
10.33am
Lord Thomas of Cwmgiedd (CB)
My Lords, I also warmly support the Bill introduced by the noble
Lord, Lord Holmes. I support it
because it has the right balance of radicalism to fit the
revolution in which we are living. I will look at it through
eight points—that may be ambitious in five minutes, but I think I
can do it.
There is a degree of serious common ground. First, we need fair
standards to protect the public. We need to protect privacy,
security, human rights and intellectual property, and to guard
against fraud. We also need, however, to protect rights of
access, such as access to data and to the processes by which
artificial intelligence makes decisions in respect of you. An
enforcement system is needed to make that work. If we have that,
we do not need the elaborate mechanism of the EU, which regulates
individual products.
Secondly, it is clear there has to be a consistency of standards.
We cannot have one rule for one market, and one rule for another
market. If you look back at the 19th century, when we underwent
the last massive technological revolution, the courts sometimes
made the mistake of fashioning rules to fit individual markets.
That was an error, and that is why we need to look at it
comprehensively.
Thirdly, we have got to protect innovation. I believe that is
common ground, but the points to which I shall come in a moment
show the difficulties.
Fourthly, we have got to produce a system that is interoperable.
The noble Lord, Lord Holmes, referred to the trade documents
Bill, which was the product of international development. We have
adapted the common law to fit it, and other countries’ systems
will do likewise. That is a sine qua non.
I believe all those points are common ground, but I now come to
four points that I do not think are common ground. The first is
simplicity. When you look at Bills in this House, I sometimes
feel we are making the law unworkable by its complexity. There
can be absolutely no doubt that regulation is becoming unworkable
because of the complexity. I can quite understand why innovators
are horrified at the prospect of regulation, but they have got
the wrong kind of regulation. They have got what we have created,
unfortunately; it is a huge burden and is not based on simplicity
and principles. If we are to persuade people to regulate, we need
a radically different approach, and this Bill brings it
about.
Secondly, there needs to be transparency and accountability. I do
not believe that doing this through a small body within a
ministry is the right way; it has to be done openly.
Thirdly—and this is probably highly controversial—when you look
at regulation, our idea is of the statutory regulator with its
vast empire created. Do we need that? Look back at the 19th
century: the way in which the country developed was through
self-regulation supported by the courts, Parliament and
government. We need to look at that again. I see nothing wrong
with self-regulation. It has acquired a shocking name, as a result
of what happened in the financial markets at the turn of the
century, but I believe that we should look at it again. Effective
self-regulation can be good regulation.
Finally, the regulator must be independent. There is nothing
inconsistent between self-regulation and independence.
We need a radical approach, and the Bill gives us that. No one
will come here if we pretend we are going to set up a
regulator—like the financial markets regulator, the pensions
regulator and so on—because people will recoil in horror. If we
have this Bill, however, with its simplicity and emphasis on
comprehensiveness, we can do it. Having said that, it seems to me
that the fundamental flaw in what the Government are doing is
leaving the current regulatory system in place. We cannot afford
to do that. We need to face the new industrial revolution with a
new form of regulation.
10.38am
Baroness Stowell of Beeston (Con)
My Lords, it is a great pleasure to follow the noble and learned
Lord, Lord Thomas, and his interesting speech. I remind noble
Lords that the Communications and Digital Committee, which I have
the privilege to chair, published our report Large Language
Models and Generative AI only last month. For anyone who has not
read it, I wholeheartedly recommend it, and I am going to draw
heavily on it in my speech.
It is a pleasure to speak in a debate led by my noble friend Lord
Holmes, and I congratulate him on all that he does in the digital
and technology space. As he knows, I cannot support his Bill
because I do not agree with the concept of an AI authority, but I
have listened carefully to the arguments put forward by the noble
and learned Lord, Lord Thomas, a moment ago. But neither would I
encourage the Government to follow the Europeans and rush to
develop overly specific legislation for this general-purpose
technology.
That said, there is much common ground on which my noble friend
and I can stand when it comes to our ambitions for AI, so I will
say a little about that and where I see danger with the
Government’s current approach to this massively important
technological development.
As we have heard, AI is reshaping our world. Some of these
changes are modest, and some are hype, but others are genuinely
profound. Large language models in particular have the potential
to fundamentally reshape our relationship with machines. In the
right hands, they could drive huge benefits to our economy,
supporting ground-breaking scientific research and much more.
I agree with my noble friend Lord Holmes about how we should
approach AI. It must be developed and used to benefit people and
society, not just big tech giants. Existing regulators must be
equipped and empowered to hold tech firms to account as this
technology operates in their own respective sectors, and we must
ensure that there are proper safety tests for the riskiest
models.
That said, we must maintain an open market for AI, and so any
testing must not create barriers to entry. Indeed, one of my
biggest fears is an even greater concentration of power among the
big tech firms and repeating the same mistakes which led to a
single firm dominating search, no UK-based cloud service, and a
couple of firms controlling social media. Instead, we must ensure
that generative AI creates new markets and, if possible, use it
to address the existing market distortions.
Our large language model report looked in detail at what needs to
happen over the next three years to catalyse AI innovation
responsibly and mitigate risks proportionately. The UK is
well-placed to be among the world leaders of this technology, but
we can only achieve that by being positive and ambitious. The
recent focus on existential sci-fi scenarios has shifted
attention towards too narrow a view of AI safety. On its own, a
concentration on safety will not deliver the broader capabilities
and commercial heft that the UK needs to shape international
norms. Moreover, we cannot keep up with international competitors
without more focus on supporting commercial opportunities and
academic excellence. A rebalance in government strategy and a
more positive vision is therefore needed. The Government should
improve access to computing power, increase support for digital,
and do more to help start-ups grow out of university
research.
I do not wish to downplay the risks of AI. Many need to be
addressed quickly—for example, cyberattacks and synthetic child
sexual abuse, as well as bias and discrimination, which we have
already heard about. The Government should scale up existing
mitigations, and ensure industry improves its own guard-rails.
However, the overall point is about balance. Regulation should be
thoughtful and proportionate, to catalyse rather than stifle
responsible innovation, otherwise we risk creating extensive
rules that end up entrenching incumbents’ market power, and we
throttle domestic industry in the process. Regulatory capture is
a real danger that our inquiry highlighted.
Copyright is another danger, and this is where there is a clear
case for government action now. The point of copyright is to
reward innovation, yet tech firms have been exploiting rights
holders by using works without permission or payment. Some of
that is starting to change, and I am pleased to see some firms
now striking deals with publishers. However, these remain small
steps, and the fundamental question about respecting copyright in
the first place remains unresolved.
The role for government here is clear: it should endorse the
principles behind copyright and uphold fair play, and should then
update legislation. Unfortunately, the current approach remains
unclear and inadequate. The Government have abandoned the IPO-led process, but
apparently without anything more ambitious in its place. I hope
for better news in the Government’s response to our report,
expected next month, and it would be better still if my noble
friend the Minister could say something reassuring today.
In the meantime, I am grateful to my noble friend Lord Holmes for
providing the opportunity to debate such an important topic.
10.44am
Baroness Kidron (CB)
My Lords, I too congratulate the noble Lord, Lord Holmes, on his
wonderful speech. I declare my interests as an adviser to the
Oxford Institute for Ethics in AI and the UN Secretary-General’s
AI Advisory Body.
When I read the Bill, I asked myself three questions. Do we need
an AI regulation Bill? Is this the Bill we need? What happens if
we do not have a Bill? It is arguable that it would be better to
deal with AI sector by sector—in education, the delivery of
public services, defence, media, justice and so on—but that would
require an enormous legislative push. Like others, I note that we
are in the middle of a legislative push, with digital markets
legislation, media legislation, data protection legislation and
online harms legislation, all of which resolutely ignore both
existing and future risk.
The taxpayer has been asked to make a £100 million investment in
launching the world’s first AI safety institute, but as the Ada
Lovelace Institute says:
“We are concerned that the Government’s approach to AI regulation
is ‘all eyes, no hands’”,
with plenty of “horizon scanning” but no
“powers and resources to prevent those risks or even to react to
them effectively after the fact”.
So yes, we need an AI regulation Bill.
Is this the Bill we need? Perhaps I should say to the House that
I am a fan of the Bill. It covers testing and sandboxes, it
considers what the public want, and it deals with a very
important specific issue that I have raised a number of times in
the House, in the form of creating AI-responsible officers. On
that point, the CEO of the International Association of Privacy
Professionals came to see me recently and made an enormously
compelling case that, globally, we need hundreds of thousands of
AI professionals, as the systems become smarter and more
ubiquitous, and that those professionals will need standards and
norms within which to work. He also made the case that the UK
would be very well-placed to create those professionals at
scale.
I have a couple of additions—unless the Minister is going to
make a surprise announcement that he will take the Bill on in
full. First, under
Clause 2, which sets out regulatory principles, I would like to
see consideration of children’s rights and development needs;
employment rights, concerning both management by AI and job
displacement; a public interest case; and more clarity that
material that is an offence—such as creating viruses, CSAM or
inciting violence—is also an offence, whether created by AI or
not, with specific responsibilities that accrue to users,
developers and distributors.
The Stanford Internet Observatory recently identified hundreds of
known images of child sexual abuse material in an open dataset
used to train popular AI text-to-image models, saying:
“It is challenging to clean or stop the distribution of publicly
distributed datasets as it has been widely disseminated. Future
datasets could use freely available detection tools to prevent
the collection of known CSAM”.
The report illustrates that it is very possible to remove such
images, but that those who compiled the dataset did not bother,
and now those images are
proliferating at scale.
We need to have rules upon which AI is developed. It is poised to
transform healthcare, both diagnosis and treatment. It will take
the weight out of some of the public services we can no longer
afford, and it will release money to make life better for many.
However, it brings forward a range of dangers, from fake images
to lethal autonomous weapons and deliberate pandemics. AI is not
a case of good or bad; it is a question of uses and abuses.
I recently hosted Geoffrey Hinton, whom many will know as the
“godfather of AI”. His address to parliamentarians was as
chilling as it was compelling, and he put timescales on the
outcomes that leave no time to wait. I will not stray into his
points about the nature of human intelligence, but he was utterly
clear that the concentration of power, the asymmetry of benefit
and the control over resources—energy, water and hardware—needed
to run these powerful systems would be, if left until later, in
so few hands that they, and not we, would be doing the rule
setting.
My final question is: if we have no AI Bill, can the Government
please consider putting the content of the AI regulation Bill
into the data Bill currently passing through Parliament and deal
with it in that way?
10.50am
Lord Vaizey of Didcot (Con)
My Lords, I thought that this would be one of the rare debates
where I did not have an interest to declare, but then I heard the
noble Lord, Lord Young, talking about AI and education and
realised that I am a paid adviser to Common Sense Media, a large
US not-for-profit that campaigns for internet safety and has
published the first ever ratings of AI applications used in
schools. I refer the noble Lord to its excellent work in this
area.
It is a pleasure to speak in the debate on this Bill, so ably put
forward by the noble Lord, Lord Holmes. It is pretty clear from
the reaction to his speech how much he is admired in this House
for his work on this issue and so many others to do with media
and technology, where he is one of the leading voices in public
affairs. Let me say how humiliating it is for me to follow the
noble Baronesses, Lady Stowell and Lady Kidron, both of whom are
experts in this area and have done so much to advance public
policy.
I am a regulator and in favour of regulation. I strongly
supported the Online Safety Act, despite the narrow byways and
culs-de-sac it ended up in, because I believe that platforms and
technology need to be accountable in some way. I do not support
people who say that the job is too big to be attempted—we must
attempt it. What I always say about the Online Safety Act is that
the legislation itself is irrelevant; what is relevant is the
number of staff and amount of expertise that Ofcom now has, which
will make it one of the world’s leaders in this space.
We talk about AI now because it has come to the forefront of
consumers’ minds through applications such as ChatGPT, but large
language models and the use of AI have been around for many
years. As AI becomes ubiquitous, it is right that we now consider
how we could or should regulate it. Indeed, with the approaching
elections, not just here in the UK but in the United States and
other areas around the world, we will see the abuse of artificial
intelligence, and many people will wring their hands about how on
earth to cope with the plethora of disinformation that is likely
to emerge.
I am often asked at technology events, which I attend
assiduously, what the Government’s policy is on artificial
intelligence. To a certain extent I have to make it up, but to a
certain extent I think that, broadly speaking, I have it right.
On the one hand, there is an important focus on safety for
artificial intelligence to make it as safe as possible for
consumers, which in itself begs the question of whether that is
possible; on the other, there is a need to ensure that the UK
remains a wonderful place for AI innovation. We are rightly proud
that DeepMind, although owned by Google, wishes to stay in the
UK. Indeed, in a tweet yesterday the Chancellor himself bigged up
Mustafa Suleyman for taking on the role of leading AI at
Microsoft. It is true that the UK remains a second-tier nation in
AI after China and the US, but it is the leading second-tier
nation.
The question now is: what do we mean by regulation? I do not
necessarily believe that now is the moment to create an AI safety
regulator. I was interested to hear the contribution of the noble
and learned Lord, Lord Thomas, who referred to the 19th century.
I refer him to the late 20th century and the early 21st century:
the internet itself has long been self-regulated, at least in
terms of the technology and the global standards that exist, so
it is possible for AI to proceed largely on the basis of
self-regulation.
The Government’s approach to regulation is the right one. We
have, for example, the Digital Regulation Cooperation Forum,
which brings together all the regulators that either obviously,
such as Ofcom, or indirectly, such as the FCA, have skin in the game
when it comes to digital. My specific request to the Minister is
to bring the House up to date on the work of that forum and how
he sees it developing.
I was surprised by the creation of the AI Safety Institute as a
stand-alone body with such generous funding. It seems to me that
the Government do not need legislation to do an examination of
the plethora of bodies that have sprung up over the last 10 or 15
years. Many of them do excellent work, but where their
responsibilities begin and end is confusing. They include the Ada
Lovelace Institute, the Alan Turing Institute, the AI Safety
Institute, Ofcom and DSIT, but how do they all fit together into
a clear narrative? That is the essential task that the Government
must now undertake.
I will pick up on one remark that the noble Baroness, Lady
Stowell, made. While we look at the flashy stuff, if you like,
such as disinformation and copyright, she is quite right to say
that we have to look at the picks and shovels as AI becomes more
prevalent and as the UK seeks to maintain our lead. Boring but
absolutely essential things such as power networks for data
centres will be important, so they must also be part of the
Government’s task.
10.56am
Lord Empey (UUP)
My Lords, like other Members, I congratulate the noble Lord, Lord
Holmes, on what he has been doing.
The general public have become more aware of AI in very recent
times, but it is nothing new; people have been working on it for
decades. Because it is reaching such a critical mass and getting
into all the different areas of our lives, it is now in the
public mind. While I do not want to get into the minutiae of the
Bill—that is for Committee—speaking as a non-expert, I think that
the general public are now at a stage where they have a right to
know what legislators think. Given the way things have developed
in recent years, the Government cannot stand back and do
nothing.
Like gunpowder, AI cannot be uninvented. The growing capacity of
chips and other developments, and the power of a limited number
of companies around the world, ensure that such a powerful tool
will now be in the hands of a very small number of corporations.
The Prime Minister took the lead last year and indicated that he
wished to see the United Kingdom as a world leader in this field,
and he went to the United States and other locations. I rather
feel that we have lost momentum and that nothing is currently
happening that ought to be happening.
As with all developments, there are opportunities and threats. In
the last 24 hours, we have seen both. As the noble Lord, Lord
Holmes, pointed out, studies on breast cancer were published
yesterday, showing that X-rays, CT scans, et cetera were
interpreted more accurately by AI than by humans. How many cases
have we had in recent years of tests having to be recalled by
various trusts, causing anxiety and stress for thousands upon
thousands of patients? It is perfectly clear that, in the field
of medicine alone, AI could not only improve treatment rates but
relieve people of a lot of the anxieties that such inaccuracies
cause. We also saw the threats on our television screens last
night. As the noble Lord mentioned, a well-known newscaster
showed that she had been depicted by AI in a porn movie—she had it
there on the screens for us to see last night. So you can see the
threats as well as the opportunities.
So the question is: can Parliament, can government, stand by and
just let things happen? I believe that the Government cannot idly
stand by. We have an opportunity to lead. Yes, we do not want to
create circumstances where we suffocate innovation. There is an
argument over regulation between what the noble Viscount, Lord Chandos, said, what the noble and
learned Lord, Lord Thomas, said, and what I think the
Government’s response will be. However, bolting bits on to
existing regulators is not necessarily the best way of doing
business. You need a laser focus on this and you need the people
with the capacity and the expertise. They are not going to be
widely available and, if you have a regulator with too much on
its agenda, the outcome will be fairly dilute and feeble.
In advance of this, I said to the Minister that we have all seen
the “Terminator” movies, and I am sure that the general public
have seen them over the years. The fact is that it is no longer
as far-fetched as it once was. I have to ask the Minister: what
is our capacity to deal with hacking? If it gets into weapons
systems, never mind utilities, one can see straight away a huge
potential for danger.
So, once again, we are delighted that the Bill has been brought
forward. I would like to think that ultimately the Government
will take this over, because that is the only way that it will
become law, and it does need refinement. A response from the
Minister, particularly on the last point, which creates huge
anxiety, would be most beneficial.
11.01am
Lord Freyberg (CB)
My Lords, I too am very grateful to the noble Lord, Lord Holmes, for introducing
this important Artificial Intelligence (Regulation) Bill. In my
contribution today, I will speak directly to the challenges and
threats posed to visual artists by generative AI and to the need
for regulatory clarity to enable artists to explore the creative
potential of AI. I declare my interest as having a background in
the visual arts.
Visual artists have expressed worries, as have their counterparts
in other industries and disciplines, about their intellectual
property being used to train AI models without their consent,
credit or payment. In January 2024, lists containing the names of
more than 16,000 non-consenting artists whose works were
allegedly used to train the Midjourney generative AI platform
were accidentally leaked online, intensifying the debate on
copyright and consent in AI image creation even further.
The legality of using human artists’ work to train generative AI
programmes remains unclear, but disputes over documents such as
the Midjourney style list, as it became known, provide insight
into the real procedures involved in turning copyrighted artwork
into AI reference material. These popular AI image-generator
models are extremely profitable for their owners, the majority of
whom are situated in the United States. Midjourney was valued at
around $10.5 billion in 2022. It stands to reason that, if
artists’ IP is being used to train these models, it is only fair
that they be compensated, credited and given the option to opt
out.
DACS, the UK’s leading copyright society for artists, of which I
am a member, conducted a survey that received responses from
1,000 artists and their representatives, 74% of whom were
concerned about their own work being used to train AI models.
Two-thirds of artists cited ethical and legal concerns as a
barrier to using such technology in their creative practices.
DACS also heard first-hand accounts of artists who found that
working creatively with AI has its own set of difficulties, such
as the artist who made a work that included generative AI and
wanted to distribute it on a well-known global platform. The
platform did not want the liabilities associated with an
unregistered product, so it asked for the AI component to be
removed. If artists are deterred from using AI or face legal
consequences for doing so, creativity will suffer. There is a
real danger that artists will miss out on these opportunities,
which would worsen their already precarious financial situation
and challenging working conditions.
In the same survey, artists expressed fear that human-made
artworks will have no distinctive or unique value in the
marketplace in which they operate, and that AI may thereby render
them obsolete. One commercial photographer said, “What’s the
point of training professionally to create works for clients if a
model can be trained on your own work to replace you?” Artists
rely on IP royalties to sustain a living and invest in their
practice. UK artists are already low-paid and two-thirds are
considering abandoning the profession. Another artist remarked in
the survey, “Copyright makes it possible for artists to dedicate
time and education to become a professional artist. Once
copyright has no meaning any more, there will be no more
possibility to make a living. This will be detrimental to society
as a whole”.
It is therefore imperative that we protect their copyright and
provide fair compensation to artists whose works are used to
train artificial intelligence. While the Bill references IP,
artists would have welcomed a specific clause on remuneration and
an obligation for owners of copyright material used in AI
training to be paid. To that end, it is critical to
maintain a record of every work that AI applications use,
particularly to validate the original artist’s permission. There
is currently no legal requirement to disclose the content on
which AI systems are trained. Record-keeping requirements are starting
to appear in regulatory proposals related to AI worldwide,
including those from China and the EU.
The UK ought to adopt a similar mandate, requiring companies using
material in their AI systems to keep track of the works that
those systems have ingested and learned from. To differentiate
AI-generated images from human-made works, the Government should make sure
that any commercially accessible AI-generated works are branded
as such. As the noble Lord, Lord Holmes, has already mentioned,
labelling shields consumers from false claims about what is and
is not AI-generated. Furthermore, given that many creators work
alone, every individual must have access to clear, appropriate
redress mechanisms so that they can meaningfully challenge
situations where their rights have been misused. I also welcome
the Bill’s requirement that the use of any training data
must be preceded by informed consent. This measure will go some
way to safeguarding artists’ copyright and providing them with
the necessary agency to determine how their work is used in
training, and on what terms.
In conclusion, I commend the noble Lord, Lord Holmes, for
introducing this Bill, which will provide much-needed regulation.
Artists themselves support these measures, with 89% of
respondents to the DACS survey expressing a desire for more
regulation around AI. If we want artists to use AI and be
creative with new technology, we need to make it ethical and
viable.
11.07am
Baroness Moyo (Con)
My Lords, I join other noble Lords in commending the noble Lord,
Lord Holmes, for bringing forward this Bill.
I come to this debate with the fundamental belief that supporting
innovation and investment must be embedded in all regulation, but
even more so in the regulation of artificial intelligence. After
all, this wave of artificial intelligence is being billed as a
catalyst that could propel economic growth and human progress for
decades to come. The United Kingdom should not miss this
supercycle and the promise of a lengthy period of economic
expansion—the first of its kind since globalisation and
deregulation 40 years ago.
With this in mind, in reading the AI regulation Bill I am struck
by the weight of emphasis on risk mitigation, as opposed to
innovation and investment. I realise that the Government, through
other routes, including the pro-innovation stance that we have
talked about, are looking into innovation and investment. Even
so, I feel that, on balance, the weight here is more on risk
mitigation than innovation. I am keen that, in the drafting and
execution of the artificial intelligence authority’s mandate in
particular, and in the evolution of this Bill in general, the
management of risk does not deter investment in this
game-changing innovation.
I am of course reassured that innovation and opportunity are
mentioned at least twice in the Bill. For example, Clause
6(a) signals that the public engagement exercise will
consider
“the opportunities and risks presented by AI”.
Perhaps more pointedly, Clause 1(2)(e) states that the functions
of the AI authority are to include support for
innovation. However, this mandate is at best left open to
interpretation and at worst downgrades the importance and
centrality of innovation.
My concern is that the new AI authority could see support for
innovation as a distant or secondary objective, and that
risk-aversion and mitigation become the cultural bedrock of the
organisation. If we were to take too heavy-handed a
risk-mitigation approach to AI, what opportunities could be
missed? In terms of economic growth, as my noble friend Lord
Holmes mentioned, PricewaterhouseCoopers estimates that AI could
contribute more than $15 trillion to the world economy by 2030.
In this prevailing era of slow economic growth, AI could
meaningfully alter the growth trajectory.
In terms of business, AI could spur a new start-up ecosystem,
creating a new generation of small and medium-sized enterprises.
Furthermore, to underscore this point, AI promises to boost
productivity gains, which could help generate an additional $4.4
trillion in annual profits, according to a 2023 report by
McKinsey. To place this in context, this annual gain is nearly
one and a half times the size of the UK’s annual GDP.
On public goods such as education and healthcare, the Chancellor
in his Spring Budget a few weeks ago indicated the substantial
role that a technology upgrade, including the use of AI, could
play in improving delivery and access and in unlocking up to £35
billion of savings.
Clearly, a lot is at stake. This is why it is imperative that
this AI Bill, and the way it is interpreted, strikes the right
balance between mitigating risk and supporting investment and
innovation.
I am very much aware of the perennial risks of malevolent state
actors and errant new technologies, and thus, the need for
effective regulation is clear, as the noble and learned Lord,
Lord Thomas, stressed. This is unambiguous, and I support the
Bill. However, we must be alert to the danger of regulation
becoming a synonym for risk-management. This would overshadow the
critical regulatory responsibility of ensuring a competitive
environment in which innovation can thrive and thereby attract
investment.
11.12am
The Lord Bishop
My Lords, I guarantee that this is not an AI-generated speech.
Indeed, Members of the House might decide after five minutes that
there is not much intelligence of any kind involved in its
creation. Be that as it may, we on these Benches have engaged
extensively with the impacts and implications of new technologies
for years—from contributions to the Warnock committee in the
1980s through to the passage of the Online Safety Bill through
this House last year. I am grateful to the noble Lord, Lord
Holmes, for this timely and thoughtful Bill and for his brilliant
introduction to it. Innovation must be enthusiastically
encouraged, as the noble Baroness, Lady Moyo, has just reminded
us. It is a pleasure to follow her.
That said, I will take us back to first principles for a moment:
to Christian principles, which I hope all of good will would want
to support. From these principles arise two imperatives for
regulation and governance, whatever breakthroughs new
technologies enable. The first is that a flourishing society
depends on respecting human dignity and agency. The more any new
tool threatens such innate dignity, the more carefully it should
be evaluated and regulated. The second imperative is a duty of
government, and all of us, to defend and promote the needs of the
nation’s weak and marginalised —those who cannot always help
themselves. I am not convinced that the current pro-innovation
and “observe first, intervene later” approach to AI gets this
perennial balance quite right. For that reason, I support the
ambitions outlined in the Bill.
There are certainly aspects of last year’s AI White Paper that
get things in the right order: I warmly commend the Government
for including fairness, accountability and redress among the five
guiding principles going forward. Establishing an AI authority
would formalise the hub-and-spoke structure the Government are
already putting in place, with the added benefit of shifting from
a voluntary to a compulsory basis, and an industry-funded
regulatory model of the kind the Online Safety Act is beginning
to implement.
The voluntary code of practice on which the Government’s approach
currently depends is surely inadequate. The track record of the
big tech companies that developed the AI economy and are now
training the most powerful AI models shows that profit trumps
users’ safety and well-being time and again. “Move fast and break
things” and “act first, apologise later” remain the lodestars.
Sam Altman’s qualities of character and conduct while at the helm
of OpenAI have come under considerable scrutiny over the last few
months. At Davos in January this year, the Secretary-General of
the United Nations complained:
“Powerful tech companies are already pursuing profits with a
reckless disregard for human rights, personal privacy, and social
impact.”
How can it be right that the richest companies in history have no
mandatory duties to financially support a robust safety
framework? Surely, it should not be for the taxpayer alone to
shoulder the costs of an AI digital hub to find and fix gaps that
lead to risks or harm. Why should the taxpayer shoulder the cost
of providing appropriate regulatory sandboxes for testing new
product safety?
The Government’s five guiding principles are a good guide for AI,
but they need legal powers underpinning them and the sharpened
teeth of financial penalties for corporations that intentionally
flout best practice, to the clear and obvious harm of
consumers.
I commend the ambitions of the Bill. A whole-system, proportional
and legally enforceable approach to regulating AI is urgently
needed. Balancing industry’s need to innovate with its duty to
respect human dignity and the vulnerable in society is vital if
we are safely to navigate the many changes and challenges not
just over the horizon but already in plain sight.
11.17am
Lord Davies of Brixton (Lab)
My Lords, I speak not as an expert in AI but as a user, and I
make no apology for the fact that I use it to do my work here in
this Chamber. Your Lordships can form your own judgment as to
which bits of my following remarks were written by me, and which
are from ChatGPT.
I very much welcome the Bill. The noble Lord, Lord Holmes, gave
us an inspirational speech which was totally convincing on the
need for legislation. The Bill is obviously the first step along
that way.
The promise of artificial intelligence is undeniable. There is a
large degree of hype from those with vested interests, and there
is, to a significant extent, a bubble. Nevertheless, even if that
is true, we still need an appropriate level of regulation.
AI provides the opportunity to revolutionise industries, enhance
our daily lives and solve some of the most pressing problems we
face today—from healthcare to climate change—offering solutions
that are not available in other ways. However, with greater power
comes greater responsibility. The rapid advance of AI technology
has outpaced our regulatory frameworks, leading to innovation
without adequate oversight, ethical consideration or
accountability, so we undoubtedly need a regulator. I take the
point that it has to be focused and simple. We need rigorous
ethical standards and transparency in AI development to ensure
that these technologies serve the good of all, not just
commercial interests. We cannot wait for these forces to play out
before deciding what needs to be done. I very much support the
remarks of the previous speaker, the right reverend Prelate, who
set out the
position very clearly.
We need to have a full understanding of the implications of AI
for employment and the workforce. These technologies will
automate tasks previously performed by humans, and we face
significant impacts on the labour market. The prevailing model
for AI is to seek the advantage for the developers and not so
much for the workers. This is an issue we will need to confront.
We will have to debate the extent to which that is the job of the
regulator.
As I indicate, I favour a cautious approach to AI development. We
should be focusing on meaningful applications that prioritise
human well-being and benefits to society over corporate profit.
Again, how this fits in with the role of the regulator is for
discussion, but a particular point that needs to be made here is
that we need to understand the massive amounts of energy that
even simple forms of AI consume. This needs to be borne in mind
in any approach to developing this industry.
In the Bill, my attention was caught by the use of the undefined
term “relevant regulators”. Perhaps the noble Lord, Lord Holmes,
could fill that in a bit more; it is a bit of a catch-all at the
moment. My particular concern is the finance industry, which will
use this technology massively, not necessarily to the benefit of
consumers. The noble and learned Lord, Lord Thomas, emphasised the
problem of regulatory arbitrage. We need a consistent layer of
regulation. Another concern is mental health: there will be AI
systems that claim to offer benefits to those with mental health
problems. Again, this will need severe regulation.
To conclude, I agree with my noble friend that regulation is not necessarily
the enemy of economic success. There is a balance to be drawn
between gaining all the benefits of technology and the potential
downsides. I welcome the opportunity to discuss how this should
be regulated.
11.22am
Lord Fairfax of Cameron (Con)
My Lords, I too congratulate my noble friend Lord Holmes on
bringing forward this AI regulation Bill, in the context of the
continuing failure of the Government to do so. At the same time,
I declare my interest as a long-term investor in at least one
fund that invests in AI and tech companies.
A year ago, one of the so-called godfathers of AI, Geoffrey
Hinton, cried “fire” about where AI was going and, more
importantly, when. Just last week, following the International
Dialogue on AI Safety in Beijing, a joint statement was issued by
leading western and Chinese figures in the field, including
Chinese Turing award winner Andrew Yao, Yoshua Bengio and Stuart
Russell. Among other things, that statement said:
“Unsafe development, deployment, or use of AI systems may pose
catastrophic or even existential risks to humanity within our
lifetimes … We should immediately implement domestic registration
for AI models and training runs above certain compute or
capability thresholds”.
Of course, we are talking about not only extinction risks but
other very concerning risks, some of which have been mentioned by
my noble friend Lord Holmes: extreme concentration of power,
deepfakes and disinformation, wholesale copyright infringement
and data-scraping, military abuse of AI in the nuclear area, the
risk of bioterrorism, and the opacity and unreliability of some
AI decision-making, to say nothing of the risk of mass
unemployment. Ian Hogarth, the head of the UK AI Safety
Institute, has written in the past about some of these concerns
and risks.
Nevertheless, despite signing the Center for AI Safety statement
and publicly admitting many of these serious concerns, the
leading tech companies continue to race against each other
towards the holy grail of artificial general intelligence. Why is
this? Well, as they say, “It’s the money, stupid”. It is
estimated that, between 2020 and 2022, $600 billion in total was
invested in AI development, and much more has been since. This is
to be compared with the pitifully small sums invested by the AI
industry in AI safety. We have £10 million from this Government
now. These factors have led many people in the world to ask how
it is that they have accidentally outsourced their entire futures
to a few tech companies and their leaders. Ordinary people have a
pervading sense of powerlessness in the face of AI
development.
These facts also raise the question of why the Government
continue to delay putting in place proper and properly funded
regulatory frameworks. Others, such as the EU, US, Italy, Canada
and Brazil, are taking steps towards regulation, while, as noble
Lords have heard, China has already regulated and India plans to
regulate this summer. Here, the shadow IT Minister has indicated
that, if elected, a new Labour Government would regulate AI.
Given that a Government’s primary duty is to keep their country
safe, as we so often heard recently in relation to the defence
budget, this is both strange and concerning.
Why is this? There is a strong suspicion in some quarters that
the Prime Minister, having told the public immediately before the
Bletchley conference that AI brings national security risks that
could end our way of life, and that AI could pose an extinction
risk to humanity, has since succumbed to regulatory capture. Some
also think that the Government do not want to jeopardise
relations with leading tech companies while the AI Safety
Institute is gaining access to their frontier models. Indeed, the
Government proudly state that they
“will not rush to legislate”,
reinforcing the concern that the Prime Minister may have gone
native on this issue. In my view, this deliberate delay on the
part of the Government is seriously misconceived and very
dangerous.
What have the Government done to date? To their credit, they
organised and hosted Bletchley, and importantly got China to
attend too. Since then, they have narrowed the gap between
themselves and the tech companies—but the big issues remain,
particularly the critical issue of regulation versus
self-regulation. Importantly, and to their credit, the Government
have also set up the UK AI Safety Institute, with some impressive
senior hires. However, no one should be in any doubt that this
body is not a regulator. On the critical issue of the continuing
absence of a dedicated unitary AI regulator, it is simply not
good enough for the Government to say that the various relevant
government bodies will co-operate on oversight of AI. It
is obvious to almost everyone, apart from the Government
themselves, that a dedicated, unitary, high-expertise and very
well-funded UK AI regulator is required now.
The recent Gladstone AI report, commissioned by the US
Government, has highlighted similar risks to US national security
from advanced AI development. Against this concerning background,
I strongly applaud my noble friend Lord Holmes for bringing
forward the Bill. It may of course be able to be improved, but
its overall intention and thrust are absolutely right.
11.28am
The Earl of Erroll (CB)
My Lords, I entirely agree with those last sentiments, which will
get us thinking about what on earth we do about this. An awful
lot of nonsense is talked, and a lot of great wisdom is talked.
The contributions to the debate have been useful in getting
people thinking along the right lines.
I will say something about artificial general intelligence, which
is very different, because it may well aim to control people or
the environment in which we live, rather than generative AI or
large language models, which I think people are thinking of:
ChatGPT, Llama, Google Gemini, and all those bits and pieces.
They are trawling through large amounts of information incredibly
usefully and producing a well-formatted epitome of what is in
there. Because you do not have time to read, for instance, large
research datasets, they can find things in them that you have not
had time to trawl through and find. They can be incredibly useful
for development there.
AI could start to do other things: it could control things and we
could make it take decisions. Some people suggest that it could
replace the law courts and a lot of those sorts of things. But
the problem with that is that we live in a complex world and
complex systems are not deterministic, to use a mathematical
thing. You cannot control them with rules. Rules have unintended
consequences, as is well known—the famous butterfly effect. You
cannot be certain about what will happen when you change one
little bit. AI will not necessarily be able to predict that
because, if you look at how it trains itself, you do not know
what it has learned—it is not done by algorithm, and some AI
systems can modify their own code. So you do not know what it is
doing and you cannot regulate for the algorithms or any of
that.
I think we have to end up regulating, or passing laws on, the
outcomes. We always did this in common law: we said, “Thou shalt
not kill”, and then we developed it a bit further, but the
principle of not going around killing people was established. The
same is true of other simple things like “You shan’t nick
things”. It is what comes out of it that matters. This applies
when you want to establish liability, which we will have to do in
the case of self-driving cars, for instance, which will take over
more and more as other things get clogged up. They will crash
less, kill fewer people and cause fewer accidents. But, because
it is a machine doing it, it will be highly psychologically
unacceptable, even though with human drivers there will be more accidents.
There will have to be changes in thought on that.
Regulation or legislation has to be around the outcomes rather
than the method, because we cannot control where these things go.
A computer does not have an innate sense of right and wrong or
empathy, which comes into human decisions a lot. We may be able
to mimic it, and we could probably train computers up on models
to try to do that. One lot of AI might try to say whether another
lot of AI is producing okay outcomes. It will be very
interesting. I have no idea how we will get there.
Another thing that will be quite fun is when the net-zero people
get on to these self-training models. An LLM trawling through
data uses huge amounts of energy, which will not help us towards
our net-zero capabilities. However, AI might help if we put it in
charge of planning how to get electricity from point A to point B
in an acceptable fashion. But on the other hand people will not
trust it, including planners. I am sorry—I am trying to
illustrate a complex system. How on earth can you translate that
into something that you can put on paper and try to control? You
cannot, and that is what people have to realise. It is an
interesting world.
I am glad that the Bill is coming along, because it is high time
we started thinking about this and what we expect we can do about
it. It is also transnational—it goes right across all borders—so
we cannot regulate in isolation. In this new interconnected and
networked world, we cannot have a little isolated island in the
middle of it all where we can control it—that is just not going
to happen. Anyway, we live in very interesting times.
11.33am
Lord Kirkhope of Harrogate (Con)
My Lords, as has been illustrated this morning, we stand on the
cusp of a technological revolution. We find ourselves at the
crossroads between innovation and responsibility. Artificial
intelligence, a marvel of modern science, promises to reshape the
world. Yet with great power comes great responsibility, and it is
therefore imperative that we approach this with caution.
Regulation in the realm of AI is not an adversary to innovation;
rather, it is the very framework within which responsible and
sustainable innovation must occur. Our goal should not be to
stifle the creative spirit but to channel it, ensuring that it
serves the common good while safeguarding our societal values and
ethical standards.
However, we must not do this in isolation. In the digital domain,
where boundaries blur, international collaboration becomes not
just beneficial but essential. The challenges and opportunities
presented by AI do not recognise national borders, and our
responses too must be global in perspective. The quest for
balance in regulation must be undertaken with a keen eye on
international agreements, ensuring that the UK remains in step
with the global community, not at odds with it. In our pursuit of
this regulatory framework suitable for the UK, we must consider
others. The European Union’s AI Act, authored by German MEP Axel
Voss, offers valuable insights and, by examining what works
within the EU’s and other approaches, as well as identifying
areas for improvement, we can learn from the experiences of our
neighbours to forge a path that is distinctly British, yet
globally resonant.
Accountability stands as a cornerstone in the responsible
deployment of AI technologies. Every algorithm and every
application that is released into the world must have a clearly
identifiable human or corporate entity behind it. This is where
the regulatory approach must differ from that inherent in the
general data protection regulations, which I had the pleasure of
helping to formulate in Brussels. This accountability is crucial
for ethical, legal and social reasons, ensuring that there is
always a recourse and a responsible party when AI systems
interact with our world.
Yet, as we delve into the mechanics of regulation and oversight,
we must also pause to reflect on the quintessentially human
aspect of our existence that AI can never replicate: emotion. The
depth and complexity of emotions that define our humanity remain
beyond the realm of AI and always will. These elements, intrinsic
to our being, highlight the irreplaceable value of the human
touch. While AI can augment, it can never replace human
experience. The challenge before us is to foster an environment
where innovation thrives within a framework of ethical and
responsible governance. We must be vigilant not to become global
enforcers of compliance at the expense of being pioneers of
innovation.
The journey we embark on with the regulation of AI is not one
that ends with the enactment of laws; that is merely the
beginning. The dynamic nature of AI demands that our regulatory
frameworks be agile and capable of adapting to rapid advancements
and unforeseen challenges. So, as I have suggested on a number of
occasions, we need smart legislation—a third tier of legislation
behind the present primary and secondary structures—to keep up
with these things.
In the dynamic landscape of AI, the concept of sandboxes stands
out as a forward-thinking approach to innovation in this field.
This was referred to by my noble friend in introducing his Bill.
They offer a controlled environment where new technologies can be
tested and refined without the immediate pressures and risks
associated with full-scale deployment.
I emphasise that support for small and medium-sized enterprises
in navigating the regulatory landscape is of paramount
importance. These entities, often the cradles of innovation, must
be equipped with the tools and knowledge to flourish within the
bounds of regulation. The personnel in our regulatory authorities
must also be of the highest calibre—individuals who not only
comprehend the technicalities of AI but appreciate its broader
implications for society and the economy.
At this threshold of a new era shaped by AI, we should proceed
with caution but also with optimism. Let us never lose sight of
the fact that at the heart of all technological advancement lies
the indomitable spirit of human actions and emotions, which no
machine or electronic device can create alone. I warmly welcome
my noble friend Lord Holmes’s Bill, which I will fully support
throughout its process in this House.
11.38am
Baroness Finlay of Llandaff (CB)
My Lords, I am most grateful to the noble Lord, Lord Holmes, for
the meeting he arranged to explain his Bill in detail and to
answer some of the more naive questions from some Members of this
House. Having gone through the Bill, I cannot see how we can
ignore the importance of this, going forwards. I am also grateful
to my noble friend Lady Kidron for the meeting that she
established, which I think educated many of us on the realities
of AI.
I want to focus on the use of AI in medicine because that is my
field. The New England Journal of Medicine has just launched NEJM
AI as a new journal to collate what is happening. AI use is
becoming widespread but across the NHS tends to be small-scale.
People hope that AI will streamline administrative tasks which
are burdensome, improve communication with patients and do even
simple things such as making out-patient appointments more
suitable for people and arranging transport better.
For any innovations to be used, however, the infrastructure needs
to be set up. I was struck yesterday at a meeting on emergency
medicine where the consultant explained that it now takes longer
to enter patient details in the computer system than it used to
using old-fashioned pen and paper—the reason being that the
computer terminals are not in the room where the consultation is
happening so it takes people away.
A lot of people in medicine are tremendously enthusiastic—we see
the benefits in diagnostics for images of cells and X-rays and so
on—but there has to be reliability. Patients and people in the
general population are buying different apps to diagnose things
such as skin cancers, but the reliability of these apps is
unproven. What we need is the use of AI to improve diagnostic
accuracy. Currently, post-mortems show about a 5% error in what
is written on the death certificate; in other words, at
post-mortem, people are found to have died of something different
from the disease or condition they were being treated for. So we
have to improve diagnostics right across the piece.
But the problem is that we have to put that in a whole system.
The information that goes in to train and teach these diagnostic
systems has to be of very high quality, and we need audit in
there to make sure that high quality is maintained. Although we
are seeing fantastic advances in images such as mammograms,
supporting radiologists, and in rapid diagnosis of strokes and so
on, there is a need to ensure quality control in the system, so
that it does not go wild on its own, that the input is being
monitored and, as things change in the population, that that
change is also being detected.
Thinking about this Bill, I reflected on the Covid experience,
when the ground-glass appearance on X-rays was noted to be
distinctive and new in advanced Covid lung disease. We have a
fantastic opportunity, if we could use all of that data properly,
to follow up people in the long term, to see how many have got
clotting problems and how many later go on to develop
difficulties or other conditions of which we have been unaware.
There could be a fantastic public health benefit if we use the
technology properly.
The problem is that, if it is not used properly, it will lose
public trust. I noted that, in his speech introducing the Bill,
the noble Lord, Lord Holmes, used the word “trust” very often. It
seems that a light-touch regulator that goes across many domains
and areas is what we will need. It will protect copyright,
protect the intellectual property rights of the people who are
developing systems, keep those investments in the UK and foster
innovation in the UK. Unless we do that, and unless we develop
trust across the board, we will fail in our developments in the
longer term. The real world that we live in today has to be safe,
and the world that AI takes us into has to be safe.
I finish with a phrase that I have often heard resonate in my
ears:
“Trust arrives on foot and leaves on horseback”.
We must not let AI be the horses that take all the trust in the
developments away.
11.44am
Lord Ranger of Northwood (Con)
My Lords, are we ready for the power of artificial intelligence?
With each leap in human ability to invent and change what we can
achieve, we have utilised a new power, a new energy that has
redefined the boundaries of imagination: steam and the Industrial
Revolution; electricity and the age of light; and so, again, we
stand on the precipice of another seismic leap.
However, the future of AI is not just about what we can do with
it but about who will have access to control its power. So I
welcome the attempt made by my noble friend Lord Holmes via this
Bill to encourage an open public debate on democratic oversight
of AI, but I do have some concerns. Our view of AI at this early
stage is heavily coloured by how this power will deliver
automation and the potential reduction of process-reliant jobs
and how those who hold the pen on writing the algorithms behind
AI could exert vast power and influence on the masses via media
manipulation. We fear that the AI genie is out of the bottle and
we may not be able to control it. The sheer, limitless potential
of AI is intimidating.
If, like me, you are from a certain generation, these seeds of
fear and fascination at the power of artificial intelligence have
long been planted by numerous Hollywood movies picking on our
hopes, dreams and fears of what AI could do to us. Think of the
unnerving subservience of HAL in Stanley Kubrick’s “2001: A Space
Odyssey” made in 1968, the menacing and semi-obedient robot
Maximilian from the 1979 Disney production “The Black Hole”, the
fantasy woman called Lisa created by the power of 80s home
computing in “Weird Science” from 1985, and, of course, the
ultimate hellish future of machine intelligence taking over the
world in the form of Skynet in “The Terminator” made in 1984.
These and many other futuristic interpretations of AI helped to
fan the flames in the minds of engineers, computer scientists and
super-geeks, many of whom created and now run the biggest tech
firms in the world.
But where are we now? The advancement in processing power,
coupled with vast amounts of big data and developments such as
large language models, have led to the era of commercialisation
of AI. Dollops of AI are available in everyday software
programs via chatbots and automated services. Obviously, the
emergence of ChatGPT turbocharged the public awareness and usage
of the technology. We have poured algorithms into machines and
made them “think”. We have stopped prioritising trying to get
robots to look and feel like us, and focused instead on the
automation of systems and processes, enabling them to do more
activities. We have moved from the pioneering to the application
era of AI.
With all this innovation, with so many opportunities and benefits
to be derived by its application, what should we fear? My answer
is not from the world of Hollywood science fiction; it relates
not to individuals losing control to machines but, rather, to how
we will ensure that this power remains democratic and accessible
and benefits the many. How will we ensure that control does not
fall into the hands of the few, that wealth does not determine
the ability to benefit from innovation and that a small set of
organisations do not gain ultimate global control or influence
over our lives? How, also, will we ensure that Governments and
bureaucracies do not end up ever furthering the power and control
of the state through well-intentioned regulatory control? This is
why we must appreciate the size of this opportunity, think about
the long-term future, and start to design the policy frameworks
and new public bodies that will work in tandem with those who
will design and deliver our future world.
But here is the rub: I do not believe we can control, manage or
regulate this technology through a single authority. I am
extremely supportive of the ambitions of my noble friend Lord
Holmes to drive this debate. However, I humbly suggest that the
question we need to focus on will be how we can ensure that the
innovations, outcomes and quality services that AI delivers are
beneficial and well understood. The Bill as it stands may be
overambitious in the scope it gives this AI authority: to act as
oversight across other regulators; to assess safety, risks and
opportunities; to monitor risks across the economy; to promote
interoperability and regulatory frameworks; and to act as an
incubator to innovation. To achieve this and more, the AIA would
need vast cross-cutting capability and resources. Again, I
appreciate what my noble friend Lord Holmes is trying to achieve
and, as such, I would say that we need to consider with more
focus the questions that we are trying to answer.
I wholeheartedly believe and agree that the critical role will be
to drive public education, engagement and awareness of AI, and
where and how it is used, and to clearly identify the risks and
benefits to the end-users, consumers, customers and the broader
public. However, I strongly suggest that we do not begin this
journey by requiring labelling, under Clause 5(1)(a)(iii), using
“unambiguous health warnings” on AI products or services. That
would not help us to work hand in hand with industry and trade
bodies to build trust and confidence in the technology.
I believe there will eventually be a need for some form of future
government body to help provide guidance to both industry and the
public about how AI outcomes, especially those in delivering
public sector services, are transparent, fair in design and
ethical in approach. Such a body will need to take note of the
approach of other nations and will need to engage with local and
global businesses to test and formulate the best way forward. So,
although I am sceptical of many of the specifics of the Bill, I
welcome and support the journey that it, my noble friend Lord
Holmes and this debate are taking us on.
11.50am
Lord Clement-Jones (LD)
My Lords, I congratulate the noble Lord, Lord Holmes, on his
inspiring introduction and on stimulating such an extraordinarily
good and interesting debate.
The excellent House of Lords Library guide to the Bill warns us
early on:
“The bill would represent a departure from the UK government’s
current approach to the regulation of AI”.
Given the timidity of the Government’s pro-innovation AI White
Paper and their response, I would have thought that was very much
a “#StepInTheRightDirection”, as the noble Lord, Lord Holmes,
might say.
There is clearly a fair wind around the House for the Bill, and I
very much hope it progresses and we see the Government adopt it,
although I am somewhat pessimistic about that. As we have heard
in the debate, there are so many areas where AI is and can
potentially be hugely beneficial, despite the rather dystopian
narratives that the noble Lord, Lord Ranger, so graphically outlined.
However, as many noble Lords have emphasised, it also carries
risks, not just of the existential kind, which the Bletchley Park
summit seemed to address, but others mentioned by noble Lords
today, such as misinformation, disinformation, child sexual
abuse, and so on, as well as the whole area of competition,
mentioned by the noble Lord, Lord Fairfax, and the noble
Baroness, Lady Stowell—the issue of the power and the asymmetry
of these big tech AI systems and the danger of regulatory
capture.
It is disappointing that, after a long gestation of national AI
policy-making, which started so well back in 2017 with the
Hall-Pesenti review, contributed to by our own House of Lords
Artificial Intelligence Committee, the Government have ended up
by producing a minimalist approach to AI regulation. I liked the
phrase used by the noble Lord, “lost momentum”, because it
certainly feels like that after this period of time.
The UK’s National AI Strategy, a 10-year plan for UK investment
in and support of AI, was published in September 2021 and
accepted that in the UK we needed to prepare for artificial
general intelligence. We needed to establish public trust and
trustworthy AI, so often mentioned by noble Lords today. The
Government had to set an example in their use of AI and to adopt
international standards for AI development and use. So far, so
good. Then, in the subsequent AI policy paper, AI Action Plan,
published in 2022, the Government set out their emerging
proposals for regulating AI, in which they committed to
develop
“a pro-innovation national position on governing and regulating
AI”,
to be set out in a subsequent governance White Paper. The
Government proposed several early cross-sectoral and overarching
principles that built on the OECD principles on artificial
intelligence: ensuring safety, security, transparency, fairness,
accountability and the ability to obtain redress.
Again, that is all good, but the subsequent AI governance White
Paper in 2023 opted for a “context-specific approach” that
distributes responsibility for embedding ethical principles into
the regulation of AI systems across several UK sector regulators
without giving them any new regulatory powers. I thought the
analysis of this by the noble Lord, Lord Young, was interesting.
There seemed to be no appreciation that there were gaps between
regulators. That approach was confirmed this February in the
response to the White Paper consultation.
Although there is an intention to set up a central body of some
kind, there is no stated lead regulator, and the various
regulators are expected to interpret and apply the principles in
their individual sectors in the expectation that they will
somehow join the dots between them. There is no recognition that
the different forms of AI are technologies that need a
comprehensive cross-sectoral approach to ensure that they are
transparent, explainable, accurate and free of bias, whether they
are in an existing regulated or unregulated sector. As noble
Lords have mentioned, discussing existential risk is one thing,
but going on not to regulate is quite another.
Under the current Data Protection and Digital Information Bill,
data subject rights regarding automated decision-making—in
practice, by AI systems—are being watered down, while our
creatives and the creative industries are up in arms about the
lack of support from government in asserting their intellectual
property rights in the face of the ingestion of their material by
generative AI developers. It was a pleasure to hear what the
noble Lord had to say on that.
For me, the cardinal rules are that business needs clarity,
certainty and consistency in the regulatory system if it is to
develop and adopt AI systems, and we need regulation to mitigate
risk to ensure that we have public trust in AI technology. As the
noble Viscount said, regulation is not
necessarily the enemy of innovation; it can be a stimulus. That
is something that we need to take away from this discussion. I
was also very taken with the idea of public trust leaving on
horseback.
This is where the Bill of the noble Lord, Lord Holmes, is an
important stake in the ground, as he has described. It provides
for a central AI authority that has a duty of looking for gaps in
regulation; it sets out extremely well the safety and ethical
principles to be followed; it provides for regulatory sandboxes,
which we should not forget are an innovation invented in the UK;
and it provides for AI responsible officers and for public
engagement. Importantly, it builds in a duty of transparency
regarding data and IP-protected material where they are used for
training purposes, and for labelling AI-generated material, as
the noble Baroness, Lady Stowell, and her committee have
advocated. By itself, that would be a major step forward, so, as
the noble Lord knows, we on these Benches wish the Bill very
well, as do all those with an interest in protecting intellectual
property, as we heard the other day at the round table that he
convened.
However, in my view what is needed at the end of the day is the
approach that the interim report of the Science, Innovation and
Technology Committee recommended towards the end of last year in
its inquiry into AI governance: a combination of risk-based
cross-sectoral regulation and specific regulation in sectors such
as financial services, applying to both developers and adopters,
underpinned by common trustworthy standards of risk assessment,
audit and monitoring. That should also provide recourse and
redress, as the Ada Lovelace Institute, which has done so much
work in the area, asserts, and as the noble Lord, Lord Kirkhope,
mentioned.
That should include the private sector, where there is no
effective regulator for the workplace, as the noble Lord, Lord
Davies, mentioned, and the public sector, where there is no
central or local government compliance mechanism; no transparency
yet in the form of a public register of use of automated
decision-making, despite the promised adoption of the algorithmic
recording standard; and no recognition by the Government that
explicit legislation and/or regulation for intrusive AI
technologies used in the public sector, such as live facial
recognition and other biometric capture, is needed. Then, of
course, we need to meet the IP challenge. We need to introduce
personality rights to protect our artists, writers and
performers. We need the labelling of AI-generated material
alongside the kinds of transparency duties contained in the noble
Lord’s Bill.
Then there is another challenge, which is more international.
This was mentioned by the noble Lords, Lord Kirkhope and Lord
Young, the noble and learned Lord, Lord Thomas, and the noble
Earl, Lord Erroll. We have world-beating AI researchers and
developers. How can we ensure that, despite differing regulatory
regimes—for instance, between ourselves and the EU or the
US—developers are able to commercialise their products on a
global basis and adopters can have the necessary confidence that
the AI product meets ethical standards?
The answer, in my view, lies in international agreement on common
standards such as those of risk and impact assessment, testing,
audit, ethical design for AI systems, and consumer assurance,
which incorporate what have become common internationally
accepted AI ethics. Having a harmonised approach to standards
would help provide the certainty that business needs to develop
and invest in the UK more readily, irrespective of the level of
obligation to adopt them in different jurisdictions and the
necessary public trust. In this respect, the UK has the
opportunity to play a much more positive role with the Alan
Turing Institute’s AI Standards Hub and the British Standards
Institution. The OECD.AI group of experts is heavily involved in
a project to find common ground between the various
standards.
We need a combination of proportionate but effective regulation
in the UK and the development of international standards, so, in
the words of the noble Lord, Lord Holmes, why are we not
legislating? His Bill is a really good start; let us build on
it.
12.01pm
Baroness Twycross (Lab)
My Lords, like others, I congratulate the noble Lord, Lord Holmes, on his Private
Member’s Bill, the Artificial Intelligence (Regulation) Bill. It
has been a fascinating debate and one that is pivotal to our
future. My noble friend apologises for his absence and
I am grateful to the Government Benches for allowing me, in the
absence of an AI-generated hologram of my noble friend, to take
part in this debate. If the tone of my comments is at times his,
that is because my noble friend is supremely organised and I will
be using much of what he prepared for this debate. Like the noble
Lord, Lord Young, I am relying heavily on osmosis; I am much more
knowledgeable on this subject now than two hours ago.
My first jobs were reliant on some of the now-defunct
technologies, although I still think that one of the most useful
skills I learned was touch-typing. I learned that on a
typewriter, complete with carbon paper and absolutely no use of
Tipp-Ex allowed. However, automation and our continued and
growing reliance on computers have improved many jobs rather than
simply replacing them. AI can help businesses save money and
increase productivity by adopting new technologies; it can also
release people from repetitive data-entry tasks, enabling them to
focus on creative and value-added tasks. New jobs requiring
different skills can be created and, while this is not the whole
focus of the debate, how we achieve people being able to take up
new jobs also needs to be a focus of government policy in this
area.
As many noble Lords have observed, we stand on the brink of an AI
revolution, one that has already started. It is already changing
the way we live, the way we work and the way we relate to one
another. I count myself in the same generation of film viewers as
the noble Lord, Lord Ranger. The rapidly approaching tech
transformation is unlike anything that humankind has experienced
in its speed, scale and scope: 20th-century science fiction is
becoming commonplace in our 21st-century lives.
As the noble Baroness, Lady Moyo, said, it is estimated that AI
technology could contribute up to £15 trillion to the world
economy by 2030. As many noble Lords mentioned, AI also presents
government with huge opportunities to transform public services,
potentially delivering billions of pounds in savings and
increasing the service to the public. For example, it could help
with the workforce crisis in health, particularly in critical
health diagnostics, as highlighted by the noble Lord. The noble Baroness, Lady
Finlay, highlighted the example of how diagnosis of Covid lung
has benefited through the use of AI, but, as she said, that
introduces requirements for additional infrastructure. My noble
friend Lord Davies also noted that AI can help to contribute to
how we tackle climate change.
The use of AI by government underpins Labour’s missions to revive
our country’s fortunes and ensure that the UK thrives and is at
the forefront of the coming technological revolution. However, we
should not and must not overlook the risks that may arise from
its use, nor the unease around AI and the lack of confidence
among the public around its use. Speaking as someone who
generally focuses on education from these Benches, this is not
least in the protection of children, as the noble Baroness, Lady
Kidron, pointed out. AI can help education in a range of ways,
but these also need regulation. As the noble Baroness said, we
need rules to defend against the potential abuses.
Goldman Sachs predicts that the equivalent of 300 million
full-time jobs globally will be replaced; this includes around a
quarter of current work tasks in the US and Europe. Furthermore,
as has been noted, AI can damage our physical and mental health.
It can infringe upon individual privacy and, if not protected
against, undermine human rights. Our collective response to these
concerns must be as integrated and comprehensive as our embracing
of the potential benefits. It should involve all stakeholders,
from the public and private sectors to academia and civil
society. Permission should and must be sought by AI developers
for the use of copyright-protected work, with remuneration and
attribution provided to creators and rights holders, an issue
highlighted by the noble Lord. Most importantly,
transparency needs to be delivered on what content is used to
train generative AI models. I found the speech of the noble Earl,
Lord Erroll, focusing on outcomes, of particular interest.
Around the world, countries and regions are already beginning to
draft rules for AI. As the noble Lord, Lord Kirkhope, said, this
does not need to stifle innovation. The Government’s White Paper
on AI regulation adopted a cross-sector and outcome-based
framework, underpinned by its five core principles.
Unfortunately, there are no proposals in the current White Paper
for introducing a new AI regulator to oversee the implementation
of the framework. Existing regulators, such as the Information
Commissioner’s Office, Ofcom and the FCA have instead been asked
to implement the five principles from within their respective
domains. As a number of noble Lords referred to, the Ada Lovelace
Institute has expressed concern about the Government’s approach,
which it has described as “all eyes, no hands”. The institute
says that, despite
“significant horizon-scanning capabilities to anticipate and
monitor AI risks … it has not given itself the powers and
resources to prevent those risks or even react to them
effectively after the fact”.
The Bill introduced by the noble Lord, Lord Holmes, seeks to
address these shortcomings and, as he said in his opening
remarks: if not now, when? Until such time as an independent AI
regulator is established, the challenge lies in ensuring its
effective implementation across various regulatory domains. This
includes data protection, competition, communications and
financial services. A number of noble Lords mentioned the
multitude of regulatory bodies involved. This means that
effective governance between them will be paramount. Regulatory
clarity, which enables business to adopt and scale investment in
AI, will bolster the UK’s competitive edge. The UK has so far
been focusing on voluntary measures for general-purpose AI
systems. As the right reverend Prelate the Bishop of Oxford said, this is not
adequate: human rights and privacy must also be protected.
The noble Lord, Lord Kirkhope, noted that AI does not respect
national borders. A range of international approaches to AI
safety and governance are developing, some of which were
mentioned by the noble Lord, Lord Fairfax. The EU has opted for a
comprehensive and prescriptive legislative approach; the US is
introducing some mandatory reporting requirements; for example,
for foundation models that pose serious national or economic
security risks.
Moreover, a joint US-EU initiative is drafting a set of voluntary
rules for AI businesses—the AI code of conduct. In the short
term, these may serve as de facto international standards for
global firms. Can the Minister tell your Lordships’ House whether
the Government are engaging with this drafting? The noble Lord
suggested that the Government
have lost momentum. Can the Minister explain why the Government
are allowing the UK to lose influence over the development of
international AI regulation?
The noble Lord, Lord Clement-Jones, noted that the Library
briefing states that this Bill marks a departure from government
approach. The Government have argued that introducing legislation
now would be premature and that the risks and challenges
associated with AI, the regulatory gaps and the best way to
address them must be better understood. This cannot be the case.
Using the horse analogy adopted by the noble Baroness earlier, we
need to make sure that we do not act after the horse has
bolted.
I pay tribute, as others have done, to the work of the House of
Lords Communications and Digital Committee. I found the points
highlighted by its chair and her comments very helpful. We are
facing an inflection point with AI. It is regrettable that the
government response is not keeping up with the change. Why are
the Government procrastinating while all other G7 members are
adopting a different, more proactive approach? A Labour
Government would act decisively and not delay. Self-regulation is
simply not enough.
The honourable Member for Hove, the shadow Secretary of State for
Science, Innovation and Technology, outlined Labour’s plans
recently at techUK’s annual conference. He said:
“Businesses need fast, clear and consistent regulation … that …
does not unnecessarily slow down innovation”—
a point reflected in comments by the noble and learned Lord, Lord
Thomas. We also need regulation that encourages risk taking and
finding new ways of working. We need regulation that addresses
the concerns and protects the privacy of the public.
As my noble friend said, the UK also needs to
address concerns about misinformation and disinformation, not
least in instances where these are democratic threats. This point
was also reflected by the noble Lords, Lord Vaizey and Lord
Fairfax.
Labour’s regulatory innovation office would give strategic steers
aligned with our industrial strategy. It would set and monitor
targets on regulatory approval timelines, benchmark against
international comparators and strengthen the work done by the
Regulatory Horizons Council. The public need to know that safety
will be baked into how AI is used by both the public and the
private sectors. A Labour Government would ensure that the UK
public sector is a leader in responsibly and transparently
applying AI. We will require safety reports from the companies
developing frontier AI. We are developing plans to make sure that
AI works for everyone.
Without clear regulation, widespread business adoption and public
trust, the UK’s adoption of AI will be too slow. It is the
Government’s responsibility to acknowledge and address how AI
affects people’s jobs, lives, data and privacy, and the rapidly
changing world in which they live. The Government are veering
haphazardly between extreme risk, extreme optimism and extreme
delay on this issue. Labour is developing a practical,
well-informed and long-term approach to regulation.
In the meantime, we support and welcome the principles behind the
Private Member’s Bill from the noble Lord, Lord Holmes, but
remain open-minded on the current situation and solution, while
acknowledging that there is still much more to be done.
12.14pm
The Parliamentary Under-Secretary of State, Department for
Science, Innovation and Technology (Viscount Camrose) (Con)
I join my thanks to those of others to my noble friend Lord
Holmes for bringing forward this Bill. I thank all noble Lords
who have taken part in this absolutely fascinating debate of the
highest standard. We have covered a wide range of topics today. I
will do my best to respond, hopefully directly, to as many points
as possible, given the time available.
The Government recognise the intent of the Bill and the differing
views on how we should go about regulating artificial
intelligence. For reasons I will now set out, the Government
would like to express reservations about my noble friend’s
Bill.
First, with the publication of our AI White Paper in March 2023,
we set out proposals for a regulatory framework that is
proportionate, adaptable and pro-innovation. Rather than
designing a new regulatory system from scratch, the White Paper
proposed five cross-sectoral principles, which include safety,
transparency and fairness, for our existing regulators to apply
within their remits. The principles-based approach will enable
regulators to keep pace with the rapid technological change of
AI.
The strength of this approach is that regulators can act now on
AI within their own remits. This common-sense, pragmatic approach
has won endorsement from leading voices across civil society,
academia and business, as well as many of the companies right at
the cutting edge of frontier AI development. Last month we
published an update through the Government’s response to the
consultation on the AI White Paper. The White Paper response
outlines a range of measures to support existing regulators to
deliver against the AI regulatory framework. This includes
providing further support to regulators to deliver the regulatory
framework through a boost of more than £100 million to upskill
regulators and help unlock new AI research and innovation.
As part of this, we announced a £10 million package to jump-start
regulators’ AI capabilities, preparing and upskilling regulators
to address the risks and to harness the opportunities of this
defining technology. It also includes publishing new guidance to
support the coherent implementation of the principles. To ensure
robust implementation of the framework, we will continue our work
to establish the central function.
Let me reassure noble Lords that the Government take mitigating
AI risks extremely seriously. That is why several aspects of the
central function have already been established, such as the
central AI risk function, which will shortly be consulting on its
cross-economy AI risk register. Let me reassure the noble Lord
that the AI risk function will
maintain a holistic view of risks across the AI ecosystem,
including misuse risks, such as where AI capabilities may be
leveraged to undermine cybersecurity.
Specifically on criminality, the Government recognise that the
use of AI in criminal activity is a very important issue. We are
working with a range of stakeholders, including regulators, and a
range of legal experts to explore ways in which liability,
including criminal liability, is currently allocated through the
AI value chain.
In the coming months we will set up a new steering committee,
which will support and guide the activities of a formal regulator
co-ordination structure within government. We also wrote to key
regulators, requesting that they publish their AI plans by 30
April, setting out how they are considering, preparing for and
addressing AI risks and opportunities in their domain.
As for the next steps for ongoing policy development, we are
developing our thinking on the regulation of highly capable
general-purpose models. Our White Paper consultation response
sets out key policy questions related to possible future binding
measures, which we are exploring with experts and our
international partners. We plan to publish findings from this
expert engagement and an update on our thinking later this
year.
We also confirmed in the White Paper response that we believe
legislative action will be required in every country once the
understanding of risks from the most capable AI systems has
matured. However, legislating too soon could easily result in
measures that are ineffective against the risks, are
disproportionate or quickly become out of date.
Finally, we make clear that our approach is adaptable and
iterative. We will continue to work collaboratively with the US,
the EU and others across the international landscape to both
influence and learn from international development.
I turn to key proposals in the Bill that the noble Lord has
tabled. On the proposal to establish a new AI authority, it is
crucial that we put in place agile and effective mechanisms that
will support the coherent and consistent implementation of the AI
regulatory framework and principles. We believe that a
non-statutory central function is the most appropriate and
proportionate mechanism for delivering this at present, as we
observe a period of non-statutory implementation across our
regulators and conduct our review of regulator powers and
remits.
In the longer term, we recognise that there may be a case for
reviewing how and where the central function has delivered, once
its functions have become more clearly defined and established,
including whether the function is housed within central
government or in a different form. However, the Government feel
that this would not be appropriate for the first stage of
implementation. To that end, as I mentioned earlier, we are
delivering the central function within DSIT, to bring coherence
to the regulatory framework. The work of the central function
will provide clarity and ensure that the framework is working as
intended and that joined-up and proportionate action can be taken
if there are gaps in our approach.
We recognise the need to assess the existing powers and remits of
the UK’s regulators to ensure they are equipped to address AI
risks and opportunities in their domains and to implement the
principles consistently and comprehensively. We anticipate having
to introduce a statutory duty on regulators requiring them to
have due regard to the principles after an initial period of
non-statutory implementation. For now, however, we want to test
and iterate our approach. We believe this approach offers
critical adaptability, but we will keep it under review; for
example, by assessing the updates on strategic approaches to AI
that several key regulators will publish by the end of April. We
will also work with government departments and regulators to
analyse and review potential gaps in existing regulatory powers
and remits.
Like many noble Lords, we see approaches such as regulatory
sandboxes as a crucial way of helping businesses navigate the AI
regulatory landscape. That is why we have funded the four
regulators in the Digital Regulation Cooperation Forum to pilot a
new, multiagency advisory service known as the AI and digital
hub. We expect the hub to launch in mid-May and will provide
further details in the coming weeks on when this service will be
open for applications from innovators.
One of the principles at the heart of the AI regulatory framework
is accountability and governance. We said in the White Paper that
a key part of implementation of this principle is to ensure
effective oversight of the design and use of AI systems. We have
recognised that additional binding measures may be required for
developers of the most capable AI systems and that such measures
could include requirements related to accountability. However, it
would be too soon to mandate measures such as AI-responsible
officers, even for these most capable systems, until we
understand more about the risks and the effectiveness of
potential mitigations. This could quickly become burdensome in a
way that is disproportionate to risk for most uses of AI.
Let me reassure my noble friend Lord Holmes that we continue to
work across government to ensure that we are ready to respond to
the risks to democracy posed by deep fakes; for example, through
the Defending Democracy Taskforce, as well as through existing
criminal offences that protect our democratic processes. However,
we should remember that AI labelling and identification
technology is still at an early stage. No specific technology has
yet been proven to be both technically and organisationally
feasible at scale. It would not be right to mandate labelling in
law until the potential benefits and risks are better
understood.
Noble Lords raised the importance of protecting intellectual
property, a profoundly important subject. In the AI White Paper
consultation response, the Government committed to provide an
update on their approach to AI and copyright issues soon. I am
confident that, when we do so, it will address many of the issues
that noble Lords have raised today.
In summary, our approach, combining a principles-based framework,
international leadership and voluntary measures on developers, is
right for today, as it allows us to keep pace with rapid and
uncertain advances in AI. The UK has successfully positioned
itself as a global leader on AI, in recognition of the fact that
AI knows no borders and that its complexity demands nuanced
international governance. In addition to spearheading thought
leadership through the AI Safety Summit, the UK has supported
effective action through the G7, the Council of Europe, the OECD,
the G5, the G20 and the UN, among other bodies. We look forward
to continuing to engage with all noble Lords on these critical
issues as we continue to develop our regulatory approach.
12.25pm
Lord Holmes of Richmond (Con)
My Lords, I thank all noble Lords who have contributed to this
excellent debate. It is pretty clear that the issues are very
much with us today and we have what we need to act today. To
respond to a question kindly asked by the noble Lord, Lord Davies, in my
drafting I am probably allowing “relevant” regulators to do some quite heavy
lifting, but what I envisage within that is certainly all the
economic regulators, and indeed all regulators who are in a
sector where AI is being developed, deployed and in use.
Everybody who has taken part in this debate and beyond may
benefit from having a comprehensive list of all the regulators
across government. Perhaps I could ask that of the Minister. I
think it would be illuminating for all of us.
At the autumn FT conference, my noble friend the Minister said
that heavy-handed regulation could stifle innovation. Certainly,
it could. Heavy-handed regulation would not only stifle
innovation but would be a singular failure of the regulatory
process itself. History tells us that right-size
regulation is pro-citizen, pro-consumer and pro-innovation; it
drives innovation and inward investment. I was taken by so much
of what the Ada Lovelace Institute put in its report. The
Government really have given themselves all the eyes and not the
hands to act. It reminds me very much of a Yorkshire saying: see
all, hear all, do nowt. What is required is for these
technologies to be human led, in our human hands, and human in
the loop throughout. Right-size regulation, because it is
principles-based, is necessarily agile, adaptive and can move as
the technology moves. It should be principles-based and outcomes
focused, with inputs that are transparent, understood,
permissioned and, wherever and whenever applicable, paid for.
My noble friend the Minister has said on many occasions that
there will come a time when we will legislate on AI. Let 22 March
2024 be that time. It is time to legislate; it is time to
lead.
Bill read a second time and committed to a Committee of the Whole
House.