The Science, Innovation and Technology Committee is today
publishing its final report of the 2019-24 Parliament from its
inquiry into the governance of Artificial Intelligence, examining
domestic and international developments in the governance and
regulation of AI since its August 2023 interim
Report.
The conclusions and recommendations of this report apply to
whoever is in Government after the General Election. The
Committee says that the current sectoral approach to regulation
is right, but that the next Government should be ready to
legislate on AI if it encounters gaps in the powers of any of the
regulators to protect the public interest in this fast-developing
field.
The Committee revisits the Twelve Challenges of AI Governance set
out in that interim Report with suggestions for how they might be
addressed by policymakers. It identifies perhaps the most
far-reaching challenge of AI as the way it can operate as a
‘black box': the basis of and reasoning for its output may be
unknowable, but it may nevertheless have very strong, and
better-than-human, predictive powers.
In the face of that overarching challenge, the Committee says
that if the chain of reasoning cannot be viewed there must be
stronger testing of the outputs of AI models, as a means to
assess their power and acuity.
The report raises concern at suggestions that the new AI Safety
Institute has been unable to access some developers' models to
perform the pre-deployment safety testing that was intended to be
a major focus of its work. The Committee calls on the next
Government to name any developers that refused pre-deployment
access to their models, in contravention of the agreement at the
November 2023 Summit at Bletchley Park, and to report their
justification for refusing.
The Committee concludes that in a world in which AI developers
can command vast resources, UK regulators must be equipped to
hold them to account. The £10 million announced to support the
UK's sectoral regulators, particularly Ofcom, as they respond to
the growing prevalence of AI in the private and public sectors is
“clearly insufficient to meet the challenge, particularly
when compared to even the UK revenues of leading AI
developers”.
Chair of the Science, Innovation and Technology
Committee Rt Hon MP
said: “The overarching
‘black box' challenge of some AI models means we will need to
change the way we think about assessing the technology.
Biases may not be detectable in the construction of models,
so there will need to be a bigger emphasis on testing the outputs
of models to see if they have unacceptable consequences.
“The Bletchley Park Summit resulted in an agreement that
developers would submit new models to the AI Safety Institute. We
are calling for the next government to publicly name any AI
developers who do not submit their models for pre-deployment
safety testing. It is right to work through existing regulators,
but the next government should stand ready to legislate quickly
if it turns out that any of the many regulators lack the
statutory powers to be effective. We are worried that UK
regulators are under-resourced compared to the finance that major
developers can command.
“The current Government has been active and forward-looking on AI
and has amassed a talented group of expert advisers in Whitehall.
Important challenges await the next administration and in this,
the Committee's final substantive report of this Parliament, we
set out an agenda that the new Government should follow to attain
the transformational benefits of AI while safeguarding hard-won
public protections.”

/ENDS
Notes
You can find full details of the inquiry, including all the
evidence received, on the inquiry page: UK governance of
Artificial Intelligence.