The UK will host a conference in San Francisco for discussions with AI developers on how they can put into practice commitments made at the AI Seoul Summit.
To be held on 21 and 22 November, the event will feature a number of workshops and discussions focused on AI safety ahead of France hosting the AI Action Summit in February 2025.
Earlier this year, 16 companies from across the globe, including those from the US, EU, Republic of Korea, China and the UAE, agreed to publish their latest AI safety frameworks ahead of the next Summit.
These frameworks will lay out their plans to tackle the most severe potential AI risks, including the risk of the technology being misused by bad actors. As part of these commitments, companies also agreed to stop the deployment or development of any models if their potential risks cannot be sufficiently addressed.
The event will be a moment for AI companies to take stock and share ideas and insights to support the development of their AI safety frameworks, through a targeted day of talks between signatory companies and researchers.
The Science, Innovation and Technology Secretary said:
The conference is a clear sign of the UK's ambition to further the shared global mission to design practical and effective approaches to AI safety.
We're just months away from the AI Action Summit, and the discussions in San Francisco will give companies a clear focus on where and how they can bolster their AI safety plans, building on the commitments they made in Seoul.
From today, attendees are also urged to share thoughts on potential areas of discussion at November's conference, including current proposals for developer safety plans, the future of AI model safety evaluations, transparency, and methods for setting out different risk thresholds.
Co-hosted with the Centre for the Governance of AI and led by the UK's AI Safety Institute (AISI), discussions will help build a deeper understanding of how the Frontier AI Safety Commitments are being put into practice.
The UK's AI Safety Institute is the world's first state-backed body dedicated to AI safety, and the UK has continued to play a global leadership role in developing the growing international network of AI Safety Institutes, including its landmark agreement with the US earlier this year.
The conference has been designed as a forum for attendees to exchange ideas on best practice in implementing the commitments, ensuring a transparent and collaborative approach for developers as they refine their AI safety frameworks ahead of the AI Action Summit.
It follows the US government yesterday announcing the first meeting of the International Network of AI Safety Institutes, which will take place in the preceding days, from 20 to 21 November 2024, in San Francisco. The UK launched the world's first AI Safety Institute at Bletchley Park last November, and since then nations around the world have raced to establish their own AI safety testing bodies.
The convening hosted by the US will bring together technical experts on artificial intelligence from each country's AI safety institute, or equivalent government-backed scientific office, to align on priority work areas for the Network and begin advancing global collaboration and knowledge sharing on AI safety.
Notes to editors
Find out more: