Inquiry launch: Social media, misinformation, and the role of
algorithms
The Science, Innovation and Technology Committee has launched a
new inquiry to investigate the relationship between algorithms
used by social media and search engines, generative AI, and the
spread of harmful and false content online.
This inquiry follows a wave of anti-immigration demonstrations
and riots in July and August 2024, with some protests targeting
mosques or hotels housing asylum seekers. These are believed to
have been partially driven by false claims spread on social media
platforms about the killing of three children in
Southport.
The inquiry, the first of the newly appointed Commons
Committee, will examine the role that social media algorithms
and generative AI play in spreading false and harmful content. It
will specifically consider the role of false claims, spread via
profit-driven social media algorithms, in the summer riots. It
will also investigate the effectiveness of current and proposed
regulation for these technologies, including the Online Safety
Act, and what further measures might be needed.
The Chair of the Science, Innovation and Technology
Committee said:
“The violence we saw on UK streets this summer has shown the
dangerous real-world impact of spreading misinformation and
disinformation across social media. We shouldn't accept the
spread of false and harmful content as part and parcel of using
social media. It's vital that lessons are learnt, and that we
ensure misinformation doesn't fuel riots and violence on our
streets again.
“This is an important opportunity to investigate to what extent
social media companies and search engines encourage the spread of
harmful and false content online. As part of this, we'll examine
how these companies use algorithms to rank content, and whether
their business models encourage the spread of content that can
mislead and harm us. We'll look at how effective the UK's
regulations and legislation are in combatting content like this,
weighing up the balance with freedom of speech, and at who is
accountable.”
Terms of reference
Written submissions are invited in response to the following
questions. The deadline for submissions is Wednesday 18 December.
- To what extent do the business models of social media
companies, search engines and others encourage the spread of
harmful content, and contribute to wider social
harms?
- How do social media companies and search engines use
algorithms to rank content, how does this reflect their
business models, and how does it play into the spread of
misinformation, disinformation and harmful
content?
- What role do generative artificial intelligence (AI) and
large language models (LLMs) play in the creation and spread
of misinformation, disinformation and harmful content?
- What role did social media algorithms play in the riots that
took place in the UK in summer 2024?
- How effective is the UK's regulatory and legislative
  framework in tackling these issues?
- How effective will the Online Safety Act be in combatting
harmful social media content?
- What more should be done to combat potentially harmful
social media and AI content?
- What role do Ofcom and the National Security Online
  Information Team play in preventing the spread of harmful and
  false content online?
- Which bodies should be held accountable for the spread of
misinformation, disinformation and harmful content as a result of
social media and search engines' use of algorithms and AI?