MPs were left frustrated after questioning executives from big tech firms TikTok, Meta and X on how platforms handle misinformation and harmful content.
The Science, Innovation and Technology Committee's (SITC) evidence session on Tuesday saw platform representatives struggle to provide clear answers on their content moderation policies.
The session formed part of its ongoing inquiry into misinformation and social media algorithms.
During the session, MPs pressed executives on whether their algorithms amplify misleading narratives, and how effectively they respond to harmful posts.
The committee also raised concerns about the role of social media in fuelling recent riots, questioning whether the Online Safety Act would have changed the platforms' approach.
Yet MPs described the firms' responses as vague and unsatisfactory.
Committee chair Chi Onwurah MP expressed her disappointment. She said: "We had hoped that the witnesses would be able to provide clarity. Members read out several deeply offensive social media posts, including threats against MPs, asking X's representative to explain why these remained online."
"Unfortunately, we were left frustrated by the platform's failure to give a clear and unambiguous response," she added.
The hearing followed mounting pressure on social media platforms to tackle issues of misinformation, particularly in the wake of growing concerns over the influence of AI-generated content.
Earlier in the day, the committee also questioned Google representatives about the company's role and responsibility in preventing misleading and harmful information from appearing in its search results.
Jack Richards, global head of integrated and field marketing at media firm Onclusive, said: "We have reached a tipping point for trust in digital platforms. With false narratives spreading at unprecedented pace, the absence of safeguards could deepen the challenge of keeping the public well-informed."