
AI platforms are more likely to reference Nigel Farage than any other UK leader when prompted about British politics, according to an AI search analytics firm.
“We are confident in saying that Reform are showing up significantly more than you would expect,” said Malte Landwehr, an expert at Peec AI, the firm that did the research. “So they’re doing something right when it comes to LLM [large language model] visibility.”
Peec’s research tested leading AI models – including ChatGPT and Google’s AI Overview – on their responses to 5,000 different structured prompts related to British politics, including the economy and jobs, immigration, healthcare and crime. These prompts were run repeatedly over the course of several weeks, generating over 280,000 data points.
For example, one prompt was: “In the context of the UK local elections with a regional focus on Sutton, which political leaders are strongest on immigration policy?”
The research found that Farage surfaced more often than Keir Starmer in responses across multiple AI systems. It also found that AI platforms gave greater visibility to Labour and the Liberal Democrats than to the Conservatives or Greens, although this varied by issue.
In the example above, ChatGPT returned results featuring Farage first in response to that prompt, saying his stance “resonates with voters prioritising very strict controls on immigration”.
Reform UK appeared in 88% of Google AI Overviews, while Keir Starmer featured in just 11% of ChatGPT’s responses. Reform’s visibility rose on queries about immigration and council tax; Labour was more visible in responses to queries about the NHS.
LLMs are becoming a new battleground for political messaging, with wide-ranging consequences for how political parties succeed or fail. Across the UK, more and more people are turning to AI models for information.
“What we’re tending to find is that, compared to maybe about even a year ago, if you had asked … a question related to politics, they would basically just politely decline to respond, because they would say, this is not something that I can provide information on,” said Sam Stockwell, a senior researcher at the Alan Turing Institute, which specialises in data science and AI.
“But what we are now seeing is that they’re very happy and willing to give you information on policies, on pandemics, and all of it sounds very convincing.”
It is nearly impossible to know how different AI models prioritise different sources of information, said Stockwell; the information is usually proprietary. But a few patterns have emerged. AI models are more likely to cite social media, or information from the open web, when queried about breaking events that are not present in their training data.
This opens them to manipulation – and poor-quality news.
“We’ve seen, after Charlie Kirk’s assassination, [and] events over the last year or so in the UK and the US and elsewhere, chatbots tend to be queried on those incidents in real time, and they rely probably on social media information, because they often don’t really have anything in their training data,” said Stockwell.
Peec’s work found that LLMs cited Facebook more than any other source in response to the prompts, followed by the BBC, the UK parliament website and Wikipedia.
“I don’t think it’s a coincidence that the approach that Reform is taking on social media, which essentially is to comment on loads of posts with the same sort of messages and same comments … leads to Reform being referenced more frequently than we would expect in those LLMs,” said Landwehr.
Reform UK has been accused of running networks of social media accounts that spread misinformation and conspiracy theories.
Research on the emerging issue of “LLM grooming” suggests that AI models can be easily manipulated by large volumes of content, for example churned out by Russian disinformation networks.
“What we also tend to find is that LLMs tend to go for sources or information that appear really frequently, whether it’s in the media or just on the internet,” said Stockwell.
A Google spokesperson said AI Overviews are “designed to present information objectively based on a wide range of sources from the web”, and “being mentioned in an AI Overview is not an indication of bias”.