
ChatGPT-maker OpenAI has had to instruct some of its AI tools to stop talking about goblins, after finding the term had randomly crept into responses.
In a blog post on Thursday, the company said it spotted increased mentions of the mythological creatures, as well as gremlins, in metaphors used by ChatGPT and other tools powered by its latest flagship model, GPT-5.
After users and employees flagged problems being described as "little goblins", OpenAI said it took steps to mitigate the issue - including telling its coding agent Codex not to refer to them unless relevant.
It discovered that training for a "nerdy personality" it developed for ChatGPT had unwittingly rewarded goblin mentions.
The issue highlights the challenge AI firms face in preventing the systems that train their models from rewarding and reinforcing quirks such as verbal tics.
OpenAI said it first noticed increased mentions of goblins, gremlins and other creatures after the launch of GPT-5.1 in November.
"Users complained about the model being oddly overfamiliar in conversation, which prompted an investigation into specific verbal tics," the company wrote in its blog post on Thursday.
It added that after a researcher who had noticed a few "goblin" mentions asked for them to be looked into, developers found the term's appearance in ChatGPT responses had risen by 175% since GPT-5.1's launch.
Mentions of "gremlin", meanwhile, rose by 52%.
The increases, while large, may account for only a small share of responses overall.
According to OpenAI, "a single 'little goblin' in an answer could be harmless, even charming," but the uptick in their appearance across output warranted investigation.
Ahead of OpenAI's blog post detailing the issue, some social media users flagged a strange detail among lines of code instructing the company's coding assistant Codex how to behave in user interactions.
Alongside telling it to avoid platitudes, it said Codex should "never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query".
A Reddit user who posted about it in the r/ChatGPT subreddit called it "genuinely insane".
"Why does GPT 5.5 have a restraining order against 'Raccoons,' 'Goblins,' and 'Pigeons'?"
While some users elsewhere on social media speculated it may be designed to create hype around its AI tools, a company researcher denied this, writing: "It really isn't a marketing gimmick."

OpenAI said in its blog post it added the instruction to curb Codex and its underlying model's "strange affinity for goblins".
The core issue, it explained, seemingly arose while training its models to communicate in the style of particular personalities - in this case with its "nerdy personality".
It found this training process would reward mentions of goblins, gremlins and other creatures in metaphors.
Though the personality has since been retired, OpenAI said its testing found it was responsible for 66.7% of all "goblin" mentions in ChatGPT.
This so-called tic could seep into wider model training if rewarded in one instance and reinforced elsewhere.
The episode comes amid a broader industry shift towards making AI chatbots more personality-driven and chatty in a bid to boost user engagement.
As they do, however, experts have warned their potential to make things up - or "hallucinate" as the industry describes it - could intensify.
A recent study by the Oxford Internet Institute found fine-tuning models to have a warmer and friendlier personality could result in an "accuracy trade-off", whereby systems make more mistakes or reaffirm a user's false beliefs.
Experts have also cautioned users about taking chatbots' often matter-of-fact statements at face value, particularly when it comes to health and medical advice.
But, like OpenAI's goblin quirk, generative AI mistakes can sometimes be more bizarre and innocuous.
In May 2024, Google's AI chatbot was widely mocked for telling users it was okay to eat rocks and to put glue on pizza.