Americans ‘creeped out’ as ChatGPT starts inserting Arabic words into responses… before giving strange explanation

ChatGPT users in the US have been baffled by a recent surge in the number of AI responses mysteriously written in Arabic.
The phenomenon has caught many English-speaking Americans off guard over the last month, with many sharing images on social media of AI-generated answers randomly adding Arabic text to their conversations.
‘It did it twice on my phone, and once on my work laptop, I’m not even in an Arabic-speaking country,’ one person on Reddit wrote, showing how the popular chatbot began giving them recipe ingredients in Arabic two weeks ago.
Others reported that numbers were also rendered in Eastern Arabic numerals and that the AI even started responding to English prompts in Armenian, Hebrew, Spanish, Chinese and Russian.
While some blamed the bizarre text on AI hallucinations (when chatbots produce answers that are factually incorrect or completely nonsensical), the problem actually appears to stem from how ChatGPT was trained.
ChatGPT, which is powered by what is known as a large language model (LLM), does not read full words the way humans do. Instead, it breaks text into small pieces called ‘tokens,’ which can be parts of words, punctuation or even short words from other languages.
Because some foreign words are shorter and easier for the system to process, the model may occasionally pick them if they fit the context and require fewer tokens.
This does not mean the AI is switching languages on purpose, but simply choosing the most likely next piece of text based on probability.
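The idea can be sketched with a toy tokenizer. The vocabulary below, the greedy matching and the choice of the Arabic word قليل (‘low’) as a single token are all invented for illustration; OpenAI’s real tokenizer uses byte-pair encoding over a vocabulary of many thousands of pieces, but the principle is the same: the model sees numbered pieces of text, not languages.

```python
# Toy illustration (NOT OpenAI's actual tokenizer): text is split into
# token IDs, and a word from another language can cost fewer tokens.
toy_vocab = {"low": 0, "-": 1, "fat": 2, "yog": 3, "urt": 4, "قليل": 5}

def toy_tokenize(text, vocab):
    """Greedily match the longest known piece at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(vocab[piece])
                i += len(piece)
                break
        else:
            i += 1  # skip characters this toy vocabulary doesn't cover
    return tokens

# "low-fat yogurt" costs five tokens in this toy vocabulary,
# while the Arabic word قليل ("low") is a single token.
print(toy_tokenize("low-fat yogurt", toy_vocab))  # [0, 1, 2, 3, 4]
print(toy_tokenize("قليل", toy_vocab))            # [5]
```

To the model, both outputs are just sequences of numbers; nothing in them marks one as English and the other as Arabic.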
ChatGPT users posted images of responses showing how simple English words were randomly replaced with characters from various languages
OpenAI’s ChatGPT has been increasingly giving English-speaking users responses in Arabic over the last month (Stock Image)
ChatGPT, which is reportedly used by nearly 900 million people each month, was created by artificial intelligence company OpenAI in 2022.
It lets users type questions or prompts in normal language, and it replies with surprisingly human-like text. Millions have used it to write essays, explain concepts, create stories, translate languages, solve problems or just chat.
While multiple AI chatbots have followed, including Google’s Gemini, xAI’s Grok and Anthropic’s Claude, ChatGPT continues to dominate the market, controlling nearly two-thirds of the growing industry.
OpenAI has publicly addressed some language-related glitches, with problems similar to the strange Arabic responses being reported back in 2024.
Two years ago, ChatGPT users reported widespread incidents of ‘gibberish’ being generated, which was caused by an internal token-mapping error during a model update.
However, none of the company’s recent announcements have addressed language-mixing errors or unexpected Arabic responses to English prompts.
Social media users who have shared these mysterious responses have noted that the words in other languages were not gibberish. In most cases, the word actually had the same meaning as the English word being replaced.
One Reddit user replied to the image of the recipe, explaining: ‘The word means low. So it looks like it’s missing a word. Possibly low-fat yogurt.’
The problem has been blamed on the way ChatGPT was trained, using billions of words from multiple languages (Stock Image)
ChatGPT has responded to many users seeing the random Arabic words by saying the text was added by mistake
To understand why ChatGPT would send countless users answers in Arabic, it is helpful to look at what ‘tokens’ actually are.
Tokens used by AI chatbots can include whole words (such as ‘hello’), parts of words (such as ‘un-’ or ‘-ing’), punctuation marks and short words or phrases in foreign languages.
For example, the word ‘understanding’ could count as three separate tokens in an AI response, breaking down to ‘under,’ ‘stand’ and ‘ing.’
ChatGPT will therefore look for the most efficient way to answer a human’s prompt, using the next most logical word or phrase to complete its thought based on all of the data the chatbot has been trained with.
As users have seen recently, the AI may decide the most efficient way to answer someone’s question is to type out one token instead of three – even if that token is an Arabic word the user does not understand.
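That final choice can be mimicked in a few lines of toy code. The candidate words and their probabilities below are made up for illustration; a real model scores every token in its vocabulary at each step, but greedy decoding really does just take the most probable piece, whatever alphabet it belongs to.

```python
# Toy next-token choice (a sketch, not OpenAI's actual decoding code):
# candidate continuations with invented probabilities. The model simply
# picks the highest-probability piece, regardless of language.
candidates = {
    "low": 0.40,      # English word
    "قليل": 0.42,     # Arabic word meaning "low" - slightly more likely here
    "reduced": 0.18,
}

def pick_next_token(probs):
    """Greedy decoding: return the single most probable candidate."""
    return max(probs, key=probs.get)

print(pick_next_token(candidates))  # the Arabic token wins on probability alone
```

Nothing in the selection step checks what language the winning token is in, which is why a single foreign word can surface mid-sentence in an otherwise English reply.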
However, some have claimed without evidence that the errors have not been random, saying previous versions of ChatGPT never sent answers mixed with words in foreign languages.
‘This is the first time it did this, and I [have been using] AI for years now. It cannot be a random mistake,’ one affected GPT user said.
Another person on social media posted that ChatGPT claimed an Arabic word ‘slipped in’ while answering.
‘Brother, I am speaking English. Why are you responding in Arabic?’ the GPT user posted on X.
‘"It slipped in by mistake." SLIPPED IN??? It’s a whole different alphabet.’



