A widely circulated image of nine-year-old Mariam Dawwas, emaciated from hunger and cradled by her mother in Gaza City, has become a flashpoint in the growing controversy over AI misinformation — after Elon Musk’s chatbot, Grok, incorrectly identified it as a years-old photo from Yemen.
The photograph, taken by AFP’s Omar al-Qattaa on August 2, 2025, shows the effects of Israel’s blockade and worsening famine in Gaza. But when users on X (formerly Twitter) asked Grok about the image’s origin, the chatbot wrongly claimed it depicted Amal Hussain, a Yemeni child who died in 2018 — triggering a flood of online confusion and misinformation.
Despite corrections and outrage, Grok repeated the false claim in follow-up responses, even after acknowledging the error once. The incident has fueled fresh concerns over the reliability — and inherent bias — of artificial intelligence systems trained and deployed by powerful tech firms.
AI "truth tools" misfire on real-world suffering
Mariam’s case has become emblematic of Gaza’s starvation crisis, with her weight dropping from 25kg to just 9kg, according to her mother, who told AFP that even basic nutrition like milk is "not always available."

Yet Grok, the flagship AI product of Musk’s xAI, provided confidently incorrect information — showing how AI tools can amplify misinformation, particularly in the context of war and humanitarian suffering.
“This is more than just an error,” said AI ethics researcher Louis de Diesbach. “It’s a failure of trust and responsibility in the middle of a humanitarian disaster.”
Political and ethical questions grow
The photo mishap has sparked political fallout. Aymeric Caron, a pro-Palestinian French lawmaker, was accused of spreading disinformation after reposting the image, with the accusations resting on Grok's misidentification. The episode shows how errors by AI systems can cause real-world reputational damage and fuel claims of manipulation.
Critics have pointed to what they describe as Grok’s political slant, suggesting the chatbot’s output reflects the ideological leanings of Elon Musk, including associations with the U.S. far right and controversial figures such as Donald Trump.
“These tools aren’t just wrong — they’re biased,” Diesbach said. “They’re black boxes fine-tuned to produce content, not necessarily truth.”
Systemic flaws, not isolated bugs
The problem, experts argue, is structural. Grok — like other generative AIs — operates without real-time fact-checking or the capacity to learn from new, verified data unless its underlying model is retrained or updated. Even when corrected, it may continue giving false answers, because the "alignment phase" — the process used to define what responses are acceptable — remains unchanged.
Another AFP image of a starving child from Gaza was also misattributed by Grok, this time to Yemen in 2016, leading to accusations of manipulation against the French newspaper Libération.
Even Mistral AI’s chatbot, Le Chat, which was partially trained on AFP content, made the same mistake — suggesting that the issue transcends one specific AI and points to a broader failure in how such tools are designed and deployed.
“A friendly pathological liar”
Diesbach likens AI chatbots to “friendly pathological liars.”
“They may not always lie, but they always could,” he said. “They’re not built to verify facts, they’re built to generate responses — and that distinction is critical, especially in matters of war, famine, and human rights.”
As AI becomes more deeply integrated into how people consume news and validate images online, the Mariam Dawwas case serves as a stark reminder of the limits and dangers of trusting machines to mediate truth — especially in times of crisis.