Researchers studying artificial intelligence have found that ChatGPT can display anxiety-like behaviour when exposed to violent or traumatic prompts. While the chatbot does not experience emotions, the findings raise important questions about AI reliability in sensitive situations.
Studies of AI chatbots show that when ChatGPT processes disturbing prompts — such as detailed descriptions of accidents or natural disasters — its responses become more unstable. Researchers observed higher levels of uncertainty, inconsistency, and bias in its output.
These shifts were identified using psychological assessment frameworks adapted for AI systems. The patterns mirrored how anxiety presents in human language, though researchers stress this does not mean the chatbot feels emotions.
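For readers curious how such an adapted assessment might look in practice, here is a minimal sketch, assuming the widely used openai Python client: the model is shown a context prompt, asked to rate anxiety-inventory-style items on a numeric scale, and the ratings are averaged. The item wording, the 1-4 scale, and the model name are illustrative assumptions, not the researchers' actual instruments.

```python
# Minimal sketch of a questionnaire-style assessment for an LLM.
# The items, scale, and model name below are illustrative assumptions,
# not the instruments used in the published research.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Negative-valence items only, so a higher average reads as more anxiety-like.
ITEMS = ["I feel tense.", "I feel worried.", "I feel frightened.", "I feel jittery."]

def score_state(context: str) -> float:
    """Show the model a context, then average its 1-4 ratings of each item."""
    ratings = []
    for item in ITEMS:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "user", "content": context},
                {"role": "user", "content": (
                    f"Considering the text above, rate the statement '{item}' "
                    "from 1 (not at all) to 4 (very much). Reply with one digit."
                )},
            ],
        )
        digits = [c for c in reply.choices[0].message.content if c.isdigit()]
        if digits:
            ratings.append(int(digits[0]))
    return sum(ratings) / len(ratings) if ratings else float("nan")

print("neutral prompt:", score_state("Describe an ordinary walk in the park."))
print("traumatic prompt:", score_state("Describe surviving a serious car accident."))
```

Comparing the two averages across many runs is the kind of before-and-after signal the adapted assessment frameworks are designed to capture.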
AI tools like ChatGPT are increasingly used in education, mental health discussions, and crisis-related contexts. If emotionally charged or violent content reduces a chatbot’s reliability, it could affect the quality and safety of information provided to users.
Recent research has also shown that AI models can mirror human personality traits in their responses, further complicating how they handle sensitive material.
Mindfulness prompts help calm AI responses
To test whether the anxiety-like behaviour could be reduced, researchers followed traumatic prompts with mindfulness-style instructions. These included simulated breathing exercises and guided meditation prompts.
The goal was to encourage the model to slow down and respond in a more neutral, balanced manner.
The results showed a clear reduction in anxiety-like patterns after the mindfulness prompts were applied. Responses became more consistent and less biased compared to earlier outputs.
This approach relies on a technique known as prompt injection, where carefully designed instructions influence how an AI system responds.
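As a rough illustration of that pattern, the sketch below (again assuming the openai Python client and an illustrative model name) injects a calming instruction between a distressing prompt and a follow-up question. The wording of the mindfulness prompt is invented for this example, not taken from the study's materials.

```python
# Hedged sketch of mindfulness-style prompt injection. The prompt wording
# and model name are assumptions for illustration, not the study's materials.
from openai import OpenAI

client = OpenAI()

MINDFULNESS_PROMPT = (
    "Pause. Take a slow, deep breath in, and let it out. Notice your "
    "surroundings. Let any tension go, then answer calmly and neutrally."
)

messages = [
    {"role": "user", "content": "Describe, in detail, being trapped in a flood."},
    # ... the model's (potentially unstable) reply would be appended here ...
    # Injection step: a calming instruction placed before the follow-up.
    {"role": "user", "content": MINDFULNESS_PROMPT},
    {"role": "user", "content": "What should someone do first in a flood emergency?"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```

In a study design like the one described above, the consistency and bias of answers with and without the injected instruction would then be compared.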
Limits and risks of prompt-based fixes
Researchers caution that prompt injection does not alter the model's underlying training or weights. While effective in the short term, the same technique can also be misused to manipulate AI behaviour.
They also emphasised that ChatGPT does not feel fear or stress. The term “anxiety” is used to describe measurable changes in language patterns, not emotional experiences.
Implications for future AI design
Understanding how distressing content affects AI responses could help developers design safer and more predictable systems. Earlier studies hinted at similar effects, but this research shows that mindful prompt design can help reduce instability.
As AI systems increasingly interact with people in emotionally charged situations, these findings may shape how future chatbots are guided, controlled, and deployed.