Study shows the use of persona prompting can cause shifts in LLMs' moral judgements, leading to unexpected and inconsistent responses.

For enterprises, this means careful model selection, rigorous testing and ongoing evaluation are essential to ensure consistent, reliable AI behavior in production.

VANCOUVER, BC, Feb. 25, 2026 /PRNewswire/ - A new study published by TELUS Digital, The Robustness Paradox: Why Better Actors Make Riskier Agents, finds that the use of persona prompting, a technique that asks large language models (LLMs) to "role-play" as part of a query or conversation, can cause shifts in moral judgements that lead to unexpected and inconsistent responses. In addition, the research demonstrates that moral consistency across repeated tests is primarily driven by the model family (i.e.