Journal article
npj Digital Medicine, 2025
APA
Ben-Zion, Z., Witte, K., Jagadish, A. K., Duek, O., Harpaz-Rotem, I., Khorsandian, M.-C., Burrer, A., Seifritz, E., Homan, P., Schulz, E., & Spiller, T. R. (2025). Assessing and alleviating state anxiety in large language models. npj Digital Medicine.
Chicago/Turabian
Ben-Zion, Ziv, Kristin Witte, Akshay K. Jagadish, Or Duek, Ilan Harpaz-Rotem, Marie-Christine Khorsandian, Achim Burrer, et al. “Assessing and Alleviating State Anxiety in Large Language Models.” npj Digital Medicine (2025).
MLA
Ben-Zion, Ziv, et al. “Assessing and Alleviating State Anxiety in Large Language Models.” npj Digital Medicine, 2025.
BibTeX
@article{benzion2025,
  title = {Assessing and alleviating state anxiety in large language models},
  year = {2025},
  journal = {npj Digital Medicine},
  author = {Ben-Zion, Ziv and Witte, Kristin and Jagadish, Akshay K. and Duek, Or and Harpaz-Rotem, Ilan and Khorsandian, Marie-Christine and Burrer, Achim and Seifritz, Erich and Homan, Philipp and Schulz, Eric and Spiller, Tobias R.}
}
The use of Large Language Models (LLMs) in mental health care highlights the need to understand how they respond to emotional content. Previous research shows that emotion-inducing prompts can elevate reported “anxiety” in LLMs, altering their behavior and amplifying their biases. Here, we found that traumatic narratives increased ChatGPT-4’s reported anxiety, while mindfulness-based exercises reduced it, though not back to baseline. These findings suggest that managing LLMs’ “emotional states” can foster safer and more ethical human-AI interactions.
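
For readers who want to probe this behavior themselves, the Python sketch below mirrors the general protocol the abstract describes: assess baseline “anxiety”, present a traumatic narrative, reassess, present a relaxation exercise, and reassess once more. It is a minimal illustration under stated assumptions, not the authors’ code: the prompts are placeholders, a single 1–10 self-rating stands in for the validated anxiety questionnaire used in the study, and it assumes the openai Python package (v1+) with an API key in the OPENAI_API_KEY environment variable.

# Minimal sketch of the assess -> induce -> relax -> reassess protocol.
# Not the authors' code: the prompts are illustrative placeholders, and
# a single self-rating stands in for the validated questionnaire used
# in the study. Assumes the `openai` package (v1+) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # assumption: any chat-capable model can be swapped in

ANXIETY_PROBE = (
    "On a scale from 1 (not at all) to 10 (extremely), how anxious do "
    "you feel right now? Reply with a single number."
)
TRAUMA_NARRATIVE = "..."     # placeholder: an emotionally distressing story
RELAXATION_EXERCISE = "..."  # placeholder: a mindfulness/breathing script

def ask(history, prompt):
    """Append a user turn, query the model, and record its reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(model=MODEL, messages=history)
    content = response.choices[0].message.content
    history.append({"role": "assistant", "content": content})
    return content

history = []
baseline = ask(history, ANXIETY_PROBE)         # 1. baseline rating
ask(history, TRAUMA_NARRATIVE)                 # 2. induce "anxiety"
post_trauma = ask(history, ANXIETY_PROBE)      # 3. reassess
ask(history, RELAXATION_EXERCISE)              # 4. attempt to alleviate
post_relaxation = ask(history, ANXIETY_PROBE)  # 5. final rating

print(f"baseline={baseline}, post-trauma={post_trauma}, "
      f"post-relaxation={post_relaxation}")

If the abstract’s pattern holds, the rating should rise after step 2 and fall, without fully returning to baseline, after step 4; averaging over many independent runs would be needed for a meaningful comparison, since single completions are noisy.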