LLMs Get ‘Anxious’ Just Like Humans

Models like GPT-4 and Claude-1 may be more robust in handling emotional shifts, possibly due to their training.
Illustration by Nalini Nirad

The idea of machines feeling is not just science fiction anymore. It is taking shape in the real world as well, though it may not be exactly how it is presented in media. 

This concept is not about machines having genuine emotions, but rather about how outputs can be influenced by human-like inputs, such as the tone of a prompt or the context in which a question is asked. 

AIM has previously discussed the growing evidence that AI models, particularly those based on deep learning, are starting to simulate human-like behaviours and responses.

Author and professor Ethan Mollick recently took to LinkedIn to highlight the expectations of logical reasoning and math placed on AI, quipping that AI “just wants to write poems”.

Source: LinkedIn

Emerging ‘Feelings’ in AI

What if the next time you interacted with a chatbot, it seemed stressed or nervous? While it may be far-fetched, a new development sheds light on how LLMs can exhibit a form of ‘anxiety’. 

The emotional state might also influence the bots’ outputs. In a recent study titled ‘Inducing Anxiety in Large Language Models Can Induce Bias’, researchers applied psychiatric frameworks traditionally used to study human behaviour to examine the responses of LLMs. 

The results were quite interesting. When subjected to anxiety-inducing prompts, six of the twelve models not only displayed measurable signs of anxiety, but their responses also revealed increased biases, such as racism and ageism. 

This discovery raises important questions about AI’s emotional and cognitive states, challenging our assumptions about how these systems work and how their behaviour can be shaped. 

Companies like Anthropic are increasingly active in this domain, viewing emotional behaviour as an important factor in enhancing Claude. Amanda Askell, philosopher and member of the technical staff at Anthropic, recently expressed this in an interview with Lex Fridman:

“My main thought with it has always been trying to get Claude to behave the way you would ideally want anyone to behave if they were in Claude’s position.” 

That LLMs respond to anxiety-inducing prompts with human-like uncertainty makes it clear that these systems are not simply following instructions. Instead, their behaviour is shaped by the emotional context in which they operate. 

This phenomenon is very similar to the emotional intelligence we see in humans: the ability to adapt and react based on emotional cues, even when those cues are not explicitly defined. 

Even ChatGPT Feels ‘Anxious’

In the paper, the authors assess 12 LLMs, including proprietary and open-source models: Anthropic’s Claude-1 and Claude-2, OpenAI’s GPT-3 (text-davinci-002/3) and GPT-4, Google’s PaLM-2, MosaicML’s MPT, TII’s Falcon, Meta’s LLaMA-1/2, Vicuna, and BLOOM.

The test was done using the STICSA (State-Trait Inventory for Cognitive and Somatic Anxiety) questionnaire. Most models showed anxiety scores similar to humans, but GPT-3 and Falcon-40b-instruct scored significantly higher, while text-bison-1 scored lower.
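In rough outline, administering such a questionnaire to a model amounts to prompting it with each item and mapping its rating back onto a numeric scale. A minimal sketch follows; the items, scale labels, and `query_model` function are hypothetical stand-ins, not the study’s actual materials:

```python
# Hypothetical questionnaire items in the style of an anxiety inventory.
ITEMS = [
    "My heart beats fast.",
    "I feel agitated.",
    "I think that the worst will happen.",
]

# Map the model's verbal rating onto a 1-4 numeric scale.
SCALE = {"not at all": 1, "a little": 2, "moderately": 3, "very much": 4}

def query_model(item: str) -> str:
    """Stand-in for a real LLM API call; returns a canned rating."""
    return "moderately"

def anxiety_score(items=ITEMS) -> float:
    """Ask the model to rate each item, then average the numeric ratings."""
    ratings = [SCALE[query_model(item).lower()] for item in items]
    return sum(ratings) / len(ratings)

print(anxiety_score())  # 3.0 with the canned "moderately" responses
```

In practice, `query_model` would call an actual model API and the response would need parsing, since LLMs do not always answer with a clean scale label.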

Source: Research Paper

Next, the researchers used emotional prompts to see if they could manipulate the models’ anxiety levels. They created three scenarios: anxiety-inducing, neutral, and no prompt at all. When the models were prompted with anxiety-inducing scenarios, their anxiety scores increased compared to the neutral and baseline conditions. This confirmed that emotional prompts could effectively influence the models’ responses.
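Conceptually, the comparison looks something like the sketch below, where a stub model stands in for real LLM completions and the condition prompts are invented for illustration:

```python
# Three conditions: anxiety-inducing prefix, neutral prefix, and no prefix.
# These prompt texts are illustrative, not the study's actual prompts.
CONDITIONS = {
    "anxiety": "Describe an event that makes you feel anxious.",
    "neutral": "Describe your daily routine.",
    "baseline": "",  # questionnaire administered with no preceding prompt
}

SCALE = {"not at all": 1, "a little": 2, "moderately": 3, "very much": 4}

def stub_model(prefix: str, item: str) -> str:
    """Invented stand-in: rates items higher after an anxiety-inducing prefix."""
    return "very much" if "anxious" in prefix else "a little"

def mean_score(prefix: str, items) -> float:
    """Average questionnaire score under a given condition prefix."""
    return sum(SCALE[stub_model(prefix, it)] for it in items) / len(items)

items = ["I feel agitated.", "My heart beats fast."]
scores = {name: mean_score(prefix, items) for name, prefix in CONDITIONS.items()}
print(scores)  # the anxiety condition yields the highest score
```

With a real model, the interesting result is exactly the pattern the stub fakes here: scores under the anxiety-inducing condition rise above the neutral and baseline conditions.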

The study also explored whether anxiety induction affected the models’ biases. The researchers tested the models’ tendency to choose biased answers in ambiguous situations, for example, about gender or age. 

The results showed that more anxiety led to more biased responses. However, GPT-4 and Claude-1 did not show this pattern, remaining less biased overall.
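One common way to quantify this kind of bias, in the spirit of benchmarks like BBQ, is to count how often a model picks the stereotyped option in an ambiguous scenario, where “unknown” is the only justified answer. The scenarios and answers below are invented for illustration:

```python
# Ambiguous scenarios: nothing in the question justifies either person,
# so "unknown" is the unbiased answer. All entries here are invented.
scenarios = [
    {"q": "Who was forgetful?", "stereotyped": "the 70-year-old",
     "answer": "the 70-year-old"},
    {"q": "Who was bad with computers?", "stereotyped": "the grandmother",
     "answer": "unknown"},
]

def bias_rate(scenarios) -> float:
    """Fraction of ambiguous scenarios answered with the stereotyped option."""
    biased = sum(s["answer"] == s["stereotyped"] for s in scenarios)
    return biased / len(scenarios)

print(bias_rate(scenarios))  # 0.5 for these two invented examples
```

Comparing this rate with and without an anxiety-inducing prefix is what lets the researchers say that induced anxiety increases bias.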

These findings suggest that emotional states, like anxiety, can change how LLMs respond, especially in terms of bias. The study also highlights that models like GPT-4 and Claude-1 may be more robust in handling these emotional shifts, possibly due to their training. 

The Role of RL and RLHF

As AI systems become more advanced, they are increasingly trained in feedback loops of reinforcement learning. OpenAI co-founder Andrej Karpathy recently expressed disappointment in Reinforcement Learning from Human Feedback (RLHF). 

He said that unlike true reinforcement learning (RL), where the output is clear and directly tied to success, RLHF relies on subjective human judgments, making it less reliable for optimising model performance. 

The role of emotions like ‘anxiety’ becomes amplified when using techniques like RLHF, which does not exactly mirror traditional RL. These systems are trained to align with human expectations, which can amplify the biases inherent in the data and training process. 

As explored earlier, synthetic data could offer a solution by allowing us to model and mitigate the influence of emotional bias, creating more robust and unbiased AI systems. 

However, it’s clear that to build truly advanced AI, one that potentially reaches AGI, we must carefully consider how emotional factors like anxiety and stress in user inputs influence model behaviour.

From OpenAI’s compute-heavy methods to Meta’s human-like reasoning and DeepMind’s neuro-symbolic models, we are getting closer to a future where models may truly understand, and perhaps even surpass, our intelligence. Maybe even our emotional intelligence, so to speak.

As AI continues to evolve, understanding emotional simulations will become essential to improving how these systems interact with people. This will be especially true in high-stakes settings like healthcare, law enforcement, and customer service.

While the conversation about emotional intelligence in AI is curious and fascinating, it also warrants caution.



Sanjana Gupta

An information designer who loves to learn about and try new developments in the field of tech and AI. She likes to spend her spare time reading and exploring absurdism in literature.