A new study reveals that while generative AI (GenAI) tools can significantly reduce workload, they also risk diminishing critical thinking skills among knowledge workers.
The study was conducted jointly by researchers at the Microsoft Research lab in Cambridge and Hao-Ping (Hank) Lee, a PhD student at the Human-Computer Interaction Institute at Carnegie Mellon University.
The researchers surveyed 319 professionals who use AI tools such as ChatGPT and Copilot at work at least once a week, and analysed 936 real-world examples of their use to understand how these tools affect cognitive processes in the workplace. The researchers said, “When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification, from problem-solving to AI response integration, and from task execution to task stewardship.”
The findings, presented at the CHI Conference on Human Factors in Computing Systems, indicate that professionals who are more confident in GenAI tend to think less critically during their tasks.
This suggests a potential over-reliance on AI that can hinder independent problem-solving.
“It’s a simple task, and I knew ChatGPT could do it without difficulty, so I just never thought about it, as critical thinking didn’t feel relevant,” noted one participant, illustrating how trust in AI capabilities can lead users to disengage from critical evaluation.
Conversely, participants who were highly self-confident in their skills often perceived greater effort in tasks, particularly when evaluating and applying AI responses.
The research highlights a significant shift in how knowledge workers approach their responsibilities.
Instead of focusing primarily on hands-on task execution, they are increasingly transitioning to overseeing AI-generated results, including verifying outputs for accuracy.
This includes setting clear goals, refining prompts, and assessing AI-generated content to meet specific criteria.
One user noted that for straightforward factual questions, ChatGPT usually gives good answers, demonstrating the tool’s reliability in such cases.
However, GenAI’s limitations and biases also require careful consideration. One participant noted that the AI tends to fabricate information to agree with whatever point the user is trying to make, which can make the editing process time-consuming.
Another participant said the AI output was too emphatic for a scientific writing style and needed to be rephrased.
Based on these findings, researchers emphasise the importance of designing GenAI tools to support critical thinking. The study suggests addressing factors such as awareness of limitations, motivation for careful evaluation, and skill development in areas where AI might fall short.