What happens to the brain when AI does the thinking?
What was the last thing you asked an AI chatbot to do for you?
Perhaps it was to outline an essay, analyse a complex data set, or check whether your cover letter matched a job description. For many people, tools like ChatGPT have quickly become part of everyday work and study. But some experts are now asking an uncomfortable question: are we outsourcing too much thinking to machines?
A growing body of research suggests that relying heavily on AI for cognitive tasks may come at a cost. Earlier this year, a study by the Massachusetts Institute of Technology (MIT), reported by the BBC, found that people who used ChatGPT to write essays showed reduced activity in brain networks linked to cognitive processing while completing the task. Participants also struggled to quote from their own essays afterwards, raising concerns about how much learning had actually taken place.
The researchers described their findings as highlighting “the pressing matter of exploring a possible decrease in learning skills”. The study involved 54 participants from MIT and nearby universities, whose brain activity was measured using electroencephalography (EEG). Many used AI to summarise essay questions, locate sources, refine grammar, and even generate ideas, although some felt the chatbot was less effective at original thinking.
Not everyone believes the picture is so bleak. Dr Alexandra Tomescu, a generative AI specialist at Oxford University Press (OUP), argues that the impact of AI on learning is more nuanced. According to an OUP school survey, nine in ten students said AI had helped them develop at least one skill related to schoolwork, such as problem-solving, creativity, or revision. At the same time, around a quarter admitted that AI sometimes made work feel too easy.
Dr Tomescu says many pupils are actively asking for clearer guidance on how to use AI responsibly, rather than being left to navigate the technology on their own.
OpenAI, whose chief executive Sam Altman says ChatGPT now has more than 800 million weekly active users, has published a list of 100 prompts aimed at helping students use the tool more effectively. However, Professor Wayne Holmes of University College London (UCL) believes this does not go far enough.
He argues that far more independent academic research is needed before AI tools are actively encouraged in education. “Today there is no independent evidence at scale for the effectiveness of these tools in education, or for their safety, or even for the idea that they have a positive impact,” he told the BBC.
Professor Holmes points to concerns about cognitive atrophy, where skills weaken through overreliance on AI. Similar patterns have been observed in medical settings. A Harvard Medical School study published last year found that while AI assistance improved performance for some clinicians interpreting X-rays, it reduced performance for others, for reasons that remain unclear. The authors warned that AI should be designed to enhance human judgement, not replace it.
The fear, Holmes says, is that students may submit higher-quality work with AI assistance while learning less in the process. “Their outputs are better,” he argues, “but actually their learning is worse.”
Jayna Devani, who leads international education at OpenAI and helped secure a partnership with the University of Oxford, acknowledges these concerns. She told the BBC that students should not be using ChatGPT to outsource their work entirely. Instead, she believes it works best as a kind of digital tutor.
Used in this way, a chatbot can break down complex questions, guide students through difficult concepts, and support learning outside traditional classroom hours. “If it’s midnight and you have an upcoming presentation, you’re not going to email your tutor,” she said. “That’s where this kind of support can be valuable.”
Still, Professor Holmes stresses that students must understand how AI systems work, how their data is handled, and why results should always be checked. “It is not just the latest iteration of the calculator,” he says, pointing to the far-reaching implications of generative AI.
Concerns are not limited to universities. A joint study by Carnegie Mellon University and Microsoft found that white-collar workers who had high confidence in AI tools reported applying less critical thinking when using them. Surveying 319 professionals and analysing 900 AI-assisted tasks, the researchers warned that long-term overreliance could weaken independent problem-solving skills.
A similar pattern appeared among schoolchildren in the UK. An OUP study published in October found that six in ten students felt AI had negatively affected their school-related skills.
As AI tools become deeply embedded in daily life, the question is no longer whether they improve efficiency, but whether they change how we think. The challenge, experts say, is ensuring that AI supports learning rather than quietly replacing it.