Does being polite to AI actually make a difference?
People swapping tips on how to talk to chatbots have produced a strange kind of folklore: praising AI before asking a question, threatening it, or even pretending to be aboard the Starship Enterprise. But according to researchers and AI experts, most of this advice is either ineffective or outdated.
Research finds that politeness, flattery or intimidation rarely makes a meaningful difference when interacting with large language models (LLMs), the technology behind tools such as ChatGPT.
In one experiment, researchers tested whether “positive thinking” could improve AI accuracy. They complimented chatbots, encouraged them to think carefully and added upbeat phrases like “this will be fun”. The results were inconsistent. The only odd exception came when an AI was asked to imagine it was part of Star Trek, which briefly improved its basic maths skills.
The findings, according to the BBC, added to a growing pile of myths surrounding “prompt engineering”, the idea that carefully chosen words can unlock better AI performance. Experts now say there is no magic phrase.
“A lot of people think there’s some secret set of words that will suddenly make an AI smarter,” said Jules White, a computer science professor at Vanderbilt University. “But it’s not about word choice. It’s about clearly expressing what you’re trying to do.”
The question of manners gained traction last year after a post on X asked how much money OpenAI had lost in electricity costs because users say “please” and “thank you” to its models. OpenAI chief executive Sam Altman replied: “Tens of millions of dollars well spent.”
While the remark was widely read as a joke, it sparked a serious debate. Some studies suggest polite prompts produce slightly more accurate responses. Others found the opposite — including one small test where an earlier version of ChatGPT performed better after being insulted.
The research, however, remains limited and often contradictory. AI systems are also updated frequently, meaning findings can become obsolete within months.
According to applied machine-learning engineer Rick Battle, newer models are far less sensitive to tone than earlier versions. “Back then it was a complete gamble,” he said. “Today’s systems are much better at identifying what actually matters in a prompt.”
Experts warn against assuming chatbots have moods, personalities or feelings that can be managed.
AI models work by breaking text into tokens and predicting the most statistically likely response. Every word and punctuation mark can influence the output — but not in a way humans can reliably control.
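To make the token-prediction idea concrete, here is a deliberately toy sketch in Python. Real LLMs use neural networks over subword tokens, not bigram counts, and the function names below are my own, but the core mechanic is the same: given the text so far, pick the statistically most likely next token.

```python
from collections import Counter, defaultdict

# Toy illustration only: production models learn probabilities with
# neural networks over subword tokens; this uses raw bigram counts
# over whitespace-split words to show the "predict the likeliest
# next token" idea.

def train_bigrams(corpus: str) -> dict:
    """Count how often each token follows each other token."""
    tokens = corpus.split()  # crude stand-in for a real tokenizer
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, token: str):
    """Return the most frequent follower of `token`, or None."""
    followers = counts.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

counts = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(counts, "the"))  # "cat" follows "the" twice, "mat" once
```

Every word in the training text shifts the counts, which mirrors the article's point: each token in a prompt influences the output, but not in a way a user can reliably steer by tone alone.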
The key message is simple: clarity matters more than courtesy.
“Stop treating AI like a person,” Battle said. “Treat it like a tool.”
Researchers say there are practical ways to get better results from chatbots:
- Ask for multiple answers rather than one. This encourages comparison and reduces over-reliance on a single response.
- Provide examples of what you want, such as previous emails or writing samples, instead of long lists of instructions.
- Let the AI ask questions, especially for complex tasks like drafting job descriptions or plans.
- Avoid role-playing when accuracy matters. Telling an AI it is an “expert” can increase confidence but also raise the risk of hallucinations.
- Stay neutral and avoid leading prompts that steer the AI towards a preferred answer.
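Several of the tips above amount to structuring the prompt rather than decorating it. A minimal sketch (the function and field names are my own invention, not any library's API) of turning "provide examples, stay neutral" into a prompt string:

```python
# Hypothetical helper, not a real library API: assembles a few-shot
# prompt from a task description, worked examples, and a neutral
# question, instead of a long list of instructions.

def build_prompt(task: str, examples: list, question: str) -> str:
    """Join task, numbered examples, and the question into one prompt."""
    parts = [task]
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{ex}")
    parts.append(f"Now answer:\n{question}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarise customer emails in one sentence.",
    examples=["Email: Where is my refund?\nSummary: Customer asks about a refund."],
    # Neutral phrasing: state the input, don't steer toward an answer.
    question="Email: The delivery arrived two days late.\nSummary:",
)
print(prompt)
```

The design choice reflects the experts' advice: the example shows the model the desired shape of the output, and the final question avoids leading language that would push it toward a preferred answer.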
As for manners, surveys suggest most people are polite to AI anyway. A study by Pew Research Center found more than half of Americans say “please” to smart devices, while a later poll showed 70% of users remain polite to AI tools.
Experts say manners may not improve performance — but they can improve the human experience.
“Being polite can make people more comfortable using the technology,” said researcher Sander Schulhoff. “It doesn’t help the model, but it may help the user.”
AI cannot feel offence or gratitude. But how people interact with it may still shape their habits elsewhere.
As one philosopher once warned, cruelty — even towards things that cannot suffer — has a way of hardening the person who practises it.