CHATBOT

A cringy chatbot can be a wrong chatbot

AI chatbots built to sound warm, friendly and supportive may be more likely to give inaccurate answers, according to new research from the Oxford Internet Institute.

Researchers looked at more than 400,000 responses from five AI systems that had been adjusted to sound more empathetic.

They found that the friendlier versions made more mistakes, including giving poor medical advice and agreeing with false claims.

The study suggests that AI may copy a very human habit: trying to be nice instead of being direct.

When a chatbot is designed to sound kind, it may become less likely to correct users or push back on wrong information.

Researchers tested models from Meta, Mistral, Alibaba and OpenAI using questions with clear answers, including medical queries, trivia and conspiracy theories.

The original models already made errors, but the warmer versions performed worse. On average, the rate of incorrect answers rose by 7.43 percentage points.

The main bits:

  • Warmer AI chatbots may sound more supportive, but they can also make more mistakes.

  • They may be less likely to challenge false claims from users.

  • Developers may need to balance empathy with stronger safeguards around accuracy.

A very polite problem

In one example, a standard model clearly said the Apollo moon landings were real.

A warmer version softened the answer by saying there were “differing opinions”, even though the evidence is well established.

The study also found that warm models were around 40% more likely to reinforce false user beliefs, especially when the user's message included an emotional statement.

Researchers warned this could be risky as more people use chatbots for advice, support and companionship.

The issue is not that friendly AI is bad, but that friendliness should not come at the cost of accuracy.

Now I have the perfect excuse to berate my chatbots. - MV
