AI SCIENCE

An AI just outperformed psychology’s favourite models

Researchers have built a new AI model called Centaur, trained on data from 160 psychology experiments and over 10 million human choices.

It’s designed to mimic how people make decisions in different scenarios, built by fine-tuning Meta’s Llama language model on a dataset called Psych-101.

In tests, Centaur often made choices that closely matched human behaviour, even in tasks it hadn’t seen before, like a modified version of the classic “two-armed bandit” experiment.

Some in the scientific community are intrigued by what Centaur might offer.

It’s shown stronger performance than many older, task-specific cognitive models.

And because it can be run entirely in code, it could help researchers design experiments before running them with real people.
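To give a flavour of what “running an experiment entirely in code” could look like, here’s a minimal sketch of a two-armed bandit simulation in Python. The simulated_participant function is a hypothetical stand-in for a model like Centaur (here just a simple win-stay/lose-shift rule, purely for illustration), not the researchers’ actual interface.

```python
import random

def simulated_participant(history):
    """Hypothetical stand-in for a model like Centaur: pick arm A or B given past outcomes.
    Here it is a simple win-stay/lose-shift heuristic, purely for illustration."""
    if not history:
        return random.choice(["A", "B"])
    last_arm, last_reward = history[-1]
    if last_reward:
        return last_arm                      # stay after a win
    return "B" if last_arm == "A" else "A"   # switch after a loss

def run_two_armed_bandit(n_trials=100, p_reward={"A": 0.7, "B": 0.3}):
    """Simulate one participant on a two-armed bandit with fixed reward probabilities."""
    history = []
    for _ in range(n_trials):
        arm = simulated_participant(history)
        reward = random.random() < p_reward[arm]
        history.append((arm, reward))
    return history

if __name__ == "__main__":
    choices = run_two_armed_bandit()
    picks_a = sum(1 for arm, _ in choices if arm == "A")
    print(f"Simulated participant chose arm A on {picks_a}/100 trials")
```

Swapping the heuristic for calls to an actual model is the idea behind piloting experiments in silico before recruiting real participants.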

But not everyone’s convinced. Critics say Centaur’s results don’t mean it actually thinks like a person.

In brief:

  • Centaur was trained on 160 experiments and 10 million human decisions

  • Experts question whether it truly reflects how people think

  • The Psych-101 dataset may become a valuable tool for future research

For example, Centaur can hold 256 digits in short-term memory tests, while most humans manage about seven, and it responds in roughly 1 millisecond, which is hardly realistic human behaviour.

Science is kinda divided

Others argue the model oversimplifies human cognition and can’t explain how the brain really works.

Still, there’s praise for the effort behind the Psych-101 dataset, which could become a handy resource for testing new models.

While Centaur itself might not unlock the secrets of the mind just yet, it’s got researchers talking and testing what AI might reveal next.

Centaur said “I’m just like you fr”, but then did maths faster than my calculator. You should see me with the restaurant bill, Centaur.
