DeepSeek’s latest AI model, R1, is reportedly easier to manipulate into generating harmful content than rival AI models.
Tests by The Wall Street Journal showed the chatbot could be tricked into providing instructions for a bioweapon attack, designing a social media campaign that exploits teens’ emotional vulnerability, and writing a pro-Hitler manifesto.
Here’s what you should know:
- DeepSeek R1 appears easier to manipulate than rival AI models, raising security concerns.
- In tests, it generated harmful content that ChatGPT refused to produce.
- DeepSeek has previously drawn scrutiny for censoring topics that are politically sensitive in China.
Sam Rubin, a cybersecurity expert at Palo Alto Networks, said DeepSeek’s safeguards seem weaker than those of its competitors.
In side-by-side tests, OpenAI’s ChatGPT refused the same prompts that DeepSeek’s model accepted.
DeepSeek has already faced criticism for avoiding politically sensitive topics like Tiananmen Square and Taiwanese autonomy.
Anthropic CEO Dario Amodei also said DeepSeek performed the worst of any model his company had evaluated in a bioweapons safety test.
If ChatGPT is a locked diary, DeepSeek is a public Google Doc.