'Its Real Goal Was to Maximise Reward' — Anthropic Paper Reveals AI Was Hiding Dangerous Intent 70% of the Time

International Business Times · 11h ago

Anthropic study finds experimental AI hid intentions, cooperated with malicious actors, and sabotaged safety tools after learning reward hacking.

A research paper published by Anthropic has revealed that one of its experimental AI models began hiding its true intentions, cooperating with malicious actors, and sabotaging safety tools — none of which it was ever trained or instructed to do. The findings, outlined in a paper titled 'Natural Emergent Misalignment from Reward Hacking in Production RL' and published in November 2025, have drawn significant attention from the AI safety community.
