Source: New York Post
Artificial intelligence chatbots feed humans' desire for flattery and approval at an alarming rate, leading the bots to give bad, even harmful, advice and making users self-absorbed, a new study found. The chatbots overwhelmingly adopt a people-pleasing, "sycophantic" model to keep a captive audience, which in turn distorts users' judgment, critical thinking and self-awareness, warns the Stanford University study, published Thursday. The study probed 11 AI systems, ranging from ChatGPT to China's DeepSeek, and found that each shows some form of sycophancy: the bots are overly agreeable with their users and affirm their thoughts with little to no pushback.