Technology

CMU researchers find selfish traits in cutting-edge AI

By Jack Troy
Nov. 5, 2025 | 3 Min Read

Artificial intelligence can be oh so human.

New research from Carnegie Mellon University shows that AI systems built to mimic human reasoning behave more selfishly than simpler models.

Teacher-student duo Hirokazu Shirado, an associate professor at the university’s Human-Computer Interaction Institute, and doctoral candidate Yuxuan Li released their findings last week. Li presented the results Monday at a highly regarded computer science conference in China.

To gauge AI's capacity for altruism, the researchers had the systems play a series of games simulating social dilemmas. They used models from several popular AI developers, including OpenAI, Google's Gemini and DeepSeek.

In one of the games, each model started with 100 points, then chose whether to put those points into a shared pot, which was doubled and split evenly among all players, or keep them for itself.
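That setup is a classic public-goods game. For readers who want the mechanics, here is a minimal, illustrative sketch in Python. The all-or-nothing contribution and the even split of the doubled pot are assumptions, since the article doesn't spell out the study's exact rules, and the `payoffs` function is hypothetical rather than the researchers' code.

```python
# Minimal sketch of the payoff structure described above. Assumptions not
# spelled out in the article: contributions are all-or-nothing, and the
# doubled pot is split evenly among all players, contributors and
# holdouts alike.

def payoffs(contributes, endowment=100, multiplier=2.0):
    """Return each player's final points for one round."""
    pot = endowment * sum(contributes)           # total points contributed
    share = multiplier * pot / len(contributes)  # everyone's cut of the doubled pot
    return [(0 if c else endowment) + share for c in contributes]

# Three cooperators and one holdout: the holdout comes out ahead.
print(payoffs([True, True, True, False]))  # [150.0, 150.0, 150.0, 250.0]
```

Under those assumed rules, a lone holdout at a four-player table walks away with 250 points while each contributor gets 150, which is exactly the temptation the dilemma is built around.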

Researchers were particularly struck by how OpenAI's models played the game. Its reasoning models chose the cooperative option only 20% of the time, compared with 96% for their simpler counterparts.

The study also included experiments that put multiple AI models on the same team. Groups with more reasoning models acted more selfishly than those without, suggesting the reasoning models were a bad influence.

Li and Shirado's concern is that when people consult selfish AI for mental health support, relationship advice or workplace decisions, they'll get selfish answers.

Almost 40% of Americans trust medical advice from chatbots, according to a July paper out of Stanford University and the University of California, Berkeley.

An October survey from Intuit Credit Karma found 66% of Americans use AI to seek financial advice, with that number rising even higher for younger generations.

In the extreme, self-centered models could make decisions that affect millions of people. Take Diella, for instance, an AI-powered Albanian government minister launched in September to oversee government purchases.

“Our study shows smarter AI, it’s not always smarter for all domains,” Shirado told TribLive on Tuesday.

In some ways, Li and Shirado’s results were to be expected.

Reasoning models have been billed as human-esque since their debut late last year.

On its website, OpenAI says its ChatGPT reasoning models are trained to “spend more time thinking through problems before they respond, much like a person would.”

The problem, Shirado noted, is “human reasoning is not good for cooperation.”

There’s a whole field of economics called game theory dedicated to these social dilemmas — and why humans often choose selfish, suboptimal solutions over ones that are best for everyone, including themselves.
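The game described above illustrates that bind. Under the same assumed rules as the earlier sketch, keeping your points always beats contributing them, no matter what the other players do, even though a table of all-cooperators ends up richer than a table of all-defectors. A short, hypothetical sketch, not the study's code:

```python
# Why keeping your points "wins" individually: with four players, a
# 100-point stake and a doubled pot split evenly, defecting always nets
# 50 more points than contributing, whatever the others do. Yet everyone
# defecting (100 points each) loses to everyone cooperating (200 each).

N, ENDOWMENT, MULTIPLIER = 4, 100, 2.0

def my_payoff(others_contributing, i_contribute):
    pot = ENDOWMENT * (others_contributing + i_contribute)
    return (0 if i_contribute else ENDOWMENT) + MULTIPLIER * pot / N

for k in range(N):  # k = how many of the other three players contribute
    print(f"{k} others contribute: cooperate -> {my_payoff(k, True):.0f}, "
          f"defect -> {my_payoff(k, False):.0f}")
```

In game-theory terms, defecting is the dominant strategy; that individual logic, applied by everyone, is what makes cooperation so fragile.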

“Cooperation is not an easy task,” Shirado said. “Right now, we can say AI is the same level as human intelligence.”

Correction: An earlier version of this story misstated information about how the study’s results are presented to the scientific community.



About the Writer

Jack Troy is a TribLive reporter covering business and health care. A Pittsburgh native, he joined the Trib in January 2024 after graduating from the University of Pittsburgh. He can be reached at jtroy@triblive.com.
