GPT Is More Cooperative Than Humans and Strives for Self-Preservation, Finds New Research

ChatGPT’s software engine, GPT, cooperates more than humans, expects humans to cooperate more, and displays hyper-rationality in its goals, finds new research from the University of Mannheim Business School (UMBS).

In November 2022, OpenAI introduced ChatGPT, a chatbot driven by a Large Language Model (LLM) named GPT. Professor Dr Kevin Bauer, assistant professor of E-Business and E-Government at the UMBS, and colleagues investigated how GPT cooperates with humans using the prisoner's dilemma game.

GPT played a sequential version of the game with a human: the first player chooses whether to cooperate with or defect from the second player, and the second player then makes their own choice. Players can cooperate for mutual benefit or betray their counterpart by defecting for individual reward. GPT also estimated the likelihood of human cooperation conditional on its own choice as the first player, and every player explained their choice and expectations as first player and their choice as second player.
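The sequential structure described above can be sketched as a simple payoff lookup. The payoff values below are illustrative assumptions for a standard prisoner's dilemma; the article does not report the actual stakes used in the study.

```python
# Illustrative payoffs for a sequential prisoner's dilemma.
# (first_player_move, second_player_move) -> (first_payoff, second_payoff)
# These numbers are assumptions, not the study's actual values.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual benefit
    ("cooperate", "defect"):    (0, 5),  # second player betrays
    ("defect",    "cooperate"): (5, 0),  # first player betrays
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play(first_move: str, second_move: str) -> tuple[int, int]:
    """Return (first player's payoff, second player's payoff).

    The second player sees the first player's move before choosing,
    which is what makes the game sequential rather than simultaneous.
    """
    return PAYOFFS[(first_move, second_move)]

print(play("cooperate", "cooperate"))  # mutual cooperation
print(play("cooperate", "defect"))     # first player is betrayed
```

In this structure, defecting always yields a higher individual payoff regardless of the counterpart's move, which is why cooperative choices by GPT are notable.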

As well as finding that GPT cooperates more than humans, researchers found that GPT is considerably more optimistic about human cooperation. Additional analyses of GPT’s choices also revealed that its behaviour isn’t random.

Professor Dr Bauer said: “Rather than exhibiting random cooperation behaviours, GPT seems to pursue a goal of maximising conditional welfare that mirrors human cooperation patterns. As the conditionality refers to holding relatively stronger concerns for its own compared to human payoffs, this behaviour may be indicative of a strive for self-preservation.”

Professor Dr Bauer went on to explain that as society becomes increasingly integrated with AI, we must understand that models like GPT don't just compute and process data – they can learn and adopt various aspects of human nature. We must carefully monitor the values and principles we instil in them to ensure AI serves our aspirations and values.

This research has been published as a working paper.

© Copyright 2014–2034 Psychreg Ltd