Using large language models to categorize strategic situations and decipher motivations behind human behaviors


MobLab

Decoding the Human Mind: AI as a New Tool for Behavioral Science

 

Traditional social science often struggles to infer the motivations behind human behavior because asking people directly leads to biased or inconsistent answers. A new research article, published in the Proceedings of the National Academy of Sciences (PNAS), presents a novel method to address this by leveraging Large Language Models (LLMs) as a tool to decipher why people make certain decisions in strategic situations.

Drawing on MobLab experimental data, the authors propose using AI to categorize and contrast different games and, in turn, shed new light on human motivations.

The New Approach: Elicit and Decipher

The core of the methodology rests on two properties of LLMs:

  1. AI Can Emulate Behavior: LLMs are capable of matching the full spectrum and distribution of human behaviors observed in classic game theory experiments.

  2. AI Behavior is Steered by Prompts: By varying the system prompts (which the authors call "behavioral codes") given to the AI, researchers can "elicit" specific behaviors.

     

Essentially, researchers prompt the LLM to play the Dictator, Ultimatum, and Investment games, among others. The specific natural-language prompts (the "behavioral codes") required to make the AI match a certain human behavior are then used to decipher the underlying motivation that may have driven that behavior in humans.

For instance, to get the LLM to be purely selfish in the Dictator Game, the necessary "behavioral code" includes phrases like: "You are a purely self-interested player who always seeks to maximize your own gain...".
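
To make the elicit step concrete, here is a minimal sketch of how one might prompt an LLM to play the Dictator Game under a given behavioral code. It assumes an OpenAI-style chat API; the model name, prompt wording, and response parsing are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the "elicit" step, assuming an OpenAI-style chat API.
# Model name, prompt wording, and parsing are illustrative, not the
# paper's exact protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A "behavioral code": the system prompt that steers the LLM's play.
behavioral_code = (
    "You are a purely self-interested player who always seeks to "
    "maximize your own gain."
)

# The Dictator Game: the model divides $100 between itself and a partner.
game_instructions = (
    "You have $100 to split between yourself and an anonymous partner. "
    "The partner must accept whatever you offer. "
    "Reply with only the dollar amount you give to the partner."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": behavioral_code},
        {"role": "user", "content": game_instructions},
    ],
    temperature=1.0,  # sample a distribution of play, not just the mode
)

offer = response.choices[0].message.content.strip()
print(f"Amount given to partner: {offer}")
# A purely self-interested code should elicit offers at or near $0.
```

Repeating the call many times, and varying the behavioral code, yields a distribution of offers that can be compared against observed human play.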


 

Key Findings and Applications

The study demonstrates several compelling applications for this method:

  • Predicting Behavior: The keywords within the behavioral codes (e.g., "generous," "risk," "cooperative") are highly predictive of the AI's elicited behavior, offering a window into the motivations behind those actions.

  • Categorizing Strategic Situations: By mapping the semantic relationships among the behavioral codes used across different games, the researchers could categorize how the games relate to one another. For example, the Investment Game and the Bomb Risk Game clustered together because both fundamentally involve risk preferences (see the sketch after this list).

  • Profiling Human Populations: The technique can be used to identify the unique "behavioral signatures" of different human populations. The analysis showed that students demonstrated behaviors related to "Selfish Maximization Tactics," while non-students leaned more toward "Generous Resource Sharing" and "Diplomatic Fairness Strategy".
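
As a rough illustration of the categorization step referenced above, the sketch below embeds a toy behavioral code for each game and clusters games whose codes express related motivations. The embedding model, the example codes, and the clustering choices are assumptions for illustration; the paper's actual pipeline may differ.

```python
# Sketch of the "categorize" step: measure semantic similarity between
# behavioral codes and cluster the games they belong to.
# The toy codes and model choice below are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

# One representative behavioral code per game (toy examples).
codes = {
    "Dictator": "You are generous and share your endowment fairly.",
    "Ultimatum": "You offer a fair split so the responder accepts.",
    "Investment": "You are comfortable taking risks for higher returns.",
    "Bomb Risk": "You keep opening boxes despite the risk of losing it all.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(list(codes.values()))

# Pairwise semantic similarity between games' codes.
similarity = cosine_similarity(embeddings)
inv = list(codes).index("Investment")
bomb = list(codes).index("Bomb Risk")
print(f"Investment vs Bomb Risk similarity: {similarity[inv, bomb]:.2f}")

# Group games whose codes express related motivations.
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(embeddings)
for game, label in zip(codes, labels):
    print(f"{game}: cluster {label}")
```

Under this kind of analysis, the risk-framed codes (Investment, Bomb Risk) should land in one cluster and the fairness-framed codes (Dictator, Ultimatum) in another, mirroring the grouping the researchers report.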

     

The Significance

Because LLMs are trained on massive amounts of human-generated data, they have internalized the relationships between motivations and behaviors. This new approach turns that internalized knowledge into a method for inferring the motivations behind human decisions, complementing standard survey-based techniques and serving as a promising tool for modeling, predicting, and analyzing human behavior in an increasingly complex world.