
FEATURE

psychology

UNDERSTANDING COOPERATION

Theoretical model developed to explain human behavior

►BY CYNTHIA YUE

Four people work in a group, each given a sum of money. Each is asked whether to contribute some amount to a public pot. The choices are thus clear: selflessly contribute and let the pooled money be evenly divided, or selfishly keep the money while still sharing in what the others contribute. Such is the setup of a public goods game, which, like other game-theoretic scenarios such as the prisoner's dilemma, provides insight into inherent human behavior regarding cooperation and selfishness. These games inspired the research conducted at Yale by Adam Bear, a fourth-year Ph.D. candidate in Psychology, and David Rand, an Associate Professor of Psychology, Economics, and Management.
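The tension in a public goods game can be sketched in a few lines. In the standard formulation, the pot is multiplied before being split evenly, so the group does best when everyone contributes, but each individual does best by free-riding. The endowment and multiplier values below are illustrative assumptions, not figures from the article:

```python
def public_goods_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps whatever they did not contribute, plus an
    even share of the multiplied public pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Three full contributors and one free rider: the free rider earns the most.
payoffs = public_goods_payoffs([10, 10, 10, 0])
```

With these toy numbers, universal contribution earns everyone 16, but a lone free rider earns 22 while the contributors earn only 12 — exactly the pull between collective and individual interest the game is designed to expose.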

Basing their work on empirical data from these games, Bear and Rand have developed a theoretical game theory model showing that people who intuitively cooperate, but can also act selfishly, succeed from an evolutionary perspective.

To create the model, Bear and Rand used MATLAB to construct agent-based simulations: virtual agents embodying all permutations of behaviors interact with one another in various environments over several generations. Afterward, they performed mathematical calculations to confirm the accuracy of their simulations.

“The idea of these simulations is they’re meant to model some kind of evolution either over biological time or over cultural time,” said Bear. “People who tend to do well when they play their strategies are more likely to survive in the next generation than people who do poorly, so these agents interact on a set of generations, and once you do well, [you] tend to stay in the game; once you do poorly, you tend to die out.”
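The selection dynamic Bear describes — strategies that do well persist, strategies that do poorly die out — can be illustrated with a minimal fitness-proportional update. This is a generic toy, not the authors' MATLAB code, and the payoff rule in the usage example is an arbitrary stand-in:

```python
import random

def next_generation(population, payoff_fn, rng):
    """Copy strategies into the next generation with probability
    proportional to their payoff: doing well means surviving,
    doing poorly means dying out."""
    payoffs = [payoff_fn(s, population) for s in population]
    floor = min(payoffs)
    # shift so every selection weight is positive
    weights = [p - floor + 1e-9 for p in payoffs]
    return rng.choices(population, weights=weights, k=len(population))

# Toy payoff: a strategy's fitness is simply its own value,
# so higher-valued strategies should come to dominate.
rng = random.Random(0)
pop = [1, 2, 3, 4]
for _ in range(50):
    pop = next_generation(pop, lambda s, _: s, rng)
```

After enough generations the population drifts toward the fitter strategies, mirroring the survive-or-die-out dynamic in the quote above.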

The model itself takes several factors into consideration, such as type of thinking and environment.

With thinking, agents can either follow their intuition or deliberate. Bear describes intuition as a form of cognition that uses heuristics, or mental shortcuts, to reach answers quickly. This way of thinking is efficient but may lead to errors in reasoning because it cannot work through the details of the context at hand. Deliberation, on the other hand, allows agents to take time to reason about their context and make more accurate decisions. In the model, agents can strategize, choosing how much intuition and deliberation to use.

The environment refers to the proportion of repeated to one-shot interactions. Agents vary in how much they engage in one-shot interactions versus repeated interactions, in which they may establish a relationship.

From an evolutionary standpoint, in which success seems built on self-interest, the type of interaction matters. “Say we’re in a repeated interaction: it’s better if I’m nice to you if I’m going to see you again,” said Bear. “But if I’m never going to see you again, say it’s a one-shot interaction and no one else is seeing us…I’m better off being selfish if it would cost me a lot to be nice to you.”

The conclusions of Bear and Rand’s model focus on environments with more repeated interactions, which, according to Bear, more realistically reflect the real world. He described the best agent in their model: one with a fast cooperative response, but a selfish response upon deliberation in one-shot interactions.
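That winning strategy can be sketched as a simple decision rule. The field names and threshold mechanics below are my simplification for illustration, not the model's actual parameters: the agent pays to deliberate only when deliberation is cheap enough, and deliberation recognizes whether the interaction is one-shot.

```python
def act(agent, is_one_shot, deliberation_cost):
    """Dual-process choice: deliberate when it is cheap enough,
    otherwise fall back on the intuitive default response."""
    if deliberation_cost < agent["threshold"]:
        # deliberation tailors behavior to context:
        # defect in one-shot games, cooperate in repeated ones
        return "defect" if is_one_shot else "cooperate"
    # intuition applies the same default response in every context
    return agent["intuitive_response"]

# The agent the model favors: intuitively cooperative, but willing
# to deliberate its way to selfishness in one-shot interactions.
winner = {"intuitive_response": "cooperate", "threshold": 0.5}
```

When deliberation is cheap, this agent defects in one-shot games and cooperates in repeated ones; when deliberation is too costly, its cooperative intuition carries over everywhere — which is what makes it cooperate even in some one-shot encounters.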

Their research, however, received critiques from others working in the field, particularly Kristian Myrseth, an Associate Professor in Behavioral Science and Marketing at Trinity College Dublin, and Conny Wollbrant, an Assistant Professor in Economics at the University of Gothenburg. “The model makes this crucial claim that evolution never favors strategies...where deliberation increases your prosocial behavior,” Myrseth said.

Wollbrant added, “The problem is that we know today, we’re fairly sure, that…people are often behaving prosocially, not for strategic reasons, but because they feel that’s the right thing that they should do.” The two researchers argue that the model does not allow inherently prosocial behavior to survive, even though people who behave that way evidently exist in the world today.

Nevertheless, according to Bear, he and Rand have continued to work on additional versions of the model to make it more realistic, incorporating the possibility that intuition and deliberation are not so black and white in distinguishing between environments.

“There was this fun process of discovery in the model and learning what the model was actually showing,” said Bear. “It’s cool because you know you think when you model something, maybe it’ll be obvious what you’re going to find, but actually, you discover these interesting things that you didn’t necessarily anticipate before modeling.”

IMAGE COURTESY OF REYNERMEDIA<br />

►Agents who intuitively cooperate, but deliberate and defect to selfish behavior in one-shot interactions, succeed evolutionarily.

34 Yale Scientific Magazine March 2017 www.yalescientific.org
