I’m lying in the metal coffin of an MRI machine, listening to what sounds like jackhammers and smelling my own breath go stale. My head is secured in place. I have a panic button. I won’t press it, but I do grip it tightly. Above me, faces flash on a screen.
Some are human, others are dolls, and some are digitally blended to be something in between. It’s my job to figure out which are which. And as I do, researchers at New York University’s brain-imaging center are tracking what goes on in my head.
I’m not sick, and we’re not here to test my calm in the face of claustrophobia. Instead, I’m a subject for research on a bigger question: Is the human political brain broken?
The NYU team is trying to show that our brains are hardwired for partisanship and that this skews our perceptions in public life. Research at NYU and elsewhere is underscoring just how blind the “us-versus-them” mind-set can make people when they try to process new political information. Once this partisan mentality kicks in, the brain almost automatically filters out facts—even noncontroversial ones—that offend our political sensibilities.
“Once you trip this wire, this trigger, this cue, that you are a part of ‘us-versus-them,’ it’s almost like the whole brain becomes re-coordinated in how it views people,” says Jay Van Bavel, the leader of NYU’s Social Perception and Evaluation Lab.
Our tendency toward partisanship is likely the result of evolution—forming groups is how prehistoric humans survived. That’s helpful when trying to master an unforgiving environment with Stone Age technology. It’s less so when trying to foster a functional democracy.
Understanding the other side’s point of view, even if one disagrees with it, is central to compromise, policymaking, and any hope for civility in civic life. So if our brains are blinding us to information that challenges our partisan predispositions, how can we ever hope to find common ground? It’s a challenge that stumps both voters and the elected officials who represent them. Congressional hearings are hearings in name only—opportunities for politicians to grandstand rather than talk with each other. And the political discussion, even among those well versed in the issues, largely exists in parallel red and blue universes, mental spheres with few or no common facts to serve as starting points.
But rather than despair, many political-psychology researchers see their results as reason for hope, and they raise a tantalizing prospect: With enough understanding of what exactly makes us so vulnerable to partisanship, can we reshape our political environment to access the better angels of our neurological nature?
What does any of this have to do with photos of dolls? The researchers are testing one of partisanship’s more frightening features: It allows us, even pushes us, to dehumanize those we categorize as “them.”
I’m tasked with distinguishing humans from nonhumans, and it’s not as easy as it sounds. While some of the faces appear to be normal photographs of men and women, others are warped into something that would have scared me as a child—faces that look like masks. They have no creases in their plasticky skin, and their big, anime-style eyes shine death stares. They are distinctly nonhuman. It’s the ones in between that pose the problem, however. A face that’s 90 percent human and 10 percent doll is plainly seen as human. But when the face is 50 percent doll and 50 percent human, that’s where partisan perspective takes over.
For the first trial I am just shown a set of faces, but for the next run, Van Bavel introduces a twist: The faces are divided into two groups. Before I see the first group, the American flag flashes, and I’m told I’m looking at my countrymen. Before the second, a Russian flag appears. These are faces of Russians.
As I try to assess which faces have a soul behind them, a dark facet of partisan psychology surfaces. If the face belongs to a team member—in my case, an American—I’m more likely to assign them humanity. I’m less inclined to do the same for Russians.
It’s not entirely my fault—or, at least, not the fault of any conscious decision. Instead, it’s just my brain following a well-worn pattern. When Van Bavel looks at the brain scans of people in his dollhouse experiment, he finds that the brain regions used to empathize with others aren’t as active when a person is evaluating faces he or she has been told belong to the other team.
Humans’ willingness to dehumanize is often mentioned alongside some of the darkest chapters of history—the Holocaust, genocide in Rwanda, the Khmer Rouge—when regimes went to great lengths to build anger against “the other.” In my case, the experiment relies on a national identity reinforced since birth.
But to create the base “us and them” structure, none of that is needed. The brain is so hardwired to build such groups that Van Bavel says he can turn anyone on the street into a partisan. “I can do it in five minutes with a random stranger,” he says. All it takes is a coin flip.
“Somebody comes into your lab and you tell them, ‘You’re part of the blue team,’ ” he explains. “The next person who comes in, you flip a coin, let’s say it comes up the other way. And you say, ‘You’re on the red team.’ ”
That’s it. The teammates never have to meet. Or interact. There doesn’t need to be anything at stake. But within minutes, these insta-partisans like their teammates better than they like the other guys. And it shows when Van Bavel puts his subjects through his MRI dollhouse.
Red-team members are more likely to see humans when they’re told they’re looking at fellow red-team faces. Blue-team members respond the same way. Other tests reveal that red-team members remember red-team faces more accurately, and if Van Bavel asks subjects to allocate money, red-team members will pay out more to their own. Team members also have less sympathy for those on the other side, and even experience pleasure while reading about their pain.
I’m not just inside the MRI to be stumped by Russian dolls. The researchers are also checking to see if my brain has a conservative or liberal shape.
In 2011, a team of British scientists published a paper that found that brain structures correlated with political orientation. Specifically, conservatives tended to have larger amygdala areas—brain matter that plays a role in fear conditioning—than liberals. The results added to a body of research that finds conservatives and liberals have different physiological responses to the environment, and even perceive the world differently.
At NYU, they’re testing that conclusion, and the magnets around me are measuring the volume of my amygdala. Before my MRI, I took a test aimed at giving me a score on the researchers’ “system-justification scale,” a measure that correlates with one component of where a person falls on the liberal-to-conservative spectrum. People who score high on system justification tend to be patriotic and defenders of the status quo. Those who score low tend to be the rebels. So far, with 100 participants, Van Bavel’s group is finding meaningful differences between the brains of high system-justifiers and low system-justifiers.
(Colleagues joked that I might want to keep my test results to myself if I wanted to continue working as a nonpartisan journalist in Washington. But—for the record—I’m a lab-certified moderate: “Yeah, you were right in the heart of the distribution, not only in the terms of your system-justification tendencies but also your amygdala volume is very healthy,” Van Bavel tells me the day after, laughing.)
But when it comes to American politics, how troubled should we be by any of these findings? America’s partisan divide is as old as America’s democracy. And it’s neither feasible nor desirable to hope for a national consensus on every issue. Even if we all worked from the same set of facts, and even if we all understood those facts perfectly, differences of opinion would—and should—remain. Those opinions are not the problem. The trouble is when we’re so blinded by our partisanship that it overrides reason—and research suggests that is happening all the time.
With just a hint of partisan priming, an Arizona State University researcher was able to instantly blind Democrats to a noncontroversial fact, leading them to fail the easiest of math problems. In the 2010 experiment, political scientist Mark Ramirez asked subjects two similar questions. The control group saw this question: “Would you say that compared to 2008, the level of unemployment in this country has gotten better, stayed the same, or gotten worse?” A separate group saw this one: “Would you say that the level of unemployment in this country has gotten better, stayed the same, or gotten worse since Barack Obama was elected President?”
The key difference between the two: The first mentions the time period for assessing unemployment, while the second frames the issue around President Obama. When asked the first question, Democrats and Republicans responded similarly, with most saying unemployment had remained about the same. But among subjects who got the second question, opinions shifted along partisan lines: Around 60 percent of Democrats said unemployment had gotten better or somewhat better, and about 75 percent of Republicans said the opposite.
In fact, the unemployment rate increased between Obama’s election and Ramirez’s study. One can argue about whether this is a fair frame for evaluating this or any president’s economic record, but from a raw-numbers perspective, the rise in the unemployment rate between 2008 and 2010 is indisputable.
But even giving Democrats that information did not increase the accuracy of their responses. Ramirez’s study asked some participants the following question: “The U.S. Bureau of Labor Statistics shows unemployment has increased by 4.6 percent since 2008. Would you say that the level of unemployment in this country has gotten better, stayed the same, or gotten worse since Barack Obama was elected President?”
Clearly, the answer is in the sentence that immediately precedes the question. But the mention of Obama launched a partisan mental process that led many astray: Nearly 60 percent of Democrats said unemployment had lessened since Obama’s election.
Essentially, once Democrats focused on Obama, most of them largely ignored the facts. (About 80 percent of Republicans got the answer right when it was spoon-fed to them, but Republicans tempted to cry victory should be cautioned that researchers have found them to be similarly off base in assessing the economy when one of their own is in the Oval Office.)
Ramirez’s experiment also reveals that our biases don’t completely blind us to information. When he gave Democrats the correct unemployment statistics, it did not change their answers, but it did make them less confident in those responses, as reported in a post-test questionnaire. “It tells me that people might actually be processing the information in an unbiased way,” Ramirez says.
The question, then, is how to amplify that unbiased processing to overcome the partisan blindness.
Brendan Nyhan knows just how hard it is to move that mental needle.
“I had the dream of, if we give people the right information, it’ll make a difference,” says Nyhan, a political scientist at Dartmouth and a contributor to The New York Times’s The Upshot.
But after 15 years of throwing facts in people’s faces, Nyhan has found the matter to be much more complicated. In the early 2000s, he cofounded the fact-checking website Spinsanity to combat the “he said, she said” coverage he saw in the media. “I’m very proud of the work we did, but it did illustrate how hard it was to change people’s minds, even among the select group of people who were willing to take the time to read a nonpartisan fact-checking website,” Nyhan says.
More recently, Nyhan attempted to debunk an argument that is growing in popularity but utterly lacking in scientific support: that parents shouldn’t have their children vaccinated.
Nyhan and his collaborators wanted to convince parents who were against vaccinations that their opposition was unfounded. Working with a large sample of 1,759 parents, the team sent them a variety of material, including pamphlets that explained the lack of evidence linking vaccinations with autism, explanations of the dangers of measles, photos of sick children whose diseases could have been prevented, and a story about an infant who almost died from infection. Some were appeals to pure reason; some were appeals to pure emotion.
Nothing worked. One of the interventions—the pamphlet explaining the lack of evidence—actually made anti-vaccination parents even less inclined to vaccinate. “Some of the conclusions of that research people find pretty depressing,” Nyhan says. “Myself included.”
In another study, Nyhan wanted to see if he could find a real-world way to press actual politicians to be better handlers of the facts. In the months leading up to the 2012 election, Nyhan and coauthor Jason Reifler performed an experiment on 1,169 unwitting state legislators. They wanted to see if fact checks could motivate the politicians to be more truthful. A third of the legislators received a letter that contained a veiled threat. It read: “Politicians who lie put their reputations and careers at risk, but only when those lies are exposed.” The letter then reminded the politicians that PolitiFact, a fact-checking group, operated in their state. The letter clearly implied, “PolitiFact will be watching you.” Another third of the lawmakers received a letter that excluded references to fact checking. The last third received no letter.
Throughout the election cycle, Nyhan and Reifler logged the politicians’ PolitiFact ratings (from “true” to “pants on fire”). They also had a research assistant comb through the media coverage of each legislator, searching for critical stories. The results, pending publication in the American Journal of Political Science, were limited but promising. Overall, only a few legislators—27 out of 1,169—were called out on lies. But of those 27, only five had received the threatening letter—fewer than the nine that pure chance would predict, given that a third of the legislators got one. That’s reason enough to research the idea further. “This study was a first step,” Nyhan says.
“Human psychology isn’t going to change,” he says. “The factors that make people vulnerable to misinformation aren’t going to change. But the incentives facing elites can change, and we can design institutions that function better or worse under polarization and that do a better or worse job at providing incentives to make accurate statements.”
There’s an easier way to help people look past their innate partisanship: Pay them to do it.
A 2013 study out of Princeton found that monetary incentives attenuate the partisan gap in answers to questions about the economy. The researchers designed an experiment similar to Ramirez’s unemployment study but with a modification: Some participants were plainly informed, “We will pay you for answering correctly.” All it took was $1 or $2 to dramatically improve the chances of a right answer, cutting the partisan gap between Republicans and Democrats in half—half!
Of course, a mass “pay Americans to pay more attention to facts” campaign isn’t happening. So how do we get people to be more objective without throwing money at them?
Jimmy Carter discovered one answer during the 1978 peace negotiations between Egyptian President Anwar Sadat and Israeli Prime Minister Menachem Begin. The talks were on the brink of collapsing in their final hours, and the prime minister was prepared to walk. That’s when Carter directed his secretary to find out all the names of Begin’s grandchildren. Carter autographed photos for them and personally gave them to the Israeli leader. “He had taken a blood oath that he would never dismantle an Israeli settlement,” Carter later recalled in an interview. “He looked at those eight photographs and tears began to run down his cheeks—and mine—as he read the names.”
A few minutes later, Begin was back at the negotiating table. By appealing to a nonpolitical idea Begin cared about—his family—Carter was able to bring him to a place where he could bend.
The technique works even when world peace isn’t on the line. Kevin Binning, a University of Pittsburgh psychologist, used it to reshape the way partisans reacted to a 2008 presidential debate.
Just two days before the election, Binning assembled 110 self-identified Republicans and Democrats—60 Rs and 50 Ds—to watch a recording of a recent debate between Obama and Republican nominee John McCain. Before they viewed the debate, however, one group of participants was given a list of nonpolitical values such as “social skills” and “creativity,” and then asked to write briefly about an instance when their own behavior had embodied one of those values. (The other group also wrote about nonpolitical values, but they were asked to write about how those might be important to other people, not about their personal experiences.)
By having one group write about personal, nonpolitical experiences, Binning wanted to get participants thinking of themselves as individuals rather than partisans. The idea was that affirming this human identity would make people feel more receptive to ideas that didn’t align with their worldview.
It worked. When Binning asked the participants to judge the candidates’ performances, members of that group were more likely than those in the other to give a favorable rating to the opposition candidate.
“It’s not like all of a sudden I say, ‘Well, yeah, McCain actually won the debate,’ ” he explains, “but we might say, ‘Well, yeah, Obama, I think he did have some good points, but McCain may have had some other good points as well. I don’t need to just blindly embrace Obama.’ ”
Which seems like the ideal way to converse about politics, right? And it wasn’t a one-time effect. Ten days after the election, Binning asked the Republicans in the group what type of president they thought Obama would be. Those who had been part of the group that wrote personally about nonpolitical values before watching the debate were significantly more optimistic about the Obama presidency.
So how might we persuade people to set aside their blind partisanship in other contexts? Let’s start with a forum in which the stakes are infinitely lower than at the Middle East peace talks but where the partisan vitriol runs every bit as high: Internet comment sections.
Comment sections bring out the worst in partisan thinking: ad hominem attacks, people who clearly will not be convinced of the other side, and stubborn arguments where users talk past one another, not with each other. But maybe the structure of comment sections, rather than the people doing the commenting, has turned them into such intellectual sewers—and maybe a tweak or two at the margins could clean them up.
“You can think of comment sections as mini-institutions,” Nyhan says. “It’s a context in which debate is happening, and if we can help people be more civil toward each other, that might be a positive step.”
Talia Stroud is trying to take that step. As the director of the Engaging News Project at the University of Texas at Austin, she leads a research group with the goal of making the Internet a more civil place for politics. “It’s unbelievably difficult,” she says.
One way to start, her research suggests, is to reevaluate the “like” button, a common feature on comment threads. In the context of a political-news article, “liking” a comment or a post could activate us-versus-them thinking. “Liking” something means you associate with it. It reminds people of their partisanship. “So we did a study where we manipulated whether it was a ‘like’ button or a ‘respect’ button,” Stroud says. She found that people were more willing to express “respect” for arguments that ran counter to their own.
It’s “not ‘I like what you’re saying’ but ‘I respect it’ even though I might not agree with you,” she says. “That showed some of the power of really small things and changes that could be easily implemented.”
A month of speaking to scientists about the political brain produced no shortage of depressing conclusions. Their research reveals our brains to be frustratingly inept at rational, objective political discourse. And those revelations come at a time when elected officials have strong incentives to stay the partisan course, and when the people who elect those officials are increasingly getting their political news through sources pre-tailored to reinforce their opinions.
But the research is more than just another explanation for our current partisan morass. On balance, it offers a better case for optimism—about Congress, about voters, about your outspoken extremist uncle at Thanksgiving, and about the power of reason in democracy. Because the research is also revealing that our brains, while imperfect, are surprisingly flexible, and that they can be nudged in a better direction. Yes, we wall ourselves off from unappealing truths. But when motivated—by money, by the right environment, by an affirmed sense of self, by institutions that value truth and civility—those walls come down.
Outside of the laboratory, people are putting that research into practice, developing civic forums with our mental shortcomings in mind.
After a dispute over a coal plant divided Tallahassee, Florida, into furiously partisan camps, Allan Katz, then a city commissioner, decided he’d had enough. “It was very nasty, it was very contentious, it was very personal,” Katz recalls of the 2006 debates. “Facts didn’t matter.”
Katz, who is also a former U.S. ambassador to Portugal, joined with other community members to create the Village Square, which hosts events where the public is invited to discuss ongoing issues with experts and activists. Incivility and non-truths are not tolerated. During debates, the Village Square employs fact checkers to keep people in line. “So people couldn’t make shit up,” Katz says. There’s also a civility bell: If people start yelling, the bell is rung to remind them of their better nature.
For the first meeting, 175 people showed up. Now the Village Square is running 20 programs a year in Tallahassee, and it has expanded into St. Petersburg, Kansas City, and Sacramento. In Tallahassee, city officials ask the Village Square to host public forums on divisive issues.
“You’re not trying to turn liberals into conservatives or vice versa,” Katz says. “But the only way to get people to see the other point of view, even if they don’t agree with it, is to do it in person.”
Katz and his fellow organizers are relying on people finding a common humanity, and in doing so, they are playing to one of the brain’s great strengths: The same tribal cognitive processes that make it easy to turn people against one another can also be harnessed to bring them together.
When people consider themselves to be part of the same team, be it as Village Square participants, as fellow Americans, or even—one might dream—as fellow members of Congress, they do a much better job of dropping their combative stance and processing the world through a less partisan lens.
And we make those identity jumps all the time, as our brains are wired to let us do.
Sometimes, in the middle of his red team/blue team exercise, Van Bavel will switch a participant from one group to the other. “We say, ‘Listen, there’s been a mistake, you’re actually on the other team,’ ” he says. “And the moment we do, we completely reverse their empathy. Suddenly, they care about everybody who is in their new in-group.”
Suddenly, they see the other side.