FIFTY YEARS ago this month one of the most famous studies in the history of psychology was conducted, the Stanford prison experiment. Some 24 young men were randomly assigned the roles of prisoner or guard and consigned to a simulated prison. According to the textbook account, the guards very quickly became brutal, the prisoners became passive and the result was so toxic that the study—scheduled for two weeks—had to be abruptly halted after only six days.
Led by Philip Zimbardo and funded by the American navy, the study was meant to investigate tensions between guards and prisoners. It gained notoriety because it spoke to the defining question of the times, a quarter-century after the Holocaust and at the height of the Vietnam War: what makes humans act inhumanely to each other? After 1945, the view was that those who commit atrocities must have particular personalities that set them apart from the rest of us. But that idea was challenged by two events a decade before the Stanford study.
In 1961 Hannah Arendt reported on the trial in Jerusalem of Adolf Eichmann, the Nazi functionary in charge of deportations to the death camps, observing that he acted out of ordinary motives (thus coining the phrase the “banality of evil”). In the same year, Stanley Milgram, a psychologist at Yale, ran his infamous study on obedience. Participants were told to apply electric shocks to someone in another room each time that person erred in a memory task. The shocks weren’t real, nor were the shrieks of pain—but the participants didn’t know that, and many continued administering the volts under the stern glare of a research scientist, an authority figure.
The two events seemed to show that perpetrators of brutality were not inherent monsters after all but regular people. The implication was that in certain circumstances, any one of us could act monstrously. The Stanford prison study, which ran from August 15th to 21st 1971, seemed to take Milgram’s point a step further. It appeared to show that an authority figure wasn’t even needed to stand over people to get them to act inhumanely. As Dr Zimbardo and his colleagues famously concluded: aggression was “emitted simply as a ‘natural’ consequence of being in the uniform of a ‘guard’ and asserting the power inherent in that role”.
This is troubling. The interpretation not only ignores the role of authority figures in producing toxic behaviour, but also absolves the perpetrators of blame. It’s just what we do. We can’t be held responsible.
Such a conclusion is not an extrapolation from Dr Zimbardo’s argument: it is his argument. After prisoner abuse at Abu Ghraib in Iraq came to light, Dr Zimbardo acted as an expert witness for the defence. He argued that the abusive guards, just like the young men in his Stanford study, were more victims than perpetrators—helplessly succumbing to toxicity in a toxic environment.
The judge rejected those arguments. Apart from anything else, the abuse was only revealed because other guards, appalled, leaked photos of sexual humiliation and mock executions. So much for the inevitability of “natural” guard aggression.
In fact, a close inspection of the archive of Dr Zimbardo’s study, housed at Stanford University, shows that not all the guards were aggressive and not all the prisoners were passive. If anything, guard brutality was the exception rather than the rule. Moreover, the researchers themselves may have played a role in producing the toxic environment. Dr Zimbardo briefed his guards before the study began, telling them: “You can create in the prisoners feelings of boredom, a sense of fear to some degree, you can create a notion of arbitrariness that their life is totally controlled by us,” and concluding: “We’re going to take away their individuality in various ways.”
Over the years, the importance of leadership in producing toxicity has become clearer, thanks in particular to the opening of the archive. We now know that the experimenters spoke to guards about how to produce a sense of fear and arbitrariness in ‘prisoners’, and that those who did so were praised while more reticent guards were admonished and told to act more aggressively. After the study finished, one of the guards wrote to Dr Zimbardo about one of his fellow experimenters: “Through the experiment he gave us very good sado-creative ideas”. In short, the study does not reveal natural brutality, as it is often depicted, but how brutality is mobilised by others.
As an illustration of such encouragement, Alex Haslam, Jay Van Bavel, and I analysed a crucial interaction between one of the experimenters, David Jaffe, who acted as the prison warden, and John Mark, a reluctant guard. Mr Jaffe stressed that the aim of the study was progressive—to reveal the toxicity of prison systems—an aim that could be achieved only if the guards produced a toxic system. Tellingly, Mr Mark refused to take the bait. This underscores that those who did the bidding of the experimenters made an active choice. The leaders appear to have encouraged the toxic environment in service of a cause. The followers who acted cruelly chose to identify with the experimenters and knowingly (and creatively) to use cruelty in the service of that cause.
For years Dr Zimbardo has been confronted with scholarship arguing that the experimenters influenced the guards’ behaviour and has consistently dismissed it. Contacted by The Economist this week, he again rejected the idea: “There was zero influence of staff on guards to be brutal or harsh in any way,” he said.
Yet evidence suggests that experimenters had an effect. In social psychology, “identity leadership” is the process of achieving influence over a group by defining who they are and what they value. The researchers sought to persuade the participants that they were involved in important work of value to human progress. The idea is to appeal to the higher motives of individuals to generate base acts. In our research, Dr Haslam and I have shown that the more people identify with the experimenter and their cause, the more likely they are to follow their shocking instructions to the bitter end.
On the 50th anniversary of the Stanford prison experiment, there are two clear lessons. The first is to reject narratives that render hate and harm as something natural or inevitable. Collective hate and harm rarely arise unsolicited from the human psyche; characteristically, they are mobilised. For example, anti-immigrant sentiment is generated by those who label incomers not as people like us but as aliens, as dangerous, as a threat to our prosperity, safety and even our identity. This is why anti-immigrant sentiment is notably higher around elections, when the political rhetoric is most inflamed.
Narratives of the inevitability of hatefulness are less an explanation than an excuse. What is more, they do not only excuse the hate-spewers themselves, but they can also undermine the resolve of bystanders to stop it.
The second lesson is that those who perpetrate harm not only know what they are doing but believe in what they are doing. The crucial questions are therefore these: how can people come to see inhumanity as doing the right thing? And what ideas and ideologies can sustain such an inversion of decency? Claudia Koonz makes this point in “The Nazi Conscience” (Belknap, 2003), arguing that, abhorrent as the idea is to entertain from the outside, the hold of Nazism can be understood only from the inside, as a moral project.
When we look at other toxic phenomena, from the Stalinist terror to white-supremacist violence in America today, we likewise find that they are promoted and embraced as the defence of a good society against internal and external threats. A danger sign is when a community develops (and its leaders proclaim) an absolutist sense of right and wrong, in which one side sees itself as the sum of all good, so that eliminating those who threaten it becomes a defence of the good.
The Stanford prison experiment is a landmark in psychology research. A Hollywood film immortalises it and a bronze plaque marks the site where it took place. But its lessons need to be rewritten. Although we all have the capacity for brutality, that capacity must be mobilised before it is expressed—especially the collective brutality of one group against another. Leaders and perpetrators both need to be held accountable. Society needs to sanction those who produce hateful ideas, not just those who act on them.
Stephen Reicher is a professor of social psychology at the University of St Andrews.