We often help others. In most cases we help friends, family members, or colleagues. However, in some situations, we also help people with whom we have no connection whatsoever, for example, when we give our change to a homeless person on the street, or when we donate part of our salary to a humanitarian organisation. Although it is not very common, there are also well-documented cases of people putting their lives at risk to save strangers, or even animals, like the man in California who jumped into a wildfire to save a rabbit.
This is quite strange, right? Helping random people, or even animals, does not bring us any obvious direct or indirect benefit and thus it seems to go against the classic academic assumption that helping behaviours evolved because they provided benefits to the helper. For example, if I help my friend today they will help me tomorrow. It is not surprising, then, that understanding helping behaviour towards strangers has become one of the greatest challenges in social science research.
How can we explain it? The starting point is to turn to laboratory experiments using simple economic problems that are meant to model the essence of helping behaviour with a unit of measurement that is as objective as possible; most often, money. The most widely used decision problem is the ‘Dictator Game’. Here, a person, the dictator, is given a certain amount of money and is asked how to split it with an anonymous stranger, the recipient, who is given nothing. The recipient makes no choice and simply receives whatever the dictator decides to give them. Clearly, a purely self-interested dictator would give nothing to the recipient, because giving has neither direct nor indirect positive consequences for the dictator. However, mirroring what we see in everyday life, empirical research has repeatedly shown that a significant proportion of dictators give part of their money to the stranger. Why so?
To explain these results, behavioural scientists have typically turned to something called ‘social preferences’. In the case of giving money in the dictator game, a useful explanation is ‘inequity aversion’, a type of social preference. Inequity aversion assumes that people experience psychological disutility (discomfort) from economic inequalities. To avoid this discomfort, some people donate part of their money. This might explain why a significant proportion of dictators give money to anonymous recipients.
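Inequity aversion has a standard formalisation, the Fehr–Schmidt utility model, in which a person's utility is their own payoff minus a penalty (weighted by a parameter alpha) for earning less than the other person and a penalty (weighted by beta) for earning more. As a rough sketch of how this predicts dictator giving (the parameter values below are purely illustrative and not taken from the studies discussed here), an inequity-averse dictator gives half the endowment when the guilt parameter beta is large enough, and nothing otherwise:

```python
def fehr_schmidt_utility(own, other, alpha, beta):
    """Fehr-Schmidt utility for a two-player allocation:
    own payoff, minus alpha times disadvantageous inequality
    (the other earns more), minus beta times advantageous
    inequality (the other earns less)."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def best_gift(endowment, alpha, beta):
    """Gift g (in whole units) that maximises the dictator's
    inequity-adjusted utility: the dictator keeps endowment - g
    and the recipient gets g."""
    return max(range(endowment + 1),
               key=lambda g: fehr_schmidt_utility(endowment - g, g, alpha, beta))

# Illustrative parameters: a dictator who feels strong guilt from
# being ahead (beta = 0.7) shares half; one with weak guilt
# (beta = 0.2) keeps everything.
print(best_gift(10, 0.5, 0.7))  # 5
print(best_gift(10, 0.5, 0.2))  # 0
```

In this model, donations never exceed an equal split: giving beyond half both reduces the dictator's payoff and creates disadvantageous inequality, so the prediction is all-or-half depending on beta.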
Is this the whole story? Do people care only about the economic consequences of their actions? Or are there people who care also about doing the right thing, independently of the economic consequences?
In a recent series of papers, my collaborators and I have shown that people do indeed seem to be motivated also, and in fact mainly, by reasons beyond the economic consequences of their actions. Specifically, dictators seem to donate to recipients not because they are motivated by minimising the inequity between themselves and the recipient, but because they believe that sharing their money is the morally right thing to do.
This point is illustrated in a paper co-written with Dave Rand at MIT, published in January 2018 in the academic journal Judgment and Decision Making. In that paper, we introduced a ‘trade-off game’, a decision problem that helps distinguish people with social preferences for minimising economic inequalities from people with moral preferences for doing the right thing, regardless of its economic consequences. Comparing the average dictator game giving of inequity-averse participants with that of moral participants, we found that the latter donate significantly more than the former, providing clean evidence that giving is mainly driven by moral preferences for doing the right thing.
In subsequent work conducted with Ben Tappin at Royal Holloway, University of London, published in June 2018 in the Journal of Experimental Social Psychology, we replicated this result and extended it by showing that, when giving money in the dictator game, preferences to do the right thing are as strong as preferences to avoid doing the wrong thing.
Together, these studies provide robust evidence that helping behaviour in the laboratory is not driven by social preferences for minimising inequities per se, but is mainly driven by moral preferences for doing the right thing.
Of course, this finding raises a number of important questions that should be explored in further research.
For example, thus far we have focused on laboratory behaviour involving relatively small amounts of money. Exploring whether morality preferences extend to real-world behaviour and/or situations in which the stakes are much higher (think of the Californian man who jumped into the fire to save a rabbit) is an important direction for future work. In these cases, it is difficult to make predictions. On the one hand, larger stakes might make people care more about the economic consequences of their actions; on the other hand, there might be a subset of people for whom moral preferences are relatively stake-independent (perhaps among deontologists, i.e., people for whom the rightness or wrongness of an action does not depend on its consequences, but only on whether it instantiates or violates certain moral norms and duties).
Another important question concerns the path through which people construct moral judgements in a given context. What are the cues that lead people to conclude that one action, among all the available ones, is the morally right one? This is likely to be a multidimensional question. In a working paper co-written with Andrea Vanzo, a linguist at Heriot-Watt University in Edinburgh, we have observed that one dimension is certainly important: the language used to describe the available actions.
We have shown that simply changing one word in Dictator Game-like instructions can dramatically change people’s behaviour. For example, people are less likely to ‘steal’ money from another participant than to ‘take’ money from another participant, although stealing and taking, in the given contexts, have the same economic consequences. It seems, then, that people use the language describing the available actions to infer the moral qualities of those actions.
In sum, this line of research provides evidence that helping behaviour in the laboratory has little to do with minimising economic inequalities; it is mainly driven by moral preferences for doing the right thing. Exploring the boundary conditions of this morality preference hypothesis and studying how people build moral judgements are fundamental directions for future research.