This is the start of a series of longer posts on topics in economics. I will try to post regularly, and to remain comprehensible to those without a PhD in the subject. Think of these posts as “what I’m reading”, but covering what I find interesting in economics journals rather than my more general reading.
I start with the following article:
“Incentives and Prosocial Behavior”, by Roland Bénabou and Jean Tirole, American Economic Review, Volume 96(5), December 2006, 1652-1678.
As time passes, it becomes clearer and clearer that mainstream economics is more capable of capturing the richness of human behavior than the old “homo economicus” model suggests. Yet people still criticize economists for assuming that people act “rationally” and, what’s worse, critics also conflate “rational” action with “selfish” action.
I will try here to illustrate the progress economists have made toward better models of behavior, with a focus on prosocial behavior. This is a direct answer to the criticism of economics I’ve just stated, though I will also offer my own criticism, an insider’s job, of the techniques economists now use in this richer endeavor.
Teaser: In the next post I will look at another paper from the same issue of this journal, on “A Dual-Self Model of Impulse Control”.
On with the job, then, starting with some basic background. A “rational” individual, in economics lingo, is one with a stable ranking of all possible alternatives, which is transitive: if A is higher ranked than B, and B than C, then A is higher ranked than C. Given such a ranking, the individual looks at what alternatives are actually available, in each given choice situation, and chooses the one ranked highest. Nothing really stunning here, except the devil is in the details.
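This definition can be sketched in a few lines of code (my own toy illustration, not anything from the paper): a fixed ranking stands in for the preference ordering, and “choice” simply picks the top-ranked alternative among those available.

```python
# Toy model of a "rational" chooser: a fixed, transitive ranking
# over all alternatives, applied to whatever subset is available.
RANKING = ["A", "B", "C", "D"]  # A is preferred to B, B to C, C to D

def choose(available):
    """Return the highest-ranked alternative among those available."""
    return min(available, key=RANKING.index)

# Transitivity is built in: since A outranks B and B outranks C,
# A is chosen whenever it is available.
print(choose({"B", "C", "D"}))  # → "B"
print(choose({"A", "C"}))       # → "A"
```

The list-based ranking makes transitivity automatic; the hard part of the theory, as the next paragraph notes, is saying what the alternatives actually are.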
What details? Well, what exactly *is* an alternative? It depends on the context of the choice for which the chooser uses this preference ranking. If the choice is among insurance contracts, for instance, it demands rather a lot of our individual to envision, let alone stably rank, all the contracts she might encounter in the marketplace; a stable ranking over such abstract objects is even harder to imagine.
Faced with such complexities, the usual textbook treatment limits choices to bundles of goods an individual can afford to buy. From here starts a very long build-up from preference rankings to such things as “utility functions”, and the math involved soon looks forbidding to the non-economist.
The confusion between this notion of “rational” choice and selfishness probably comes from this focus in textbooks. It looks as if the theory can only handle loaves of bread, bottles of wine, and the like, and since the bread I ate you can’t eat, my preferences over the combinations of bread and wine to buy have an inevitable aura of selfishness.
But people rush into burning buildings to save others, donate to charities, give directions to strangers, and so on. People often exhibit “prosocial” behavior, which (finally!) brings me to the article at hand.
Contrary to some naive criticism of economics, assuming people have stable preference rankings says nothing about selfishness. The alternatives being ranked could involve a combination of bread, wine, volunteering in a local soup kitchen, giving to charities, and risking one’s life to save others. Until recently, it was hard to get reasonable conclusions out of models of choice with such alternatives, though. The article I’m discussing gets some interesting insights into the distinction, well-known to psychologists for decades, between intrinsic and extrinsic motivation, and the effect of this distinction on prosocial behavior. It does so by using game theory and by focusing on issues of information.
“Information?” you say. Here’s the society the authors want you to imagine. There are individuals with different values. Some are intrinsically more altruistic than others. Some value money more than others. All value their reputations, if not with others, then with their future selves. And as soon as reputation enters the picture, information issues arise.
For instance, if I make a contribution to a charity and get my statue erected in a public square as a reward, did I contribute because of my altruistic motives, which should enhance my reputation for virtue, or did I contribute because the idea of my statue stroked my ego in just the right way? The latter motive would enhance my reputation less, if at all.
In many real-life situations, extrinsic rewards (the statue) coexist with intrinsic rewards (the altruistic “warm glow” in my heart). But note that the existence of the extrinsic reward tends to water down the power of my prosocial action to enhance my reputation, as it makes it harder for others to discern which motive drove my action more.
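This dilution effect can be made concrete with a small Bayesian calculation (illustrative numbers of my own, not the authors’ model): an observer updates her belief that a contributor is altruistic, and the extrinsic reward weakens the update because it also attracts the non-altruists.

```python
def posterior_altruist(p_altruist, p_give_altruist, p_give_egoist):
    """Observer's belief that a contributor is altruistic, by Bayes' rule."""
    num = p_altruist * p_give_altruist
    den = num + (1 - p_altruist) * p_give_egoist
    return num / den

# Illustrative numbers: 30% of people are altruists.
# Without the statue, mostly altruists give, so giving is a strong signal:
no_reward = posterior_altruist(0.30, 0.90, 0.10)    # ≈ 0.79
# With the statue, egoists give too, so giving signals much less:
with_reward = posterior_altruist(0.30, 0.95, 0.60)  # ≈ 0.40

print(round(no_reward, 2), round(with_reward, 2))
```

With these made-up numbers, the reward raises everyone’s giving but roughly halves the reputational value of a gift, which is the mechanism driving the paper’s crowding-out results.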
This is the basic idea at the heart of the article. From this idea, Bénabou and Tirole get quite a few intriguing conclusions (after making a bunch of assumptions too technical to discuss here). Here are some of their conclusions.
- Extrinsic incentives (the statue in the square) can crowd out intrinsic incentives. The end result may well be that introducing extrinsic rewards actually makes people contribute less to a cause. (There is previous evidence of such an effect, for instance when the extrinsic reward was payment for giving blood.)
- Whether stigma or honor is the more important reputational concern determines whether individual prosocial actions act as substitutes or complements to each other (whether my donation makes you donate less or more to the same cause).
- Sponsor competition can reduce overall welfare. For example, all those people walking marathons for one worthy cause or another would generate more benefit for their cause, at less cost to themselves, by simply donating instead of competing in the marathon. But they compete because their cause’s sponsor competes with the sponsors of other worthy causes. From the point of view of supporting the worthy causes, this “holier than thou” competition generates only mild support while imposing a heavy cost on the donors, who are thereby locked into a zero-sum contest for social standing (to be seen as more altruistic than others).
I could go on, as the paper fascinates me and there are other interesting conclusions in it. But I will close with a bit of criticism. In order to make their model tractable, the authors use the by now unquestioned standard of game theory. This is akin to building robots that can walk by incorporating some tricky math into their CPUs. No robotics expert will claim seriously that this is how humans manage to walk, but the robot can be a decent model of human walking. Same here: you don’t have to believe that people are the sophisticated calculators of game theory to accept the authors’ model of prosocial behavior. Yet, the empirical evidence of robots actually walking is stronger than the empirical evidence that people play games in a mathematically very sophisticated way, as game theory has them do.
But I could write lots of long posts on this criticism, and I’m sure I’ve taxed your patience enough in this long post. More on economics on the next installment of “here’s what intrigued me in an economics journal”, some time relatively soon.