10° 40' N, 61° 30' W

Sunday, November 24, 2002

Nick thought I was taking a shot at him with my brief post on risk. I wasn't, nor was I talking about any specific policy action. All I was saying is that people react differently to different risks of similar probability, irrespective of whether anything can be done about them.

Standard expected utility theory, formulated by John von Neumann and Oskar Morgenstern in the 1940s, assumes that people are consistent in their preferences, in that they are expected-utility maximizers. A risk-averse person (one who fears a loss more than they value a gain) should therefore be consistent in their desire to avoid risks of similar magnitude. It did not take long for this hypothesis to break down, though. Nobel laureate Maurice Allais published his famous paradox in 1953. Allais conducted a series of experiments showing that risk-averse individuals often choose a gain in a second decision round that is inconsistent with the choice they made in the first round, even though the expected utilities or outcomes are the same. Daniel Ellsberg in 1961 showed that decision-makers give events with 'known' probabilities a higher weight in their evaluation of outcomes, even when the 'unknown' outcome has a higher expected value. Part of this year's Nobel Prize recognises Daniel Kahneman for his work (together with the late Amos Tversky) in providing a more realistic framework for how people approach uncertain situations.
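The inconsistency Allais found is easy to see in numbers. Here is a minimal sketch using the standard textbook version of his two choice problems (the payoffs and probabilities are the usual illustration, not figures from this post):

```python
# Sketch of the Allais paradox, textbook version: two pairs of
# lotteries, each given as a list of (probability, payoff) pairs.

def expected_value(lottery):
    """Expected monetary value of a lottery [(probability, payoff), ...]."""
    return sum(p * x for p, x in lottery)

# Round 1: a sure $1M versus a gamble with a higher expected value.
gamble_1a = [(1.00, 1_000_000)]
gamble_1b = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]

# Round 2: the same two lotteries with a common 0.89 chance of $1M
# stripped from both sides -- under expected utility theory this
# common component should not change the preference.
gamble_2a = [(0.11, 1_000_000), (0.89, 0)]
gamble_2b = [(0.10, 5_000_000), (0.90, 0)]

for name, g in [("1A", gamble_1a), ("1B", gamble_1b),
                ("2A", gamble_2a), ("2B", gamble_2b)]:
    print(name, round(expected_value(g)))

# Most subjects pick 1A (the certain $1M) in round 1 but 2B (the
# bigger prize) in round 2. Since the two rounds differ only by that
# common 0.89 chance of $1M, no single utility function over money
# can rationalise both choices at once -- that is the paradox.
```

The point is not that people are bad at arithmetic: 1B and 2B both have the higher expected value, yet the certainty of 1A flips the first choice while the near-identical second choice goes the other way.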

Why does von Neumann-Morgenstern expected utility theory survive, though? Mainly because of its sheer predictive power--you can model with it pretty decently, whereas most of the paradoxes are based on small-scale experiments that are difficult to scale up to the level of a functioning market, let alone a macroeconomy. It's not perfect; read D-Squared if you want a description of why most current models don't actually model the future, but really flatten it down into the present--you're dynamically modelling what you think of the future rather than the future itself. For most economic applications, though, expected utility works pretty well. Consistency is also a sounder basis for policy. "Non-rational" models help explain how people behave, and can give an insight into how people respond to policies, allowing those policies to be tailored more appropriately to their desired outcomes, especially if you're trying to influence behaviour. Models of inconsistency, though, are not a good reference for how policymakers should make decisions--if anything, they are simply a reminder that decisions should be made rationally.

The example I used about pesticide use was grabbed out of the air, but not entirely so; I had in mind the campaign of the World Wide Fund for Nature, among others, to ban DDT. That DDT has had a harmful environmental impact on animals, especially birds of prey, is undisputed, and in large part has to do with the indiscriminate use of the organochlorine in open areas. Its impact on humans, however, is questionable at best, though the WWF cites this as a major part of its call for a total ban. Several scientists maintained that DDT, while it should be banned in agriculture, should be retained for use in malaria control, where no other control method is nearly as effective. In 1999 a group of doctors and scientists successfully petitioned diplomats to that effect.

The WWF was prepared to put up with uncertainty about an alternative means of malaria control, but not the uncertainty of continued DDT use. According to Cliff Curtis, director of WWF's global toxics initiative, "[the WWF] have called for a phase-out of the use of DDT by 2007 . . . and that remains our position. The cause of finding alternatives to DDT, and getting committed funding for that work, will be far better served by establishing a deadline for a phase-out." The malaria activists, on the other hand, thought that public health benefits could, on balance, be best served by the certain, continued use of DDT until the uncertain arrival of a proven, effective alternative.

Juan Gato makes this point:

[T]he story of DDT illustrates the trade-offs that are inherent in most environmental policy questions. Pesticide use (or overuse) can cause environmental harms, such as the decline of bird species due to DDT. The prohibition of pesticide use can mean the loss of habitat or, in the case of DDT, a resurgence of malaria. It is not clear to me why good environmentalists must be more concerned about the former than the latter.

The simple act of banning pesticides, like all other policy decisions, is not cost-free, which is the original point of Taylor and VanDoren's article. The 1-in-1-million example of risk that I used is actually quite low. Motorcyclists face a far higher probability of death or serious injury, yet many are prepared to live with the risk because they enjoy the freedom of the open road. People live on flood plains, in earthquake zones, at the foot of volcanoes, in tornado areas or close to fire-prone forests, fully aware of the risks they are facing. (To be fair, having good insurance helps; if you know you're going to be well compensated, there's little incentive to live carefully. Models of this are part of the reason why Joseph Stiglitz shared the Nobel Prize last year. That, however, is another post.) Policymakers, and those who would shape policy, would do a lot better to keep risk, and people's heuristics regarding it, in mind.