Friday, February 03, 2017

Precautionary approach vs fat tails: flawed politics vs solid facts

We had an exchange with AngularMan in the Greene vs Taleb thread. He basically said that it's risky to talk about fat tails because this discussion encourages the precautionary principle and the banning of nuclear energy and fossil fuels.

Well, the "fat tails" and "precautionary principle" are sometimes conflated. The most sophisticated part of the defenders of the "precautionary principle" knows something about "fat tails" which is why they may use fat tails as an argument in favor of the precautionary principle. And this justification may sometimes be legitimate.

But in full generality, these two phrases, "fat tails" and "precautionary principle", denote completely different and independent things. The differences depend on the definitions of the two concepts – and various people may use different definitions. But with the most widespread definitions, one qualitative difference is self-evident: "fat tails" are a property that may exist or be absent and whose existence may be justified by legitimate rational arguments as a "positive statement" (about what is true), while the "precautionary principle" is a legal or political principle, i.e. basically a "normative statement" (about how people should behave).




Let us be more specific. A fat-tailed distribution is a distribution \(\rho(x)\) whose decrease for \(x\to \infty\) is slower than that of a Gaussian (normal) distribution or any exponential decrease; the simplest fat-tailed distributions often behave approximately as power laws, \(\rho(x)\sim C/x^\alpha\) for \(x\to\infty\). Note that the convergence of \(\int\rho\,dx\) requires \(\alpha\gt 1\).
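
To see the difference numerically, here is a minimal Python sketch comparing the survival function \(P(X\gt x)\) of a Gaussian with that of a power-law (Pareto) distribution; the tail index \(\alpha=2\) and the unit scales are illustrative choices of mine, nothing more:

```python
# A minimal sketch comparing a Gaussian tail with a power-law (Pareto) tail.
# The parameters (tail index alpha = 2, unit scales) are illustrative
# assumptions, not anything dictated by the text.
import math

def gaussian_tail(x, sigma=1.0):
    """P(X > x) for a zero-mean normal variable, via the complementary error function."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2.0)))

def pareto_tail(x, alpha=2.0, x_min=1.0):
    """P(X > x) for a Pareto variable: (x_min/x)^alpha for x >= x_min."""
    return (x_min / x) ** alpha if x > x_min else 1.0

for x in (2, 5, 10, 20):
    print(f"x = {x:2d}   Gaussian tail = {gaussian_tail(x):.2e}   power-law tail = {pareto_tail(x):.2e}")

# The Gaussian tail dies super-exponentially while the power-law tail decreases
# only polynomially: by x = 20 the two differ by dozens of orders of magnitude.
```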

On the other hand, the precautionary principle says that individuals and authorities are obliged to assume that "a thing is dangerous and therefore banned" if no proof (or solid evidence) of the safety of "the thing" exists one way or the other. Wikipedia helpfully tells us that the European Union has adopted this crazy principle as a "statutory requirement" in entire areas of law.




Can you spot the difference? I hope that you can. Fat tails are a property of probability distributions that we may rationally discuss, while the precautionary principle is just a religious dogma that some unelected officials worship, and it can't be rationally discussed because it's stupid and because the unelected officials don't tolerate any rational thinking about these matters.

I think that with the definitions I have described, most people would wonder why "fat tails" and the "precautionary principle" have anything in common at all.

Fine, let us begin with a discussion of why the precautionary principle (or precautionary approach) as defined above is idiotic. The principle assumes that if you have two possible laws, "A" and "non A", one of them may be labeled "potentially dangerous" and it's the one that must be avoided in the absence of evidence. But laws and propositions don't come with these God-given signs. The set of all possible propositions (or possible laws or policies) can't be divided into the "positive ones" and the "negative ones". A statement "A" can't be proven to be "a priori safe" or the "right default one" or the "positive one". You can't say that "A" is a positive statement because it doesn't contain "non". After all, "A" is exactly equivalent to "non(non(A))".

Let me give you an example. A smoking ban may be adopted because it hasn't been proven that smoking in restaurants doesn't lead to the death of a whole nation. However, the safety of the smoking ban hasn't been proven, either. The ban itself may also lead to the death of a nation: the smokers will feel terrible and kill everyone else before they kill themselves. And maybe people will start to rapidly collapse after they and their ancestors have lived without the vital vitamin called nicotine for 137 years. So the precautionary principle really means that the violent non-smokers are "in charge" while the smokers are second-class citizens: the violent non-smokers may declare the smokers dangerous, and the application of the precautionary principle means that the smokers may be suppressed. But there's no logical justification that it has to be like that. The smokers could also be "in charge" and declare all the violent non-smokers dangerous.

The precautionary principle is nothing other than a regulation that places one class of citizens above the others, and it's generally assumed that everyone knows which part of the citizenry is the "safer" one that must be "in charge": the lazy people who aren't doing anything creative or anything that would make them deviate from the most average people, Luddites, environmentalists, and similar folks. The precautionary principle has been largely adopted or hijacked by movements that we generally consider left-wing, which is why the "precautionary principle" says nothing else than that the left-wingers and NGOs should be in charge. And the "precautionary principle" becomes much more subtle when we talk e.g. about mass immigration. Mass immigration obviously carries some significant risks and it is an analogous – except much more justified – example of a situation in which the precautionary principle could be used. But it's not being used there, because everyone knows that the "precautionary principle" should always be a tool to support the left-wing ideologies and the people paid by George Soros, among related filth.

So the precautionary principle is nothing other than dishonesty, a deliberately introduced asymmetry in thinking. It can't be consistently applied to questions about policies. In fact, we may say that it's logically self-contradictory; the proof is analogous to other proofs of various incarnations of the "liar paradox".
The question is whether we may prove that the precautionary principle allows society and mankind to survive. There's no proof that society may survive with it, so according to the precautionary principle, the precautionary principle must be banned! ;-)
OK, while the precautionary principle – as defined by Wikipedia or the EU – is self-evidently a dishonest and irrational distortion of rational thinking, a fat tail is meant to be something else, namely a property of a statistical distribution that may be fully justifiable or provable in many cases.

Rational decisions, in policymaking and elsewhere, are based on cost-benefit analysis. Imagine that you're deciding whether you should adopt a new law, "A", or keep the current status "non A". (In general, we may be comparing more options.) We don't know exactly what will happen given "A" or "non A". The consequences may be good (positive) or bad (negative). Imagine that the possible scenarios are labeled by parameters \(\lambda_i\) and we quantify them by the overall well-being \(W(\lambda_i)\) that we evaluate in some way.

The rational decision whether to adopt "A" or keep "non A" then follows from the comparison of expectation values. We compute\[

\langle W\rangle_A = \int d^n\lambda\,\rho(\lambda_i)\, W_A(\lambda_i)

\] where \(\rho(\lambda_i)\) is the probability density that the parameters have values around \(\lambda_i\). The probability distribution is normalized so that the integral above equals one if \(W_A(\lambda_i)\) is replaced by \(1\). OK, things are obvious and rational: if \(\langle W\rangle_A\gt\langle W\rangle_{{\rm non}\,A}\), then it's a good idea to adopt the policy "A"; otherwise it's not. (Let's ignore the "infinitely unlikely" case in which the cost-benefit analysis ends up in a tie; in that case, no rational decision may be justified.)
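
If it helps, here is a minimal Monte Carlo sketch of this decision rule. The scenario model – the two made-up well-being functions and the Gaussian \(\rho\) – is purely illustrative; only the comparison of the two expectation values comes from the argument above:

```python
# A minimal Monte Carlo sketch of the cost-benefit rule <W>_A vs <W>_nonA.
# The well-being functions and the Gaussian rho below are made-up
# illustrations; only the decision rule itself comes from the text.
import random

random.seed(0)
N = 100_000  # number of sampled scenarios lambda

def well_being_A(lam):
    """Hypothetical well-being under the new law A in scenario lam."""
    return 1.0 + 0.5 * lam

def well_being_non_A(lam):
    """Hypothetical well-being under the status quo 'non A' in scenario lam."""
    return 1.2 - 0.1 * lam

# Sample lambda from the (assumed) probability density rho: a standard normal.
samples = [random.gauss(0.0, 1.0) for _ in range(N)]

W_A = sum(well_being_A(l) for l in samples) / N
W_non_A = sum(well_being_non_A(l) for l in samples) / N

print(f"<W>_A     = {W_A:.3f}")
print(f"<W>_non_A = {W_non_A:.3f}")
print("adopt A" if W_A > W_non_A else "keep non A")
```

Note that relabeling "A" and "non A" merely swaps the two printed numbers – the recommendation is unchanged, which is the symmetry discussed in the next paragraph.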

Note that the cost-benefit analysis doesn't require you to say which law is "A" and which law is "non A". If you exchange the meaning of "A" and "non A", the expectation values get exchanged as well, and whenever the first was greater than the second, the first will be smaller than the second, and vice versa. So you will obviously end up with the same recommendations for the laws.

OK, can this have any relationship with the precautionary principle? In the cases where the precautionary principle is at least slightly justified, it's assumed that the distribution \(\rho(\lambda_i)\) isn't really known. And the well-being function \(W(\lambda_i)\) may be unknown, too. But there may still exist arguments that\[

\exists \lambda_i:\quad \rho(\lambda_i)\neq 0, \,\,W(\lambda_i)\to-\infty

\] So for some possible choice of the parameters \(\lambda_i\) – some future that can't be excluded – the well-being is minus infinity. The latter statement typically means that the whole civilization dies, or at least someone dies, etc. If the law "A" introduces some significantly nonzero risk that everything we like (e.g. mankind) will die, and this risk didn't exist for the law "non A", then it's a better idea not to adopt the law "A".
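
A minimal numerical sketch, under assumed parameters, of why such a tail matters: when the losses follow a power law with tail index \(\le 1\), the theoretical mean is infinite and the sample mean never converges – the rare catastrophic scenarios dominate \(\langle W\rangle\):

```python
# A minimal sketch of a divergent expectation value: Pareto losses with tail
# index alpha = 0.9 (an assumed value; any alpha <= 1 gives an infinite mean).
import random

random.seed(1)

def pareto_loss(alpha=0.9, x_min=1.0):
    """One Pareto-distributed loss, via inverse-transform sampling."""
    u = 1.0 - random.random()  # uniform in (0, 1]
    return x_min / u ** (1.0 / alpha)

for n in (10**3, 10**4, 10**5, 10**6):
    mean_loss = sum(pareto_loss() for _ in range(n)) / n
    print(f"n = {n:>8}   running mean loss = {mean_loss:12.1f}")

# The running means keep drifting upward instead of converging -- unlike a
# thin-tailed (e.g. Gaussian) loss, whose sample mean would settle quickly.
```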

That's a variation of the precautionary principle that is actually justified – it's justified by the cost-benefit analysis, a rational attitude to all these "should we adopt A" questions.

Again, don't forget that this is not how the "precautionary principle" is usually used. The precautionary principle is invoked even in situations in which the worst-case scenario is much less dramatic than the destruction of mankind. And it is invoked even in situations in which the risk of total destruction exists even with the law "non A" and no one can actually show that \(\langle W\rangle\) would get worse under the law "A".

In other words, the "precautionary principle" may sometimes mean a policy that may be shown to be wise – and "refined versions" of this argumentation exist. But much more likely, it is applied as a policy to distort the behavior in a way that cannot be justified at all. The principle is used as an illegitimate tool to strengthen the power of a predetermined "winner". These two levels of the precautionary principle are often being conflated. In some cases, this kind of reasoning looks OK, so the whole public is often being brainwashed and led into thinking that the general precautionary approach is always wise or safer. Except that it is not.

So far, I've mentioned that cost-benefit analysis is the rational way to decide whether it's a good idea to adopt "A". Sometimes it may justify the precautionary principle, but in most cases when people refer to the principle, the cost-benefit analysis doesn't justify it. For anyone who understands these things – what it means to think rationally in the presence of uncertainty – the rest is all about examples. Is the usage of the precautionary principle or of warnings about fat tails legitimate in one particular situation or another?

Aside from the situations in which people respond relatively rationally, I could find examples in which people ignore "fat tails" even though they shouldn't. And on the contrary, they sometimes mention them even where they don't help, either because the tails don't exist or because they're not fat enough.

AngularMan stated that "fat tails" justify the ban on nuclear energy or fossil fuels. I don't think so. There's no plausible way of getting "globally destructive" or even "huge" losses from either of them. Chernobyl was bad enough but it killed some 50 people directly, the indirect later deaths are at most in the few thousands, and the direct losses were $15 billion while the indirect ones were some $250 billion over the subsequent 30 years.

Just in the U.S., nuclear energy produced some 800 terawatt-hours in 2015. Kilo, mega, giga, tera: you see that it's 800 billion kilowatt-hours. Count at least $0.12 per kilowatt-hour and you will see that nuclear energy has revenues of roughly $100 billion a year just in the U.S. A sizable fraction of that is profit. No doubt, the damages of Chernobyl have been repaid. And Chernobyl was really a worst-case scenario; you should expect a future accident – one will materialize at some point – to be much less harmful. In many cases, one may give a near-proof that things won't be as bad as in Chernobyl.
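
For the skeptics, here is the trivial arithmetic behind these figures (the $0.12 per kilowatt-hour price is my rough assumption, as stated above):

```python
# A quick check of the revenue estimate above, using the figures quoted in
# the text; the $0.12/kWh price is a rough assumption.
us_nuclear_generation_kwh = 800e9   # 800 TWh = 800 billion kWh
price_per_kwh = 0.12                # dollars, assumed
revenue = us_nuclear_generation_kwh * price_per_kwh
print(f"annual U.S. nuclear revenue ~ ${revenue / 1e9:.0f} billion")  # ~ $96 billion

chernobyl_losses = 15e9 + 250e9     # direct + indirect losses, dollars
print(f"Chernobyl losses ~ {chernobyl_losses / revenue:.1f} years of that revenue")
```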

Fossil fuels won't destroy the world, either. They won't destroy it directly, and they can't destroy it indirectly, e.g. through global warming, either. The effect of CO2 on the temperature is quantified by the "climate sensitivity" \(\Delta T\), the warming per doubling of the CO2 concentration. Its value isn't known too accurately – if it is known at all. The simplest feedback-free calculation gives about \(\Delta T\sim 1.2\,{\rm K}\). The IPCC says that this figure gets approximately doubled by positive feedbacks, so \(\Delta T\sim 2\,{\rm K}\).

Two degrees of warming (and even 10 degrees, if you could get them in some way) won't lead to the end of the world or of mankind, of course, which is why the assumption of the precautionary principle – that you need an "infinite destruction" to make the argument valid – isn't obeyed. All conceivable consequences have thin tails.

The climate sensitivity itself is unknown and you could suggest that the probability distribution \(\rho(\Delta T)\) has a fat tail. Does it?

If you only used some of the evidence – more precisely, one particular theoretical method to calculate the distribution for \(\Delta T\) while ignoring everything else – you could conclude that the climate sensitivity has a fat tail. Why? Because we may write \(\Delta T\) in terms of the feedback-free part \(\Delta T_0\) and the feedback coefficient \(f\):\[

\Delta T = \frac{\Delta T_0}{1-f}

\] The factor \(1/(1-f)=1+f+f^2+\dots\) may be visualized as this geometric series: the sum of the "correction \(f\)", the "correction arising from the correction", and so on. When \(f\lt 0\), we talk about negative feedbacks and \(\Delta T\lt \Delta T_0\). When \(1\gt f\gt 0\), the net feedbacks are positive and \(\Delta T\gt \Delta T_0\). When \(f\gt 1\), it's even worse because the geometric series is divergent (although formally, the sum is negative), and what you get is runaway behavior: the deviation of the temperature from the equilibrium grows exponentially for some time, before this effective description breaks down.
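
Here is a minimal sketch of this formula, using the feedback-free value \(\Delta T_0 = 1.2\,{\rm K}\) quoted above (the sample values of \(f\) are illustrative choices of mine):

```python
# A minimal sketch of Delta_T = Delta_T0 / (1 - f), with the feedback-free
# value Delta_T0 = 1.2 K quoted above; the sample f values are illustrative.
DELTA_T0 = 1.2  # K per CO2 doubling, no feedbacks

def sensitivity(f):
    """Climate sensitivity with net feedback coefficient f (valid for f < 1)."""
    if f >= 1.0:
        raise ValueError("f >= 1 means runaway behavior; the formula no longer applies")
    return DELTA_T0 / (1.0 - f)

for f in (-0.5, 0.0, 0.4, 0.9, 0.99):
    print(f"f = {f:5.2f}   ->   Delta_T = {sensitivity(f):7.1f} K")

# Delta_T blows up as f -> 1: f = 0.4 roughly reproduces the IPCC-like ~2 K,
# while f = 0.99 would already give 120 K.
```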

If \(f\) has a distribution with a nonzero probability of lying between \(0.99\) and \(1.01\), for example, then \(1/(1-f)\), and therefore \(\Delta T\), has a distribution with a fat tail for \(\Delta T\to \infty\) which arises from \(f\to 1\). It could easily happen, with a probability comparable to \(p=1/10{,}000\), that \(f\sim 0.9999\), and therefore \(\Delta T\) could be \(10{,}000\) times greater than \(\Delta T_0\) – formally, 10,000 degrees. A typical fat tail. Of course, we don't want the civilization to end in a 10,000 °C hell with a probability as high as \(p=1/10{,}000\), which is why a CO2 ban could be justified.
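
A minimal Monte Carlo sketch of this mechanism – the Gaussian parameters chosen for \(f\) below are illustrative assumptions of mine, not anything derived from climatology:

```python
# A Monte Carlo sketch of how a smooth distribution for f with support near
# f = 1 induces a fat tail in Delta_T = Delta_T0 / (1 - f). The Gaussian
# parameters for f are illustrative assumptions, not values from the text.
import random

random.seed(2)
DELTA_T0 = 1.2
N = 1_000_000

samples = []
while len(samples) < N:
    f = random.gauss(0.65, 0.2)   # assumed feedback distribution
    if f < 1.0:                   # keep only non-runaway draws
        samples.append(DELTA_T0 / (1.0 - f))
samples.sort()

for q in (0.50, 0.90, 0.99, 0.999, 0.9999):
    print(f"{q:.4f} quantile of Delta_T: {samples[int(q * N)]:10.1f} K")

# The extreme quantiles explode roughly like 1/(1 - quantile): a power-law
# (fat) tail in Delta_T, produced purely by the density of f near 1.
```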

But as I said, this "fat tail" only survives if you refuse to acknowledge any other evidence – whether it's empirical evidence or other theoretical considerations. For example, in 2010, I argued that the sensitivity can't be high and positive, i.e. \(f\to 1\) is virtually impossible, because if the probability were substantial for \(f=0.9999\), the values \(f=1.0001\) would have to be similarly likely, too. In fact, \(f\) isn't a universal physical constant but probably evolves with the conditions on Earth. And if \(f\) had ever been (sufficiently) above one, the Earth would have already experienced the lethal runaway behavior of global warming at some point during its 5-billion-year history.

The evidence arguably shows that this hasn't happened for 5 billion years. That's why \(\rho(f)\) for \(f\sim 1.01\) or so must be basically zero (of order the inverse longevity of the Earth); by continuity (or by the fluctuations of \(f\)), \(f\sim 0.99\) is also ruled out, and that's why we may rule out sensitivities of order a hundred degrees – and rule them out much more safely than at the \(1/100\) probability level.

This was an extreme argument – and how far it gets you depends on your assumptions about the continuity of \(\rho(f)\) and/or the size of the fluctuations of \(f\) during the Earth's history. There are saner ways to rule out huge sensitivities, of course. If the sensitivity were above 5 °C, then the predicted warming per decade in the 8 most recent decades would be around 0.3 °C. The probability that you would get (as we observed) less than 0.2 °C per decade in each of these 8 decades would be something like \(p\sim (1/3)^8 \sim 0.00015\), so with a certainty around 99.99%, you may say that this argument alone is enough to conclude that the sensitivity must be smaller than 5 °C. There are other, partially but not completely independent, arguments excluding high sensitivities which may help you rule out even smaller sensitivities. My basic argument will get increasingly strong if the mild warming (or cooling) continues, of course. The longer the history you observe, the more accurately you may eliminate the noise – and the more reliably you may identify the measured trend with the "real underlying" one.
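
The arithmetic of this estimate is trivial to check (the 1/3-per-decade probability is the rough assumption stated above):

```python
# A check of the back-of-the-envelope probability above: if each decade
# independently had a ~1/3 chance of showing the observed low warming (the
# rough assumption stated in the text), eight decades in a row give:
p_single_decade = 1.0 / 3.0
p_all_eight = p_single_decade ** 8
print(f"p ~ {p_all_eight:.5f}")                        # ~ 0.00015
print(f"confidence ~ {100 * (1 - p_all_eight):.2f}%")  # ~ 99.98%, i.e. "around 99.99%"
```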

In the end, the fat tail just isn't there if you take a sufficient amount of theoretical arguments and empirical data into account. In other words, "really big" values of the climate sensitivity are excluded at a huge significance level. The tail is basically thin; maybe it's a power law, but it would have to be a quickly decreasing power law. It is therefore legitimate to assume that the sensitivity isn't insane and that a Gaussian distribution for \(\Delta T\) is good for almost all purposes. The value of \(\Delta T\) is some 1 °C plus or minus 1 °C or so. Richard Lindzen and a collaborator have claimed to derive a much narrower error margin around a figure that is close to (but a bit smaller than) 1 °C. But almost everyone else has error margins comparable to 1 °C. If Lindzen is wrong, no one has really done a better job than the "1 °C plus or minus 1 °C" that I have mentioned. And given this big uncertainty, it doesn't really make much sense to try to be more accurate. Everything between –1 °C and +3 °C is somewhat realistically possible, values around +1 or +2 °C are the most likely, and the "linearized" analysis is OK.

The damages caused by a 0.5 °C or 1.0 °C warming between 2017 and 2100 – which follows from the 1 °C or 2 °C sensitivity, respectively – surely have a vastly lower magnitude than those caused by a ban of the majority of fossil fuels etc. over the following decades. (Just compare how much you would personally lose if the temperature increased by one degree with how much you would lose if you couldn't use any fossil fuels or things that required them – you may multiply both numbers by 7 billion if it makes it easier for you to see that these are exactly the global questions we're discussing.) That's why the cost-benefit analysis unambiguously says that when it comes to the fight against climate change, the only rationally justifiable policy is to have the courage to do nothing. Comments about fat tails are just wrong because the tail isn't fat here. There's a significant uncertainty in the climate sensitivity, but all the conceivable values are qualitatively analogous – the sensitivity is at most of order one degree Celsius.

The really dangerous phenomena do have fat tails. In many cases, it's because the damages basically grow exponentially for some time. The damages are\[

|\langle W\rangle | = \exp(D)

\] where \(D\) is the effective number of \(e\)-foldings over which the problems grow exponentially. The quantity \(D\) itself has some distribution and its width may be e.g. \(10\). But when the exponent changes by ten, the exponential changes multiplicatively by a factor of \(\exp(10)\sim 22{,}000\) or so. That's why the uncertainty in \(D\) is very important and the more extreme yet conceivable values of \(D\) completely dominate the formula for the expected damages \(|\langle W\rangle |\).
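
A minimal numerical sketch of this dominance, assuming (purely for illustration) a Gaussian distribution for \(D\) with the width of ten mentioned above:

```python
# A sketch of why the tail of D dominates <exp(D)>: for D drawn from a
# Gaussian of width 10 (the example width above; the Gaussian shape is an
# assumption), the expectation of exp(D) is carried by the rare extreme draws.
import math
import random

random.seed(3)
MU, SIGMA = 0.0, 10.0
N = 1_000_000

draws = [random.gauss(MU, SIGMA) for _ in range(N)]
weights = [math.exp(min(d, 700.0)) for d in draws]  # clamp to avoid overflow
total = sum(weights)

top = sorted(weights)[-N // 1000:]  # the top 0.1% of the D draws
print(f"top 0.1% of scenarios carry {100.0 * sum(top) / total:.1f}% of this sample's <exp(D)>")

# Analytically, E[exp(D)] = exp(mu + sigma^2/2) = exp(50) here -- a number
# utterly dominated by draws several sigma above the mean.
```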

That's when the precautionary principle is actually justified. If you can't prove that \(D\sim 20\) is impossible, you had better assume that it's possible.

Again, this danger only exists in situations in which "some exponential growth" of some bad or dangerous things may be shown to be possible. Pandemics. Mass conversion of Muslims to radical Islam – which would be a special case of a pandemic, too. Or something of the sort. Yes, nuclear energy did potentially contain similar threats to which the precautionary principle could have been applied.

For some time after the war, even some top physicists weren't certain that it was impossible for thermonuclear weapons to ignite a chain reaction in the atmosphere and burn the whole atmosphere or the Earth. A nuclear explosion does involve an exponential reaction – a neutron splits a nucleus, which releases more neutrons, which split more nuclei, and so on. But can't the whole atmosphere become one giant bomb when a good enough thermonuclear weapon is detonated?

In the end, a rather simple calculation is enough to see that it can't happen. But it's right to check such dangers before you test your first thermonuclear weapons, among other things. However, when the analysis of the possible processes and threats has already been done accurately, it's a good idea not to deny these "things are OK" arguments. The most widespread usage of the "precautionary principle" occurs when some people simply deny all "things are safe" arguments altogether. They shouldn't be using fancy phrases such as the precautionary principle in these situations at all – instead of a principle, what they're doing is just plain dishonesty.

In a complete discussion of these matters, there would be a big chapter dedicated to financial risks, financial black swans, and similar things. Technically, it's surely correct to say that many tails in financial distributions are fat – in the sense of decreasing much more slowly than exponentially, e.g. as power laws. So many people often assume that big changes are essentially impossible even though they are not so impossible. These are matters that everyone who does risk management should know. Also, the fat-tail discussion may often be important because exponentially growing "chain reactions" of problems and bankruptcies, similar to the nuclear blast, may take place in the financial world – that's why the talk about the domino effect may sometimes be legitimate.
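
For concreteness, here is a minimal sketch of one standard tool from this risk-management toolbox – the Hill estimator of the tail exponent – applied to simulated Student-t "returns" (a common fat-tailed stand-in; using it instead of real market data is an assumption of this example):

```python
# A minimal sketch of the Hill estimator of a tail exponent, applied to
# simulated Student-t "returns" (a standard fat-tailed stand-in; an assumed
# toy dataset, not real market data).
import math
import random

random.seed(4)

def student_t(df=3):
    """One Student-t draw; df = 3 has a power-law tail with exponent alpha = 3."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

returns = sorted((abs(student_t()) for _ in range(100_000)), reverse=True)

k = 1_000  # number of largest observations used in the tail fit
hill_alpha = k / sum(math.log(returns[i] / returns[k]) for i in range(k))
print(f"estimated tail exponent alpha ~ {hill_alpha:.2f}   (true value for t(3): 3)")

# An alpha of ~3 decays far more slowly than a Gaussian: "six-sigma" moves
# that a normal model calls practically impossible occur routinely.
```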

On the other hand, the realistic power laws are often enough to keep us rather safe. And the chain reactions and domino effects are usually impossible even when lots of people say that they are possible. Companies ultimately are – or should be – mostly independent entities that are created and that die in isolation from others. Every company (and every individual) should be primarily responsible for itself (or himself). The efforts to link and include everyone into one holistic bloc may look "nice" to someone – because "unity" is so nice and politically correct – but they actually increase the vulnerability of the whole system, which is normally resilient partly thanks to the isolation between companies, individuals, nations, and civilizations. The domino effects sometimes emerge, but that's because of a self-fulfilling prophecy: traders think that everyone is connected, and therefore they bring everyone into trouble (all similar banks go bust etc.). But it doesn't have to be so, and in a functional capitalist economy with rational players, it shouldn't be so. An unhealthy chain reaction may grow exponentially in one bank, but a competing bank is already "outside the bomb" and won't continue the spreading of the fire, just like the atmosphere isn't a continuation of the H-bomb.

So while I think that there exist people who underestimate fat tails and risks in the financial world (and lots of people, and especially collectives, underestimated the risks before the 2008 downturn or before various flights of the space shuttles etc.), I think that it's much more typical these days for people to overestimate the potential for big problems and the fatness of the tails.
