Tapping unusual quarters: summary and conclusions
by Andrew Teller
Since September 2003, you have seen this column appear in each and every issue of ENS News (barring the autumn 2011 one). Its author has, however, come to the realisation that the main thrust of his contribution to the nuclear debate has now been amply expounded in past articles. What was their common thread? It might be worth summarising it here one last time. I have tried to analyse the way the debate for and against nuclear energy is conducted between two vastly different parties: the advocates and the critics of nuclear energy. The former consist mainly of scientifically minded people, often belonging to the nuclear industry. They strive to reason accurately and to harness nuclear energy so as to enable mankind to benefit from its advantages. The latter form a loose coalition of organisations that will take up any argument, however weak, provided it strengthens a position that appears to be preordained. This is illustrated, for example, by the fact that they often use contradictory premises to reach the same conclusion, which indicates that the conclusion is independent of either premise. In the world of public relations, the scales are heavily tipped in favour of the critics, who are not held accountable for such inconsistencies, whereas the advocates are, quite understandably, expected to deliver.
Adding more to what has been already said along these lines would be belabouring the point.
Had I decided to put an end to this column before 11 March 2011, I would have concluded by saying that the outcome of the nuclear debate would remain forever subject to whims and fashions, even if, at that time, the improving image of nuclear energy was cause for optimism. Unfortunately, since then we have had the Fukushima accident, which gives cause for soul-searching, if not pessimism. What is the situation today? Although the critics of nuclear energy resort to faulty reasoning most of the time, they have been proven right on one point: nuclear reactors have not yet reached the point where they can be said to be foolproof. Three Mile Island showed that the physics of nuclear power generation was insufficiently understood; Chernobyl showed that the human factor could in certain cases be inadequately taken into account; now Fukushima teaches us that even the extent of the worst possible environmental conditions can be underestimated. To be sure, each of the above cases generated appropriate countermeasures: post-TMI upgrades, WANO, and a reappraisal of the applicable accident analyses. One nagging question remains: have we now exhausted the range of accident causes to be integrated in the safety design of nuclear reactors? Can we be confident that no other accident cause remains unidentified, biding its time before it strikes? Although sound reasoning is an objective pursued by us advocates of nuclear energy, one of Fukushima’s lessons is that we have overestimated our capability to avoid serious accidents. Or was it a failure to heed the warnings of probabilistic safety analysis? A back-of-the-envelope calculation indicates that a serious nuclear accident does not remain a remote prospect as time goes by: assuming an average of 400 reactors in operation since 1986, some 10,000 reactor-years have elapsed since Chernobyl.
Let us further assume an average serious accident rate of 1 per 100,000 reactor-years, which is roughly applicable to Generation II reactors. Elementary probability theory then tells us that the probability of at least one serious accident occurring over 10,000 reactor-years of operation is close to 0.1, which pushes it into the territory of statistical meaningfulness. In other words, such a fleet could not be expected to go on forever without suffering a serious accident. My conclusions regarding the future of nuclear energy are, therefore, threefold:
Generation III reactors have been designed to feature serious-accident rates two orders of magnitude lower than those of their Generation II forebears. Had the above-mentioned fleet consisted of Generation III units, the same 10,000 reactor-years of operation would have led to an accident probability of about 0.001, which leaves it a remote possibility. This alone shows the importance of replacing Generation II reactors with Generation III ones, unless of course the former can be upgraded to meet the standards of the latter.
Even when opting for Generation III models, it is necessary to go for deterministic designs, i.e. designs that are capable of handling a serious accident no matter what, rather than designs resting on the statistical assumption that the conditions to be handled are highly unlikely (wasn’t the conjunction of a major earthquake and a tsunami highly unlikely?).
Finally, let us not fall into the trap of playing the game only by the rules devised by the critics of nuclear energy. Let us not forget that what caused the Fukushima accident also killed tens of thousands of people and wiped large areas off the surface of the earth; conditions of that magnitude will not occur everywhere. Similarly, although cost is an issue not to be dismissed lightly, let us not forget that the cost of non-interruptible energy supply is going to increase steadily. This factor will continue to shed a more favourable light on Generation III reactors and make it easier for them to meet the requirement that they be no more expensive than their cheapest competitors.
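The back-of-the-envelope figures quoted above can be checked in a few lines of Python. This is only a sketch of the stated assumptions (an illustrative fleet of 400 reactors over 25 years, and accident rates of 1 per 100,000 and 1 per 10,000,000 reactor-years for Generation II and III respectively); the function name is mine, not the author's.

```python
def prob_at_least_one(rate, reactor_years):
    """Probability of at least one serious accident over the given
    number of reactor-years, at a constant per-reactor-year rate:
    P = 1 - (1 - rate)^reactor_years."""
    return 1.0 - (1.0 - rate) ** reactor_years

# Assumed fleet: ~400 reactors over the 25 years from Chernobyl (1986) to 2011.
n = 400 * 25  # 10,000 reactor-years

gen2 = prob_at_least_one(1e-5, n)  # Generation II: 1 per 100,000 reactor-years
gen3 = prob_at_least_one(1e-7, n)  # Generation III: two orders of magnitude lower

print(f"Generation II:  {gen2:.3f}")   # close to 0.1
print(f"Generation III: {gen3:.4f}")   # about 0.001
```

The exact Generation II figure comes out near 0.095 rather than 0.1, since 1 − (1 − 10⁻⁵)¹⁰⁰⁰⁰ ≈ 1 − e⁻⁰·¹; the column's rounding to 0.1 is the usual back-of-the-envelope simplification.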