As a scientific rationalist I get irritated when people argue without understanding all the facts, but as a student of history I get more irritated when people with the facts argue without understanding their limits. One of the things engineers tend to do incorrectly is assess risk factors largely independently of each other: if X happens then part 1 will address it, if Y happens then part 2 will address it, and so on. They think this way partly because they design different functionalities in a contained fashion and partly because systemic modelling is so difficult. Yet unforeseen shocks to complex systems almost always destroy the assumption of independence because they affect multiple parts concurrently, making the risk of catastrophic failure much higher than anticipated. To make matters worse, what is a safety feature for one part may inadvertently cause catastrophe for another, e.g. pouring water around a very hot reactor to cool it off in an attempt to fix things, only for it to react with the overheated fuel cladding, produce hydrogen, and cause an explosion.
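To make the independence point concrete, here is a small Monte Carlo sketch (mine, not from the post, with made-up probabilities): two subsystems that each fail 1% of the time would jointly fail about 0.01% of the time if truly independent, but a rare common shock that hits both at once can push that up by an order of magnitude.

```python
# A minimal sketch, not from the original post, illustrating why assuming
# independent failures understates catastrophic risk. The subsystem failure
# probabilities and the common-shock probability are invented numbers.
import random

random.seed(0)
TRIALS = 1_000_000
P_FAIL_A = 0.01   # assumed standalone failure probability of subsystem A
P_FAIL_B = 0.01   # assumed standalone failure probability of subsystem B
P_SHOCK = 0.001   # probability of a systemic shock that affects both at once

independent_both = 0
correlated_both = 0
for _ in range(TRIALS):
    a = random.random() < P_FAIL_A
    b = random.random() < P_FAIL_B
    if a and b:
        independent_both += 1
    # With a common shock, both subsystems fail together regardless of
    # their individual safety margins.
    shock = random.random() < P_SHOCK
    if (a or shock) and (b or shock):
        correlated_both += 1

print("P(both fail), independence assumed:", independent_both / TRIALS)  # ~0.0001
print("P(both fail), with common shock:   ", correlated_both / TRIALS)   # ~0.0011
```

Under these assumptions the "independent" estimate is roughly ten times too optimistic, which is the kind of gap that makes correlated failures so dangerous.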
It is a fundamental truth that we will not be able to think of all possible chains of events, and even when we do, we cannot actually know what will happen because there is insufficient data to predict it. People try to model from basic principles, but once a few subsystems are outside their design parameters the number of variables becomes so large that it is a shot in the dark whether the models will capture the real behavior.
In light of these issues, I find the condescension that pops up whenever there is a potential disaster to be unwise and detrimental to the kind of discussion that needs to take place. I’ve been having a really hard time figuring out what the real risks in Japan are, because people are mostly either freaking out or shutting down the conversation by insisting that there isn’t a problem thanks to the various safety measures that will “fix things.” I find Kathy’s post to be an ideal example because she explains what the design is intended to do while noting the caveat that the systems may be compromised, but I have yet to find a good summary of what would have to happen for there to be a dangerous release of radiation. Everyone can explain why it won’t happen, but no one will talk about why it could; without knowing both sides it’s difficult to determine which arguments are stronger. And I mean that from both a physical and a man-made perspective: even if the physical law holds, what engineering tolerances are needed to satisfy its conditions, and what evidence is there that they’ve been followed?
I had personal discussions with friends who have a professional-level understanding of this type of reactor, and for each reason they gave about why we shouldn’t worry, I had a counterargument about why the operators might not be able to take that action or why it might not work as expected. Instead of placating me with a convincing argument, they both ended up making assertive statements that they had faith in the operators and the design.
And on that note I present Adam Curtis’ documentary “A for Atom”, from his Pandora’s Box series. It can either be downloaded from here or watched below in parts (just click on the next part once the current one finishes).