Risk Assessment of TVA Nuclear Energy
A risk assessment of TVA's nuclear power industry based on Perrow's theory of Normal Accidents
 
To generate electricity, the Tennessee Valley Authority uses:
  • twenty-nine hydroelectric dams,
  • a pumped storage hydro facility at Raccoon Mountain,
  • eleven coal-fired power plants,
  • forty-eight natural-gas- or oil-powered combustion turbines,
  • and five nuclear reactors.
By 1995, TVA had invested roughly 60% of its money in nuclear power. However, this generated only 14% of TVA's electricity. Since then, two additional nuclear plants have come on-line, but these will raise the share only to 22%. Largely due to these inefficient investments, TVA was 28 billion dollars in debt by the end of 1996, paying 35% of its income in interest (2 billion dollars per year).
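As a quick sanity check on these figures, here is a back-of-the-envelope sketch in Python; it uses only the numbers quoted above, and the variable names are my own:

    # Back-of-the-envelope check of TVA's 1996 debt figures.
    debt = 28e9              # $28 billion in debt at the end of 1996
    interest_per_year = 2e9  # $2 billion per year in interest payments
    interest_share = 0.35    # 35% of income goes to interest

    implied_income = interest_per_year / interest_share
    implied_rate = interest_per_year / debt
    print(f"Implied annual income: ${implied_income / 1e9:.1f} billion")  # ~$5.7 billion
    print(f"Implied interest rate on the debt: {implied_rate:.1%}")       # ~7.1%

The figures are at least internally consistent with one another.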
Two of the sites of TVA's five nuclear reactors are in our bioregion: Sequoyah, above Chattanooga, and Watts Bar, near the Watts Bar dam. Since the normal daily operations of nuclear power plants are relatively inexpensive, and since nuclear reactors cannot easily be switched off and on, TVA runs the plants 24 hours a day.
All nuclear power releases small amounts of radioactive material into the environment. There is no conclusive evidence on whether this damages the health of people living nearby; however, all of the studies done so far have been funded by institutions with an interest in expanding nuclear power programs. If you have attended one of the citizens' information evenings of the Coalition for a Healthy Environment, OREP, and SOCCUM, it becomes pretty clear that nuclear waste has an effect. It was very sad to hear the stories of former Oak Ridge employees. Mostly fired because they spoke up, they suffered from memory loss, attention deficits, depression, and numerous physical problems.
The greatest concern with nuclear power plants, however, is the possibility of a major accident. Of course we are all familiar with the accident at Chernobyl in 1986: thousands of people died and will die from cancers, 300,000 people had to be relocated, about 3,900 square miles of land were heavily contaminated, and much of Europe received dangerous levels of fallout. TVA itself has had one near-meltdown, at Browns Ferry in 1975. Three Mile Island in Pennsylvania had a partial core meltdown in 1979.
Charles Perrow, a sociologist, states that certain types of highly complex technological systems, such as nuclear power plants, are unmanageable by nature and therefore run the risk of a serious accident. These accidents he calls "normal" in the sense that they are unavoidable. But hasn't technology improved? True. But Perrow's theory claims that the root of the problem is not a property of our technology; it is a property of the human mind, which cannot fully comprehend such systems.

Let me try to illustrate this with an example, the 1979 accident at Three Mile Island in Pennsylvania.
First of all, this is, simplified, how a nuclear reactor works (a toy sketch follows the list):
1. The nuclear core generates a large amount of heat.
2. The heat is absorbed by pressurized water surrounding the core, which circulates through the primary loop.
3. The heat of the primary-loop water turns water circulating in the physically separate but adjacent secondary loop into steam.
4. The steam drives a turbine.
5. The turbine generates electricity.
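To make the division of labor between the two loops concrete, here is a deliberately toy Python sketch of the five steps. All the parameter names and numbers below are invented for illustration; a real reactor model is far more involved.

    # Toy model of the five steps above; all parameters are invented.
    CORE_HEAT_MW = 3000.0        # step 1: heat generated by the core
    PRIMARY_EFFICIENCY = 0.98    # step 2: fraction absorbed by the primary loop
    EXCHANGER_EFFICIENCY = 0.95  # step 3: primary-to-secondary heat transfer
    TURBINE_EFFICIENCY = 0.33    # steps 4-5: steam heat converted to electricity

    def electrical_output(core_heat_mw: float) -> float:
        """Follow the heat from the core to the grid, step by step."""
        primary_heat = core_heat_mw * PRIMARY_EFFICIENCY       # pressurized water
        secondary_steam = primary_heat * EXCHANGER_EFFICIENCY  # separate loop boils
        return secondary_steam * TURBINE_EFFICIENCY            # turbine, then electricity

    print(f"{electrical_output(CORE_HEAT_MW):.0f} MW electrical")  # roughly 920 MW

Note how everything downstream depends on heat being carried away from the core; that is why loss of circulation, the trigger at Three Mile Island, matters so much.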
 
Now, at 4 a.m. on March 28, 1979, the following happened:
1. A secondary-loop water pump stopped because of a minor mechanical problem. As a result, heat was no longer being absorbed from the hot water, causing the temperature in the primary loop to rise.
2. An automatic safety system sensed this and turned on the secondary loop's auxiliary water pumps to maintain water circulation. Unfortunately, two maintenance workers had left two valves in the auxiliary line closed, a violation of operator rules, so the auxiliary water never reached the loop. Further, the control room operators were not aware the valves were closed, because an unrelated repair tag obscured the indicator light on the main control panel. THUS: the pumps were on, but no water was circulating, and the temperature in the primary loop kept on rising.
3. An automatic safety system sensed too much heat and scrammed the reactor, that is, stopped the nuclear fission. The result was afterheat, a normal phenomenon in a scrammed reactor (it can take days or weeks to settle down). Because of the afterheat, the temperature in the primary loop rose further.
4. In response to this, a pressure relief valve (PRV) in the primary loop released water from the loop. Unfortunately, this valve had a mechanical flaw and failed to reclose after the water was let out. Worse, a poorly designed indicator light incorrectly reported that the valve had closed. Thus, unknown to the operators, too much water was running out of the primary loop, and the reactor core began to boil itself dry!
5. In response, an automatic safety system, the emergency core cooling system (ECCS), began injecting water into the core, and the temperature in the core began to drop. Good news (?!): it would have prevented a meltdown due to the lack of water in the primary loop, caused by the unclosed relief valve, all of which had started with the mechanical problem with the pump in the secondary loop! Unfortunately, the operators manually overrode the safety system and turned off the ECCS.
Why?
a) They did not understand that the primary loop was losing water, because of the control light that falsely indicated the relief valve was closed. They thought the primary loop was intact and that the emergency core cooling system had come on by mistake.
b) Further, the operators knew from training that if the ECCS injects water into an already full primary loop, the plumbing could rupture, causing a serious loss-of-coolant accident.
A little confusing, perhaps, but it is just an illustration; the small sketch below may help keep the chain of failures straight.
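Here is a toy Python encoding of the chain of events; it is a reading aid, not a plant simulator, and every name in it is my own. It treats the five small failures as flags and shows that all five had to line up: flip any single one off and the chain breaks.

    # Toy encoding of the TMI chain of events; each flag is one small failure.
    def core_boils_dry(aux_valves_closed, tag_hides_light, prv_stuck_open,
                       prv_light_lies, operators_trust_light):
        # Steps 1-2: the pump stops; auxiliary feedwater is blocked and unnoticed.
        aux_water_fails = aux_valves_closed and tag_hides_light
        # Steps 3-4: the reactor scrams, but the stuck-open PRV drains the loop.
        loop_losing_water = prv_stuck_open
        # Step 5: operators shut off the ECCS because the panel says all is well.
        eccs_turned_off = prv_light_lies and operators_trust_light
        return aux_water_fails and loop_losing_water and eccs_turned_off

    failures = [True] * 5
    print(core_boils_dry(*failures))   # True: together the five failures are fatal
    for i in range(len(failures)):     # remove any single failure...
        trial = failures.copy()
        trial[i] = False
        print(core_boils_dry(*trial))  # ...and the chain breaks: False every time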
After all this, here is what happened:
- core temperatures rose to 1,600 degrees Fahrenheit
- parts of the uranium fuel rods and metal supports began to swell and crack, releasing radioactive gases into the water pouring from the open relief valve. This water overfilled the relief tank, leaked into the containment building sump, and was pumped into the auxiliary building's waste tank, which also overflowed, releasing radioactive material into the environment
- when temperatures reached 2,000 degrees Fahrenheit, zirconium metal began to oxidize, generating hydrogen gas. A large hydrogen bubble formed in the upper part of the containment building and exploded ten hours later, causing critical damage to the containment building. Engineers and scientists were also concerned about a hydrogen explosion in the pressure vessel itself, which would have caused even more damage
- for several days, America watched engineers try to fix the plant
- eventually they succeeded, bringing the temperature and pressure in the vessel down to safe levels
President Carter appointed a committee of experts to investigate. They concluded several causes:
* poorly designed equipment
* poor control room design
* poor operator training
* poor operator procedures
* deficiencies in Nuclear Regulatory Commission oversight
They concluded that the main immediate cause of the accident was human error, especially the operators' stopping of the emergency core cooling system. They said: "If the operators had just stood there with their hands in their pockets and not overridden the automatic safety systems, the accident would have been far less serious." They recommended sweeping changes in the NRC's structure. Following Perrow's theory of normal accidents, however, one could argue that the operators were not mainly to blame for the accident, because nuclear plants are by nature unmanageable and subject to serious accidents. Those accidents are "normal" accidents because they are unavoidable.
 


In highly complex technological systems such as nuclear plants, accidents are unavoidable for four reasons:
 
1) THERE ARE UNPREDICTABLE COMBINATIONS OF MULTIPLE SMALL FAILURES,
each of which is relatively minor by itself, but which together are fatal.
I believe the TMI example makes this clear. Before the TMI accident, most nuclear experts assumed that accidents would be caused by dramatic, major failures.
2) THE MULTIPLE CAUSES OF A NORMAL ACCIDENT, THOUGH INDIVIDUALLY SMALL AND SIMPLE, INTERACT IN FATAL WAYS.
The operators in the control room could easily have resolved any one, two, or even three of the causes, but not all of the causes together.
3) THE FATAL COMBINATIONS ARE UNFORESEEABLE.
The designers and operators cannot predict the combination of multiple, interacting small causes of the accident. Perrow argues that the harmful interactions often involve functionally unrelated subsystems that just happen to be in close physical proximity.
Designers of highly complex systems often build in multiple or redundant backup and safety systems to minimize unforeseeable mishaps and to increase safety. Again, Perrow argues that these systems often have the opposite effect: they increase the overall complexity of the equipment, the potential for unintended interactions, and thus the probability of a mishap.
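Perrow's claim about redundancy can be made concrete with a small, admittedly stylized Python sketch of my own. Assume n critical components, each failing independently with probability p; a backup cuts a component's failure probability to p times p but, being extra equipment, introduces its own small probability q of a harmful unintended interaction. The numbers are invented, but they show that whether redundancy helps depends entirely on q:

    # Stylized model: does adding backups make the system safer?
    # p: failure probability of one component; a backup reduces this to p*p.
    # q: probability that each added backup causes a harmful interaction.
    def p_mishap(n: int, p: float, q=None) -> float:
        if q is None:                # no backups installed
            return 1 - (1 - p) ** n
        ok_component = 1 - p * p     # both component and backup must fail
        ok_backup = 1 - q            # and the backup must not misbehave
        return 1 - (ok_component * ok_backup) ** n

    n, p = 10, 0.01
    print(f"No backups:         {p_mishap(n, p):.3f}")         # ~0.096
    print(f"Backups, q = 0.005: {p_mishap(n, p, 0.005):.3f}")  # ~0.050 (safer)
    print(f"Backups, q = 0.02:  {p_mishap(n, p, 0.02):.3f}")   # ~0.184 (worse!)

In this sketch the backups pay for themselves only while the interaction risk q stays well below the component risk they remove; past that point the "safety" equipment is a net hazard, which is exactly the perverse effect Perrow describes.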
4)  NORMAL ACCIDENTS ARE OFTEN INCOMPREHENSIBLE TO THE PEOPLE WHO ARE RESPONSIBLE FOR STOPPING THEM
The critical information operators need to diagnose what is happening in a normal accident is often absent, ambiguous, incorrect, or hidden by excessive amounts of other information: there are thousands of lights, meters, and gauges on an 8-by-100-foot control panel. Computer systems could of course aid here. However, the complexity of such a computer system is also enormous, in both hardware and software, and can itself increase the probability of negative interactions! Major flaws are very common in most widely used computer systems, and manufacturers spend far more time testing and repairing problems than on the original design! Operators form faulty mental models of the system they are operating and of the accident that is occurring.
In social psychology this is called "bounded rationality": the idea that humans are forced to make sense of a reality they cannot fully comprehend. As a result of this selection of relevant information, or more correctly, this rejection of irrelevant information, we create simplified, workable models of the world around us.
Operators of complex equipment undergoing normal accidents often err on the side of excessive optimism: they judge the probability or frequency of a future event by the ease with which they can imagine or recall similar events from the past. In psychology this is called the "availability heuristic," and it appears to have played a role in the TMI accident. Quoting Perrow: "uncovering the reactor core was unheard of; it had never happened in a large commercial reactor."
What is a highly complex system?
Only systems that are both tightly coupled and highly interactive. Tightly coupled means:
  • delays in operations are usually impossible,
  • operations must follow in a fixed order,
  • and substitutions of personnel, supplies, and equipment are not possible.
Highly interactive means:
  • limited information is available to operators about some ongoing processes,
  • there is incomplete scientific understanding of some of those processes,
  • pieces of equipment that are not functionally related are spaced close together,
  • and isolation of failed components is not possible.
So other examples are aircraft, chemical plants, and space missions.
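This two-part definition lends itself to a checklist. The Python sketch below encodes the criteria as yes/no questions; the example answers for a nuclear plant are my own rough reading of the text above, not Perrow's scoring.

    # Perrow's two dimensions as a checklist: a system is prone to normal
    # accidents only if it is BOTH tightly coupled AND highly interactive.
    TIGHT_COUPLING = ["no delays possible",
                      "fixed order of operations",
                      "no substitution of personnel, supplies, or equipment"]
    HIGH_INTERACTIVITY = ["limited operator information",
                          "incomplete scientific understanding",
                          "unrelated equipment in close proximity",
                          "failed components cannot be isolated"]

    def normal_accident_prone(answers: dict) -> bool:
        tightly_coupled = all(answers[c] for c in TIGHT_COUPLING)
        highly_interactive = all(answers[c] for c in HIGH_INTERACTIVITY)
        return tightly_coupled and highly_interactive

    nuclear_plant = {c: True for c in TIGHT_COUPLING + HIGH_INTERACTIVITY}
    print(normal_accident_prone(nuclear_plant))  # True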
 
Criticisms of Perrow
Several researchers and scholars have criticized Perrow's theory. They state that more empirical research needs to be done to support it. They also point to what they see as an unwarranted prejudice against commercial nuclear plants.
Hirschorn, a professor of management, argues that Perrow's theory of the causes of normal accidents does not prove that such accidents will happen despite future improvements in the technologies or in the managerial systems that operate them. He claims that the managerial weakness at TMI was responsible for a "non-normal" accident.
Similarly, other researchers point out that there are many examples of complex, tightly coupled, interactive, high-risk technological systems that have proven to be very reliable. However, a critical review of the claims made by these researchers shows that no specific statistical evidence is given, and that in several cases the organizations studied were operating under unusually favorable conditions (e.g., aircraft carriers studied only in peacetime).
There is no definitive answer to this debate. Perrow's ideas are provocative, and since nuclear power plants have the potential to do considerable damage to humans and the environment, they must be regarded in ways other technologies need not be. His theory also alerts us to the problems humans and human institutions may have in future attempts to manage large technological systems.
 

Literature:
 
Gardner, G.T., & Stern, P.C. (1996). "Human Interactions with Complex Systems: 'Normal' Accidents and Counterintuitive System Behavior." In Environmental Problems and Human Behavior. Allyn and Bacon, Needham Heights, MA.
Nolt, J., et al. (1997). What Have We Done? The Foundation for Global Sustainability's State of the Bioregion Report for the Upper Tennessee Valley and the Southern Appalachian Mountains. Earth Knows Publishers, Washburn, TN.
 
