What a surprise! The Deepwater Horizon rig, drilling for oil 5,000 feet beneath the surface of the Gulf of Mexico, broke down and has been gushing oil into our waters since April 20, with no end in sight. It’s another example of the “Nothing can go wrong… go wrong… go wrong… go wrong… go wrong” syndrome; see the article I posted on May 19th about the stock market. The brief stock plunge didn’t do too much damage, but the explosion on the Deepwater Horizon, which took the lives of eleven people and continues to spew oil into the Gulf of Mexico, is a major catastrophe.
The headline in the June 21st edition of the New York Times, http://www.nytimes.com/, is “Lapses Found in Oversight of Failsafe Device on Oil Rig.” What does failsafe mean? It means nothing can go wrong. The extensively researched article under that headline reveals that this was no surprise to a great many thoughtful people in the industry.
Brown’s law says that for any engineered system to be reliable, it must meet the following criteria:
• A technical system design has to assume that the worst-case scenario will, sooner or later, occur.
• A system design has to include safety features to cope with the worst-case scenario.
• A system whose failure would be a catastrophe should never include a single point of failure.
• Safety features must be redundant.
• The design of safety features has to take into account practical limitations, such as cost.
• Cost of safety features must be balanced against costs of system failure.
• Nothing is completely failsafe.
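The arithmetic behind the redundancy and single-point-of-failure rules is worth making concrete. The little sketch below is my own illustration, not anything from the Times article, and it makes the simplifying assumption that each safety device fails independently of the others:

```python
# Illustration (assumed, simplified model): why redundancy matters.
# Assume each copy of a safety device fails independently with
# probability p during the event it is supposed to handle.

def failure_probability(p: float, redundant_copies: int) -> float:
    """Probability that ALL copies fail at the same time.

    With one copy (a single point of failure) the system fails
    with probability p; with n independent copies, p ** n.
    """
    return p ** redundant_copies

# A device that fails 1 time in 100:
single = failure_probability(0.01, 1)     # one shuttle valve: 1 in 100
redundant = failure_probability(0.01, 2)  # two independent valves: 1 in 10,000

print(f"single point of failure: {single:.4f}")
print(f"two redundant devices:  {redundant:.4f}")
```

The catch, of course, is the independence assumption: two valves that share a clogged hydraulic line fail together, which is why the rules above call for genuine redundancy, not just duplicated parts.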
Good Testing and Maintenance
• The best design in the world is worthless without a serious program of regular testing and maintenance.
• The best testing and maintenance program in the world is worth very little if it is not based on accurate as-built information.
• The best testing and maintenance program in the world is worth very little without keeping complete and accurate records.
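To make the record-keeping point concrete, here is a minimal sketch, entirely my own illustration (the field names are assumptions, not any industry standard), of what a complete test record for a safety device might capture:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TestRecord:
    """One entry in a safety-device test log (illustrative only)."""
    device: str              # e.g. "blind shear ram shuttle valve"
    test_date: date
    passed: bool
    as_built_revision: str   # which as-built drawing the test was checked against
    notes: str = ""

log: list[TestRecord] = [
    TestRecord("blind shear ram shuttle valve", date(2010, 3, 1),
               passed=False, as_built_revision="rev C",
               notes="valve sluggish; flagged for replacement"),
]

# A failed test with no documented follow-up is exactly the kind of
# gap that complete, accurate records are supposed to expose.
failed = [r for r in log if not r.passed]
print(f"{len(failed)} device(s) failed their last test")
```

The point of the `as_built_revision` field is the second rule above: a test means little if nobody can say which version of the equipment it was actually run against.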
The story in the New York Times about the Deepwater Horizon describes how these rules were repeatedly ignored. Safety depended on a device called a blowout preventer (BOP). In the event of an accident, the BOP would cause a “blind shear ram” to cut and seal the pipe that connects the well to the outside world. It was supposed to be failsafe. IT DIDN’T WORK!! According to the Times, a confidential report from the year 2000 “concluded that the greatest vulnerability by far on the entire blowout preventer was one of the small shuttle valves leading to the blind shear ram. If this valve jammed or leaked, the report warned, the ram’s blades would not budge.” It was a single point of failure.
What happened was no surprise: the principles of good design and testing (Brown’s Law) were deliberately ignored in order to reduce costs. That is turning out to be a very costly set of decisions.
WILL WE LEARN FROM OUR FAILURES OR ARE WE DOOMED TO REPEAT THEM OVER AND OVER?