Manufacturing AUTOMATION

Black swans and baggage: How to manage unscheduled events

September 20, 2010
By Dick Morley

The black swan. I’ve mentioned it before in this column in a discussion about perceived impossibilities. The Black Swan is a best-selling book in which author Nassim Nicholas Taleb examines the world of unexpected events. He takes the name from the Old World presumption that all swans are white, and that black swans are therefore impossible. We, of course, now know this to be false.

The point is that we often have preconceived notions in society and technology that conflict with actual data. The world is not a smooth linear system, but one that is populated with black swans – events that are unpredictable as single elements and that seldom repeat themselves. Catastrophes will be with us forever. Even if we have perfect hindsight on individual events, there is no remedy.

So, how do we manage the unmanageable? Strictly speaking, we can’t. We can, however, recover from such catastrophes, provided we have assets in place beforehand. When a black swan hits, we recover, and afterwards everybody knows what should have been done.

I live in the woods of New Hampshire, and I believe in bad times. We have power failures for lengthy periods every year, so I keep resources such as power, water, food and communications to last up to four days. I thought I had the black swan syndrome thoroughly covered. Not so. In 2008, we had a terrible ice storm. Almost every home in New Hampshire was affected, and I did not have power for 10 days. I had planned for no event beyond four days, and I was wrong.


Another unscheduled event – this one of the software variety – was responsible for the Denver baggage handling system fiasco, which played a part in postponing the opening of the airport for almost a year. What happened? A long time ago, in the days of wooden ships and iron men, Denver decided that its new airport needed a novel baggage system. United Airlines has one of its busiest hubs in Denver. As you might expect, the automated-vehicle baggage system overran its schedule and its budget, and never really operated correctly. United got in touch with me and asked, "Can you fix the system?" I answered, "Of course, but you’ll never hire me." "Why not?" they asked. My response: "My engineering staff would come from New England universities and Japan."

Since two of the objectives of the project were to bring technology to Denver and to employ local people, they did not agree to my terms. I understood that, and it was not the real problem. The real problem in Denver was that the software designers didn’t understand the intricacies of a baggage system. When I mentioned this on the web, many suggested that the algorithms themselves never failed; rather, the designers forgot that the availability of the automated carriers had to be distributed across the system at all times, not parked at a remote location. Other failure modes, such as partial local failures of vehicles and equipment, had to be absorbed without bringing down the whole system.
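To make that availability rule concrete, here is a minimal Python sketch. It is an editor’s illustration, not the Denver code, and the zone names, demand weights and Carrier class are invented for the example: idle carriers are staged across zones in proportion to expected demand rather than parked at one remote depot, and a failed carrier simply drops out of the pool so the rest of the fleet keeps moving.

    from dataclasses import dataclass

    # Share of bag traffic each zone is expected to generate (assumed numbers).
    ZONE_DEMAND = {"concourse_a": 0.5, "concourse_b": 0.3, "terminal": 0.2}

    @dataclass
    class Carrier:
        ident: int
        healthy: bool = True

    def stage_idle_carriers(carriers):
        """Spread healthy idle carriers across zones by demand weight."""
        pool = [c for c in carriers if c.healthy]  # partial failures are absorbed
        assignment, start = {}, 0
        for zone, weight in ZONE_DEMAND.items():
            count = round(len(pool) * weight)
            assignment[zone] = pool[start:start + count]
            start += count
        # Carriers left over by rounding go to the busiest zone.
        busiest = max(ZONE_DEMAND, key=ZONE_DEMAND.get)
        assignment[busiest] += pool[start:]
        return assignment

    fleet = [Carrier(i) for i in range(10)]
    fleet[3].healthy = False                 # one local vehicle failure
    staging = stage_idle_carriers(fleet)
    print({zone: len(cs) for zone, cs in staging.items()})
    # {'concourse_a': 4, 'concourse_b': 3, 'terminal': 2}  (nine healthy carriers)

The point of the sketch is the invariant, not the arithmetic: spare capacity always sits where demand will appear, and a local failure shrinks the pool instead of stopping it.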

Other technology black swan examples include the Titanic, the Hindenburg and the Chernobyl disaster. On the software side, black swans include the 1962 Mariner space probe, the 1982 Soviet gas pipeline, the 1993 Pentium floating-point division flaw, and the 1998 AT&T network outage.

Unscheduled events happen continuously, but seldom, if ever, repeat themselves. I examined the last 50 years of my design efforts; only one job had no unscheduled events. Every engineer worth his salt knows that Murphy is lurking over his shoulder in whatever he does. We tend to underestimate the amount of luck needed and overestimate our skills for any project. We maintain the illusion of control over complex systems in which black swans are inevitable and unmanageable.

Taleb has some simple rules. Some of these sound like black humour, but they get the point across: Don’t build a large, single-function system; keep systems small so that any collapse occurs locally; it is okay to make a small system fragile, as long as it doesn’t collapse the entire complex. He suggests that you don’t drive a school bus blindfolded; don’t use incentive awards; and don’t give children sticks of dynamite, even if they come with a warning sticker.
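The "keep any collapse local" rule maps neatly onto software. The sketch below is a toy Python illustration (Taleb, of course, offers no code): each small cell runs behind its own error boundary, so one fragile cell can fail outright without taking its peers down with it.

    def run_cells(cells):
        """Run each cell independently; a crash in one stays in that one."""
        results = {}
        for name, task in cells.items():
            try:
                results[name] = task()
            except Exception as exc:      # local failure, contained and recorded
                results[name] = f"FAILED: {exc}"
        return results

    cells = {
        "cell_1": lambda: "42 parts",
        "cell_2": lambda: 1 / 0,          # the fragile small system
        "cell_3": lambda: "17 parts",
    }
    print(run_cells(cells))
    # cell_2 collapses locally; cell_1 and cell_3 still report their output.

One large, single-function system would have died on the first division by zero; three small ones lose a third of their capacity and keep going.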

Some of Morley’s rules are: It’s okay to be smart; freedom is an illusion; accountants and lawyers give advice, not permission; the devil is not in the details; and adapt or die.

Black swans are, by definition, impossible to predict. Yet we keep building systems that provide no way to fail gracefully, and then we repeat the same errors in the next design.

This is a revolution, not of technology, but of ideas. Almost none of the great innovations came from planning. I strongly recommend you read The Black Swan – a difficult read, but worth the effort. As with all silver bullets, this is not the only answer, but one that should be in your toolbox.

What does this mean for control systems and management? It means that Lean, by itself, is bad, because it leaves no assets available for recovery after the swan lays an egg. Assets should be available so that recovery is quick and robust. Systems should be small, local systems loosely coupled into the whole. When we drive a car on a long trip, we naturally practice this kind of asset management: in the trunk we put duct tape, WD-40, a Leatherman, cable ties and a flashlight. And when an accident happens, we should not blame other people. Accidents just happen. Deal with it.
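A minimal sketch of the Lean point, with made-up cycle counts: in a line with zero slack, an upstream outage stops the downstream station immediately, while a small buffer of parts, the manufacturing equivalent of the duct tape in the trunk, lets it ride the outage out.

    def downstream_output(upstream_cycles, buffer_parts):
        """Count downstream output given upstream up/down cycles and a buffer."""
        produced = 0
        for up_ok in upstream_cycles:
            if up_ok:
                produced += 1             # part flows straight through
            elif buffer_parts > 0:
                buffer_parts -= 1         # ride the outage on buffered parts
                produced += 1
            # (refilling the buffer after recovery is omitted for brevity)
        return produced

    cycles = [True] * 5 + [False] * 3 + [True] * 5   # a three-cycle upstream failure
    print(downstream_output(cycles, buffer_parts=0)) # 10: three cycles of output lost
    print(downstream_output(cycles, buffer_parts=3)) # 13: the outage is fully absorbed

The buffer is pure waste by Lean accounting right up until the swan arrives; then it is the recovery asset.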

Dick Morley is the inventor of the PLC, an author, speaker, automation industry maverick and a self-proclaimed ubergeek. E-mail him at morley@barn.org.

