Another talk, this time on The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, a book by Dietrich Dorner.
The book’s been a favorite of mine for years, not just for the setup, but for the detailed, unsparing look it takes at how human beings fail to get things right. Too often, psychology emphasizes either how people feel about a situation, or how well or poorly they perform at a given task. Dorner goes further, and tries to understand not just how, but why, they fail.
The setup was simple. Dorner built a computer simulation of an African village called Tanaland. This book was written in 1990, before Sim City was widely known, but it’s the same concept. The players were given dictatorial powers and the goal to “improve the wellbeing of the people,” with six opportunities over ten years of simulated time to review (and possibly change) their policies.
Given the tools at hand, the players set about improving what they could. They improved the food supply (using artificial fertilizer) and increased medical care. There were more children and fewer deaths, and life expectancy was higher. For the first three sessions, everything went well. But unknown to the players, they had set up an unsustainable situation.
Famine typically broke out in the 88th month. The agrarian population dropped dramatically, below its initial level. Sheep, goats, and cows died off in their herds, and the land was left barren by the end. Given a free hand, most players engineered a wasteland.
One player, by the end of the simulation, had a stable population and a significantly better quality of life for the villagers. Failure was the rule, but somehow he had found an exception.
The litany of possible errors was a long one, and so immediately recognizable that it's hard to suppress a wince of empathy on reading.
The players who did badly tended not to ask why things happened. They jumped from one subject to another, switching aimlessly, without focus. They proposed hypotheses without testing them, or tested them only on an ad hoc basis, checking success cases without probing possible failure cases. Some had tunnel vision, focusing on irrelevancies at the expense of the larger picture. Others attempted to "delegate" intractable problems to the villagers themselves, or refused to deal with the issue at all. Finally, and most tellingly, most players dealt with the problems they saw "on the spot" without thinking of the larger, longer-term problems that the immediate short-term solution was setting up.
These results were no surprise; they were just what Dorner's team was looking for. Where many scientists would have looked at the successes and derived an optimal "working strategy", Dorner was just as interested in the range of failures in the experiment. His team had specifically designed the simulation so that most people would fail at it, aiming precisely at the weak points of human decision making.
Not all players failed in the same way. Even amongst those who failed the same way, many had different reasons for their particular mode of failure. And yet there were strong commonalities among the failing players, both in their reactions to incipient failure and in their attempts at recovery.
The reason most people failed was that they did not understand the nature of Tanaland. Despite being a simulation, Tanaland was no game: Dorner's team programmed in as accurate a model of an African village as the hardware would allow. The water table under the village held a limited amount of water. The population grew at an exponential rate given the available food and healthcare. Even the topsoil was modelled accurately, so that overgrazing by massive herds would erode it over time. All of this data was available to the players – had they thought to look. But most players didn't. The experiment ended in three predictable failure modes: either the cattle starved and died, or the groundwater was exhausted, or the population outgrew the available food. Far from being a bundle of independent subsystems, all of Tanaland was deeply intertwingled.
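To make the "intertwingled" point concrete, here is a toy sketch of a Tanaland-like coupled system. Every constant and update rule below is invented for illustration; Dorner's actual simulation was far more detailed. The point is only that linked stocks (people, herds, topsoil) can look healthy for years before collapsing together.

```python
def simulate(months=120):
    """Toy coupled system: population, herd, and topsoil interact."""
    population, herd, topsoil = 100.0, 200.0, 1.0
    history = []
    for month in range(months):
        food = topsoil * 300.0                         # fertile land feeds everyone
        if population < food:
            population *= 1.01                         # ~1% growth while food holds out
        else:
            population = max(food, population * 0.93)  # famine: die-off down to the food supply
        herd *= 1.005                                  # herds grow steadily too
        topsoil = max(0.0, topsoil - herd * 1e-5)      # overgrazing slowly erodes the soil
        history.append((month, round(population), round(herd), round(topsoil, 3)))
    return history

history = simulate()
print(history[12])    # after one year: growth everywhere, nothing visibly wrong
print(history[-1])    # after ten years: the erosion has finally bitten
```

For most of the run, every visible indicator improves, yet the herd's slow erosion of the topsoil guarantees an eventual famine. A player reviewing policies in the early sessions would see nothing but success.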
The deeper reason Tanaland was so successful at bamboozling players lies, paradoxically, in the incredible success of the human brain's pattern recognition system. Human beings are capable of driving in heavy traffic, understanding language, and recognizing patterns in almost random data, feats far beyond most computers. But there are some problems which defeat human intuition.
Linear extrapolation. Human beings have a tendency to assume that change itself is static. Even when shown exponential growth, we're not good at internalizing it. This may be part of why calculus is so hard for many people: we don't naturally think of the rate of growth as itself growing.
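A quick numeric illustration (the figures are invented): a population of 1,000 growing 5% per year, versus the straight-line guess you'd make by extrapolating the first year's change.

```python
start = 1000
rate = 0.05
first_year_change = start * rate        # 50 people in year one

for years in (1, 10, 30):
    linear = start + first_year_change * years   # "the change is static"
    actual = start * (1 + rate) ** years         # the change itself grows
    print(years, round(linear), round(actual))
```

At one year the two guesses agree; at thirty years the linear guess (2,500) is off by almost half, because the actual population has passed 4,300. The error is small exactly as long as you're watching, and enormous by the time it matters.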
Delayed feedback. Human beings tend to assume that an action will yield a response immediately, or not at all. That's how we interact with the world on a daily basis, and we become very confused when there's a significant delay in a system's response. Even when we recognize intellectually that a change is "in the pipeline", we struggle against the instinct to do more, oversteering rather than correctly sitting on our hands.
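The oversteering dynamic can be sketched in a few lines. Everything here is invented: a "room" whose temperature responds to heating only after a five-step delay. The impatient controller keeps ordering heat while it sees no change; the patient one accounts for the heat already in the pipeline.

```python
from collections import deque

def run(patient, steps=40, delay=5, target=20.0):
    temp = 10.0
    pipeline = deque([0.0] * delay)   # heat ordered but not yet delivered
    peak = temp
    for _ in range(steps):
        temp += pipeline.popleft()    # the delayed effect arrives now
        pending = sum(pipeline)       # heat still "in the pipeline"
        if patient:
            error = target - (temp + pending)   # count heat already en route
        else:
            error = target - temp               # react only to what we see
        pipeline.append(max(0.0, error))        # order more heat (never cooling)
        peak = max(peak, temp)
    return peak

print("impatient peak:", run(patient=False))
print("patient peak:  ", run(patient=True))
```

The impatient controller badly overshoots the 20-degree target, because it places a fresh full-size order every step of the delay window; the patient one orders once and waits.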
Contradictory goals. Part of the players' failure was inherent in the vagueness of their goals. What, exactly, does "improve the wellbeing of the people" really mean? Providing the best quality of life to all the villagers? Growing the village as a whole to be more prosperous? In many cases, the top-level goal ended up being broken down into subgoals that conflicted with each other. In other cases, players latched onto concrete problems to fix instead. One player, deciding that the village needed irrigation, set out building an irrigation system and quickly became fixated on that one problem, "addicted" to his experience of flow. In such cases, players were unable to clearly form goals at all.
Priorities. Even when players had clearly defined goals, they had another problem to contend with: actions that furthered one goal would thwart another. The complex interdependencies of the system did not allow every variable to be optimized at once, and players would either flail uselessly or be paralyzed by their inability to cover every base.
Information overload. Many players used too little information to make good decisions, but some had the opposite problem: given access to all the data of a complex system, they tried to see the entire system at once. These players found themselves paralyzed by complexity, unable to interpret the data. Interestingly, their problem was not that they never saw the correct chart. They were literally unable to recognize the charts, and the changes in the data, as relevant – having looked at everything available, their ability to see a pattern was exhausted well before they stumbled on the right chart.
Reductive hypotheses. By far the worst problem players had, above all others, was that the first hypotheses they formed about the system were not revised in response to the data. If anything, players were apt to be most sure of their beliefs when their hypotheses were completely wrong. Part of this came from uncertainty and cognitive dissonance: uncertainty produced fear and doubt, and asserting the hypothesis helped quell them. Over time, the players learned that the more they believed in the hypothesis, the better they felt. This undermined their ability to develop new hypotheses, as they were already "wedded" to their existing ideas.
There were also commonalities amongst the players who did well. The players who did well were the ones who could tolerate uncertainty. They defined clear goals and priorities. They made many small decisions in different areas, and followed up on the expected versus actual results of most, if not all, of those decisions. They kept an eye on the overall processes of the system, and did not succumb to flow experiences.
Adding to this, Dorner's team ran an experiment with two groups of fifteen players each. One group was drawn from the student population; the other was made up of senior managers from large industrial and commercial firms. The managers did significantly better than the students on every possible metric; given several different challenges, they responded appropriately to each one. Dorner's team was unable to determine whether this was innate talent or the benefit of years of experience.
A Decision Making Model
So what is the right thing to do when faced with a complex situation? Dorner presents a possible schema for problem solving, intended more as a helpful aid than as a description of how people actually solve problems.
- Formulation of Goals - deciding what needs fixing, and putting priorities on those goals.
- Formulation of Models - determining the internal workings of the system.
- Prediction and Extrapolation - determining the eventual output of the system.
- Planning of Actions, decision making, and execution of actions - feeding input into the system.
- Review of effects of actions and revision of strategy - comparing expected results against actual ones, and revising the model accordingly.
Interestingly, this is very close to the Plan/Do/Check/Act cycle proposed by Deming – it assumes incomplete knowledge of a complex system and tries to improve understanding of the underlying model through repeated iterations of the cycle.
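As a minimal, runnable instance of such iteration (all specifics invented, only the loop structure mirrors the schema): we control a system whose true response factor is hidden, predict the outcome from our current model, act, and then revise the model from the gap between expected and actual results.

```python
TRUE_FACTOR = 3.0           # hidden property of the "system"; we never read it directly

def system_response(action):
    return TRUE_FACTOR * action

model_factor = 1.0          # our initial (wrong) model of the system
goal = 30.0                 # formulated goal: get the output to 30

for cycle in range(5):
    action = goal / model_factor           # plan: what our current model says to do
    predicted = model_factor * action      # extrapolate: the expected outcome
    actual = system_response(action)       # execute the action
    # Review: nudge the model toward what the expected-vs-actual gap implies --
    # the revision step that most failing players skipped.
    model_factor += 0.5 * (actual / action - model_factor)
    print(cycle, round(action, 2), round(actual, 2), round(model_factor, 3))
```

No single cycle gets it right, but because each one compares prediction against outcome and revises the model, the estimate converges on the true factor and the actions converge on the goal. Skip the review step and the loop repeats its first mistake forever.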
Dorner also notes that simply being told of a decision-making strategy did players no good at all; when given instruction on dealing with complex systems, the players thought they had been helped, and were better able to discuss their failures with better terminology… but their actual performance was the same as the control group's. What really helped players, overall, was repeated exposure to complex systems. Showing was not enough; they had to experience their own reactions and build up their tolerance for decision making in the face of uncertainty and emotional stress.