Why winners don’t learn (the winner’s curse)

Teams and individuals who are winning are often the poorest at learning – a particular form of “winner’s curse”.

Who learned more about tank warfare from World War One? Was it the victorious Americans, British and French, or the losing Germans?

It was, of course, the Germans.

The story below is taken from a review of a book by Max Boot.

“The British military and government, before Churchill became Prime Minister, lost interest in tanks. In France, Captain Charles de Gaulle was interested in fast-moving mechanized warfare, but the French military favored defensive warfare and firepower. The United States also showed little interest in armored warfare. Writes Boot:

“The U.S. had deployed a Tank Corps in World War I, but it was disbanded in 1920 over the anguished objections of two of its leading officers — Colonel George S. Patton and Major Dwight D. Eisenhower.

“It was the Germans who were most interested in fast-moving mechanized warfare. Writes Boot:

“Around 1934, Colonel Heinz Guderian, chief of staff of the Inspectorate of Motorized Troops, gave the Fuehrer [Adolf Hitler] a short tour d’horizon of tank warfare. “Hitler,” Guderian wrote, “was much impressed by the speed and precision of movement of our units, and said repeatedly, ‘That’s what I need! That’s what I want!’”

“In 1939 Hitler held a three-hour parade of mechanized forces. Fuller [the British tank theorist J.F.C. Fuller] was there, invited because of his fascist sympathies. Hitler said to him, “I hope you were pleased with your children.” Fuller replied:

“Your Excellency, they have grown up so quickly that I no longer recognize them.”

The winner’s curse is that the winner often fails to learn, and so is overtaken in the next competition by the loser. That’s why Germany had overtaken the Allied powers in tank warfare by 1939, and the loser became the winner for a while. Winners are complacent, and reluctant to change. Losers are eager not to lose again.

We often see this “winner’s curse” in our Bird Island KM exercises, where the team that builds the tallest initial tower seems to learn the least from the others (and often from the Knowledge Asset as well). Very often they are not the winning team at the end of the exercise.

The very fact that a team is ahead in the race means that they have less incentive to learn, so the team with the tallest tower “relaxes” a bit. The best learners are often the teams with the second-tallest tower, as they know that with a little learning effort they can take the lead. There also seems to be a tendency to learn more readily from failure than from success.

The story of the Wright Brothers is another example – having developed the first effective aeroplane, they failed to keep learning and optimising their design, and were eventually outcompeted. Their design became obsolete and the Wright Brothers went out of business.

Beware of the winner’s curse in your KM programs. Ensure the winning teams also continue to learn. Capture lessons from successes as well as failures, and encourage even the winners to keep pushing to do better. Learning from failure is psychologically easier, but learning from success allows success to be repeated and improved.

Learning from success is very difficult, but it is the most powerful learning you can do.


Tacit Knowledge and cognitive bias

Is that really Tacit Knowledge in your head, or is it just the Stories you like to tell yourself?

Image: “Imagination” by archanN on Wikimedia Commons

All Knowledge Managers know the difference between tacit knowledge and explicit knowledge, and the difference between the undocumented knowledge you hold in your head and documented knowledge which can be shared. We often assume that “head knowledge” (whether tacit or explicit) is the Holy Grail of KM: richer, more nuanced, more contextual and more actionable than documented knowledge.

However, the more I read about (and experience) cognitive bias and the failures of memory, the more suspicious I become of what we hold in our heads.

These biases and failures are tendencies to think in certain ways that can lead to systematic deviations from good judgement, and to remember (and forget) selectively and not always in accordance with reality. We all create, to a greater or lesser extent, our own internal “subjective social reality” from our selective and flawed perception and memory.

Cognitive and memory biases include:

  • Confirmation bias, which leads us to take on new “knowledge” only when it confirms what we already think
  • Gambler’s fallacy, which leads us to believe that past chance events change the odds of future ones (after a run of heads, tails feels “due”)
  • Post-investment rationalisation, which leads us to think that any costly decisions we made in the past must have been correct
  • Sunk-cost fallacy, which makes us more willing to keep pouring money into big failed projects than into small ones, because of how much we have already invested
  • Observational selection bias, which leads us to think that things we notice are more common than they are (like when you buy a yellow car, and suddenly notice how common yellow cars are)
  • Attention bias, where there are some things we just don’t notice (see the Gorilla Illusions)
  • Memory transience, which is the way we forget details very quickly, and then “fill them in” based on what we think should have happened
  • Misattribution, where we remember things accurately but attribute them to the wrong source, time or place, so the “memory” is wrong
  • Suggestibility, where suggestions from others become incorporated into our memories as false memories
So some of those things in your head that you “Know” may not be knowledge at all. Some may be opinions which you have reinforced selectively, or memories you have re-adjusted to fit what you would have liked to happen, or suggestions from elsewhere that feel like memories. Some of them may be more like a story you tell yourself, and less like knowledge.

Do these biases really affect tacit knowledge? 

Yes, they really do, and they can affect the decisions we make on the basis of that knowledge. Chapter 10 of the 2015 World Development Report, for example, looks at cognitive biases among development professionals, and makes for interesting reading.

While you would expect experts at the World Bank to hold a reliable store of tacit knowledge about investment to alleviate poverty, in fact these experts are as prone to cognitive bias as the rest of us. Particularly telling, for me, was the graph comparing what the experts predicted poor people would think with the actual views of the poor themselves.

The report identifies and examines four “decision traps” that affect development professionals and influence the judgements they make:

  • the use of shortcuts (heuristics) in the face of complexity; 
  • confirmation bias and motivated reasoning; 
  • sunk cost bias; and 
  • the effects of context and the social environment on group decision making.
And if the professionals of the World Bank are subject to such traps and biases, there is no reason to think the rest of us are any different.

So what is the implication?

The implication of this study, and many others, is that one person’s “tacit knowledge” may be unreliable – at worst a mish-mash of knowledge, opinion, bias and falsehood. As Knowledge Managers, there are a number of things we can do to counter this risk.

  1. We can test individual knowledge against the knowledge of the Community of Practice. The World Bank chapter suggests that “group deliberation among people who disagree but who have a common interest in the truth can harness confirmation bias to create ‘an efficient division of cognitive labor’. In these settings, people are motivated to produce the best argument for their own positions, as well as to critically evaluate the views of others. There is substantial laboratory evidence that groups make more consistent and rational decisions than individuals and are less ‘likely to be influenced by biases, cognitive limitations, and social considerations’. When asked to solve complex reasoning tasks, groups succeed 80 percent of the time, compared to 10 percent when individuals are asked to solve those tasks on their own. By contrast, efforts to debias people on an individual basis run up against several obstacles [and] when individuals are asked to read studies whose conclusions go against their own views, they find so many flaws and counterarguments that their initial attitudes are sometimes strengthened, not weakened”. Community processes such as Knowledge Exchange and Peer Assist can therefore be ideal ways to counter individual biases.
  2. We can routinely test community knowledge against reality. Routine application of reflection processes such as After Action Review and Retrospect requires an organisation to continually ask “What was expected to happen?” versus “What actually happened?”. With good enough facilitation, and then careful management of the lessons, reality can be a constant self-correction mechanism against group and individual bias.
  3. We can bring in other viewpoints. Peer Assist, for example, can be an excellent corrective to group-think in project teams, bringing in others with potentially very different views. 
  4. We can combine individual memories to create a team memory. Team reflection such as Retrospect is more powerful than individual reflection, as the team notices and remembers more than any individual can.
  5. We can codify knowledge. Poor as codified knowledge is, it acts as an aide-memoire, and counteracts the effects of transience, misattribution and suggestibility.
But maybe the primary thing we can do is to stop seeing individual tacit knowledge as safe and reliable, and instead start to concentrate on the shared knowledge held within communities of practice.
Think of knowledge as Collective rather than Individual, and you will be on the right track.

