The Gorilla illusions and the illusion of memory

Here is a reprise from the archives – a post primarily about the illusion of memory. The story here from Chabris and Simons raises some disturbing issues about the trustworthiness of tacit knowledge over a long timescale.

Image: Gorilla 2, originally uploaded by nailbender

I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous “invisible gorilla” experiment (if you don’t know it, go here). The subtitle of the book is “ways our intuition deceives us”, and the authors talk about a number of human traits – they call them illusions – which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

Several of these illusions have a particular impact on KM. I would like to address three of them in a series of blog posts, as it’s a bit much to fit into a single one.

The illusion of memory has a massive impact in KM terms, as it affects the reliability of any tacit knowledge that is held in human memory alone.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person had a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turned out not to have happened to him at all, but to be a story he had heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, both of which turn out to be false when a videotape of the event is finally found. And they give this story of a university experiment into the reliability of memory.

 On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.

Two and a half years later, Neisser and Harsch asked the same students to fill out a similar questionnaire about the Challenger explosion. 

The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explosion. He recalled the time as 11:30 a.m., the place as his dorm, the activity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an acquaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from different people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.

The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn’t change what we remember.

The implication for Knowledge Management

The implication for Knowledge Management is that if you will need to re-use tacit knowledge in the future, then you can’t rely on people to remember it accurately. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, the facts will have been rewritten to be closer to “what feels right”. The forgetting curve will have kicked in, and it kicks in quickly. Tacit knowledge is fine for sharing knowledge on what’s happening now, but for sharing knowledge with people in the future (i.e. transferring knowledge through time as well as space) it needs to be written down quickly, while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right. People may not only refuse to believe the explicit record, they may even edit it to fit their (by now false) memories.

Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

View Original Source Here.

The curse of knowledge and the danger of fuzzy statements

Fuzzy statements in lessons learned are very common, and are the result of “the curse of knowledge”.

Image: Fuzzy Monster (clip art)

I blogged yesterday about Statements of the Blindingly Obvious, and how you often find these in explicit knowledge bases and lessons learned systems, as a by-product of the “curse of knowledge”.

There is a second way in which this curse strikes, and that is what I call “fuzzy statements”.

It’s another example of how somebody writes something down as a way of passing on what they have learned, and writes it in such a way that the meaning is obvious to them, but carries very little information for the reader.

A fuzzy statement is one built around an unqualified adjective, for example:

  • Set up a small, well qualified team…(How small? 2 people? 20 people? How well qualified? University professors? Company experts? Graduates?)
  • Start the study early….(How early? Day 1 of the project? Day 10? After the scope has been defined?)
  • A tighter approach to quality is needed…. (Tighter than what? How tight should it be?)
You can see, in each case, the writer has something to say about team size, schedule or quality, but hasn’t really said enough for the reader to understand what to do, other than in a generic “fuzzy” way, using adjectives like “small, well, early, tighter” which need to be quantified.

In each case, the facilitator of the session or the validator of the knowledge base needs to ask additional questions. How small? How well qualified? How early? How tight?

Imagine if I tried to teach you how to bake a particular cake, and told you “Select the right ingredients, put them in a large enough bowl. Make sure the oven is hotter”. You would need to ask more questions in order to be able to understand this recipe.
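The facilitator’s follow-up questions above can even be partially automated as a first-pass check. Here is a minimal sketch (purely illustrative — the adjective list and question templates are my own assumptions, not part of any real lessons-management tool) that flags unqualified adjectives in a lesson statement and suggests the clarifying questions a facilitator should ask:

```python
import re

# Illustrative list of "fuzzy" adjectives and the clarifying question each
# one should trigger. Both the terms and the questions are assumptions made
# for this sketch; a real system would use its organisation's own list.
FUZZY_TERMS = {
    "small": "How small? How many people?",
    "well qualified": "How well qualified? What skills or experience?",
    "early": "How early? Relative to which milestone?",
    "tighter": "Tighter than what? Measured how?",
    "large enough": "How large, exactly?",
    "hotter": "Hotter than what? What temperature?",
}

def de_fuzz(statement: str) -> list[str]:
    """Return clarifying questions for any fuzzy terms found in a lesson."""
    questions = []
    for term, question in FUZZY_TERMS.items():
        # Whole-phrase, case-insensitive match
        if re.search(r"\b" + re.escape(term) + r"\b", statement, re.IGNORECASE):
            questions.append(f"'{term}': {question}")
    return questions

for q in de_fuzz("Set up a small, well qualified team early in the project."):
    print(q)
```

A de-fuzzed lesson (“Set up a team of 3 engineers with at least 5 years’ operational experience, before the scoping workshop”) would pass through with no questions raised.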

Again, it comes back to Quality Control.

Any lessons management system or knowledge base suffers from Garbage In, Garbage Out, and the unfortunate effect of the Curse of Knowledge is that people’s first attempt to communicate knowledge is often, as far as the reader is concerned, useless garbage.

Apply quality control to your lessons, and de-fuzz the statements.

View Original Source Here.

How to curb overconfidence by considering the unknowns

Overconfidence is one of the most powerful cognitive biases that affects KM. Here is how to address it.

Cognitive biases are the plague of Knowledge Management. They cause people to neglect evidence, to fail to notice things, to reinvent their memory, and to be overconfident about their own knowledge.

Overconfidence in particular is an enemy of learning. People are more willing to accept knowledge from a confident person, but confidence is more often linked to a lack of knowledge – the “Dunning-Kruger effect”. Overconfidence leads to wishful thinking, which leads to ignoring knowledge from others, and is one of the primary causes of project cost and time overruns.

Overconfidence is therefore what happens when you don’t know what you don’t know, and a recent INSEAD study shows that overconfidence can be significantly reduced just by considering your lack of knowledge. In this study they gave people general knowledge questions, and found (as is often the case) that people were overconfident about their answers (you can take a similar test to check your own level of overconfidence). Then they tried again with two groups of people – with the first group they asked the people to list a couple of missing pieces of knowledge which would help them guess the answer better, and with the second group they asked them to consider reasons why their choice might be wrong (a “devil’s advocate” approach).

The paper contains a very clear graph which shows that the approach of “considering the unknowns” has a major impact on overconfidence, while the devil’s advocate approach is far less powerful. The report concludes:

In our view, overconfidence often arises when people neglect to consider the information they lack. Our suggestion for managers is simple. When judging the likelihood of an event, take a pen and paper and ask yourself: “What is it that I don’t know?” Even if you don’t write out a list, the mere act of mulling the unknowns can be useful. And too few people do it. Often, they are afraid to appear ignorant and to be penalised for it. But any organisation that allows managerial overconfidence to run amok can expect to pay a hefty price, sooner or later.

In Knowledge Management, we have a simple and powerful process that enables exactly this “considering the unknowns”. This is the Knowledge Gap Analysis, or its more elaborate version for larger projects – the Knowledge Management Plan. Both of these processes require a team to list the things they do not know (thus reducing overconfidence) and then set up learning actions to acquire the knowledge (thus reducing the number of unknowns).
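The structure of a Knowledge Gap Analysis is simple enough to sketch as a small data model. The sketch below is illustrative only — the field names, the example project, and the example gap are assumptions, not part of any standard KM tool:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeGap:
    """One 'unknown' identified during a Knowledge Gap Analysis."""
    question: str                # what the team does not yet know
    learning_action: str = ""    # how the team plans to acquire the knowledge
    closed: bool = False         # set True once the knowledge is acquired

@dataclass
class KnowledgeManagementPlan:
    """A register of knowledge gaps and learning actions for a project."""
    project: str
    gaps: list[KnowledgeGap] = field(default_factory=list)

    def open_gaps(self) -> list[KnowledgeGap]:
        # The list of remaining unknowns is the team's antidote to overconfidence
        return [g for g in self.gaps if not g.closed]

# Hypothetical example project and gap
plan = KnowledgeManagementPlan("New market entry")
plan.gaps.append(KnowledgeGap(
    question="What local regulations apply?",
    learning_action="Peer Assist with a team that has entered a similar market"))
print(len(plan.open_gaps()))  # 1
```

The point of the exercise is less the tooling than the discipline: the act of writing down the open gaps is itself the “considering the unknowns” that the INSEAD study found so effective.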

These are two of many KM techniques that can help address cognitive bias.

View Original Source Here.

Why winners don’t learn (the winner’s curse)

Teams and individuals who are winning are often the poorest at learning – a particular form of “winner’s curse”.

Who learned more about Tank Warfare from World War One? Was it the victorious Americans, British and French, or the losing Germans?

It was, of course, the Germans.

The story below is taken from a review of a book by Max Boot.

“The British military and government, before Churchill became Prime Minister, lost interest in tanks. In France, Captain Charles de Gaulle was interested in fast-moving mechanized warfare, but the French military favored defensive warfare and firepower.  The United States also devoted little interest in armored warfare. Writes Boot:

“The U.S. had deployed a Tank Corps in World War I, but it was disbanded in 1920 over the anguished objections of two of its leading officers — Colonel George S. Patton and Major Dwight D. Eisenhower.

“It was the Germans who were most interested in fast-moving mechanized warfare. Writes Boot:

“Around 1934, Colonel Heinz Guderian, chief of staff of the Inspectorate of Motorized Troops, gave the Fuehrer [Adolf Hitler] a short tour d’horizon of tank warfare. “Hitler,” Guderian wrote, “was much impressed by the speed and precision of movement of our units, and said repeatedly, ‘That’s what I need! That’s what I want!’”

“In 1939 Hitler had a three-hour parade of mechanized forces. [The British tank theorist J.F.C.] Fuller was there, invited because of his fascist sympathies. Hitler said to him, “I hope you were pleased with your children.” Fuller replied:

“Your Excellency, they have grown up so quickly that I no longer recognize them.” 

The Winner’s Curse is that the winner often fails to learn, and so is overtaken in the next competition by the loser. That’s why Germany overtook the Allied powers in terms of tank warfare by 1939, and the loser became the winner for a while. Winners are complacent, and reluctant to change. Losers are eager not to lose again.

We often see this “Winner’s Curse” in our Bird Island KM exercises, where the team that builds the tallest initial tower seems to learn the least from the others (and often from the Knowledge Asset as well).  Very often they are not the winning team at the end of the exercise.

The very fact that a team is ahead in the race means that they have less incentive to learn. So the team with the tallest tower “relaxes” a bit. The best learners are often the teams with the second-tallest tower, as they know that with a little bit of learning effort, they can be in the lead. Also there seems to be a tendency to learn more readily from failure than from success.

The story of the Wright Brothers is another example – having developed the first effective aeroplane, they failed to keep learning and optimising their design, and were eventually outcompeted. Their design became obsolete and the Wright Brothers went out of business.

Beware of the Winner’s Curse in your KM programs. Ensure the winning teams also continue to learn. Capture lessons from successes and failures, and encourage even the winners to keep pushing to do even better.  Learning from failure is psychologically easier, but learning from success allows success to be repeated and improved.

Learning from success is very difficult, but it is the most powerful learning you can do.

View Original Source Here.

Tacit Knowledge and cognitive bias

Is that really Tacit Knowledge in your head, or is it just the Stories you like to tell yourself?

Image: IMAGINATION by archanN, on Wikimedia Commons

All Knowledge Managers know about the difference between tacit knowledge and explicit knowledge, and the difference between the undocumented knowledge you hold in your head, and documented knowledge which can be shared.  We often assume that the “head knowledge” (whether tacit or explicit) is the Holy Grail of KM; richer, more nuanced, more contextual and more actionable than the documented knowledge.

However the more I read about (and experience) cognitive bias and the failures of memory, the more suspicious I become of what we hold in our heads.

These biases and failures are tendencies to think in certain ways that can lead to systematic deviations from good judgement, and to remember (and forget) selectively and not always in accordance with reality. We all create, to a greater or lesser extent, our own internal “subjective social reality” from our selective and flawed perception and memory.

Cognitive and memory biases include:

  • Confirmation bias, which leads us to take on new “knowledge” only when it confirms what we already think
  • Gambler’s fallacy, which leads us to believe that past chance events change the odds of independent future ones 
  • Post-investment rationalisation, which leads us to think that any costly decisions we made in the past must have been correct
  • Sunk-cost fallacy, which makes us more willing to pour money into failed big projects than into failed small projects
  • Observational selection bias, which leads us to think that things we notice are more common than they are (like when you buy a yellow car, and suddenly notice how common yellow cars are)
  • Attention bias, where there are some things we just don’t notice (see the Gorilla Illusions)
  • Memory transience, which is the way we forget details very quickly, and then “fill them in” based on what we think should have happened
  • Misattribution, where we remember things that are wrong
  • Suggestibility, which is where we create false memories
So some of those things in your head that you “Know” may not be knowledge at all. Some may be opinions which you have reinforced selectively, or memories you have re-adjusted to fit what you would have liked to happen, or suggestions from elsewhere that feel like memories. Some of them may be more like a story you tell yourself, and less like knowledge.

Do these biases really affect tacit knowledge? 

Yes they really do, and they can affect the decisions we make on the basis of that knowledge. Chapter 10 of the 2015 World Development Report, for example, looks at cognitive biases among development professionals, and makes for interesting reading.

While you would expect experts in the World Bank to hold a reliable store of tacit knowledge about investment to alleviate poverty, in fact these experts are as prone to cognitive bias as the rest of us. Particularly telling, for me, was the graph that compared what the experts predicted poor people would think, against the actual views of the poor themselves. 

The report identifies and examines four “decision traps” that affect development professionals and influence the judgements that they make:

  • the use of shortcuts (heuristics) in the face of complexity; 
  • confirmation bias and motivated reasoning; 
  • sunk cost bias; and 
  • the effects of context and the social environment on group decision making.
And if the professionals of the World Bank are subject to such traps and biases, then there is no guarantee that the rest of us are any different.

So what is the implication?

The implication of this study, and many others, is that one person’s “tacit knowledge” may be unreliable, or at best a mish-mash of knowledge, opinion, bias and falsehood. As Knowledge Managers, there are a number of things we can do to counter this risk.

  1. We can test Individual Knowledge against the knowledge of the Community of Practice. The World Bank chapter suggests that “group deliberation among people who disagree but who have a common interest in the truth can harness confirmation bias to create “an efficient division of cognitive labor”. In these settings, people are motivated to produce the best argument for their own positions, as well as to critically evaluate the views of others. There is substantial laboratory evidence that groups make more consistent and rational decisions than individuals and are less “likely to be influenced by biases, cognitive limitations, and social considerations”. When asked to solve complex reasoning tasks, groups succeed 80 percent of the time, compared to 10 percent when individuals are asked to solve those tasks on their own. By contrast, efforts to debias people on an individual basis run up against several obstacles (and) when individuals are asked to read studies whose conclusions go against their own views, they find so many flaws and counterarguments that their initial attitudes are sometimes strengthened, not weakened”. Therefore community processes such as Knowledge Exchange and Peer Assist can be ideal ways to counter individual biases.
  2. We can routinely test community knowledge against reality. Routine application of reflection processes such as After Action review and Retrospect require an organisation to continually ask the questions “What was expected to happen” vs “What actually happened”.  With good enough facilitation, and then careful management of the lessons, reality can be a constant self-correction mechanism against group and individual bias.
  3. We can bring in other viewpoints. Peer Assist, for example, can be an excellent corrective to group-think in project teams, bringing in others with potentially very different views. 
  4. We can combine individual memory to create team memory. Team reflection such as Retrospect is more powerful than individual reflection, as the team notices and remembers more things than any individual can.
  5. We can codify knowledge. Poor as codified knowledge is, it acts as an aide memoire, and counteracts the effects of transience, misattribution and suggestibility. 
But maybe the primary thing we can do is to stop seeing individual tacit knowledge as being safe and reliable, and instead start to concentrate on the shared knowledge held within communities of practice.  
Think of knowledge as Collective rather than Individual, and you will be on the right track.

View Original Source Here.