The Gorilla illusions and the illusion of memory

Here is a reprise from the archives – a post primarily about the illusion of memory. The story here from Chabris and Simons raises some disturbing issues about the trustworthiness of tacit knowledge over a long timescale.

I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous “invisible gorilla” experiment (if you don’t know it, go here). The subtitle of the book is “ways our intuition deceives us”, and the authors talk about a number of human traits – they call them illusions – which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

Three of these illusions have the most impact on KM.

I would like to address these three illusions in a series of blog posts, as it’s a bit much to fit into a single one.

The illusion of memory has a massive impact in KM terms, as it affects the reliability of any tacit knowledge that is held in human memory alone.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person had a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turned out not to have happened to him at all, but to be a story he had heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, both of which turn out to be false when a videotape of the event is finally found. And they give this story of a university experiment into the reliability of memory.

 On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.

Two and a half years later, Neisser and Harsch asked the same students to fill out a similar questionnaire about the Challenger explosion.

The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explosion. He recalled the time as 11:30 a.m., the place as his dorm, the activity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an acquaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from different people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.

The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn’t change what we remember.

The implication for Knowledge Management

The implication for Knowledge Management is that if you will need to re-use tacit knowledge in the future, then you can’t rely on people to remember it accurately. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, and the facts will have been rewritten to be closer to “what feels right”. The forgetting curve will have kicked in, and it kicks in quickly. Tacit knowledge is fine for sharing knowledge on what’s happening now, but for sharing knowledge with people in the future (i.e. transferring knowledge through time as well as space) it needs to be written down quickly, while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right. People may not only refuse to believe the explicit record, they may even edit it to fit their (by now false) memories.

Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.


The impact of the forgetting curve in KM

We forget stuff over time, if we don’t practice it. What does that mean for Knowledge Management?

The human brain learns and remembers stuff, but it also forgets stuff. We know all about learning curves, but we also need to realise there is a forgetting curve. The brain discards knowledge it feels is irrelevant or not urgent, then begins to subtly alter the remaining knowledge so that it fits with our preconceptions. We learn, we fill our brains with knowledge, and then it begins to seep away.
There are plenty of studies that show this effect. This reference, for example, suggests that, on average, students forget 70 percent of what they are taught within 24 hours of the training experience, unless given frequent “boosters” or reminders to keep the knowledge fresh. We found in our Bird Island exercise that having done the exercise before did not help people perform better a few months later. And Daniel Schacter has written a whole book on how the mind forgets and remembers, reviewed here.
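The forgetting curve described above is usually modelled, following Ebbinghaus, as an exponential decay of retention over time. A minimal sketch (the function name, the strength values, and the exact parameterisation are illustrative assumptions, not figures from the studies cited above):

```python
import math

def retention(days_since_learning, strength=1.0):
    """Ebbinghaus-style forgetting curve: R = e^(-t/S).

    'strength' (S) reflects how well the material was learned or
    reinforced; a higher S means slower forgetting. These are toy
    parameters for illustration only.
    """
    return math.exp(-days_since_learning / strength)

# Without reinforcement, retention drops steeply within a day:
print(round(retention(1.0), 2))                 # ~0.37 with S = 1

# A "booster" effectively restarts the curve with a higher strength,
# so the same elapsed day leaves much more knowledge intact:
print(round(retention(1.0, strength=5.0), 2))   # ~0.82
```

The point of the sketch is simply that the loss is steepest immediately after learning, which is why the “boosters” mentioned above, or prompt capture in writing, matter so much.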

The implications are these:

  • Telling people something does not give them lasting knowledge unless they have a chance to use and practice it. Therefore it’s best to have the Telling as close as possible to the Using. This speaks in favour of pull-based KM, where people seek knowledge as and when they need it, rather than push-based KM.
  • Leaving knowledge as tacit “head knowledge” will work if an activity is regularly practiced or happens on a regular basis. The regular ongoing nature of the activity keeps practice knowledge fresh (the lower chart in the diagram above).
  • This is particularly true of communities of practice. Connecting up people from multiple parts of the organisation makes it more likely that the activity is being practiced somewhere, and therefore that someone in the community has fresh experience and knowledge. It’s called a Community of Practice because the knowledge is being practiced by the community.
  • When knowledge is infrequent or irregular, then keeping knowledge as tacit “head knowledge” is a risky strategy (the upper chart in the diagram above). We may think we can remember an activity we did a year ago, but the chances are that we can’t recall any reliable detail, and many of the things we remember are false. In situations like this we need to document knowledge as best we can, both as an aide memoire to the knowledge which still remains deeply buried in our heads, and as a replacement for the knowledge we have forgotten. Because we WILL have forgotten huge chunks of it.
Therefore the knowledge of routine factory operations, for example, can safely be left tacit. The operators deal with this knowledge day-in, day-out, and can keep it fresh in their heads. The knowledge associated with non-routine operations, however – the emergencies, the process upsets, and the maintenance shutdowns – probably needs to be recorded and stored, because you won’t be able to trust the operators’ memories from the last time something similar happened.
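The routine-versus-non-routine distinction above can be expressed as a toy decision rule: predict how much of the knowledge will survive until the next time it is needed, and document it if that falls below some acceptable level. This is a hypothetical heuristic, not a validated model – the function name, the strength value, and the threshold are all made-up parameters:

```python
import math

def should_document(practice_interval_days, strength=5.0, threshold=0.5):
    """Toy decision rule: if predicted retention at the next use of the
    knowledge falls below 'threshold', record it explicitly rather than
    relying on tacit memory. Uses an Ebbinghaus-style decay e^(-t/S);
    all parameters here are illustrative assumptions.
    """
    predicted_retention = math.exp(-practice_interval_days / strength)
    return predicted_retention < threshold

# Routine operations, practised daily: memory stays fresh enough.
print(should_document(1))    # False -> safe to leave tacit

# A maintenance shutdown that recurs yearly: memory will have decayed.
print(should_document(365))  # True -> record and store it
```

However crude, the rule captures the argument of this post: the longer the gap between uses of a piece of knowledge, the stronger the case for writing it down.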

The type of activity, and in particular the frequency with which it will be practiced, can make a huge difference to the way you manage the associated knowledge.
