A case study of a failed learning system

When lesson learning failed in the Australian Defence Force, they blamed the database. But was the database really all that was at fault?

Here’s an interesting 2011 article entitled “Defence lessons database turns off users”. I have copied some of the text below to show that, even though the lessons management software seems to have been very clumsy (which is what the title of the article suggests), there was much more than the software at fault.

“A Department of Defence database designed to capture lessons learned from operations was abandoned by users who set up their own systems to replace it, according to a recent Audit report. The ADF Activity Analysis Data System (ADFAADS) was defeated by a “cultural bias” within Defence, the auditor found. Information became fragmented as users slowly abandoned the system”.

So although the article title is “defence lessons database turns off users”, the first paragraph says that it was “defeated by cultural bias”. There’s obviously something cultural at work here…

“Although the auditor found the structure and design of the system conformed to ‘best practice’ for incident management systems, users found some features of the system difficult to use. Ultimately it was not perceived as ‘user‐friendly’, the auditor found. Convoluted search and business rules turned some users against the system”. 

…but it also sounds like a clumsy and cumbersome system.

“In addition, Defence staff turnover meant that many were attempting to use ADFAADS with little support and training”.

…with no support and no training.

“An automatically-generated email was sent to ‘action officers’ listing outstanding issues in the system. At the time of audit, the email spanned 99 pages and was often disregarded, meaning no action was taken to clear the backlog”.

There needs to be a governance system to ensure actions are followed through, but sending a 99-page email? And with no support and follow-up?

“It was common for issues to be sent on blindly as ‘resolved’ by frontline staff to clear them off ADFAADS, even though they remained unresolved, according to the auditor”.

Again, no governance. There needs to be a validation step for actions, and sign-off for “resolution” should not be devolved to frontline staff.
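
To make the governance point concrete, here is a minimal, hypothetical sketch in Python (all names and fields are invented; this is not the ADFAADS design) of the two rules the audit findings point to: notify each action officer of only their own outstanding items, and treat “resolved” as a validated state that frontline staff cannot grant themselves.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    # Hypothetical issue-tracking model; every name here is invented for illustration.
    @dataclass
    class Issue:
        issue_id: int
        title: str
        action_officer: str           # person responsible for resolving the issue
        raised_by: str                # frontline member who reported it
        raised_on: date
        resolved: bool = False
        validated_by: Optional[str] = None  # independent sign-off, or None while open

    def personal_digest(issues: List[Issue], officer: str, limit: int = 10) -> List[Issue]:
        # Send each officer a short list of their own open items,
        # rather than one 99-page email listing everyone's backlog.
        open_items = [i for i in issues
                      if i.action_officer == officer and not i.resolved]
        open_items.sort(key=lambda i: i.raised_on)  # oldest first, so nothing rots
        return open_items[:limit]

    def close_issue(issue: Issue, validator: str) -> None:
        # "Resolved" is a validated state: someone other than the action
        # officer or the original reporter must sign the resolution off.
        if validator in (issue.action_officer, issue.raised_by):
            raise ValueError("resolution must be signed off independently")
        issue.resolved = True
        issue.validated_by = validator

The code is only an illustration, but it encodes the essentials: notifications are scoped to the person who must act, and sign-off on resolution requires an independent validator.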

“Apart from a single directive issued by Defence in 2007, use of the database was not enforced and there were no sanctions against staff who avoided or misused it”.

There’s the kicker. Use of the lessons system was effectively optional, with no clear expectations, no link to reward or sanction, no performance management. It’s no wonder people stopped using it.

So it isn’t as simple as “database turned off users”. It’s a combination of:

  • Poor database
  • Poor notification mechanism
  • No support
  • No training
  • No incentives
  • No governance
  • No checking on actions

It’s quite possible that if the other items had been fixed, then people might have persevered with the clumsy database, and it’s even more likely that if they had built a better database without fixing the other deficiencies, then people still would not have used it.

What they needed was a lessons management system, not just a database.

So what was the outcome? According to the article,

…establish a clear role and scope for future operational knowledge management repositories, and develop a clear plan for capturing and migrating relevant existing information… prepare a “user requirement” for an enterprise system to share lessons.

In other words: “build a better database and hope they use it”. Sigh.

View Original Source (nickmilton.com) Here.

Is Learning from Failure the worst way to learn?

Is learning from failure the best way to learn, or the worst?

Classic Learning by Alan Levine on Flickr

I was driven to reflect on this when I read the following quote from Clay Shirky:

Learning from experience is the worst possible way to learn something. Learning from experience is one up from remembering. That’s not great. The best way to learn something is when someone else figures it out and tells you: “Don’t go in that swamp. There are alligators in there.”

Clay thinks that learning from (your own bad) experience is the worst possible way to learn, but perhaps things are more complex. Here are a few assertions.

  • If you fail, then it is a good thing to learn from it. Nobody could argue with that!
  • It is a very good plan to learn from the failure of others in order to avoid failures of your own. This is Clay’s point; that learning only from your own failures is bad if, instead, you can learn from others. Let them fail, so you can proceed further than they did. 
  • If you are trying something new, then plan for safe failure. If there is nobody else to learn from, then you may need to plan a safe-to-fail learning approach. Run some early-stage prototypes or trials where failure will not hurt you, your project, or anyone else, and use these as learning opportunities. Do not wait for the big failures before you start learning.
  • Learn from success as well. Learn from the people who have avoided all the alligators, not just from the people that got bitten. And if you succeed, then analyse why you succeeded and make sure you can repeat the success.
  • Learning should come first, failure or success second. That is perhaps the worst thing about learning from experience – the experience has to come first. In learning from experience “the exam comes before the lesson.” Better to learn before experience, as well as during and after.  

Learning from failure has an important place in KM, but don’t rely on making all the failures yourself. 

View Original Source (nickmilton.com) Here.

A story of how a community lost trust

It is possible for the members of a Community of Practice to lose trust in the community as an effective support mechanism. Here’s one story of how that happened.

The story is from one of Knoco’s Asian clients.

  • This community started well, with 4 or 5 questions per week from community members. 
  • The community facilitator forwarded these questions to community experts to answer, rather than sending them to the whole community and making use of the long tail of knowledge.  This may well have been a cultural issue, as her culture reveres experts.
  • Sometimes the expert would answer on the community discussion forum, but most of the time they answered by telephone, or personal visit. Therefore the community members did not see the answer, and were not even aware the question had been answered.
  • Often the expert did not have enough business context to answer the question (this is a complicated business), so when they did answer on the forum, the answer was vague and high-level. In a culture where experts are not questioned, nobody interrogated these vague answers to get more detail. 
  • Often the questions themselves were asked with very little context or explanation, so it was not possible to give good answers. The community facilitator never “questioned the question” to find out what the real issue was.
  • Where there was a discussion around the question, it very quickly went off-topic. Again the facilitator did not play an active role in conversation management.
  • When the facilitator followed up to see if the questioner was satisfied by the answer, the response was usually no.
  • A year later, questions had dropped to 1 or 2 a month.

As far as the community members could tell from observing interactions on the forum, questions seemed either to receive no answer (as the real discussion happened offline), or to receive worthless answers. The users lost trust in the community forum as a way to get questions answered effectively, and have almost stopped asking.

One way to revitalise this community would be to set up a series of face-to-face meetings, so that the members regain trust in each other as knowledgeable individuals, and then to ask the members to help design an effective online interaction. This will almost certainly involve asking the whole community rather than just the experts, and making much more use of the facilitator: to get questions clarified, to make sure answers are posted online, to probe into the details of vague answers, and to keep discussions on topic.

This sort of discussion is needed at community kick-off, so that the community can be set up as an effective problem-solving body, and so that the members trust that their questions will be answered quickly and well.

If the members do not trust that the community will answer their questions, they will soon stop asking.

View Original Source Here.

Make small mistakes

You will inevitably make mistakes in your Knowledge Management program. Make sure they are small ones, not fatal ones.

Knowledge Managers need to learn, learning requires experimentation, experiments often lead to mistakes, but mistakes can be costly and derail your program.  That’s a big dilemma for every Knowledge Manager.

You cannot afford to make a big mistake. Too often we see failed KM programs which have started with grand plans and expensive software purchases, failed to deliver, and set back the cause of KM in the organisation for many years. After a big expensive flop, KM will have a tarnished reputation and management will be reluctant to spend any more money. This can be a fatal KM mistake, and impossible to recover from.

Therefore implementing Knowledge Management should be a series of smaller steps and smaller experiments, where failure will not be fatal. Follow the approach below:
  1. Do your research. Find out what is involved in Knowledge Management. Understand what your organisation needs, and the type of KM framework which will support this. Conduct an assessment, review the culture, develop a strategy – all of this before you start to make any changes at all.
  2. See what others are doing. Research the world leaders in KM. Find a consultant who has worked with them, and who can share the details.
  3. Start with small experiments; proof-of-concept trials and pilots where you introduce a minimum viable KM framework. The proof of concept trials should be small enough that failure doesn’t matter; these are your chance to learn as you go, and to experiment. The Knowledge Management pilots can be a little larger, and should be set up to solve specific business problems, but can be a simplified version of the final Knowledge Management framework. Learn from the trials and pilots, until your final KM framework is bullet-proof.
  4. Roll out the framework.

Make all your mistakes in Stage 3 (and if you have been diligent in Stages 1 and 2, these mistakes should be few and minor). This is a far better approach than starting with Stage 4 and making your mistakes there.

Make small mistakes early, and avoid the large mistakes later.

View Original Source Here.
