How the coastguard seeks input to lesson learning

Public organisations can learn from the coastguard when it comes to getting wide-scale input into lesson learning

Any public organisation, especially one delivering a high-priority service, needs a lesson-learning process to improve that service. The emergency response services in particular have well-developed lesson-learning systems, but here is a wrinkle I had not seen before, from the US coastguard.

This article from 2017, entitled “Innovation Program seeks hurricane lessons learned from Coast Guard responders” describes how the US coastguard set up what they called the “Hurricane Lessons Learned challenge” on the Coast Guard’s ideas-capturing portal CG_Ideas@Work.

This portal was started as a way to preserve and institutionalize the wealth of lessons learned during hurricane response efforts, and all Coast Guard personnel who participated in any of the response efforts are encouraged to share their observations, issues and ideas.

This is a means of capturing ideas, observations and insights which analysts can later convert into lessons (the sequence from Observations to Insights to Lessons is widely recognised in the lesson-learning community). Some direct lessons may also be captured.

As the article explains

 The Coast Guard routinely captures lessons learned as a way to improve its operations, but the CG_Ideas@Work challenge offers one distinct advantage: “Our crowdsourcing platform not only provides a place to submit ideas, but also to collaborate on them,” (Cmdr. Thomas “Andy”) Howell said. “Everyone from non-rates to admirals can discuss ideas.” Speed is also an advantage. “Catching the ideas when they’re fresh and raw preserves their integrity,” Howell said.

The US Coastguard are well aware that capturing lessons is not enough for them to be a learning organisation. These lessons must also drive change.

“The Commandant’s Direction says we need to become an organization capable of continuous learning, so it’s important that the innovations and adaptations that made this response successful are institutionalized,” Howell said. Ideas shared through the Hurricane Lessons Learned challenge are immediately shared with the responsible program. Many will be considered as potential projects for next year’s Research, Development, Test and Evaluation Project Portfolio.

The portal has been very well received

“We’ve heard from pilots, inspectors, commanding officers, district command staffs, reservists, Auxiliary personnel – the entire gamut of responders,” Howell said. “It’s a very user-friendly way to collect information, and comes with the benefit of collaboration,” he said.

This is an approach other similar organisations can learn from.

View Original Source (nickmilton.com) Here.

What’s the difference between a lesson-learned database and a lesson management system?

In this blog post I want to contrast two software systems, the Lessons Database, and the Lessons Management System.

There are two types of Lessons Learned approaches, which you could differentiate as “Lessons for Information” and “Lessons for Action”.

These represent maturity levels 1 and 2 from my three-level categorisation, and can be described as follows.

“Lessons for Information” is where lessons are captured and put in reports, or in a database, in the hope that people will look for them, read them, and assimilate them.

“Lessons for Action” is where lessons are used to drive change and improvement. Lessons are captured, reviewed, validated, and action is taken to embed the lessons in process, procedure, standards and/or training.

“Lessons for Information” is supported by a Lessons Database, “Lessons for Action” by a Lessons Management System. Let’s contrast the two.

  • In a Lessons Database, the database is the final home of the lessons. In a Lessons Management System, the final home of lessons is considered to be the compiled knowledge of the organisation, which may be procedures, doctrine, guidance, best practices, wikis, etc.
  • In a Lessons Database, lessons reach their reader through search. In a Lessons Management System, lessons are pro-actively routed to those who need to see them and to take action.
  • In a Lessons Database, lessons accumulate over time (this was the problem with my first Lessons system in the 90s – it got clogged up over time with thousands of lessons, until people stopped looking). In a Lessons Management System, lessons are archived once they have been embedded into process and procedure, and the only live content in the system is the set of lessons currently under review.
  • In a Lesson Database there is only one type of lesson – the published lesson. In a Lesson Management system there are at least two types of lesson – the open lesson (where action has not yet been taken) and the closed lesson, which may then be archived. Some organisations recognize other types, such as the draft lesson (not yet validated) and the Parked lesson (where action cannot yet be taken, or where action is unclear, and where the lesson needs to be revisited in the future).
  • In a Lessons Database, there may be duplicate lessons, out of date lessons, or contradictory lessons. Through the use of a Lessons Management System, these have all been resolved during the incorporation into guidance.
  • In a Lessons Database, there are limited options for measuring the process. You can measure how many lessons are in the system, but that’s about it (unless you capture data on re-use). With a Lessons Management System, you can track lessons through to action: you can measure whether they are being embedded into process, see where they are being held up and by whom, and see how long the process is taking and where it needs to be speeded up.
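The lesson states listed above form a simple lifecycle that a Lessons Management System can enforce. Here is a minimal sketch in Python; the state names follow the post, but the class and the allowed transitions are my own illustration, not any particular product:

```python
# Illustrative lesson lifecycle: draft -> open -> (parked <->) closed -> archived
ALLOWED = {
    "draft":  {"open"},              # validated by the lessons team
    "open":   {"parked", "closed"},  # action deferred, or action taken
    "parked": {"open"},              # revisited when action becomes possible
    "closed": {"archived"},          # embedded in procedure, then archived
}

class Lesson:
    def __init__(self, title):
        self.title = title
        self.state = "draft"

    def move_to(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

lesson = Lesson("Pre-stage fuel before hurricane season")
lesson.move_to("open")      # validated
lesson.move_to("closed")    # embedded into procedure
lesson.move_to("archived")  # leaves the live system
```

The point of the transition table is that the only "live" content is whatever sits in the draft, open or parked states; everything else has been acted on and archived.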
Lessons management is the way to go. Lesson databases really do not work in the long term, and usually become lesson graveyards, and the home for Lessons Lost.

View Original Source (nickmilton.com) Here.

5 reasons why Enterprise Search will never be as good as Google

All the time we hear managers saying “we want a search engine as good as Google”. Here are 5 reasons why you can never even get close.

Image from wikimedia commons

Google is the yardstick for search, and managers seem to want internal enterprise search that works as well and as (apparently) intuitively as Google. But there are 5 good reasons why this will never happen (bearing in mind that I am by no means a search specialist).


1) Search engine optimisation – webpages want to be found

Do you have a website? If so, you will be as familiar as I am with the deluge of spam emails offering to optimise your website for Google search. SEO (Search Engine Optimisation) is big business, and the owners of webpages do a lot of work on Google’s behalf to ensure their pages are indexable, findable and optimised for search.

But who, in an organisation, optimises their documents and sites for internal search? Let me tell you who – Nobody; that’s who.  Unless you are very lucky, few if any people think about the issues of findability when they publish content.

Google is successful in finding sites because those sites want to be found. They are often very keen to be found, because they are trying to sell you something. The search results at the top of Google’s list are often the ones most desperate to be found. Many documents in your enterprise system do not want to be found, often for issues related to confidentiality as described below.

2) The web is interlinked HTML pages, whereas your content is usually isolated Word documents (if you’re lucky!)

Sometimes it’s not even Word documents – I know organisations that save their critical knowledge in PDF form!

The difference between interlinked web pages and isolated documents is critical. Google can crawl through the web of interlinked sites, can understand the context of a site partly through its links, and can identify authoritative or important sites based on the number of links that point to them. The search results at the top of the list are often the ones with the most backlinks. The components of a page are also obvious to Google – the title, the first-level headings, the metadata – and these too are used to understand what the page is about.

Your documents are not linked. Each stands alone. Each has to be searched and indexed separately. There are no backlinks. There is no visible structure to the document, other than to the human eye, and the search engine cannot tell a footnote from a level 1 heading.
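The way backlinks identify authoritative pages can be illustrated with a toy version of the PageRank idea: a page scores higher when more well-linked pages point to it. This sketch is purely illustrative; the pages and links are invented, and real web-scale ranking uses many more signals:

```python
# Toy PageRank: pages that attract more backlinks rank higher.
links = {  # page -> pages it links to (invented example)
    "home":   ["guide", "faq"],
    "guide":  ["faq"],
    "faq":    ["home"],
    "orphan": ["home"],  # note: nothing links *to* "orphan"
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # each page shares its rank equally among the pages it links to
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

ranks = pagerank(links)
# "home" and "faq" each collect two backlinks and end up ranked highest;
# "orphan", with no incoming links at all, ends up ranked lowest.
```

An isolated Word document is in exactly the position of "orphan" here: with no incoming links, there is no link signal at all for the engine to use.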

3) The hordes of search engine specialists employed by Google.

How many search engine specialists do you employ? None, right? Google employs tens of thousands. That’s one of the reasons their search works better than yours.

This is especially an issue if you are planning to use semantic search, or to optimise customer search of your knowledge base. In these cases you will need a search specialist to build and evolve the ontology, track and improve search accuracy, and define the synonyms and stop words. However, managers often neglect this aspect, and assume a search engine is a one-off purchase that will run itself.
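The kind of tuning a search specialist maintains can be seen in a toy example of query preprocessing: removing stop words and expanding synonyms. The vocabulary below is invented for illustration; in a real engine these live in configurable analyzer settings rather than hand-rolled code:

```python
# Toy query preprocessing: drop stop words, expand synonyms.
STOP_WORDS = {"the", "a", "an", "of", "for", "how", "to"}
SYNONYMS = {  # invented domain vocabulary a specialist would curate
    "laptop": {"notebook"},
    "fix": {"repair", "troubleshoot"},
}

def expand_query(query):
    terms = [t for t in query.lower().split() if t not in STOP_WORDS]
    expanded = set()
    for term in terms:
        expanded.add(term)
        expanded |= SYNONYMS.get(term, set())
    return expanded

print(expand_query("How to fix a laptop"))
# matches documents that say "repair", "troubleshoot" or "notebook",
# not just ones using the user's exact words
```

Somebody has to decide that "laptop" and "notebook" mean the same thing in your organisation; that is exactly the ongoing specialist work that gets neglected.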

4) Google doesn’t do “security levels”

Google assumes everything is available and visible to everyone. It doesn’t do passwords or access restrictions or security levels. It searches everything that is not on the Dark Web.

A lot of your documents are effectively on the dark web – they are in secure folders on Box, or Dropbox, or SharePoint. I consulted recently to an organisation that had 300 separate databases or document management systems. They had opened about 6 of these for indexing; the rest were effectively “dark” as far as search was concerned.

5) The web doesn’t do version control

Every webpage on the web is the only version. Rather than storing a webpage as version 3.5 and writing version 4.0, you just rewrite and publish the page. Every page on the web is the current version, and is constantly under development. Google only returns one version of the page – the current version.

You don’t treat documents in this way. Very often, unless your document management is very good, you will have multiple versions of the same document stored in different places. One of the bugbears of enterprise search is that it will often return all these versions in your search results.
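Exact duplicates, at least, are easy to flag. Here is a minimal sketch using content hashing; the file names and contents are invented, and note that this catches byte-identical copies only, not edited versions of the same document:

```python
import hashlib
from collections import defaultdict

# Invented example: the same report saved in three places, plus an edited draft.
documents = {
    "projects/report_final.docx":   b"Q3 results: revenue up 4%",
    "archive/report_final(1).docx": b"Q3 results: revenue up 4%",
    "shared/report_FINAL.docx":     b"Q3 results: revenue up 4%",
    "drafts/report_v2.docx":        b"Q3 results: revenue up 5% (draft)",
}

def find_duplicates(documents):
    # group file paths by a hash of their content
    by_hash = defaultdict(list)
    for path, content in documents.items():
        by_hash[hashlib.sha256(content).hexdigest()].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]

dupes = find_duplicates(documents)
# one group of three identical copies; the edited draft is not grouped with them
```

Near-duplicates (the edited draft above) are the harder problem, which is why the version clutter tends to survive and pollute search results.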

So the next time your managers ask “Why can’t we have search like Google” – 

you can reply – “Yes, we can, IF

  • You move all content out of documents onto wikis
  • You keep only one version of every document
  • You train all staff in search engine optimisation
  • You hire a team of search engine specialists, and
  • You make all documents open to everyone”.
Then see what they say!

Enterprise search can work, but it will never work like Google.


The risks when an algorithm takes your job

An interesting Forrester blog highlights some of the risks of process automation

NAO Robot
image from wikimedia commons

We live in a world where automation is beginning to impact knowledge work, in the same way that it impacted manual work in the last century.  On the one hand this is great news for organisations, as it can potentially revolutionize the productivity of the knowledge worker. On the other hand it brings risk.

One attractive opportunity is process automation, where a process that a human used to operate can become automated. The rules, heuristics and knowledge applied by the human can be extracted, using various knowledge management techniques, and turned into an algorithm which a computer or robot can use.

So a job like drafting a will, or cooking a meal, or monitoring a refinery, can be automated. Human knowledge is converted into algorithms, and the know-how that a human used to employ is passed to a machine which will reproduce the logic faithfully, tirelessly, without error, and for a fraction of the lifetime cost.

The problem of course is that know-how is not enough, and we also need know-why.  The know-how is great in a predictable environment, but the know-why is needed once you move into uncharted territory.

That’s one of the messages given in this Forrester blog entitled “Ghost In The Machine? No, It’s A Robot, And It Is Here To Help“. The author, Daniel Morneau, is an advisor on the Technology council, and writes in the blog about robotic process automation, the benefits it will bring, and the governance it will need.

He also quotes one Industry leader who identifies a risk he had not anticipated:

“The hard lesson I learned is that once that knowledge is built into the bot and the employee goes out the door, it’s gone forever,” he said…“We captured the process in the code, right? I mean, the bot knows how to follow the process; we’ve just lost the business logic behind it”

When Morneau asked him what he would do differently, having learned this lesson, he replied

“I’d document the business logic where I can (and) I’d find a way to keep the best employees whose roles are being replaced so their deep understanding of the business logic can be available as we continue to support our businesses. I mean, the business function that that system is used for is not going away, and having employees who have a deep understanding of our business is the hardest thing to hire for”.

So that’s an interesting conclusion about the need for human knowledge of business logic.

We may increasingly outsource some of the know-how to the robots, but we still need to retain humans with the know-why. 


What you need to know about social tools and KM

Here is a very interesting article from HBR entitled “What managers need to know about social tools” – thanks to Anshuman Rath for bringing it to my attention.  It’s well worth a complete read.

Image by Codynguyen1116
on wikimedia commons

The article, by Paul Leonardi and Tsedal Neeley in the Nov/Dec issue of HBR last year, looks at the way companies have often introduced social tools – often because “Other companies are, so we should too” or “That’s what you have to do if you want to attract young talent” – and describes some of the surprising outcomes.

Here are some of the points the article makes, with excerpts in quotes:

  • Use of these tools makes it easier to find knowledge, by making it easier to find knowledgeable people.

“The employees who had used the tool became 31% more likely to find coworkers with expertise relevant to meeting job goals. Those employees also became 88% more likely to accurately identify who could put them in contact with the right experts”

  • Millennials are not keen adopters of enterprise social tools.

“Millennials have a difficult time with the notion that “social” tools can be used for “work” purposes (and are) wary of conflating those two worlds; they want to be viewed and treated as grown-ups now. “Friending” the boss is reminiscent of “friending” a parent back in high school—it’s unsettling. And the word “social” signals “informal” and “personal.” As a 23-year-old marketing analyst at a large telecommunications company told us, “You’re on there to connect with your friends. It’s weird to think that your manager would want you to connect with coworkers or that they’d want to connect with you on social media [at work]. I don’t like that.”

  • How people present themselves on internal networks is important to developing trust.

“How coworkers responded to people’s queries or joked around suggested how accessible they were; it helped colleagues gauge what we call “passable trust” (whether somebody is trustworthy enough to share information with). That’s important, because asking people to help solve a problem is an implicit admission that you can’t do it alone”.

  • People learn by lurking (as well as by asking).

“Employees gather direct knowledge when they observe others’ communications about solving problems. Take Reagan, an IT technician at a large atmospheric research lab. She happened to see on her department’s social site that a colleague, Jamie, had sent a message to another technician, Brett, about how to fix a semantic key encryption issue. Reagan said, “I’m so happy I saw that message. Jamie explained it so well that I was able to learn how to do it.”

  • The way social tools add value to the organisation and to the individual is to facilitate knowledge seeking, knowledge awareness, knowledge sharing and problem solving. The authors give many examples, mostly of problem-solving, and of finding either knowledge or knowledgeable people. One example saved a million dollars, and I will add that to my collection of quantified value stories tomorrow.

  • The value comes from practice communities. The authors do not make this point explicitly, so perhaps I am suffering from confirmation bias here, but they talk about the “spread of knowledge” that they observed as being within various groups covering practice areas such as marketing, sales, and legal.

The authors finish with a section on how to introduce the tools, namely by making the purpose clear (and the purpose may be social, or it may be related to knowledge seeking and sharing), driving awareness of the tools, defining the rules of conduct, and leading by example.

The article reminds us again that social tools can add huge value to an organisation, but need careful attention and application. Just because Facebook and Twitter are busy in the non-work world, does not mean similar tools operate the same way at work.


A model for KM technology selection

An example from Schlumberger shows us how selecting KM technology should be done.

image from wikimedia commons

At the KMUK conference a few years ago, Alan Boulter introduced us to the Schlumberger approach to selecting Knowledge Management technology. This is a very straightforward contrast to the common “gadget-store pick and mix” approach, and worth repeating.

Firstly, Schlumberger defined exactly what the business needed from their Knowledge Management technology. They divided these needs into 4 groups:

  • Connecting people to solutions
  • Connecting people to information
  • Connecting people to communities of practice
  • Connecting people to people
Secondly, they bought technology which does each required job, and only that job, and does it well.  If no technology was available that did the job well enough, they built it in-house.
Thirdly, they stuck with that technology over time, provided it still did the job well. People were familiar with it, so they stuck with it.
Finally (and this seems so rare nowadays, that I want to emphasise it), if they bought new technology which had optional functionality that duplicated an existing tool, they disabled that functionality. As an example, they brought in SharePoint as an ECM tool, and SharePoint comes with the “MySite” functionality, which can be used to build a people-finder system. Schlumberger had a people-finder system already, and to introduce a second one would be crazy (if you have two systems, how do you know which one to look in?). So they disabled MySite.
Schlumberger have ended up with a suite of ten tools, each perfect for the job, and with no duplicates. Staff know how to find what they need, and which tool to use. Schlumberger are long-term winners of the MAKE awards, and deliver hundreds of millions of dollars annually through KM.  Their technology selection forms part of their success.


Do you agree with these two KM assumptions?

A recent paper from Gartner seems to contain two basic assumptions about knowledge management which I think are worth addressing. See what you think.

The Gartner paper is entitled Automate Knowledge Management With Data Science to Enable the Learning Organization, and contains the following blocks of text:

“The capture of expertise and experiential knowledge diverts experts and skilled professionals away from productive work. Project managers, software engineers, product developers, hiring managers or customer support agents may be asked to document their work; to participate in peer-to-peer communities to capture and share expertise; to work in more open and transparent ways to encourage serendipitous connections and information flows across an organization; or to shadow peers and learn from observation. But for every minute they do this, it is a minute taken away from doing what they are supposed to be doing”.

“Although still relatively immature, and requiring much manual fine-tuning with domain as well as technical expertise, the “body of knowledge” that powers smart machine movers, sages and doers is extracted automatically by analyzing, classifying, labelling and correlating volumes of structured and unstructured data, including free-form text”.

Now these two chunks of text seem to me to be based on two assumptions. Let’s look at these one by one.

The first assumption is that “knowledge management is not real work”. Note how they say “diverts away from productive work – taken away from what they are supposed to be doing”.  So they do not see KM as productive, nor do they see it as something a knowledge worker should be doing.

But KM is productive – it may not produce a tangible object which can be sold to a customer, but it produces knowledge which can be used to improve processes or innovate new products in the future, and so adds value to the business. In some cases, such as R&D, knowledge is the only value. In Pharma, for example, the success rate of R&D projects in delivering a successful product is only 2% or 3%, and in every other case knowledge is the only product. And as the development manager at Toyota said (and I paraphrase); “In Toyota NPD our job is not to produce cars but to produce knowledge, and from that knowledge great cars will emerge”. Producing knowledge is an investment in the future; producing products gives value in the immediate term while producing knowledge gives value in the longer term. KM is productive work.

And maybe KM is something that everyone should be doing, or at least contributing to. Who else should contribute, if not the knowledge workers? You cannot say that the job of the engineer is only engineering, the job of the salesperson is only to sell, or the job of the IT coder is only to write code. All of these people have other things to consider – they need to bear finances in mind, and safety, and quality, to name just three. They can’t say “I don’t want to be involved in quality or in safety – just let me do my real job”. The real job needs to be done within a number of contexts, and knowledge management is one of them.  Imagine an airline pilot saying “I am not going to take part in this lessons meeting about my recent near miss, because this is a day taken away from what I am supposed to be doing”. That airline pilot would not keep their job for very long, because the aviation industry knows very well the value of knowledge and of knowledge work, and knows that KM is something pilots need to contribute to.

But if there is pressure to balance the demands on the knowledge worker, between their short-term delivery of product and their long-term delivery of knowledge, can smart machines take up the slack?  The answer depends very much on how much knowledge you think lives in the “volumes of structured and unstructured data, including free-form text”.

Personally I don’t think there is much knowledge in there at all.

The vast majority of structured and unstructured data and documents are work products – the outcome of knowledge work, but not containing the knowledge itself. For example:

  • A CAD drawing may show you a design for a product, but does not help you understand the process of design, nor how best to design the next product;
  • A bid document tells you how a bid was constructed, but does not contain knowledge of why the bid was won or how to improve bid success;
  • A project plan tells you how a project was planned, but contains nothing about project best practices.

Knowledge is not created through work, it is created through reflection on work, and it is captured not in work products, but in knowledge products such as lessons learned, best practices, guidelines and checklists. If those knowledge products are not created by the knowledge workers, because “that would be a minute taken away from doing what they are supposed to be doing” then the machines will have no knowledge to find.

So I don’t think either assumption is valid. I think KM should be part of the job, and part of the expectation for any knowledge worker, and I do not think the machines will find knowledge where no knowledge exists. I think the machines will help greatly, and will enhance the work of the knowledge workers, but not as a replacement for KM activities. Gartner seem to acknowledge this when they say “This is not in order to replace conventional KM techniques but to augment them where automated techniques may be more effective or economically viable”.
Let’s look at where automation helps, let’s embrace that, and let’s not assume this means people can stop doing KM and leave it all to the machines.


Will AI replace KM?

My answer is No, for the following reasons.

image from wikipedia

I have been working in Knowledge Management for a long time now, and the history of KM includes examples of one technology after another claiming that it will replace KM or make it obsolete.

Yet KM is still here.

  • In the 1990s, it was Expert Systems that would make KM obsolete
  • Then in the late 90s, it was Groupware that would replace KM
  • Then Enterprise Search would be the saviour of KM
  • In the mid 2000s, social networking became the new trend that would supersede KM (“Social is the new KM”)
  • And of course SharePoint – “all you need for KM”
  • Then came Enterprise 2.0, and Enterprise Social. They would become the new KM
  • In 2015 I met a purveyor of Semantic Search wearing a T-shirt reading “Jon Snow may not be dead, but knowledge management is”. Made obsolete by his technology, obviously.
  • And now Big Data and AI and Chatbots and IBM Watson are set to “make KM obsolete”.
Yet KM is still here.
All of these technologies have found their place within the KM toolbox over the years, and they have certainly made certain elements of KM work much faster and much more easily, while making little difference to other elements. 
Yet KM as a discipline is still needed.
Enterprise search, for example, makes it far easier to find documented knowledge, but you still need KM to ensure the knowledge is documented in the first place. Enterprise Social Media makes it far easier to set up conversations within a community of practice, but you still need the community of practice in the first place, with its roles, processes, culture, and stores of shared knowledge. Semantic search makes it far easier to retrieve content in context, but content is only half of the content/conversation duo, and retrieval is only half of the supply/demand duo, and technology is only one of the four legs on the KM table, so there is far more to KM than just search.
All of these technologies make KM faster and easier, but none of them replace KM.
Even AI will not replace KM.

AI is a game-changer, for sure. It makes it possible to make new and rapid correlations from within massive datasets, but someone has to create the datasets, and clean them, and then train the AI, and then interpret the correlations and draw knowledge from what they observe (because we all know correlation is not causation). As I posted here, in the context of Big Medical Data at the European Bioinformatics Institute,

Big Data does not become Knowledge because of its size – people have to add Knowledge to the data to make sense of it. The huge data resources of the EBI have to be combined with the specialist knowledge of the staff, and the application of the knowledge is the sense-making step

Also AI and Big Data still only work in the realm of documents, information and data, and in the processes of analysing and retrieving; they don’t help with the transfer and creation of knowledge through conversation, or with tacit knowledge. So AI will be a massively powerful tool in the KM toolbox, but it won’t replace the toolbox. We will need the roles and the processes and the governance to interplay with the technology. KM shifts up a gear, but still will be needed.

So call me an old grouch, but to date none of the new technologies touted as “the killer of KM” have made KM obsolete, and history suggests that neither will AI. And neither will the new technology that comes along in 5 years’ time.  They will simplify, disrupt, and accelerate KM, but not replace it.

To the extent that people need to use knowledge to make decisions and judgments, then Knowledge Management will be augmented by technology, but not replaced.


Knowledge management and technology – but what sort of technology?

When we think of KM and technology, we usually think of IT. But is this the wrong sort of technology to concentrate on?

image from wikimedia commons

Knowledge Management as a discipline was born in the 1980s as a combination of Organisational Learning and the technological revolution sweeping through organisations. In BP, where I worked at the time, Knowledge Management went hand in hand with, and was enabled by, the development of a common operating system, personal desktop computers for all, email and video conferencing. In fact, the KM program was a direct successor to the Video Telecommunications project.

Technology has always been part of KM, right from the beginning, and is still one of the four legs on the KM table.  So why do we so often get the technology part wrong, and end up going down database rat holes, creating mega-systems people don’t like to use?
I think it is partly because we focus on IT, and not on ICT. 
There is only one letter different between IT and ICT, but it’s a crucial letter. The additional C stands for Communication.  As Wikipedia says:

Information and communication technology (ICT) is another/extensional term for information technology which stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals), computers as well as necessary enterprise software, middleware, storage, and audio-visual systems, which enable users to access, store, transmit, and manipulate information.

It was not so much the availability of information processing power or the ability to store information that sparked the birth of KM, so much as the networking of computer systems and the ability to communicate far more widely. Suddenly, through ICT, we were connected with so many more people, so many more knowledgeable people. 

I clearly remember one day in the mid 90s, working in BP Norway, when someone came to me with a seismic plot from the North Sea with some very strange features on it. Neither of us could work out what these were, but we realised that now we had networked computers, linked to geologists all round the world, we did not have to solve this problem ourselves. We could send out an email to all our colleagues, and tap into a much broader knowledge base.

It was primarily the communication technology that enabled KM, and it’s worth remembering this as we look at our technology tools.

Let’s focus not so much on IT and more on ICT, because that C makes all the difference.


Why you can’t have AI without KM

The rise of AI in the form of intelligent agents requires the rise of KM to support it.

Image from wikimedia commons

This is the conclusion of Gartner research, quoted in this Computer Weekly post entitled “IT staff will need to retrain when automation deskills their jobs”.

According to the post – 

Before automation and intelligent agents can really take off in the enterprise, IT operations teams will need to build knowledge management systems. 

Gartner said knowledge management is essential for a chatbot or virtual support agent (VSA) to provide answers to business consumers, but the response can only repeat scripted answers when based on existing data from a static knowledge base. It warned that intelligent agents without access to this rich source of knowledge cannot provide intelligent responses. 

As such, Gartner suggested that infrastructure and operations managers will need to establish or improve knowledge management initiatives. Gartner predicted that, by 2020, 99% of AI initiatives in IT service management will fail because of the lack of an established knowledge management foundation.

That’s quite a prediction, but really it makes sense. AI in the form of intelligent agents like IBM’s Watson is really a delivery vehicle for knowledge, allowing contextual answers to be provided quickly and effectively, and it requires a robust source of knowledge in order to work. Without KM, AI will fail.
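Gartner's point can be seen in a toy retrieval bot: it can only answer from whatever knowledge base it has been given. The FAQ entries below are invented for illustration; real virtual support agents use far richer matching, but the dependency on a maintained knowledge base is the same:

```python
# Toy support bot: its answers come only from its knowledge base.
KNOWLEDGE_BASE = {  # invented FAQ entries maintained by a KM process
    "reset password": "Use the self-service portal at /reset.",
    "vpn setup": "Install the VPN client, then sign in with your staff ID.",
}

def answer(question):
    q = question.lower()
    for topic, response in KNOWLEDGE_BASE.items():
        # match if every word of the topic appears in the question
        if all(word in q for word in topic.split()):
            return response
    return "Sorry, I don't know. Routing you to a human agent."

answer("How do I reset my password?")  # covered by the knowledge base
answer("Why is the printer jammed?")   # no KM content, so no useful answer
```

However clever the matching becomes, a question outside the knowledge base can only be escalated, which is exactly why the KM initiative has to come before the chatbot.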

View Original Source (nickmilton.com) Here.