
Technique 5.9: Knowledge Domain Analysis

As an organization matures in its use of KCS, an important function evolves: Knowledge Domain Analysis. This critical function ensures that issues are resolved effectively and efficiently. The knowledge workers performing this function, Knowledge Domain Experts (KDEs), must have both deep subject matter expertise and a profound understanding of KCS. KDEs look after the health of a collection or domain of knowledge, usually a subset of the knowledge base that aligns with their expertise. To help maximize the benefits of KCS, Knowledge Domain Analysis focuses the knowledge base and pays attention to the quality of the articles, the effectiveness of the workflow that produces and improves the articles, and, perhaps most importantly, the use of the articles. The KDE seeks to optimize the creation, improvement, and use of articles, and to identify patterns and trends of reuse that point to potential product, process, or policy changes that could eliminate the root cause of the most frequent issues. Based on the analysis, the KDEs work with Coaches and the KCS Council to improve the content standard and the KCS workflow. Success of the Knowledge Domain Analysis function is measured through improvements in findability, self-service use and success rates, and the reduction in incident volume that results from corrective actions taken to eliminate the cause of pervasive issues.

 

Most organizations have multiple knowledge domains. Knowledge domains are virtual collections of KCS articles that are related to a common topic, function, process, technology, or product family. Knowledge domains are not precise or absolute in their boundaries; they often overlap. A knowledge domain is the collection of content that makes sense to include for pattern recognition and cluster analysis. Therefore, the purpose or intent of the analysis defines the collection of articles that are relevant.
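In practice, a domain for a given analysis can often be expressed as a simple filter over article metadata. The sketch below is a minimal illustration of that idea, assuming hypothetical fields such as product_family and tags; the actual metadata and selection criteria depend on your knowledge management tool and the purpose of the analysis.

```python
# A minimal sketch of pulling the virtual collection of articles for one
# knowledge domain. Field names (product_family, tags) are hypothetical;
# substitute the metadata your knowledge management tool actually exposes.

def select_domain(articles, product_families, topics=None):
    """Return the articles relevant to one domain analysis."""
    domain = []
    for article in articles:
        in_family = article.get("product_family") in product_families
        in_topic = topics is None or set(article.get("tags", [])) & set(topics)
        if in_family and in_topic:
            domain.append(article)
    return domain

# Example: a "connectivity" domain scoped to one product family and two topics
articles = [
    {"id": "KB0001", "product_family": "Router X", "tags": ["vpn"]},
    {"id": "KB0002", "product_family": "Switch Y", "tags": ["firmware"]},
]
print([a["id"] for a in select_domain(articles, {"Router X"}, {"vpn", "dns"})])
# ['KB0001']
```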

 

For each domain, one or more subject matter experts emerge as the Knowledge Domain Experts (KDEs) who do the Knowledge Domain Analysis. They have enthusiasm for and curiosity about the topic or function. They are typically subject matter experts who continue to have other functional responsibilities: the KDE is not a full-time role. KDEs are the people who are naturally attracted to using data analytics to figure out what can be learned from a collection of knowledge. They must be capable of establishing a relationship with the business functions that need to take corrective actions. Depending on the domain, this may be the owners of business policies or processes and/or the owners of the product or service functionality and road maps. The goal is to provide the functional owner with quantifiable, actionable information based on the users' experience. Because this requires cross-functional collaboration, Knowledge Domain Analysis is most effective with cross-organizational participation.

 

Knowledge Domain Analysis outputs include the identification of:

  • Improvements to the content standard and process integration (workflow)

  • Findability issues: knowledge exists but is not being found - search performance and optimization

  • Content gaps: knowledge people are looking for that does not exist

  • Content overlaps: consolidating duplicate articles, identifying the best or preferred resolution among many proposed resolutions  

  • Improvements in how we leverage known issues, eliminating re-work, improving access and findability

  • Improvements in how we solve new issues, suggestions for problem solving and collaboration to solve new issues quickly

  • Pervasive issues: facilitating root cause analysis and working with business owners on high impact improvements

  • Value of the knowledge base, such as article reuse rates, self-service success, and contribution to improving time to resolve (a minimal reuse-analysis sketch follows this list)

  • Archiving strategy for the knowledge base
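Several of these outputs, such as reuse rates and the archiving strategy, lend themselves to simple reporting. The sketch below is one illustration of reuse analysis: it ranks articles by reuse and flags stale, low-use articles for archiving review. The fields reuse_count and last_reused are assumptions, not a prescribed KCS schema.

```python
from datetime import date

# A minimal sketch of reuse analysis for a knowledge domain. The fields
# reuse_count and last_reused are hypothetical; use whatever usage data
# your knowledge base reports. Archiving candidates are flagged for
# review by the KDE, not archived automatically.

def reuse_report(articles, stale_after_days=365, today=None):
    today = today or date.today()
    by_reuse = sorted(articles, key=lambda a: a["reuse_count"], reverse=True)
    archive_candidates = [
        a for a in articles
        if (today - a["last_reused"]).days > stale_after_days
    ]
    return by_reuse, archive_candidates

articles = [
    {"id": "KB0007", "reuse_count": 120, "last_reused": date(2016, 4, 1)},
    {"id": "KB0031", "reuse_count": 1, "last_reused": date(2014, 2, 10)},
]
ranked, to_review = reuse_report(articles, today=date(2016, 4, 17))
print(ranked[0]["id"], [a["id"] for a in to_review])  # KB0007 ['KB0031']
```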

Evolve Loop Articles

Earlier we discussed the complementary elements of a double loop process: the Solve Loop and the Evolve Loop. Each loop generates knowledge. To recap, Solve Loop articles are created and improved by knowledge workers while they are working on issues. It is very difficult to assess the potential future value of the knowledge created in the moment of interaction. If a question is worth answering or a problem is worth solving, it is worth capturing in the knowledge base. Other people's use of that knowledge will define its value. If it is reused, it will contribute to the patterns or clusters that emerge in the Knowledge Domain Analysis.

 

Solve Loop articles are developed just-in-time based on demand. Evolve Loop articles are created as a result of the Knowledge Domain Analysis process based on the patterns and trends that emerge over time. Evolve Loop articles are high-value content because they are derived from the patterns of use, the clustering of KCS articles around a common theme or issue, and critical processes and procedures. While high-value, Evolve Loop articles generally represent a very small percentage of the total knowledge base.

 

The usage and pattern analysis performed in the Evolve Loop also identifies product quality and serviceability improvements. By analyzing root causes and aggregating symptom and usage frequency data, the KDE can build a compelling, data-driven case for product or documentation changes based on the actual customer experience.

Some examples of Evolve Loop content include:

  • Procedural or diagnostic articles or step-by-step processes (how to do a specific thing)

  • Resolution paths—a collection of linked procedural articles that defines a complex process (procedural or diagnostic)—created by Knowledge Domain Experts to address generic or high level symptoms, especially ones that are addressed in an unwieldy number of Solve Loop KCS articles

  • High-impact issues: ones that are pervasive or cause outages, and articles about new or strategic processes, policies, products, or services

  • KCS articles created to fill knowledge gaps: articles on topics or issues users are looking for that do not yet exist, typically identified through self-service and search analytics

New vs. Known Analysis

The new vs. known analysis is another example of the continuous improvement processes in the Evolve Loop, and of the kind of analysis done as part of Knowledge Domain Analysis. It can help assess the health and effectiveness of an organization's KCS practices.

 

The goal of KCS is to capture and reuse the knowledge gained through interactions – solve it once, use it often. 

 

Ideally, we would like to use our knowledge to solve new requests, not known issues. As an organization adopts KCS and integrates use of the knowledge base into the interaction process, we see the internal reuse of knowledge increase and we can establish a baseline for the new vs. known ratio. As we start to deliver knowledge through a self-service model, external reuse increases and internal reuse should decrease; we are solving known issues through self-service. Understanding the ratio of new vs. known requests becomes an indicator of the health of the knowledge flow and the effectiveness of the self-service model.

Objective

Identify opportunities to reduce the resources spent on known issues and accelerate the resolution of new issues.

  • Reduce the resources spent on known issues. This is a function of improving customer use and success with the self-service model.
  • Improve the speed and accuracy in solving new issues. This is a function of getting the right resources working on the issue as quickly as possible.

By looking at closed incidents from the perspective of new vs. known, and analyzing the incidents in each category, we can identify:

  • The percentage of new vs. known issues being worked on in the support center. This creates a baseline against which we can measure the impact of future improvements.
  • The characteristics of known issues, to assess why they were not solved through self-service.
  • The characteristics of new issues, to identify opportunities to improve the speed and accuracy of the problem-solving process.

Scope

The scope of the analysis should include the following:

  • Support centers for internal and/or external customer support
  • First point of contact (level 1), first point of escalation (level 2), second point of escalation (level 3)
  • Hardware, software, networking, services

The Approach

The new vs. known study should be done periodically over the course of a year, probably not more than once a quarter.

 

The study is done by product area or product family; it is a sampling technique. It is recommended that you do a pilot with two or three product areas to get a feel for the process. For the pilot, it is ideal to have the group of SMEs together in a conference room for a day. This allows you to discuss and resolve points of confusion quickly. Follow-on analysis can be coordinated via conference calls.

Four Steps

Step 1: Scope Definition
  • Identify the product areas
Step 2: Data Collection
  • Incidents closed over the last 30-60 days in the product family being examined.
  • Build a report that lists all incidents closed. This report should include incidents with and without articles linked. If possible, this report should exclude "no trouble found" or "cancelled by customer" types of incidents. Ideally the report has the following fields (see the new vs. known write-up and spreadsheet on the KCS Academy Resources page):
    • Incident ID (links to the incident)
    • Incident title or summary
    • Incident close code
    • Article ID of linked article/document if there is one (links to the article)
    • Article title
    • Article resolution summary (if available)
    • (Links to the incident and article mean that the team members doing the analysis can click the ID to see the incident or article. If this is not possible, then a cut and paste of incident IDs and article IDs can work.)
    • Fields to capture analysis
Step 3: Incident Analysis
  • Identify 2-3 Subject Matter Experts (SMEs) for each product family you are focusing on
  • Develop alignment and understanding with the SMEs on the purpose and intent of the analysis
  • SMEs will need access to both the incident management system and the knowledge base to review incidents and articles online.
  • Work through a few examples together to get a feel for the process and a common understanding of the analysis categories (this is critical and always requires some discussion and examples)
  • SMEs review incidents and articles in their product area and categorize them using the new vs. known spreadsheet (4-6 hours)
  • We want a random sampling of closed incidents (with and without articles linked). Usually a sample size of 10-20% is sufficient. It is amazing how quickly the trends and patterns emerge. A larger sample size is only interesting if the trends and patterns have not stabilized. (A minimal sketch of the sampling and ratio calculation follows this list.)
Step 4: Identify and Discuss Opportunities
  • What is the percentage of new vs. known being handled?
  • What things can we do to remove known issues from the incoming incident workload?
  • Analyze and sort the data in the spreadsheet. Following are some common findings:
    • Knowledge capture: Is the collective knowledge of the organization being captured and reused? Is there an opportunity/need to increase the capture rate?
    • Link rate: Is the KB being used and are articles being linked to incidents? Do the numbers align with/validate what is being reported?
    • Publish rate: How many articles are being used internally that are not available to customers? Is there an opportunity to publish more or publish faster?
    • Customer use of self-service: how often do customers use self-service before they open an incident? Can we improve the rate at which customers use self-service?
    • Findability: Are there issues with findability of articles that are available to the customer? Did they use self-service but were unsuccessful? Test: using the customer perspective or incident information to search, can you find the article externally?
    • Navigation: If the self-service model involves a web support portal, is the navigation of the site aligned with the customer intent? Are there choices for customers on how they access content: index, FAQs, search? Is there an easy way to move from self-service to assisted support: click to open an incident, click to chat?
    • Diagnostics: how often are diagnostics required to identify the issue as known? Is there an opportunity to improve the information the product provides to help customers be more successful with problem identification/resolution? Or, to help the support center resolve issues quickly?
  • Improvements to the problem-solving process used for new issues.  Analyze and sort the data in the spreadsheet to see what it took to fix:
    • Escalation?
    • Diagnostics?
    • Recreation?
  • Feedback to development about product improvements that would have a significant impact on the customer experience, the incident volume or the problem isolation and solving process.
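To make Steps 2 through 4 a bit more concrete, here is a minimal sketch of the sampling and the new vs. known ratio calculation. It assumes the Step 2 report has been loaded as a list of records with hypothetical fields (relevant, preexisting_article_linked); the SME categorization itself remains a manual judgment.

```python
import random

# A minimal sketch of the new vs. known calculation. The field names and
# the default 15% sample size are assumptions; the real report comes from
# the incident management system (Step 2) and SME review (Step 3).

def sample_incidents(incidents, fraction=0.15, seed=42):
    """Random sample of closed incidents, with and without articles linked."""
    random.seed(seed)
    size = max(1, int(len(incidents) * fraction))
    return random.sample(incidents, size)

def new_vs_known(sample):
    """Percent known = incidents resolved with a pre-existing article."""
    relevant = [i for i in sample if i.get("relevant", True)]
    known = sum(1 for i in relevant if i.get("preexisting_article_linked"))
    pct_known = 100.0 * known / len(relevant) if relevant else 0.0
    return pct_known, 100.0 - pct_known

incidents = [
    {"id": "INC100", "preexisting_article_linked": True},
    {"id": "INC101", "preexisting_article_linked": False},
    {"id": "INC102", "preexisting_article_linked": True},
    {"id": "INC103", "relevant": False},  # e.g. cancelled by customer
]
pct_known, pct_new = new_vs_known(sample_incidents(incidents, fraction=1.0))
print(f"known: {pct_known:.0f}%  new: {pct_new:.0f}%")  # known: 67%  new: 33%
```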

Key Concepts and Definitions

  • What constitutes "known"?
    • For the purposes of this study, "known" means captured and findable
    • Incident closed with existing content (linked to a pre-existing article)
    • In some environments it may be interesting to identify "known but not captured." This would be helpful if there is a lot of "tribal knowledge" (things that are known by all) that is not in the knowledge base. (Note: if this condition exists it is an indicator that knowledge workers are not really doing KCS. If the question is being asked, it should be in the KB.)
  • What constitutes a legitimate link?
    • In its simplest form, a link is a KCS knowledge base article that resolves the question or problem raised by the customer.
    • As search engines have become more sophisticated, and documentation is indexed and linkable at the word or sentence level, some organizations are linking a sentence or paragraph that resolves the issue to the incident as the resolution. 
    • Expanded criteria for a "link": a resolution that is specific to the issue, findable, linkable, and resides in a maintained repository

Guidelines and Definitions for Assessing Incidents

(Columns in the sample spreadsheet on the KCS Academy Resources page):

Primary fields (relevant to most organizations and important to the analysis):

Relevant incident? - no or blank

  • Is this incident relevant to the new vs. known study?
  • This is a way for people to flag incidents that should not be included in the study data. For example: the incident is written in a foreign language (can't be read), was closed by the customer without resolution, was a duplicate, or was administrative.

Incident has an article linked – yes or no?

  • Yes: an article is linked to the incident (doesn't matter if it is correct or not)
  • No: nothing is linked to the incident

Pre-existing article or document linked to incident (known) - yes or no?

  • The article linked to the incident existed before the incident open date  (the article was not created as a result of this incident)

Known but not captured (optional) – yes or blank

  • Tribal knowledge (things that are known by all) but are not in the knowledge base.  Capture the obvious ones; it is hard to know what is known but not captured. Don't spend a lot of time trying to figure this out.

Correct article or document linked to incident – yes or no?

  • Yes: the article is relevant to the incident. Does the resolution in the article solve the issue documented in the incident? Diagnostic articles may be linked, but enter "yes" only if a linked article includes the resolution.
  • Linking to a "formal document" (like a diagnostic guide or installation guide) is fine so long as the knowledge worker did not add any value to the answer and the link points to the specific sentence or paragraph that provides the resolution
  • No: an article is linked but it is not specific or relevant to the incident
  • Blank: no article linked to this incident

No article linked but one existed – yes or blank

  • An article was in the knowledge base when this incident was resolved/closed

Article linked is “internal use only”– yes or blank

  • Yes: the article will never be visible to customers. It is a security risk or technically too complex for customer use; it is visible only to internal knowledge workers

Correct article was visible to customer – yes, no, or blank

  • Yes: resolution to the issue documented is in an article that is visible to customers
  • No: article exists but was not published to the web. Article is still in draft or approved state and has not made it through the life cycle to be visible to customers yet
  • Blank: no article exists

External article or document – yes or blank

  • Yes: an article for this issue is available and visible to customers (it may or may not be linked to the incident)
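To illustrate how these primary fields might be derived for one row of the analysis spreadsheet, here is a minimal sketch. The incident and article records and their keys are hypothetical, and judgments such as whether the correct article is linked still come from the SME review.

```python
# A minimal sketch of deriving the primary fields for one incident row.
# The incident/article dictionaries and their keys are hypothetical, and
# "correct article linked" still requires an SME's judgment.

def primary_fields(incident, article=None, sme_says_correct=None):
    """Build one row of the new vs. known spreadsheet."""
    return {
        "relevant_incident": not incident.get("administrative", False),
        "article_linked": article is not None,
        "preexisting_article": bool(article and article["created"] < incident["opened"]),
        "correct_article_linked": sme_says_correct if article else None,
        "internal_only": bool(article and article.get("internal_only", False)),
        "visible_to_customer": bool(article and article.get("published_external", False)),
    }

incident = {"id": "INC200", "opened": "2016-03-01"}
article = {"id": "KB0042", "created": "2016-01-15", "published_external": True}
print(primary_fields(incident, article, sme_says_correct=True))
```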

 

Secondary fields (may not be relevant to all organizations and not critical to the objectives of the analysis):

Diagnostics run

  • Include any diagnostics: general system diagnostic tools or product-specific diagnostics that had to be run to collect additional information. Do not include the use of system logs or data the system normally captures.

Required problem recreation

  • Support recreated the problem in a lab

Required problem recreation by the customer

Required collaboration with others

Escalation required

Multi-vendor (MV) information/documentation required

Multi-vendor (MV) contact required

Hardware, field dispatch required

Hardware, parts ordered

Issue type:

  • How-to or usability questions
  • Installation
  • Configuration
  • Defect

What it took to fix:

  • Time to resolve (work minutes, if available)
  • An escalation (L1 to L2, L2 to L3)
  • Collaboration (conversation, IM, email, other)
  • Research
  • Recreate the issue
  • Ran diagnostics

 

Identifying and Plugging Content Gaps

Another type of Evolve Loop content is articles that fill content gaps in the self-service model. Use of self-service introduces some interesting dynamics:

  • Requestors will use a good web site to resolve issues they would not have called about. The demand for help is far greater than the number of requests that come into assisted support (the support center or service desk).

  • When requestors use self-service, there are issues they will not be able to solve. However, they will not always take the time to pursue an answer through the assisted channel. 

  • Unsolved issues represent gaps in the knowledge base (an article does not exist) or findability issues (an article exists but the requestor could not find it) 

     

Part of the Knowledge Domain Analysis is to identify content gaps on the web through web analytics that capture search strings. Whenever possible, we want to create articles that resolve requestor issues that were pursued on the web and not resolved. We can also refine existing articles based on how the requestor was searching for the answer; this improves findability.
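As one way to operationalize this, the sketch below aggregates self-service search strings that returned no results or did not end in success, surfacing candidate content gaps. The log fields (query, result_count, success) are assumptions; actual web and search analytics exports vary.

```python
from collections import Counter

# A minimal sketch of mining self-service search logs for content gaps.
# The log fields (query, result_count, success) are assumptions; adapt
# them to the search analytics your self-service platform provides.

def content_gap_candidates(search_events, top_n=10):
    """Most frequent queries that found nothing or did not lead to success."""
    misses = Counter()
    for event in search_events:
        if event["result_count"] == 0 or not event.get("success", False):
            misses[event["query"].strip().lower()] += 1
    return misses.most_common(top_n)

events = [
    {"query": "VPN error 809", "result_count": 0},
    {"query": "vpn error 809", "result_count": 0},
    {"query": "reset password", "result_count": 5, "success": True},
]
print(content_gap_candidates(events))  # [('vpn error 809', 2)]
```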

 

The Evolve Loop content processes are critical for continuous learning, innovation, and improvement. They leverage the Solve Loop content, create incremental value for the organization, and help elevate the organization's awareness of and sensitivity to the requestor or customer experience.

Comments
I'm wondering if it's appropriate to combine New vs. Known analysis with Process Integration Indicators (PII), which goes through a closely related process to generate closely related insights? I know my work on PII was explicitly inspired and built on the New vs. Known process.
Posted 14:44, 6 Jun 2016
I believe some elements of the New vs Known analysis can be automated, assuming you have a good integration between the ticketing system and knowledgebase. I agree that everything that requires human judgement is a candidate for merging into the PII.

The description and example spreadsheet provided for the New vs Known analysis implies that it is a manual task to identify which incidents are new and which are known. I believe the knowledgebase and ticketing system should work together to identify this for us:

- When a new article is created and linked to an incident for the first time, the incident should be tagged as new.
- When the article is linked to another incident, the incident should be tagged as known.

In this manner we can produce a new vs known rate much more often than the suggested once per quarter, and over a much wider range of incidents than we would be able to analyse manually. The PII would then act as a deeper manual check, over a subset of the incidents. Edited 07:30, 5 Aug 2016
Posted 07:26, 5 Aug 2016