
Technique 6: New vs. Known Analysis

Introduction

The new vs. known analysis is one of the continuous improvement processes in the Evolve Loop. It can help assess the health and effectiveness of a support organization’s KCS practices, and it is an example of the kind of process the Knowledge Domain Expert (KDE) would facilitate.

 

The goal of KCS is to capture and reuse the knowledge gained through customer interactions – solve it once, use it often. 

 

Ideally, we would like to use our support resources to solve new issues, not known issues. As an organization adopts KCS and integrates use of the knowledge base into the problem-solving process, we see the internal reuse of knowledge increase and we can establish a baseline for the new vs. known ratio.  As we start to deliver knowledge to customers through a self-service model, external reuse increases and internal reuse should decrease; we are solving known issues through self-service. Understanding the ratio of new vs. known incidents becomes an indicator of the health of the knowledge flow and the effectiveness of the self-service model.

 

Objective

Identify opportunities to reduce the resources spent on known issues and accelerate the resolution of new issues.

  • Reduce the resources spent on known issues. This is a function of improving customer use and success with the self-service model.
  • Improve the speed and accuracy in solving new issues. This is a function of getting the right resources working on the issue as quickly as possible.

By looking at incidents closed from the perspective of new vs. known and analyzing incidents in each category we can identify:

  • The percentage of new vs. known issues being worked on in the support center. This creates a baseline against which we can measure the impact of future improvements.
  • The characteristics of known issues and assess why they were not solved through self-service.
  • The characteristics of new issues and identify opportunities to improve the speed and accuracy of the problem-solving process.

Scope

The scope of the analysis should include the following:

  • Support centers for internal and/or external customer support
  • First point of contact (level 1), first point of escalation (level 2), second point of escalation (level 3)
  • Hardware, software, networking, services

The Approach

The new vs. known study should be done periodically over the course of a year, probably not more than once a quarter.

 

The study is done by product area or product family; it is a sampling technique. It is recommended that you do a pilot with two or three product areas to get a feel for the process. For the pilot, it is ideal to have the group of SMEs together in a conference room for a day. This allows you to discuss and resolve points of confusion quickly. Follow-on analysis can be coordinated via conference calls.

Four Steps

Step 1: Scope Definition
  • Identify the product areas
Step 2: Data Collection
  • Incidents closed over the last 30-60 days in the product family being examined.
  • Build a report that lists all incidents closed. This report should include incidents with and without articles linked. If possible, this report should exclude “no trouble found” or “cancelled by customer” types of incidents. Ideally the report has the following fields (see the new vs. known spreadsheet on the Consortium web site):
    • Incident ID (links to the incident)
    • Incident title or summary
    • Incident close code
    • Article ID of linked article/document if there is one (links to the article)
    • Article title
    • Article resolution summary (if available)
    • (Links to the incident and article mean the team members doing the analysis can click the ID to see the incident or article. If this is not possible, then a cut and paste of incident IDs and article IDs can work.)
    • Fields to capture analysis
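The Step 2 report is essentially a flat table of closed incidents. As an illustrative sketch (the field names and sample records below are assumptions, not a prescribed schema for any particular incident management system):

```python
import csv

# Hypothetical export from an incident management system; a real system
# would produce these rows via its own reporting tools.
closed_incidents = [
    {"incident_id": "INC-1001", "title": "Login fails after upgrade",
     "close_code": "resolved", "article_id": "KB-204",
     "article_title": "Login failure after upgrade",
     "resolution_summary": "Clear cached credentials"},
    {"incident_id": "INC-1002", "title": "Printer shown as offline",
     "close_code": "resolved", "article_id": "",
     "article_title": "", "resolution_summary": ""},  # no article linked
]

# Field list mirrors the report fields described above; analysis columns
# can be appended to the same file for the SMEs to fill in.
fields = ["incident_id", "title", "close_code",
          "article_id", "article_title", "resolution_summary"]

with open("new_vs_known_report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(closed_incidents)
```

Keeping the export as a plain CSV makes it easy to hand to SMEs as the new vs. known spreadsheet.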
Step 3: Incident Analysis
  • Identify 2-3 Subject Matter Experts (SMEs) for each product family you are focusing on
  • Develop alignment and understanding with the SMEs on the purpose and intent of the analysis
  • SMEs will need access to both the incident management system and the knowledge base to review incidents and articles online.
  • Work through a few examples together to get a feel for the process and a common understanding of the analysis categories (this is critical and always requires some discussion and examples)
  • SMEs review incidents and articles in their product area and categorize them using the new vs. known spreadsheet (4-6 hours)
  • We want a random sampling of closed incidents (with and without articles linked).  Usually a sample size of 10-20% is sufficient.  It is amazing how quickly the trends and patterns emerge. Doing a larger sample size is only interesting if the trends and patterns have not stabilized.
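Drawing the 10-20% random sample can be done with any tool; a minimal sketch in Python (the incident list, sample rate, and helper name are illustrative):

```python
import random

def sample_incidents(incident_ids, rate=0.15, seed=None):
    """Draw a random sample of closed incidents for SME review.

    rate: fraction of incidents to review; 0.10-0.20 is usually sufficient.
    """
    rng = random.Random(seed)
    k = max(1, round(len(incident_ids) * rate))
    return rng.sample(incident_ids, k)

# 200 hypothetical closed incidents in one product family
closed = [f"INC-{n}" for n in range(1000, 1200)]
review_set = sample_incidents(closed, rate=0.15, seed=42)
print(len(review_set))  # 15% of 200 incidents -> 30 to review
```

`random.sample` draws without replacement, so no incident is reviewed twice; fixing the seed makes the sample reproducible if the study is revisited.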
Step 4: Identify and Discuss Opportunities
  • What is the percentage of new vs. known being handled?
  • What things can support do to remove known issues from the incoming incident workload?
  • Analyze and sort the data in the spreadsheet. Following are some common findings:
    • Knowledge capture: Is the collective knowledge of the organization being captured and reused? Is there an opportunity/need to increase the capture rate?
    • Link rate: Is the KB being used and are articles being linked to incidents? Do the numbers align with/validate what is being reported?
    • Publish rate: How many articles are being used internally that are not available to customers? Is there an opportunity to publish more or publish faster?
    • Customer use of Self-Service: how often do customers use self-service before they open an incident? Can we improve the rate at which customers use self-service?
    • Findability: Are there issues with findability of articles that are available to the customer; did they use self-service but were unsuccessful? Test: using the customer perspective or incident information to search, can you find the article externally?
    • Navigation: If the self-service model involves a web support portal, is the navigation of the site aligned with the customer intent? Are there choices for customers on how they access content: index, FAQs, search? Is there an easy way to move from self-service to assisted support: click to open an incident, click to chat?
    • Diagnostics: how often are diagnostics required to identify the issue as known? Is there an opportunity to improve the information the product provides to help customers be more successful with problem identification/resolution? Or, to help the support center resolve issues quickly?
  • Improvements to the problem-solving process used for new issues.  Analyze and sort the data in the spreadsheet to see what it took to fix:
    • Escalation?
    • Diagnostics?
    • Recreation?
    • Feedback to development about product improvements that would have a significant impact on the customer experience, the incident volume or the problem isolation and solving process.
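Once the SMEs have categorized the sample, the Step 4 percentages are simple ratios over the relevant incidents. A hedged sketch (the column names loosely follow the primary fields defined later in this article; the exact spreadsheet layout is an assumption):

```python
# Each dict is one categorized incident from the analysis spreadsheet
# (illustrative data; real rows come from the SMEs' review).
rows = [
    {"relevant": True,  "linked": True,  "pre_existing": True,  "visible": True},
    {"relevant": True,  "linked": True,  "pre_existing": False, "visible": False},
    {"relevant": True,  "linked": False, "pre_existing": False, "visible": False},
    {"relevant": False, "linked": False, "pre_existing": False, "visible": False},
]

def pct(part, whole):
    return round(100.0 * part / whole, 1) if whole else 0.0

relevant = [r for r in rows if r["relevant"]]
known = [r for r in relevant if r["pre_existing"]]  # closed with a pre-existing article

known_rate = pct(len(known), len(relevant))                        # "known" share of workload
link_rate = pct(sum(r["linked"] for r in relevant), len(relevant))
publish_gap = pct(sum(not r["visible"] for r in known), len(known))

print(f"known: {known_rate}%  new: {round(100 - known_rate, 1)}%")
print(f"link rate: {link_rate}%  known but not customer-visible: {publish_gap}%")
```

The known rate is the baseline to track over time; the link rate and publish gap point at the capture and publish-rate opportunities listed above.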

Key concepts and definitions

  • What constitutes "known"?
    • For the purposes of this study known means captured and findable
    • Incident closed with existing content (linked to a pre-existing article)
    • In some environments it may be interesting to identify “known but not captured.” This would be helpful if there is a lot of “tribal knowledge” (things that are known by all) that are not in the knowledge base. (Note: if this condition exists it is an indicator that Support Analysts are not really doing KCS.  If the question is being asked it should be in the KB)
  • What constitutes a legitimate link?
    • In its simplest form, a link is a KCS knowledge base article that resolves the question or problem raised by the customer.
    • As search engines have become more sophisticated and documentation is indexed and linkable at the word or sentence level, some organizations are linking a sentence or paragraph that resolves the issue to the incident as the resolution. 
    • Expanded criteria for a “link”: a resolution that is specific to the issue, findable, linkable, and resides in a maintained repository

Guidelines and definitions for assessing incidents

(Columns in the sample new vs. known spreadsheet, available on the Consortium web site)

Primary fields (relevant to most organizations and important to the analysis):

Relevant incident? - no or blank

  • Is this incident relevant to the new vs. known study?
  • This is a way for people to flag incidents that should not be included in the study data. For example: the incident is written in a foreign language (can’t be read), was closed by the customer without resolution, was a duplicate, or was administrative

Incident has an article linked – yes or no?

  • Yes: an article is linked to the incident (doesn’t matter if it is correct or not)
  • No: nothing is linked to the incident

Pre-existing article or document linked to incident (known) - yes or no?

  • The article linked to the incident existed before the incident open date  (the article was not created as a result of this incident)

Known but not captured (optional) – yes or blank

  • Tribal knowledge (things that are known by all) but not in the knowledge base. Capture the obvious ones; it is hard to know what is known but not captured, so don't spend a lot of time trying to figure this out.

Correct article or document linked to incident – yes or no?

  • Yes: the article is relevant to the incident. Does the resolution in the article solve the issue documented in the incident? Diagnostic articles may be linked but a Y should be entered only if an article is linked that includes the resolution.
  • Linking to a “formal document” (like a diagnostic guide or installation guide) is fine so long as the Support Analyst didn’t add any value to the answer and the link can be done to the specific sentence or paragraph that provides the resolution
  • No: an article is linked but it is not specific or relevant to the incident
  • Blank: no article linked to this incident

No article linked but one existed – yes or blank

  • An article was in the knowledge base when this incident was resolved/closed

Article linked is “internal use only”– yes or blank

  • Yes: the article will never be visible to customers. It is a security risk or technically too complex for customer use; it is visible only to Support Analysts

Correct article was visible to customer – yes, no, or blank

  • Yes: resolution to the issue documented is in an article that is visible to customers
  • No: article exists but was not published to the web. Article is still in draft or approved state and has not made it through the life cycle to be visible to customers yet
  • Blank: no article exists

External article or document – yes or blank

  • Yes: an article for this issue is available and visible to customers (it may or may not be linked to the incident)
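The primary fields above map naturally onto a per-incident record. A sketch of how they might be captured, with a derived “known” flag (field names are paraphrased from the column descriptions; this is not an official schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentAssessment:
    """One row of the new vs. known spreadsheet (primary fields only)."""
    incident_id: str
    relevant: bool = True                       # False: exclude from the study
    article_linked: bool = False                # any article linked, correct or not
    pre_existing: bool = False                  # article existed before the incident opened
    correct_article: Optional[bool] = None      # None when nothing is linked
    unlinked_article_existed: bool = False      # an article existed but was not linked
    internal_only: bool = False                 # article will never be customer-visible
    visible_to_customer: Optional[bool] = None  # None when no article exists

    @property
    def known(self) -> bool:
        # "Known" for this study means captured and findable: the incident
        # was closed against a correct, pre-existing article.
        return self.relevant and self.pre_existing and bool(self.correct_article)

a = IncidentAssessment("INC-1001", article_linked=True,
                       pre_existing=True, correct_article=True,
                       visible_to_customer=True)
print(a.known)  # True: closed with a correct pre-existing article
```

A yes/no/blank spreadsheet column maps to `Optional[bool]`, with `None` standing in for blank.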

 

Secondary fields (may not be relevant to all organizations and not critical to the objectives of the analysis):

Diagnostics run

  • Diagnostics include any diagnostics: general systems diagnostic tools or product specific diagnostics that had to be run to collect additional information.  Do not include the use of system logs or data the system normally captures

Required problem recreation

  • Support recreated the problem in a lab

Required problem recreation by the customer

Required collaboration with others

Escalation required

Multi-vendor (MV) information/documentation required

Multi-vendor (MV) contact required

Hardware, field dispatch required

Hardware, parts ordered

Issue type:

  • How to or usability questions
  • Installation
  • Configuration
  • Defect

What it took to fix

  • Time to resolve (work minutes, if available)
  • An escalation (L1 to L2, L2 to L3)
  • Collaboration (conversation, IM, email, other)
  • Research
  • Recreate the issue
  • Ran diagnostics
Last modified
05:54, 21 Sep 2015
