Analyzing New vs Known

We have mentioned several times that the benefits of KCS are realized at multiple levels. The first is the efficiency the organization gains through reuse of knowledge. The second is the delivery of knowledge (known issues) to requestors, improving requestor use and success with self-service. The third is identifying opportunities to improve products, policies, and services based on the customer experience.

It is important to understand the nature of the work that is coming into the assisted model (support center). One of the key distinctions we can make is the percentage of incidents being opened that are about known problems or questions versus those that are new.

Ideally, we would like to use our knowledge workers to solve new issues, not known issues. As an organization adopts KCS and integrates use of the knowledge base into the problem solving process, we see the internal reuse of knowledge increase, and we can establish a baseline for the new vs. known ratio. As we start to deliver knowledge to requestors through a self-service model, external reuse increases, and internal reuse should decrease. Requestors begin solving more known issues through the use of self-service, and demand for responder assistance for known issues becomes less frequent. Understanding the ratio of new vs. known incidents being handled becomes an indicator of the health of the knowledge flow and the effectiveness of the self-service model.

Assessing new vs. known requires data from both the incident tracking system and the knowledge base.

Objective

Identify opportunities to reduce knowledge worker time spent on known issues and accelerate the resolution of new issues.

  • Reduce the resources spent on known issues: this is a function of improving customer use and success with the self-service model.
  • Improve the speed and accuracy in solving new issues: this is a function of getting the right resources working on the issue as quickly as possible.

By looking at incidents closed from the perspective of new vs. known and analyzing incidents in each category, we can identify:

  • The percentage of new vs. known issues being worked on by knowledge workers. This creates a baseline against which we can measure the impact of future improvements.
  • The characteristics of known issues and the reasons they were not solved through self-service.
  • The characteristics of new issues and opportunities to improve the speed and accuracy of the problem-solving process.

The Approach

The New vs. Known Analysis should be done periodically over the course of a year, probably not more than once a quarter.

The analysis is a sampling technique that is done on a specific knowledge domain. It is recommended that you do a pilot with two or three domain areas to get a feel for the process. For the pilot, it is ideal to have a small group of SMEs (Subject Matter Experts) together in a conference room for a day. This allows you to discuss and resolve points of confusion quickly. Follow-on analysis can be coordinated via conference calls.

Four Steps

1) Scope definition

  • Identify the knowledge domain

2) Data collection

  1. Collect incidents closed over the last 30-60 days in the knowledge domain being examined.
  2. Build a report that lists all cases/incidents closed. This report should include incidents with and without articles linked. If possible, this report should exclude "no trouble found" or "canceled by customer" types of incidents. Ideally, the report has the following fields (a report-building sketch follows below):
    • Case/incident ID (links to the incident)
    • Incident title or summary
    • Incident close code
    • Article ID of linked article/document if there is one (links to the article)
    • Article title
    • Article resolution summary (if available)
    • Fields to capture the analysis (categorization and notes)
    • Note: "Links to the incident and article" means the team members doing the analysis can click the ID to see the incident or article. If this is not possible, then a cut and paste of incident IDs and article IDs can work.

Resource: The Consortium for Service Innovation provides a spreadsheet template that includes these fields as well as the fields and definitions needed to capture the analysis, and it creates graphs to visualize the data. See the example new vs. known spreadsheet
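The exact mechanics of building this report will depend on your incident tracking and knowledge base systems. As a rough sketch only, assuming both systems can export CSV files and using hypothetical column names (substitute your own export fields), the worksheet could be assembled with a few lines of pandas:

```python
import pandas as pd

# Hypothetical CSV exports; column names are placeholders, not a real schema.
# closed_incidents.csv: incident_id, title, close_code, linked_article_id, opened_date, closed_date
# kb_articles.csv: article_id, article_title, resolution_summary, article_created_date
incidents = pd.read_csv("closed_incidents.csv")
articles = pd.read_csv("kb_articles.csv")

# Limit to incidents closed in the analysis window (e.g., the last 60 days).
incidents["closed_date"] = pd.to_datetime(incidents["closed_date"])
window_start = pd.Timestamp.today() - pd.Timedelta(days=60)
incidents = incidents[incidents["closed_date"] >= window_start]

# If possible, exclude "no trouble found" / "canceled by customer" incidents.
incidents = incidents[~incidents["close_code"].isin(["No Trouble Found", "Canceled by Customer"])]

# Left join so incidents *without* a linked article are still included.
report = incidents.merge(
    articles, how="left", left_on="linked_article_id", right_on="article_id"
)

# Empty fields for the SMEs to capture their analysis during the review.
report["new_or_known"] = ""
report["analysis_notes"] = ""

report.to_csv("new_vs_known_worksheet.csv", index=False)
```

If the two systems cannot be joined programmatically, the same worksheet can be built by pasting the exported incident and article lists into the Consortium's spreadsheet template.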

3) Incident analysis

  1. Identify two or three SMEs for each knowledge domain you are focusing on
  2. Develop alignment and understanding with the SMEs on the purpose and intent of the analysis
  3. SMEs will need access to both the incident management system and the knowledge base to review incidents and articles online.
  4. Work through a few examples together to get a feel for the process and a common understanding of the analysis categories (this is critical and always requires some discussion and examples)
  5. SMEs review incidents and articles in their product area and categorize them using the New vs. Known spreadsheet (4-6 hours)
  6. It is very important to get a random sample of closed incidents (with and without articles linked). To ensure a random sample, the report of cases closed should not be sorted on any particular criterion. Usually, a sample size of 10-20% is sufficient; it is amazing how quickly the trends and patterns emerge. A larger sample size is only interesting if the trends and patterns have not stabilized. (A sampling sketch follows this list.)
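One simple way to avoid sorting bias is to let a script draw the sample. A minimal sketch, assuming the worksheet built in the data collection step above (the 15% fraction is illustrative; anything in the 10-20% range is typical):

```python
import pandas as pd

# Full list of closed incidents, with and without linked articles.
worksheet = pd.read_csv("new_vs_known_worksheet.csv")

# Draw a 15% random sample; a fixed random_state keeps it reproducible
# so SME teams can be assigned non-overlapping slices of the same sample.
sample = worksheet.sample(frac=0.15, random_state=42)

sample.to_csv("new_vs_known_sample.csv", index=False)
print(f"Sampled {len(sample)} of {len(worksheet)} closed incidents")
```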

4) Identify and discuss opportunities

  1. What is the percent of new vs. known issues being handled?
  2. What things can support do to remove known issues from the incoming incident workload?
    1. Analyze and sort the data in the spreadsheet (a sketch for computing these measures follows this list). Following are some common findings:
      1. Knowledge creation: Is the collective knowledge of the organization being captured and reused? Is there an opportunity/need to increase the creation/modification rate?
      2. Link rate: Is the knowledge base being used and are articles being linked to incidents? Do the numbers align with/validate the reuse rate?
      3. Link accuracy: are the articles that are being linked relevant to the incident? (Organizations that put a goal on linking almost always have lower link accuracy than those who don’t)
      4. Publish rate: How many articles are being used internally that are not available to customers? Is there an opportunity to publish more or publish faster?
      5. Findability: Are there issues with findability of articles that are available to the requestor (they used self-service but were unsuccessful)? Test: using the requestor perspective or incident information to search, can you find the article externally? (This is typically a symptom of knowledge workers not capturing the context of the requestor.)
      6. Navigation: If the self-service model involves a web support portal, is the navigation of the site aligned with the requestor intent? Are there choices for requestors on how they access content: index, FAQs, search? Is there an easy way to move from self-service to assisted support: click to open an incident, click to chat?
      7. Diagnostics: how often are diagnostics required to identify the issue as known? Is there an opportunity to improve the information the product provides to help requestors be more successful with problem identification/resolution? Or, to help the support center resolve issues quickly?
  3. What improvements can be made to the problem-solving process used for new issues?
    1. Analyze and sort the data in the spreadsheet to see what it took to resolve the new issues:
      1. Escalation?
      2. Diagnostics?
      3. Reproduction?
  4. What feedback should be provided to development, product management, legal, and/or marketing about improvements that would have a significant impact on the requestor experience, the incident volume, or the problem isolation and solving process?
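Once the SMEs have categorized the sample, the headline measures are simple counts over the spreadsheet. A sketch of the arithmetic, assuming the hypothetical column names used earlier plus new_or_known and link_correct columns filled in by the SMEs during the review:

```python
import pandas as pd

sample = pd.read_csv("new_vs_known_sample.csv")
total = len(sample)

# Percent of incidents handled by knowledge workers that were new vs. known.
known = (sample["new_or_known"] == "known").sum()
new = (sample["new_or_known"] == "new").sum()
print(f"Known: {known / total:.0%}   New: {new / total:.0%}")

# Link rate: how many incidents had any article linked at all.
print(f"Link rate: {sample['linked_article_id'].notna().mean():.0%}")

# Link accuracy: of the linked incidents, how many linked a relevant article.
linked = sample[sample["linked_article_id"].notna()]
if len(linked) > 0:
    print(f"Link accuracy: {(linked['link_correct'] == 'yes').mean():.0%}")

# Known but not linked: a correct article existed but was not attached.
known_not_linked = ((sample["new_or_known"] == "known") & sample["linked_article_id"].isna()).sum()
print(f"Known but not linked: {known_not_linked / total:.0%}")
```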

Typical Agenda for the Pilot Analysis Session

9:00 am Welcome and objectives
9:30 am Work through a few examples as a group
10:00 am Work in teams on assessing/categorizing incidents
Noon Lunch
1:00 pm Check in on where we are, numbers of incidents done
1:30 pm Continue categorization
3:00 pm Review and analyze the trends that have emerged and discuss opportunities
4:30 pm Adjourn

What Constitutes Known?

  • For the purposes of this study, known means:
    • Incidents closed with correct article linked (linked to a pre-existing article)
    • Correct article exists, but is not linked
    • In some environments, it may be interesting to identify “known but not captured.” This would be helpful if there is a lot of “tribal knowledge” (things that are known by all) that is not in the knowledge base. (Note: if this condition exists, it is an indicator that the organization is not really doing KCS. If a question is being asked, it should be in the knowledge base.)
  • Some Consortium members determine what is known by comparing the date stamp of when the incident was opened to the creation date of the article. If the article existed before the incident, the incident is considered "known." This date stamp method relies on good linking and link accuracy practices. It is recommended that you first do the manual new vs. known analysis until you are confident in your organization's linking practices. Many companies struggle with knowledge workers linking reference articles rather than resolution articles. (A sketch of the date stamp comparison follows this list.)
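For illustration, the date stamp approach amounts to checking whether a linked article's creation date precedes the incident's open date. A minimal sketch, again using the hypothetical column names from the data collection example:

```python
import pandas as pd

report = pd.read_csv("new_vs_known_worksheet.csv")
report["opened_date"] = pd.to_datetime(report["opened_date"])
report["article_created_date"] = pd.to_datetime(report["article_created_date"])

# Known = a linked article existed before the incident was opened.
# This is only as trustworthy as the linking and link accuracy practices behind it.
report["known_by_date_stamp"] = (
    report["article_created_date"].notna()
    & (report["article_created_date"] < report["opened_date"])
)

print(report["known_by_date_stamp"].value_counts(normalize=True))
```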