Technique 5.10: Content Health Indicators

As an organization gets started with its KCS adoption, the KCS Coach plays a major role in the quality of the knowledge base content by reviewing the articles created by KCS Candidates, who do not yet have the competencies to put articles in a Validated state. The Coach's goal is to support the KCS Candidates in learning to do the Solve Loop, adhere to the content standard, and use the most effective problem-solving process. The Coach has succeeded when the KCS Candidates are consistently and efficiently creating articles that adhere to the content standard. KCS proposes a competency or licensing program that uses the Article Quality Index and the Process Integration Indicators (PII) to assess a knowledge worker's ability to create quality articles and follow the KCS workflow. By earning a license, knowledge workers are recognized for their KCS understanding and capability, thereby earning rights and privileges in the system.

 

The licensing program ensures that people understand the KCS workflow and the content standard, and it contributes to the quality and consistency of the articles in the knowledge base.

 

While there are many checks and balances in the KCS methodology to ensure quality articles, there are five key elements that contribute to article quality: 

  1. A content standard that defines the organization's requirements for good articles
  2. The Article Quality Index (AQI, below) for measuring the quality of articles
  3. Process Integration Indicators (PII) for assessing whether we are following the KCS workflow
  4. The licensing and coaching model
  5. A broad and balanced performance assessment model

The Article Quality Index

Especially for large and distributed teams, organizations must have consistent quality metrics for rating article quality and the performance of contributors. These metrics can be customized and evolve over time, but they should be consistent with the content standard for a "good article," quantifiable to facilitate reporting, and shared with both the individual and management. To begin, we suggest these basic checks:

  • Unique - not a duplicate; no other article with the same content has a create date earlier than this article's (this is a critical part of the AQI)

  • Complete - complete problem/environment/cause/resolution descriptions and types

  • Content Clear - statements are complete thoughts, not necessarily complete sentences

  • Title Reflects Article - the title describes the main environment and the main issue (and the cause, if available)

  • Links Valid - hyperlinks are persistently available to the intended future audience

  • Metadata Correct - metadata is set appropriately: article state, visibility, type, or other key metadata defined in the content standard

     

As the network of knowledge contributors grows more comfortable with evaluating articles, these checks are captured and scored in an Article Quality Index (AQI) spreadsheet. The AQI is a score for people or groups based on reviews of the knowledge they have contributed for a specified period. It can be used as an indicator of the quality of the knowledge. Tangible, quantified information like this improves the quality of feedback to enhance skills development and drive article quality.
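
To make the scoring mechanics concrete, here is a minimal sketch of an unweighted AQI calculation in Python. The check names and the simple pass/fail average are illustrative assumptions, not part of the KCS specification; a real implementation would mirror the organization's own content standard.

```python
# Minimal AQI sketch: score each sampled article against the six
# basic checks above, then average across the sample.
# Check names and the formula are illustrative assumptions.

BASIC_CHECKS = [
    "unique", "complete", "content_clear",
    "title_reflects_article", "links_valid", "metadata_correct",
]

def article_score(review: dict) -> float:
    """Fraction of checks this article passed (0.0 to 1.0)."""
    return sum(1 for c in BASIC_CHECKS if review.get(c, False)) / len(BASIC_CHECKS)

def aqi(reviews: list[dict]) -> float:
    """Average article score across a sample, as a percentage."""
    if not reviews:
        raise ValueError("no articles sampled")
    return 100 * sum(article_score(r) for r in reviews) / len(reviews)

sample = [
    {c: True for c in BASIC_CHECKS},                       # a clean article
    {**{c: True for c in BASIC_CHECKS}, "unique": False},  # a duplicate
]
print(f"AQI: {aqi(sample):.1f}")  # AQI: 91.7
```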

 

This matrix can be customized to suit an organization's requirements. A knowledge base for a consumer product, for example, may need more emphasis on usability and formatting than one written for a highly technical audience.

 

Start simple. Here is an example of an AQI focused on the big six items:


Simple AQI Sample

 

Over time, as the KCS adoption matures and the organization gets good at the basics, we might add additional or more granular metrics. We find that the content standard is 70-80% common across organizations and 20-30% tailored to a specific organization. Some of the criteria will be influenced by the knowledge management technology being used: the content standard sets the criteria for article quality and must be tailored to the environment and the tools in use. Following is a sample list of criteria used for article assessment. While it is based on a mature KCS environment, it should be considered only as an example.

  • Environment content adheres to standard - product names and version levels adhere to the content standard and are not mixed into the problem description

  • Not customer specific - articles should not contain customer- or installation-specific details, such as a node name or an internal system identifier (unless the knowledge base is being used for a specific installation)

 

Many mature organizations develop a weighting system for these more complex criteria, as the violations do not all have the same impact. For example, a duplicate article is a more serious error than an article that is too wordy. Again, the criteria and weighting should be based on the needs of the organization and should be considered only after the organization has had some experience with the AQI sampling and scoring process. Don't over-engineer the AQI process. Start simple and evolve it based on experience!

 

In the example tables, errors carry a weight of 2. This enables the scores to better differentiate between those doing well and those who need some help. In this example, anyone with an AQI score below 90 should get some attention from a Coach, and anyone consistently below 80 is at risk of losing their KCS license. It is important to monitor trends over time in the AQI scores for both teams and individuals.
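
Here is a sketch of how that weighting and those thresholds might be applied, assuming each review covers the six basic checks and every issue found deducts two points. The formula is one plausible reading of the sample spreadsheets, not a normative KCS definition; in practice the trend over time matters more than any single score.

```python
ERROR_WEIGHT = 2        # each issue found costs two points (per the text above)
CHECKS_PER_ARTICLE = 6  # assumed size of the review checklist

def weighted_aqi(articles_reviewed: int, issues_found: int) -> float:
    """Score out of 100 for one person's (or team's) reviewed sample."""
    max_points = articles_reviewed * CHECKS_PER_ARTICLE
    return 100 * (max_points - ERROR_WEIGHT * issues_found) / max_points

def coaching_status(score: float) -> str:
    # Thresholds from the text; a real program looks at trends, not one score.
    if score < 80:
        return "at risk of losing the KCS license"
    if score < 90:
        return "needs attention from a Coach"
    return "on track"

score = weighted_aqi(articles_reviewed=20, issues_found=9)
print(f"{score:.1f} -> {coaching_status(score)}")  # 85.0 -> needs attention from a Coach
```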


Mature AQI Sample

 

Some key ideas to note in this example (for more details, see Practice 7: Performance Assessment):

  • Compare the number of articles reviewed for each creator. A legitimate sample size is important; creators Chuck and Ed may have too few articles to be assessed fairly.

  • The first row summarizes organizational performance. Apparently, many articles are duplicates, incomplete, or unusable. This result could mean more group training on searching and documenting content is required.

  • Kim is a prolific contributor but also leads in the top three categories of problems. Attention from a KCS Coach is merited.

     

Evolve the quality index items being scored based on experience. As an organization matures in its use of and confidence in KCS, it becomes easier to pay attention to more granular or refined content considerations such as versioning, global distribution, use of multimedia, and measuring team-based contribution in addition to individual contribution.

 

Some organizations add weights to the different review elements to reflect the relative importance of each item. See the Progress Software case study on the KCS Academy Resources page for an example.

Knowledge Sampling to Create AQI Scores

To create the AQI scores, a group of qualified reviewers (usually the KCS Coaches) participates in regular knowledge sampling of articles from the knowledge base. While the articles are selected randomly, it is important to sample articles from each individual.

Here is a typical process:

  • Develop a checklist and scoring system - AQI

  • Evaluate a sample of articles

  • Calculate an Article Quality Index and develop summary reports

  • Provide regular feedback to the knowledge workers on their AQI scores with comments from the Coach who did the scoring

  • Provide periodic feedback to leadership
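
To make the random-but-representative sampling step concrete, here is a minimal sketch that, assuming each article record carries its creator, draws a fixed number of random articles per individual so that everyone is represented. The per-person quota is an illustrative parameter, not a KCS requirement.

```python
import random
from collections import defaultdict

def sample_for_review(articles, per_person=5, seed=None):
    """articles: iterable of (creator, article_id) pairs.
    Returns {creator: [article_ids]} with up to per_person random picks each."""
    rng = random.Random(seed)
    by_creator = defaultdict(list)
    for creator, article_id in articles:
        by_creator[creator].append(article_id)
    return {creator: rng.sample(ids, min(per_person, len(ids)))
            for creator, ids in by_creator.items()}
```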

 

During rollout and training, the frequency of this monitoring should be weekly, and it will take more time due to the high number of KCS Candidates (people learning KCS). During the learning phase of adoption, the AQI should be used for the learning and growth of the knowledge workers. Once the organization has matured, the frequency is typically monthly and should not consume more than a few hours per month per reviewer. Note that what the organization focuses on for article quality will change over time: the elements assessed at the beginning of a KCS adoption will be more basic than those the organization will focus on two years into the knowledge journey.

 

There are a number of considerations for monitoring quality in organizations. Article quality monitoring, discussed above, is defined as assessing an article's compliance with the content standard. Other areas to consider include the Process Integration Indicators (PII), case documentation and handling, customer interaction, and technical accuracy.  Organizations have various ways to monitor the quality of these important elements of the process. 

 

As organizations reflect on their processes in the Evolve Loop, they are identifying key monitoring elements and ways to integrate monitoring across the processes.  One element that is emerging as critical to monitor on a regular basis is link accuracy. This is part of the PII, which is also done using a sampling technique and can be integrated into the AQI process.  Assessing link accuracy also shows up in the New vs. Known Analysis.  

Link rates (percentage of cases closed with an article linked) and link accuracy (the article resolves the issue raised in the case) are the key enabling elements for identifying the top issues that are driving support cost (case volume) and user disruption. In order to provide credible and actionable input to product management and development about the highest impact issues, we need to have link rates that exceed 60% and link accuracy that exceeds 90%.  Link accuracy is more important than link rate.       
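
As a sketch of these two measures, the following assumes each closed case in a sample records whether an article was linked and, from the PII review, whether that article actually resolved the issue. The field names are illustrative assumptions.

```python
def link_metrics(cases: list[dict]) -> tuple[float, float]:
    """Return (link rate, link accuracy) for a sample of closed cases."""
    linked = [c for c in cases if c.get("linked_article")]
    accurate = [c for c in linked if c.get("article_resolved_issue")]
    link_rate = len(linked) / len(cases)
    link_accuracy = len(accurate) / len(linked) if linked else 0.0
    return link_rate, link_accuracy

cases = [  # a hypothetical sample of closed cases
    {"linked_article": "KB-101", "article_resolved_issue": True},
    {"linked_article": "KB-202", "article_resolved_issue": False},
    {"linked_article": None},
]
rate, accuracy = link_metrics(cases)
print(f"link rate {rate:.0%}, link accuracy {accuracy:.0%}")
if rate < 0.60 or accuracy < 0.90:
    print("link data not yet credible input for product management")
```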

Reviewing Articles Through Use

The KCS Principle of demand driven and the Core Concept of collective ownership combine to create efficiency. The idea that people feel a sense of responsibility for the quality of the articles they interact with is critical. The alternative, in which someone else owns article quality and someone else is responsible for reviewing it, introduces prohibitive cost and delay. This sense of collective responsibility is reinforced through coaching, the competency program, communications from the leaders, the performance assessment program, and the recognition programs. The new hero in the organization is the person who creates value through their contribution to the knowledge base, not the person who knows the most and has the longest line outside their cube.

Feedback to the Knowledge Worker

Knowledge workers must have visibility into their AQI results so they understand where to self-correct. AQI results are also a key tool for the Coaches, helping them identify opportunities for learning and growth.

Assessing the Value of Articles

As we move through the KCS adoption phases, the knowledge base will grow, and we will want a way to assess the value of the articles in it. There are three perspectives to keep in mind when assessing the value of articles: frequency of reuse, frequency of reference, and the value of the collection of articles. The reuse frequency is a strong indicator of the value of an individual article and is fairly easy to assess. The frequency of reference is equally important but much harder to assess. The value of the collection of articles has to be looked at from a systemic point of view.

Article Value Based on Reuse 

The value of any particular article can be measured by the number of times it is used to resolve an issue. If we are linking articles to incidents, we can easily calculate the reuse count. As we move to Phase 4 (Leverage) of the KCS adoption, measuring the reuse of articles becomes much more difficult, because customers using an article through self-service do not link articles to incidents, nor do they show much interest in answering the oft-asked question, "Was this article helpful?" To assess the value of individual articles in a self-service model, we have to infer value based on a number of factors.

 

A few Consortium members have developed article value calculators that take into account the following factors (a sketch combining them follows the list):

  • Page views

  • Internal links

  • Customer feedback (member experience indicates that customers provide feedback on a tiny percent of articles viewed: 1-2%)
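
One illustrative way to combine these factors is sketched below. The weights and the log scaling are assumptions for the example, not any member's actual formula; organizations tune such a calculator to their own traffic patterns.

```python
import math

def article_value(page_views: int, internal_links: int, feedback_events: int,
                  w_views: float = 1.0, w_links: float = 3.0,
                  w_feedback: float = 10.0) -> float:
    """Inferred value of a self-service article (higher is better)."""
    # Log-scale page views so a few heavily viewed articles do not dominate;
    # weight feedback heavily because only 1-2% of viewers ever leave it.
    return (w_views * math.log1p(page_views)
            + w_links * internal_links
            + w_feedback * feedback_events)

print(round(article_value(page_views=500, internal_links=4, feedback_events=2), 1))
```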

Article Value Based on Reference 

The second perspective is the value of an article as a reference. Even though a specific article may not be the resolution to the issue, an article about a similar issue may provide some insight or remind us of an approach or diagnostic technique that we know but had not thought about. This frequency of reference is extremely valuable and hard to measure.

The Value of the Collection of Articles 

The indicators for the value of the collection of content can be calculated based on the rate of customers' use and success with self-service.  More specifically, support organizations often look at the subset of the self-service success rate that represents issues for which the customer would have opened an incident had they not found an answer through self-service. This is often referred to with the unfortunate vocabulary of  "call avoidance" or "case deflection."   This avoidance or deflection view represents a vendor-centric view of support, not a customer-centric view.  A customer-centric view does not avoid or deflect customers; it promotes customer success through the path of least resistance and greatest success - for the customer!

How Good is Good Enough?

One of the things we learned from W. Edwards Deming, the father of the quality revolution, is that quality is assessed against a standard or criteria. Quality is not a standalone, universal thing; it is specific to a purpose. In order to manage the quality of our output, we have to know the criteria for what is acceptable and what is not. In the KCS methodology, the quality criteria for knowledge articles are defined in the content standard. However, not all knowledge articles are equal in their importance or purpose. Most organizations deal with different types of knowledge, and not all types of articles have the same criteria for quality. For example, some knowledge articles capture the experience of people getting their work done (where we have a high tolerance for variability and interpretation), while other types of articles describe company policies or regulatory requirements imposed by law, for which we have no tolerance for variability or interpretation.

 

How good is good enough? Well, it depends on the type of information we are dealing with. By identifying, in very broad categories, the different types of information and their related compliance requirements, we can define both the criteria for a quality article and the governance we need for each type of article. A word of caution here: we do not want to over-engineer the number of article types. We want to start with the minimum, which is often just two: experience-based and compliance-based content. Then we adjust the article types and criteria based on our experience. Each organization that adopts KCS must define what is good enough for its various audiences and the types of knowledge it deals with. Not all knowledge articles will have the same quality requirements.
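
A sketch of how those two starting types and their differing criteria might be captured as configuration follows. The governance fields and their values are illustrative assumptions; each organization defines its own.

```python
# Illustrative mapping of article types to quality and governance criteria.
ARTICLE_TYPES = {
    "experience-based": {
        "tolerance_for_variability": "high",
        "review_model": "every use is a review (Solve Loop)",
        "publication": "just in time, visible in draft or validated states",
    },
    "compliance-based": {
        "tolerance_for_variability": "none",
        "review_model": "formal review and approval before publication",
        "publication": "only after sign-off by the accountable owner",
    },
}
```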

 

To understand article quality issues better, the Consortium conducted a survey of its members' customers. The survey participants were approximately 67% large enterprises (highly complex business production environments of over 300 users) and 27% small to medium businesses (business production environments of fewer than 300 users) from the Americas, Europe, the Middle East, and Africa. The remaining 6% were consumers.

 

This survey assessed customer needs and quality criteria with respect to web-delivered KCS articles providing technical knowledge. This KCS article content could be in the form of known problems, technical updates, or other knowledge base articles.  Almost all of the respondents were already comfortable using web self-help, so they may be considered advanced users. Based on experience, however, we believe the results can be extrapolated to reflect knowledge base content as a whole.

 

Customer response to the survey indicates articles need to be good enough to be findable and usable, or what we call "sufficient to solve."

Getting the Basics Right

To begin with, we examined the basic content requirements: the material that must be included in the KCS article. Respondents chose the following, mostly in the category of "accuracy," as "very important." Responses are listed in priority order:

  • Technically accurate and relevant

  • Problem and solution description

  • Cause of problem

  • Complete information

  • Quickly found

  • Clarity of content

  • Valid hyperlinks

  • Configuration information

  • Vendor's sense of confidence in the answer

     

Considered "somewhat important," mostly in the category of "editing and format," were:

  • Compound vs. single thoughts

  • Complete sentences vs. short statements

  • Date created

  • Correct spelling

  • Grammar

  • Last modified

  • No duplication of information

  • Frequency of usage

  • Punctuation

     

Of "least importance," perhaps not surprising in a technical audience, were the attributes:

  • Legal disclosures

  • Correct trademarks

  • Date last used

Impact on Company Image

Most respondents considered editorial format somewhat important. Since the process involved in achieving editorial perfection can be time-consuming and delay access to information, we decided to assess the impact on corporate image of publishing KCS article information at various levels of editorial quality. The results were revealing. The majority of respondents:

  • Disagreed with or were neutral to the statement: "I have a lesser image of a company that withholds support information access in order to technically validate it."  (In other words, the majority of respondents did not fault a company for withholding information that was not technically validated.)

  • Agreed with the statement: " I have a lesser image of a company that withholds support information access in order to achieve editorial perfection."

  • Agreed with the statement: "To gain knowledge faster, I would like an option to select to see support information that has not been fully validated."

  • Agreed with the statement: "To gain knowledge faster, I would be willing to take responsibility for using any of the incomplete information should there be mistakes." Note: To mitigate risk from sharing this knowledge, many support organizations require customers to accept a disclosure agreement before seeing the KCS article.

  • Would have a higher or at least the same opinion when asked: "If the support information were marked as being in draft format, what opinion would you have of a company that shared everything they know, even if it had editorial mistakes?"

Time/Value Tradeoff: KCS Recommendations

From this survey feedback coupled with other experience implementing KCS, the Consortium feels confident recommending that organizations invest in content speed and accuracy over presentation and format. We should strive for timely and accurate knowledge, ensure we are investing appropriately in training, have a good balance of competencies, develop a licensing model (see the roles section in Practice 7, Performance Assessment), and follow the recommendations for maintaining just-in-time KCS article quality through a sampling process and the creation of the KCS Article Quality Index.

 

When it comes to information completeness and degree of validation, organizations must individually assess the risk-benefit tradeoff of sharing information early. The Consortium's findings should not be used as a substitute for asking customers about their needs in this area. In our experience, the just-in-time information model has become increasingly accepted as the business community has embraced open source, monthly and quarterly software releases, and extended and open beta-testing programs. Appropriate disclaimers, click-to-acknowledge interfaces, and a clear indication of KCS article status (confidence) are all ways to make the KCS article visible earlier and let the customer determine their own risk profile for the situation.

Comments
Re: "there are five key elements that contribute to article quality," a sixth one that is of paramount importance is the Improve practice of the Solve Loop: Just-In-Time Solution Quality, and Every Use is a Review.
Posted 15:38, 10 Jun 2016
A couple of points about the AQI sample spreadsheets:

1. The Simple Sample as shown does not multiply issues by 2 in calculating the Group Quality Index. If it did, the GQI would be 79.65%. The Mature Sample spreadsheet does multiply issues by 2 for the GQI.

2. The Mature Sample spreadsheet has an invalid entry, "X4", in Fran's last score. I assume this should be 13, i.e. no issues, the equivalent of her score in the v5 version of this spreadsheet.
Posted 15:45, 29 Mar 2017