
Technique 4: Managing KCS Article Quality

As the organization gets started with KCS adoption, the KCS Coach plays a major role in the quality of the knowledge base content by reviewing the Support Analysts' workflow and articles. The Coach's goal is to support the Analysts in learning to use KCS processes, adhering to the content standard, and using the most effective problem solving process. The Coach has succeeded when the Support Analysts are efficiently and independently creating high quality articles. A KCS competency program, measured through the Article Quality Index, is an effective way to track Analysts' skills and abilities to create quality articles.


The competency program ensures that people understand the KCS processes and the content standard. This program contributes to the level of quality and consistency in the knowledge base. As a higher percentage of the team demonstrates that they can consistently create findable and usable articles in the workflow, the need for a Coach diminishes.


There are five elements that contribute to quality KCS articles. The first is the Article Quality Index (AQI), the key metric for measuring the quality of articles, and the focus of this technique. The second element is the licensing/coaching model: all users are licensed and coached to create the best possible articles. The third element is a good content standard that spells out the organization's requirements for good articles; the standard may spell out formatting standards, for example. The fourth element contributing to quality articles is a broad and balanced performance assessment model. Finally, the fifth element is "flag it or fix it": when a user finds an article that needs attention, they should either fix it at the time it is noticed or flag it to be fixed later.

Scoring and the Article Quality Index

Especially for large and distributed support teams, organizations must have consistent quality metrics for rating article quality and staff performance. These metrics can be customized and evolve over time, but should be consistent with the content standard for a "good article," quantifiable to facilitate reporting, and shared with both the individual and management. To begin, we suggest these basic checks:

  • Duplicate Article - an article existed in the knowledge base before this one was created (this is a critical part of the AQI)

  • Complete problem/environment/cause/resolution description and types

  • Content clarity - statements are complete thoughts, not necessarily complete sentences

  • Title reflects article content for easy recognition

  • Correct hyperlinks - Hyperlinks are persistently available to the intended audience

  • Metadata set appropriately - article state, type or other key metadata defined in the content standard
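The basic checks above can be expressed as a per-article review. This is a minimal sketch, assuming hypothetical article field names (the actual fields depend on the knowledge management tool and content standard in use):

```python
# Illustrative sketch of the six basic quality checks; the article field
# names (problem, environment, state, etc.) are assumptions, not a real schema.
BASIC_CHECKS = {
    "duplicate": lambda a: not a.get("is_duplicate", False),
    "complete_description": lambda a: all(
        a.get(f) for f in ("problem", "environment", "cause", "resolution")),
    "clear_statements": lambda a: not a.get("has_compound_statements", False),
    "title_reflects_content": lambda a: a.get("title_matches_content", True),
    "valid_links": lambda a: all(a.get("links_valid", [True])),
    "metadata_set": lambda a: bool(a.get("state")) and bool(a.get("type")),
}

def review_article(article):
    """Return the list of basic checks this article fails."""
    return [name for name, check in BASIC_CHECKS.items() if not check(article)]

article = {"problem": "App crashes on save", "environment": "v2.1 on Windows",
           "cause": "", "resolution": "Apply patch 2.1.1",
           "title_matches_content": True, "state": "Draft",
           "type": "Known Problem"}
print(review_article(article))  # → ['complete_description']  (cause is empty)
```

Each failed check counts as one issue against the creator's score in the AQI spreadsheet described next.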


As an organization grows more comfortable with evaluating articles, it captures and scores metrics in an Article Quality Index (AQI) spreadsheet. The AQI is a score for people or groups based on reviews of the knowledge they have contributed for a specified period. It can be used as an indicator of the quality of the knowledge. Tangible, quantified information like this improves the quality of feedback to enhance skills development and drive article quality.


This matrix can be customized to suit an organization's requirements. A knowledge base for a consumer product may need more emphasis on usability and formatting than one for a highly technical audience.


Start simple. Here is an example of an AQI focused on the big six items:


Sample: Simple Article Quality Check List

For each article creator, the checklist records the articles reviewed, the number of articles having each type of problem, and the resulting score:

  • Article creator

  • Articles Reviewed

  • Duplicate Article

  • Incomplete Description or Type

  • Poor Clarity, Compound Thoughts

  • Title does not Reflect Article Content

  • Invalid Links

  • Incorrect Properties

  • Total Issues found

  • Total possible points

  • Article Quality Index (AQI)

  • Group Quality Index (90% is the goal)


Note: AQI is calculated using issues found times 2 to create more differentiation

between those doing well and those who aren't



Over time, as our KCS adoption matures and the organization gets good at the basics, we might add some additional or more granular metrics. We find that the content standard is 70-80% common across organizations and 20-30% tailored to a specific organization. Some of the factors in the criteria will be influenced by the knowledge management technology being used in the environment. The content standard sets the criteria for article quality, and must be tailored to the environment and the tools being used. Following is a sample list of criteria used for article assessment. While it is based on a mature KCS environment, it should be considered only as an example.

  • Too thin - not enough content has been captured to make the problem distinct

  • Duplicate - reviewer found a duplicate article whose create date preceded this article's create date

  • Incomplete - critical information is missing

  • Compound - a single statement combines multiple thoughts

  • Environment content mixed with the problem description

  • Environment content not to standard - product names or version levels do not adhere to the content standard

  • Fix not complete or usable

  • Wordy - articles are intended to be very crisp: complete thoughts, not complete sentences, achieving a balance of concise but not ambiguous (this supports the usability goal)

  • Too specific - should not be customer/installation specific such as a node name or internal system identifier (unless the KB is being used for a specific installation)

  • Customer cannot see reference - information or material referenced in the article is not accessible by the user

  • Hyperlink not correct or not visible to the intended audience

  • Article not appropriate for the audience - the visibility properties are not set correctly (how this is done is tool specific)

  • Article attributes or metadata not set correctly (this will be tool specific)


Many mature organizations develop a weighting system for these more complex criteria, as the violations do not all have the same impact. For example, a duplicate article is a more serious error than an article that is too wordy. Again, the criteria and weighting should be based on the needs of the organization and should be considered only after the organization has had some experience with the AQI sampling and scoring process. Don't over-engineer the AQI process. Start simple and evolve it based on experience!


In the table below the errors carry a weight of 2; this enables the scores to better reflect the differentiation between those doing well and those who need some help. In this example, anyone with an AQI score below 90 should get some attention from a Coach. If they are consistently below 80, they are at risk of losing their KCS license. It is important to monitor trends over time on the AQI scores for both teams and individuals.
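The scoring just described can be sketched as follows. This is a minimal illustration assuming total possible points equals articles reviewed times the number of checks; the x2 issue weighting and the 90/80 thresholds come from the text above:

```python
# Illustrative AQI scoring sketch; "possible points = reviewed x checks"
# is an assumption about how the spreadsheet totals are derived.
CHECKS_PER_ARTICLE = 6  # the "big six" items

def aqi_score(articles_reviewed, issues_found, weight=2):
    """AQI as a percentage; each issue found costs `weight` points."""
    possible = articles_reviewed * CHECKS_PER_ARTICLE
    return 100.0 * (possible - weight * issues_found) / possible

def coaching_status(score):
    if score < 80:
        return "at risk of losing KCS license"
    if score < 90:
        return "needs attention from a Coach"
    return "meets the 90% goal"

score = aqi_score(articles_reviewed=25, issues_found=10)
print(round(score, 1), coaching_status(score))
# → 86.7 needs attention from a Coach
```

A per-criterion weighting scheme, as mature organizations use, would simply replace the single `weight` with a weight per issue type.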


Sample: Mature Article Quality Check List


Some key ideas to note in this example (for more details, see Practice 7: Performance Assessment):

  • Compare the number of articles reviewed for each creator. A legitimate sample size is important. Creators Chuck and Ed may have too few articles to be fairly weighed.

  • The first row summarizes organizational performance. Many articles are duplicates, incomplete, or unusable; this could mean the group needs more training on searching and documenting content.

  • Kim is a prolific contributor, but also leads in the top three categories of problems. Attention from a KCS Coach is merited.


Evolve the quality index items being scored based on experience. As an organization matures in its use of and confidence in KCS, it becomes easier to pay attention to more granular or refined content considerations like versioning, global distribution, use of multimedia, measuring team-based rather than individual contributions, and rapid processing of feedback.


Some organizations add weights to the different review elements to reflect the relative importance of each item. See the Progress Software case study on the Consortium website for an example.

Knowledge Sampling

To maintain KCS article quality, a group of qualified reviewers, usually a mix of Coaches and Knowledge Domain Experts, participate in a regular knowledge sampling of KCS articles from the knowledge base. While the KCS articles are selected randomly, we recommend having a sample from each individual.

Here is a typical process:

  • Develop a checklist and scoring system

  • Evaluate a sampling of KCS articles

  • Calculate a KCS Article Quality Index and develop summary reports

  • Provide feedback from Coaches to Support Analysts and management


During rollout and training, the frequency of this monitoring should be weekly. It will take more time due to the high number of KCS Candidates. Once the organization has matured, the frequency is typically monthly and should not consume more than a few hours of time per month per reviewer. Note that what the organization focuses on for KCS article quality will change over time. The elements for assessment at the beginning of a KCS adoption will be more basic than those things the organization will focus on two years into the knowledge journey.
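The sampling step above (random overall, but with a sample from each individual) can be sketched as follows. The article records and field names are illustrative assumptions:

```python
import random

# Illustrative sampling sketch: random selection overall, but guaranteeing
# at least one article from each creator, as recommended above.
def sample_articles(articles, sample_size, seed=None):
    rng = random.Random(seed)
    by_creator = {}
    for art in articles:
        by_creator.setdefault(art["creator"], []).append(art)
    # One randomly chosen article per creator first...
    sample = [rng.choice(group) for group in by_creator.values()]
    # ...then fill the remainder randomly from the rest of the knowledge base.
    remaining = [a for a in articles if a not in sample]
    extra = max(0, sample_size - len(sample))
    sample += rng.sample(remaining, min(extra, len(remaining)))
    return sample

articles = [{"id": i, "creator": c} for i, c in
            enumerate(["Ann", "Ann", "Ann", "Bob", "Bob", "Cho"])]
picked = sample_articles(articles, sample_size=4, seed=1)
assert {a["creator"] for a in picked} == {"Ann", "Bob", "Cho"}
```

Reviewers then score each sampled article against the checklist and roll the results into the AQI reports.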


There are a number of considerations for monitoring quality in support organizations. We have just discussed article quality monitoring and we have defined that as assessing an article's compliance with the content standard. Other areas to consider include case documentation and handling, customer interaction, the problem solving process, and technical accuracy.  Organizations have various ways to monitor the quality of these important elements of the support process. 


As organizations reflect on their processes in the Evolve Loop, they are identifying critical monitoring elements and ways to integrate quality monitoring across the support processes. One element that is emerging as critical to monitor on a regular basis is linking accuracy. Assessing linking accuracy has been part of the Evolve Loop process in the New vs. Known analysis, which is typically done on a quarterly or semi-annual basis. However, because linking accuracy is so critical to the other Evolve Loop analysis activities, organizations are assessing it on a more regular basis. By including the assessment of "does the article linked to the case resolve the issue documented in the case?" as part of the ongoing service monitoring process, they are able to influence linking accuracy in a timely manner.


Linking rates (% of cases closed with an article linked) and linking accuracy (the article resolves the issue raised in the case) are the key enabling elements for identifying the top issues that are driving support cost (case volume) and customer disruption. In order to provide credible and actionable input to product management and development about the highest impact issues, we need to have high linking rates and high linking accuracy.       
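The two linking metrics defined above can be computed directly from case records. This is a minimal sketch; the case fields are illustrative assumptions, and in practice "article resolves the issue" comes from reviewer judgment, not a stored flag:

```python
# Illustrative computation of linking rate and linking accuracy as
# defined in the text; case record fields are assumed, not a real schema.
def linking_metrics(cases):
    closed = [c for c in cases if c["closed"]]
    linked = [c for c in closed if c.get("linked_article")]
    # Linking rate: % of closed cases with an article linked.
    rate = 100.0 * len(linked) / len(closed) if closed else 0.0
    # Linking accuracy: % of linked cases where the article resolves the issue.
    accurate = [c for c in linked if c.get("article_resolves_issue")]
    accuracy = 100.0 * len(accurate) / len(linked) if linked else 0.0
    return rate, accuracy

cases = [
    {"closed": True, "linked_article": "KB-101", "article_resolves_issue": True},
    {"closed": True, "linked_article": "KB-102", "article_resolves_issue": False},
    {"closed": True, "linked_article": None},
    {"closed": True, "linked_article": "KB-103", "article_resolves_issue": True},
]
rate, accuracy = linking_metrics(cases)
print(f"linking rate {rate:.0f}%, linking accuracy {accuracy:.0f}%")
# → linking rate 75%, linking accuracy 67%
```

Tracked together, the two numbers indicate how much confidence to place in the case-driven view of top issues.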

Reviewing KCS Articles through Use

The concept of demand-driven review is fundamental to the KCS processes. The idea that people feel a sense of responsibility for what they find in the knowledge base is critical because the cost of the alternative (that someone else owns it) is prohibitive. This sense of collective responsibility is reinforced through coaching, the competency program, communications from the leaders, the performance assessment program, and the reward and recognition program. The new hero in the organization is the one who creates value through contribution to the knowledge base, not the one who knows the most and has the longest line outside their cube.

Internal Feedback

Judicious sharing of the AQI results can help contributors understand where to self-correct and help Coaches know what to look for and concentrate on. Monthly assessments and their evolution can help managers gauge progress overall. See Practice 7, Performance Assessment, for more information.

Integrating Customer Feedback

The most powerful and valuable feedback about KCS articles comes from the customers or end-users. Every time they acknowledge getting value from a KCS article, that feedback should be visible to all the Support Analysts who contributed to the KCS article: the creator, as well as people who reused and modified the article. If a customer or end-user flags a KCS article as incomplete or confusing, that KCS article must be queued for rework.


In order to promote trust and transparency and to increase the credibility of the KCS articles, some organizations are making feedback visible to the end-user. A ranking system can be put in place similar to what Amazon.com does with product reviews or Trip Advisor and Yelp provide for user reviews of hotels and restaurants. This information can feed into the triangulation model and radar charts—see the Performance Assessment practice for details.


An underlying premise of KCS is "the best people to create and maintain the knowledge base are the people who use it every day."  As organizations enter Phase 4, Leverage, of the KCS adoption model and make the majority of what they know available to customers through a self-service model, the premise stated above still holds. 


This raises the question of how to engage customers as part of the process. In fact, as organizations mature to the point where a large portion of their articles are published in a just-in-time manner (lots of KCS Publishers across the organization), good customer feedback mechanisms become critical.  Customers must become part of the quality management process for KCS articles.   Here are some of the ways member companies have implemented this when allowing customers to comment on articles:

  • Some make comments private and ask the customer if they want to be contacted about the comment. If the customer checks the "contact me" box, the system opens an incident for that customer and it goes into the normal incident handling process. This approach is probably feasible only for high complexity/low volume environments.

  • Some make the comment public with a wiki-like section on each article that allows customers to contribute their experience and opinions and see the comments of others

  •  Some allow trusted customers (often identified through the customer forums) to create and modify articles in the knowledge base. The source of the article or modification is indicated in the article. 


How Good is Good Enough?

To understand article quality issues better, the Consortium conducted a survey of its members' customers. The survey participants were approximately 67% large enterprises (highly complex business production environments over 300 users) and 27% small to medium businesses (business production environments less than 300 users) from the Americas, Europe, Middle East, and Africa. The remaining 6% were consumers.


This survey assessed customer needs and quality criteria with respect to web-delivered KCS articles providing technical knowledge. This KCS article content could be in the form of known problems, technical updates, or other knowledge base articles.

Almost all of the respondents were already comfortable using web self-help, so they may be considered advanced users. Based on experience, however, we believe the results can be extrapolated to reflect knowledge base content as a whole.

Customer response to the survey indicates articles need to be good enough to be findable and usable, or what we call "sufficient to solve."

Getting the Basics Right

To begin with, we examined the basic content requirement—the material that must be included in the KCS article. Respondents chose the following, mostly in the category of "accuracy," as "very important." Responses are listed in priority order:

  • Technically accurate and relevant

  • Problem and solution description

  • Cause of problem

  • Complete information

  • Quickly found

  • Clarity of content

  • Valid hyperlinks

  • Configuration information

  • Vendor's sense of confidence in the answer


Considered "somewhat important," mostly in the category of "editing and format," were:

  • Compound vs. single thoughts

  • Complete sentences vs. short statements

  • Date created

  • Correct spelling

  • Grammar

  • Last modified

  • No duplication of information

  • Frequency of usage

  • Punctuation


Of "least importance," perhaps not surprising in a technical audience, were the attributes:

  • Legal disclosures

  • Correct trademarks

  • Date last used


Impact on Company Image

Most respondents considered editorial format somewhat important. Since the process involved in achieving editorial perfection can be time-consuming and delay access to information, we decided to assess the impact on corporate image of publishing KCS article information at various levels of editorial quality. The results were revealing. The majority of respondents:

  • Disagreed with or were neutral to the statement: "I have a lesser image of a company that withholds support information access in order to technically validate it."  (In other words, the majority of respondents did not fault a company for withholding information that was not technically validated.)

  • Agreed with the statement: "I have a lesser image of a company that withholds support information access in order to achieve editorial perfection."

  • Agreed with the statement: "To gain knowledge faster, I would like an option to select to see support information that has not been fully validated."

  • Agreed with the statement: "To gain knowledge faster, I would be willing to take responsibility for using any of the incomplete information should there be mistakes." Note: To mitigate risk from sharing this knowledge, many support organizations require customers to accept a disclosure agreement before seeing the KCS article.

  • Would have a higher or at least the same opinion when asked: "If the support information were marked as being in draft format, what opinion would you have of a company that shared everything they know, even if it had editorial mistakes?"


Time/Value Tradeoff: KCS Recommendations

From this survey feedback coupled with other experience implementing KCS, the Consortium feels confident recommending that organizations invest in content speed and accuracy over presentation and format. We should strive for timely and accurate knowledge, ensure we are investing appropriately in training, have a good balance of competencies, develop a licensing model (see the roles section in Practice 7, Performance Assessment), and follow the recommendations for maintaining just-in-time KCS article quality through a sampling process and the creation of the KCS Article Quality Index.


When it comes to information completeness and degree of validation, organizations must individually assess the risk-benefit tradeoff of sharing information early. The Consortium's findings should not be used as a substitute for asking customers about their needs in this area. In our experience, the just-in-time information model has become increasingly accepted as the business community has embraced open source, monthly and quarterly software releases, and extended and open beta-testing programs. Appropriate disclaimers, click-to-acknowledge interfaces, and a clear indication of KCS article status (confidence) are all ways to make the KCS article visible earlier and let the customer determine their own risk profile for the situation.

Last modified
11:02, 29 May 2014

