Consortium for Service Innovation

July 2025: Introducing PAR 2.0

Process Adherence Review = Process ALIGNMENT Review

An Update to Technique 6.5: KCS Process Adherence Review from the KCS v6 Practices Guide

KCS describes a double loop process by which we can 

  • answer questions quickly,
  • deliver answers where people are looking for them, and
  • drive improvements into products and services by revealing the most frequently asked questions and recurring issues so that we can eliminate them at the source.

To realize these benefits (Evolve Loop goodness), knowledge workers engage in the Solve Loop activities of reusing, improving, capturing, and structuring in their workflow. The Process Alignment Review is one way to know if those activities are happening effectively and contributing to meaningful outcomes.

The Evolve Loop Practice of Process Integration says we want to integrate knowledge management activities into our workflow and make that workflow easy, intuitive, and repeatable through the functionality and integration of our tools. 


The KCS Process Alignment Review technique is important because we want to know how often and how well we are contributing to the Solve Loop. How often did we do the right thing?


- The ‘Why’ Behind Each Practice

The Process Alignment Review (PAR) is a quality process within KCS. Using the PAR, we sample and evaluate recently closed cases for their alignment with the Solve Loop workflow. This evaluation process is used in two ways:

  1. During coaching: providing the coach insight into the Solve Loop behavior of the people they're coaching, enabling meaningful feedback that supports proficiency development (not conversations about numbers)
  2. On an ongoing basis for KCS program quality monitoring (where trending total outcomes over time might be interesting)

PAR is most valuable as a tool to promote learning and growth by highlighting coaching opportunities. It offers insight into behaviors and reveals how deeply Solve Loop activities are embedded in daily work.

The combination of the Content Standard Checklist and the PAR provides coaches with the perspective they need to help people improve the value they create. While the Content Standard Checklist evaluates articles for their successful implementation of the Content Standard, the PAR evaluates cases for their successful implementation of the Solve Loop.

Who Does PAR?

The Process Alignment Review checklist needs to be tailored to the workflow the organization has defined. In an ideal world, PAR would live within an existing case quality program or QA process. If case quality checks are something your organization is already doing, make sure PAR is part of that! If not, the PAR should be performed by coaches.

The PAR may be analyzed for individuals or teams. From a Program Management standpoint, it can be helpful to leverage PAR to understand overall maturity of the KCS program and program-level health and trends.

This update is intended to make PAR more understandable and easier to use: to ensure the labeling and calculations are more straightforward, the outputs are actionable, and the starting point is more accessible. 

In this version of PAR, each case represents an opportunity to engage in the Solve Loop. We identify the opportunity and whether or not the knowledge worker met it. Because accurate reuse is one of the big engines that drives KCS benefits, we provide options for implementing a Process Alignment Review at various levels of complexity (Essential, Expanded, and Comprehensive), to be applied as appropriate for the environment and the license level of the knowledge worker(s).

Please spend time with the PAR Glossary before beginning. The Glossary covers all the elements of the new Comprehensive PAR model; the Essential and Expanded versions of the PAR use a subset of those elements. Note that the Outcomes section includes "What to do about it" for each outcome!

Process Alignment Model

Use knowledge worker-level data to identify opportunities for celebrating success or starting a coaching conversation.  High-level program data can be observed and tracked over time to evaluate the overall health of your Solve Loop.

The Member-only Process Alignment Workbook has several example PAR models for you to copy and use in your environment. Consortium Members: log in to access the link.

Essential

This demonstration model shows a PAR framework evaluating Reuse and Relevant Reuse. This is a helpful view for programs (or waves) that are just beginning the KCS journey, and for outsourced environments where coaching opportunities are limited. It provides a sense of whether or not the KCS workflow is being followed, who understands the workflow, and how much we can trust the patterns and trends that are developing.

[Figure: Essential PAR model example]

In this example, we looked at 18 of Al's recently closed cases, and discovered that two of them did not have knowledge opportunities.  Of the remaining 16 cases, 12 of them had opportunities for reuse (the article existed before the case was opened). Of those, Al reused articles for 10 of his cases, and 8 of those were relevant links: 8 cases had a link to a Resolution article that describes the solution to the case. 

This makes Al's reuse rate 83% (articles were reused in 10 cases out of 12 opportunities, so 10 divided by 12) and his reuse accuracy 80% (8 of the 10 cases with reuse had relevant links, so 8 divided by 10).
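To make the arithmetic easy to adapt, here is a minimal sketch in Python (the variable names are ours, not part of the PAR model) that reproduces Al's numbers:

    # Essential PAR tallies from the example above.
    reuse_opportunities = 12       # an article existed before the case was opened
    cases_with_reuse = 10          # Al linked an existing article
    cases_with_relevant_reuse = 8  # the link was to a Resolution article that solves the case

    reuse_rate = cases_with_reuse / reuse_opportunities            # 10 / 12
    reuse_accuracy = cases_with_relevant_reuse / cases_with_reuse  # 8 / 10

    print(f"Reuse Rate: {reuse_rate:.0%}")          # Reuse Rate: 83%
    print(f"Reuse Accuracy: {reuse_accuracy:.0%}")  # Reuse Accuracy: 80%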

What kinds of conversations might a coach have with Al about his experience? It's not about the numbers. Al is searching and reusing accurately most of the time, which is great! A coach might ask Al to demonstrate his process of searching; perhaps more context in the search would return more results. Al might also benefit from a conversation about why accurate reuse is important. 

PAR is not a grade or a score on a knowledge worker; it is an indicator of engagement and provides opportunities for learning.

- Monique Cadena, Consortium Innovator

Expanded

This demonstration model shows a PAR framework evaluating Solve Loop reuse and create activities. Waiting to add article creation to PAR can help reinforce that creation is not the primary goal of KCS and avoid the temptation to put a goal on that particular activity.

[Figure: Expanded PAR model example]

Comprehensive

This demonstration model shows a more mature PAR framework evaluating Solve Loop reuse, reuse & improve, and create activities. This view provides visibility into all of the activities of the Solve Loop, and can help pinpoint specifically which parts of the workflow are proving troublesome.

[Figure: Comprehensive PAR model example]

PAR 2.0 Glossary

Evaluations

Closed Cases

Definition: Closed cases are the basis for the PAR assessment.

What It Means: These are cases randomly selected from those closed by individuals or teams within a recent timeframe (such as the last 30 or 60 days). This group of cases is also called the sample size. In general, a larger sample provides a more accurate picture of what's happening, but only up to a point. While the sample may not be statistically significant, it can still offer valuable insights for learning and improvement.
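As a purely illustrative sketch (the field names and 18-case sample size are assumptions; use whatever your case management system provides), drawing the sample might look like this in Python:

    import random
    from datetime import datetime, timedelta

    # Hypothetical closed-case records; real systems will have different fields.
    closed_cases = [
        {"id": "C-1001", "owner": "Al", "closed_at": datetime(2025, 6, 12)},
        {"id": "C-1002", "owner": "Al", "closed_at": datetime(2025, 7, 3)},
        # ...
    ]

    # Keep cases closed within the last 60 days, then sample up to 18 at random.
    cutoff = datetime.now() - timedelta(days=60)
    recent = [c for c in closed_cases if c["closed_at"] >= cutoff]
    sample = random.sample(recent, k=min(18, len(recent)))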

Cases without Knowledge Opportunities

Definition: The number of cases not requiring knowledge to solve. 

What It Means: Cases in which knowledge would not have been appropriate to link to from the case. For example, cases that are purely transactional ("what's my account balance today?") or in which we don't know what happened ("never mind, it started working again") may not benefit from knowledge.

Opportunities

Reuse

Definition: The number of opportunities to Reuse an existing knowledge article as-is. 

What It Means: "Reuse" implies that the article existed before it should have been used to solve this case. In the Comprehensive model, cases in this category exclude ones in which the reusable article was not sufficient to solve and should have been Improved. In that model a case may be a Reuse opportunity, or a Reuse & Improve opportunity, but not both.

Reuse and Improve

Definition: The number of opportunities to Improve an existing knowledge article (through Reuse). 

What It Means: "Reuse" implies that the article existed before it was used to solve this case. Cases in this category include only ones in which the linked article was not sufficient to solve and should have been Improved. A case may be a Reuse opportunity, or a Reuse & Improve opportunity, but not both.

Create

Definition: The number of opportunities to Create a new knowledge article. 

What It Means: When the case was closed, the knowledge base contained no article that should have been reused (or reused and improved), yet the case should have been closed with an article. In that situation, a new article should have been created while the relevant information was captured in the case workflow, prior to case closure.
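Because a case lands in exactly one of these opportunity categories, the triage can be sketched as a simple decision (a simplification in Python, with names of our own choosing, not an official PAR artifact):

    def classify_opportunity(knowledge_appropriate: bool,
                             article_existed: bool,
                             sufficient_to_solve: bool) -> str:
        """Assign a case exactly one opportunity category (Comprehensive model)."""
        if not knowledge_appropriate:
            return "No Knowledge Opportunity"  # e.g., purely transactional cases
        if not article_existed:
            return "Create"                    # new knowledge was needed
        if sufficient_to_solve:
            return "Reuse"                     # an existing article solves it as-is
        return "Reuse & Improve"               # an existing article needed improvement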

Activities

Cases with Reuse

Definition: The number of cases with at least one article linked when the linking happened by reusing existing knowledge as-is and did not result in knowledge creation or improvement. 

What It Means: Someone reused knowledge when it was the appropriate thing to do.  This metric does not attest to the applicability of the linked article: that is instead assessed in Relevant Reuse.

Cases with Relevant Reuse

Definition: The number of cases that reuse an article that resolves the closed case. 

What It Means: "Relevant," in this sense, means there is at least one link from the case to a Resolution article that describes how to handle this case--that is, links only to Reference articles don't count as towards Relevant Reuse for the purpose of the PAR evaluation. (Review what the KCS v6 Knowledge Domain Analysis Reference Guide says about Resolution and Reference articles.)

Cases with Reuse and Improve

Definition: The number of cases identified as Reuse and Improve opportunities that Reused and Improved an article. 

What It Means: How many times did we Reuse and Improve knowledge when a relevant, linked article wasn't sufficient to solve? (This model assumes that if people are improving the article, it's relevant to the case. As you're doing the PAR analysis, validate this assumption.)

Cases with Article Creation

Definition: The number of cases in which an article was created and linked to the case.

What It Means: How many times did we contribute new knowledge when new knowledge is needed? (This model assumes that if people are contributing an article from the case, it's relevant to the case. As you're doing the PAR analysis, check if this assumption is valid.)

Cases with Solve Loop Activity

Definition: The number of cases that link to one or more articles, whether the articles were Reused, Reused and Improved, or Created.

What It Means: A summary measure of Solve Loop activity.  It may be compared with the program's reported Link Rate.

Outcomes

Reuse Rate

Definition: The percentage of Articles Reused (Linked) compared with the total opportunities to Reuse. 

What It Means: Note that although this sounds similar to the KCS Link Rate metric, it is different. This compares cases that reuse articles (without improvement) with the total number of times reuse should have happened. If we're following the Solve Loop perfectly, Reuse Rate should be 100%.

What to Do About It: If KCS participants are missing opportunities to reuse, make sure they understand the benefits to themselves, their colleagues, and their customers of doing so. Remind them of a big WIIFM: if they reuse an article, they don't need to retype anything that's already in that article, streamlining case documentation. Confirm that search is effective enough that people can reliably find the articles they need. If it's not, consider search tuning, ensuring better alignment with the Content Standard, or even a technology upgrade.

Reuse and Improve Rate

Definition: The percentage of Articles Reused and Improved compared with the total opportunities to Reuse and Improve. 

What It Means: What percentage of the time are we following the Improve practice of the Solve Loop, making edits directly if appropriately licensed or flagging if not? If we're following the Solve Loop perfectly, Reuse & Improve Rate should be 100%.

What to Do About It: Most organizations find the Improve practice to be the most challenging part of the Solve Loop.  If people are missing opportunities to reuse and improve, make sure to reinforce the principles of collective ownership (you're not editing "someone else's" article) and how essential demand-driven "every use is a review" is to knowledge quality and effectiveness in KCS.  Also check that your technology isn't putting speed bumps in the way of improving content.

Create Rate

Definition: The percentage of Articles Created and Linked compared with the total opportunities to Create. 

What It Means: What percentage of the time are we contributing new knowledge when new knowledge is needed? If we're following the Solve Loop perfectly, Create Rate should be 100%. Note that in the very early days of KCS within a knowledge domain, when most cases require article creation and users are coming up the learning curve, this rate might be low...but it should increase over time.

What to Do About It: If KCS participants are missing opportunities to create, make sure they understand the benefits to themselves, their colleagues, and their customers of doing so. They build their brand, they speed closure through reuse and self-service, and they get to help people while they sleep. Their case work becomes less redundant and more interesting. Ensure that people know how to capture in the workflow rather than starting the article from scratch at the end of the case. Check to make sure your technology facilitates capturing in the workflow and doesn't require copying and pasting or retyping to contribute a new article. And make sure your knowledge workers understand "sufficient to solve." We don't need a perfect article; we need one that captures how we helped our one specific customer. Trust the process: as content is used more, it gets reviewed more, and it evolves to be suitable for more people and situations.

Reuse Accuracy

Definition: The percentage of Articles Reused that are relevant to the Case.

What It Means: Of the cases with reuse, how many linked to an article that actually resolves the case? Reuse accuracy should be very nearly 100%.

What to Do About It: If reuse accuracy is lower than desired, make sure that managers aren't setting expectations on the Link Rate--never put goals on activities! Stress that a high link rate with low link accuracy is far worse than a low link rate, because inaccurate links make Knowledge Domain Analysis pointless. Make sure participants understand that they're expected to link to a Resolution article, not just Reference articles.

Solve Loop Index

Definition: The percentage of contribution Activities (Reuse + Reuse & Improve + Create) compared to contribution Opportunities (Reuse + Reuse & Improve + Create).

What It Means: For what percentage of our opportunities to use knowledge in the workflow are we actually doing so and following the Solve Loop?

What to Do About It: This is a single number that provides a snapshot of how well a team (or sometimes an individual) is following the Solve Loop. In most environments, a Solve Loop Index of 80%+ would be healthy. As mentioned in Create Rate, a team launching a new knowledge domain with a sparsely populated knowledge base will tend to have a lower Solve Loop Index temporarily, until the most frequently used content is created and participants go through coaching and their own learning curves.

The Solve Loop Index has all the strengths and weaknesses of any other single number: it provides a quick and easily digestible data point, especially for executive reporting, but it doesn't tell you what to do.  To take action to improve the Solve Loop Index, look for opportunities with Reuse Rate, Reuse & Improve Rate, Create Rate, and Reuse Accuracy.
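To tie the Outcomes together, here is a hedged sketch in Python (with hypothetical tallies) showing how the rates and the Solve Loop Index are computed from a completed PAR:

    # Hypothetical opportunity and activity tallies from a Comprehensive PAR.
    opportunities = {"reuse": 12, "reuse_improve": 4, "create": 4}
    activities = {"reuse": 10, "reuse_improve": 3, "create": 3}
    relevant_reuse = 8  # of the 10 reuse links, 8 were to relevant Resolution articles

    reuse_rate = activities["reuse"] / opportunities["reuse"]                          # 10/12 = 83%
    reuse_improve_rate = activities["reuse_improve"] / opportunities["reuse_improve"]  # 3/4 = 75%
    create_rate = activities["create"] / opportunities["create"]                       # 3/4 = 75%
    reuse_accuracy = relevant_reuse / activities["reuse"]                              # 8/10 = 80%

    # Solve Loop Index: all Solve Loop activities over all Solve Loop opportunities.
    solve_loop_index = sum(activities.values()) / sum(opportunities.values())          # 16/20 = 80%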

Thank You!

Thanks to everyone who contributed to this update, including:

  • Monique Cadena
  • Kristin Cline
  • Janine Deegan
  • Adam Hansen
  • Adam Mullen
  • Kelly Murray
  • Mitzi Newland
  • Padma Prasad
  • Christina Roosen
  • Jason Rowland
  • Dave Steward
  • Tomer Shoshan
  • Sander van der Moolen
  • Jacob Watts
  • Sue Wong
  • Larry Yang

Special thanks to David Kay and Jennifer Crippen, whose years of real world experience, thoughtfulness, and contributions continue to make the methodology understandable, usable, and relevant.
