
Technique 5.12: Self-Service Measures

Measuring self-service success and the self-service experience is hard. In the assisted model, we can count events or cases, and the linking of articles gives us a view of article use. In online communities, we can count posts and responses, which correlate strongly with requests and responses. In the self-service model, we can count activity like searches, page views, and sessions, but these do not have a one-to-one relationship with issues pursued and resolved. So, we have to infer things about the self-service experience from a number of different sources. And, just as in Performance Assessment, where the creation of value cannot be directly counted, we find a triangulation model very useful.

There are a number of things we want to measure about our self-service mechanism.  

  • User's view
    • What value is being realized by those who use self-service?
    • What is the experience of those who use it?
    • How often is self-service used before a case is opened?
    • How often are users of self-service finding things that are helpful?
  • Internal view
    • What value is the organization realizing (a rough calculation is sketched after this list):
      • How much demand is being satisfied through self-service success?
      • How much demand is being satisfied through self-service success that would otherwise have come to the assisted model (cost reduction)?
    • What is the pattern of article use - what articles are valuable to the users?
    • What impact is self-service having on the nature of the work that still comes into the assisted path (new versus known ratio)?
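
As a rough illustration of the internal-view questions above, the sketch below computes a self-service success rate and an estimate of deflected demand from hypothetical session data. The field names, the success heuristic, and the escalation-likelihood discount are assumptions for illustration; they are not defined by KCS and will vary by platform.

# Illustrative sketch only: field names and the "success" heuristic are
# assumptions, not defined by KCS. Real session data will vary by platform.
from dataclasses import dataclass

@dataclass
class SelfServiceSession:
    viewed_articles: int   # articles opened during the session
    opened_case: bool      # did the user escalate to the assisted model?

def self_service_success_rate(sessions):
    """Share of sessions where the user found something and did not open a case."""
    resolved = [s for s in sessions if s.viewed_articles > 0 and not s.opened_case]
    return len(resolved) / len(sessions) if sessions else 0.0

def estimated_deflected_cases(sessions, escalation_likelihood=0.4):
    """Rough count of assisted cases avoided, discounted by the (assumed) fraction
    of self-service users who would otherwise have contacted the assisted model."""
    resolved = sum(1 for s in sessions if s.viewed_articles > 0 and not s.opened_case)
    return resolved * escalation_likelihood

sessions = [SelfServiceSession(2, False), SelfServiceSession(1, True), SelfServiceSession(0, True)]
print(self_service_success_rate(sessions))   # 0.33...
print(estimated_deflected_cases(sessions))   # 0.4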

The Measures

Assessing the self-service experience and value relies on a combination of data analysis, user feedback, and observation. 

Data analysis:

  • User behavior patterns
    • Click stream analysis
  • Volume variation

Direct user feedback:

  • Surveys
  • Comments and feedback from users

Observation:

  • Usability tests  

Because none of the self-service measures mentioned above is precise (none of them by itself directly represents the user experience), we have to look at them together using the triangulation concept. For these measures, it is the trends that are most important, not the absolute values, and it is our ability to correlate the different perspectives that gives us confidence in our assessment of the user experience.
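
To make the triangulation concept concrete, here is a minimal sketch, assuming three monthly signals: a survey score, a self-service success rate, and assisted case volume. The signal names, directions, and the agreement rule are illustrative assumptions, not a prescribed KCS calculation; the point is simply that trends from several perspectives are compared rather than any single absolute value.

# Minimal triangulation sketch: signal names, directions, and the agreement
# rule are illustrative assumptions, not a prescribed KCS calculation.
def trend(series):
    """Direction of change over the period: +1 improving, -1 declining, 0 flat."""
    delta = series[-1] - series[0]
    return (delta > 0) - (delta < 0)

def triangulate(signals):
    """signals: dict of name -> (monthly values, +1 if 'up is good' else -1).
    Returns a statement of how many perspectives agree that things are improving."""
    directions = [trend(values) * good_direction
                  for values, good_direction in signals.values()]
    agreeing = sum(1 for d in directions if d > 0)
    return f"{agreeing} of {len(directions)} perspectives indicate an improving experience"

signals = {
    "survey_score":          ([3.8, 3.9, 4.1], +1),    # higher is better
    "success_rate":          ([0.52, 0.55, 0.58], +1),
    "assisted_case_volume":  ([1200, 1150, 1100], -1), # lower is better
}
print(triangulate(signals))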

As we discussed in Technique 5.10: Content Health Indicators, we need a way to assess the value of the articles in the knowledge base as it grows. The three perspectives discussed in Assessing the Value of Articles are relevant here as well: frequency of reuse, frequency of reference, and the value of the collection of articles.  The articles available through self-service should be included in value assessment.
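
The counting itself is simple. The sketch below tallies reuse (here assumed to mean links from assisted cases) and reference (here assumed to mean self-service views) per article so both can feed the value assessment; the event names are assumptions about the data feed, not KCS terminology.

# Sketch of tallying article reuse and reference; the event types
# ("case_link", "self_service_view") are assumptions about the data feed.
from collections import Counter

def article_usage(events):
    """events: iterable of (article_id, event_type) pairs from the KMS and web analytics."""
    reuse = Counter()      # article linked to an assisted case
    reference = Counter()  # article viewed through self-service
    for article_id, event_type in events:
        if event_type == "case_link":
            reuse[article_id] += 1
        elif event_type == "self_service_view":
            reference[article_id] += 1
    return reuse, reference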

Integrating Feedback

The most powerful and valuable feedback about KCS articles comes from the audience using them. Every time a user acknowledges getting value from a KCS article, that feedback should be visible to all who contributed to the KCS article: the creator, as well as people who reused and modified the article. If an end-user flags a KCS article as incomplete or confusing, that KCS article must be queued for rework.
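
A minimal sketch of how that feedback might be routed follows. The Article fields, the notify helper, and the rework queue are hypothetical stand-ins for whatever the knowledge management system actually provides.

# Hedged sketch: Article fields, notify(), and the rework queue are
# hypothetical stand-ins for the organization's actual KMS capabilities.
from dataclasses import dataclass, field

rework_queue = []   # articles flagged by users, awaiting rework

@dataclass
class Article:
    article_id: str
    contributors: list = field(default_factory=list)  # creator plus those who reused or modified it

def notify(person, message):
    # Stand-in for whatever visibility mechanism the KMS provides (dashboard, email, feed).
    print(f"to {person}: {message}")

def handle_feedback(article, helpful, flagged_as=None):
    if helpful:
        # Make the value acknowledgement visible to everyone who contributed.
        for person in article.contributors:
            notify(person, f"A user found article {article.article_id} valuable.")
    if flagged_as in ("incomplete", "confusing"):
        # A flagged article must be queued for rework.
        rework_queue.append((article.article_id, flagged_as))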

In order to promote trust and to increase the credibility of the KCS articles, some organizations are making feedback visible to all audiences. A ranking system can be put in place, similar to what Amazon.com does with product reviews or what Trip Advisor and Yelp provide for user reviews of hotels and restaurants. This information can feed into the triangulation model for assessing the self-service experience.

An underlying premise of KCS is "the best people to create and maintain the knowledge base are the people who use it every day."  As organizations begin to Build Proficiency in KCS and make the majority of what they know available to users through a self-service model, that premise still holds.

This raises the question of how to engage users as part of the process. In fact, as organizations mature to the point where a large portion of their articles are published externally in a just-in-time manner (lots of KCS Publishers across the organization publishing in the moment), good user feedback mechanisms become critical. Users become part of the quality management process for KCS articles. Here are some of the ways member companies have implemented this when allowing users to comment on articles:

  • Some make comments private and ask the user whether they want to be contacted about the comment. If the user checks the "contact me" box, the system opens an incident for that customer and it goes into the normal incident handling process (a sketch of this routing follows this list). This approach is probably feasible only for high-complexity/low-volume environments.
  • Some make the comment public, with a wiki-like section on each article that allows users to contribute their experience and opinions and see the comments of others.
  • Some allow trusted users (often identified through the community forums) to create and modify articles in the knowledge base. The source of the article or modification is indicated in the article. 
  • Some have segmented the knowledge base and have a governance model in place that allows all users to contribute to open-source type content.
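
As an illustration of the first approach above, the sketch below routes a private comment: when the "contact me" box is checked, an incident record is created so the comment enters the normal incident handling process. The function name and record structure are hypothetical.

# Illustrative routing for a private article comment; the record structure
# and function name are hypothetical, not part of any specific product.
def handle_article_comment(article_id, user, comment_text, contact_me):
    """If the user checked "contact me", open an incident so the comment enters the
    normal incident handling process; otherwise keep it as internal article feedback."""
    if contact_me:
        incident = {"customer": user,
                    "description": comment_text,
                    "related_article": article_id,
                    "status": "new"}
        return ("incident", incident)
    return ("feedback", {"article_id": article_id, "user": user, "comment": comment_text})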