
Self-Service Measures

Measuring self-service success and the self-service experience is hard. In the assisted model, we can count events or interactions, and the citation of articles gives us a view of article use. In online communities, we can count posts and responses, which have a strong correlation to requests and responses. In the self-service model, we can count activity like searches, page views, and sessions, but these do not have a one-to-one relationship to issues pursued and resolved. So we have to infer things about the self-service experience from a number of different sources. And, just as in Performance Assessment, where the creation of value cannot be directly counted, we find a triangulation model very useful.

There are a number of things we want to measure about our self-service mechanism.  

  • User's view
    • What value is being realized by those who use self-service?
    • What is the experience of those who use it?
    • How often is self-service used before a case is opened?
    • How often are users of self-service finding things that are helpful?
  • Internal view
    • What value is the organization realizing?
      • How much demand is being satisfied through self-service success?
      • How much demand is being satisfied through self-service success that would have come to the assisted model (cost reduction)?
    • What is the pattern of article use? Which articles are valuable to users or cited in generated answers that users approve of?
    • What impact is self-service having on the nature of the work that still comes into the assisted path (the new-versus-known ratio)? A minimal sketch of this calculation follows this list.
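
To make the last question concrete, here is a minimal sketch of the new-versus-known calculation, assuming each closed case records whether it was resolved with an existing article or required a new one (the field names are hypothetical):

    # Each record marks whether the case was resolved by linking an existing
    # (known) article or required creating a new one; field names are hypothetical.
    cases = [
        {"case_id": "C-101", "resolved_with_existing_article": True},
        {"case_id": "C-102", "resolved_with_existing_article": False},
        {"case_id": "C-103", "resolved_with_existing_article": True},
    ]

    known = sum(1 for c in cases if c["resolved_with_existing_article"])
    new = len(cases) - known
    print(f"new : known = {new} : {known} ({new / len(cases):.0%} new)")

If self-service is successfully deflecting known issues, the share of new issues in the assisted path should rise over time.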

The Measures

Assessing the self-service experience and value relies on a combination of data analysis, user feedback, and observation. 

Data analysis:

  • User behavior patterns
    • Clickstream analysis (see the sketch after this list)
  • Volume variation
  • Citations in successful generated answers
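
As an illustration of what clickstream analysis can look like at its simplest, the sketch below sessionizes a raw event stream and asks how often article views preceded case creation; the event shape and names are hypothetical:

    from collections import defaultdict

    # Hypothetical clickstream events: (session_id, timestamp, event_type, target)
    events = [
        ("s1", 1, "search", "reset password"),
        ("s1", 2, "view_article", "KB-1001"),
        ("s1", 3, "case_created", "C-501"),
        ("s2", 1, "search", "license error"),
        ("s2", 2, "view_article", "KB-2002"),
    ]

    sessions = defaultdict(list)
    for session_id, ts, event_type, target in events:
        sessions[session_id].append((ts, event_type, target))

    cases_opened = 0
    preceded_by_self_service = 0
    for session_events in sessions.values():
        session_events.sort()
        case_ts = next((ts for ts, et, _ in session_events if et == "case_created"), None)
        if case_ts is None:
            continue  # no case in this session; possibly a self-service success
        cases_opened += 1
        if any(et == "view_article" and ts < case_ts for ts, et, _ in session_events):
            preceded_by_self_service += 1

    print(f"{preceded_by_self_service} of {cases_opened} cases were preceded by article views")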

Direct user feedback:

  • Surveys
  • Comments and feedback from users

Observation:

  • Usability tests  

Because none of the self-service measures mentioned above is precise (none of them by itself directly represents the user experience), we have to look at them together using the triangulation concept. For the above measures, it is the trends that matter most, not the absolute values, and it is our ability to correlate the different perspectives that gives us confidence in our assessment of the user experience.
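
As one way to operationalize triangulation, the sketch below trends a few indicators over the same periods and checks whether each is moving in its desired direction; the indicator names and values are hypothetical stand-ins for your own measures:

    # Quarterly values for a few hypothetical indicators.
    indicators = {
        "survey_self_service_success": [0.62, 0.64, 0.67, 0.69],      # up is good
        "cases_per_active_user":       [0.031, 0.030, 0.028, 0.027],  # down is good
        "citation_rate_in_answers":    [0.40, 0.44, 0.47, 0.51],      # up is good
    }
    desired_direction = {
        "survey_self_service_success": +1,
        "cases_per_active_user": -1,
        "citation_rate_in_answers": +1,
    }

    def slope(values):
        """Least-squares slope over equally spaced periods."""
        n = len(values)
        mean_x = (n - 1) / 2
        mean_y = sum(values) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
        den = sum((x - mean_x) ** 2 for x in range(n))
        return num / den

    for name, series in indicators.items():
        trending_well = slope(series) * desired_direction[name] > 0
        print(name, "trending in desired direction:", trending_well)

When independent perspectives trend the same way, confidence in the overall assessment goes up; when they disagree, the disagreement is itself a signal to investigate.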

As we discussed in Technique 5.10: Content Health Indicators, we need a way to assess the value of the articles in the knowledge base as it grows. The three perspectives discussed in Assessing the Value of Articles are relevant here as well: frequency of reuse, frequency of reference, and the value of the collection of articles. The articles available through self-service should be included in that value assessment.
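
A minimal sketch of the first two perspectives, assuming a usage log that records each time an article is viewed in self-service (one possible reading of reuse) or linked in a case or cited in a generated answer (one possible reading of reference); the event names are hypothetical:

    from collections import Counter

    # Hypothetical article usage events: (article_id, channel)
    usage = [
        ("KB-1001", "self_service_view"),
        ("KB-1001", "case_link"),
        ("KB-1001", "generated_answer_citation"),
        ("KB-2002", "self_service_view"),
        ("KB-2002", "self_service_view"),
    ]

    reuse = Counter(a for a, channel in usage if channel == "self_service_view")
    reference = Counter(a for a, channel in usage
                        if channel in ("case_link", "generated_answer_citation"))

    for article in sorted(set(reuse) | set(reference)):
        print(article, "reuse:", reuse[article], "reference:", reference[article])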

Limitations

Given the varying maturity of self-service deployments and resources available, we provide "good," "better," and "best" measurement options.

Every Consortium Member company that worked on this project brought its own handful of indicators, and every one of them felt those indicators could be improved. Different business models need different metrics. For example, levels of sophistication with clickstream analysis vary, as do the journey maps that determine when and where clickstream analysis starts.

Not all unsuccessful self-service or community attempts result in case creation, and not all successful self-service or community engagements represent an avoided incident. While we attempt to distinguish among self-service, community, and assisted interactions, another scenario to consider is parallel solving while a case is open. We cannot accurately measure all scenarios at scale.

Given this complexity, the most useful strategy is to trend against yourself. Consider how you can establish a baseline for your organization and measure your progress against it, as in the sketch below.
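
One simple way to trend against yourself is to index each measure to a baseline period; the window and values below are hypothetical:

    # Monthly values for one indicator; the first three months form the baseline.
    values = [120, 118, 122, 131, 140, 152]
    baseline = sum(values[:3]) / 3

    # 100 = your own baseline; anything above 100 is progress against yourself.
    indexed = [round(v / baseline * 100, 1) for v in values]
    print("baseline:", round(baseline, 1))
    print("indexed: ", indexed)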

Questions That Require Assumptions to Answer

  • How many customers with support demand never made it somewhere that we can measure their engagement? (e.g., they started in Google and stayed there)
  • What does a successful engagement look like? There are numerous possibilities for a successful pattern of engagement for various personas (break/fix versus goal-oriented tasks versus long-form learning) and from different origins (Google, Direct, Click Navigation vs. Search, In-Product Help, Generated Answers).
  • Are anonymous users customers? Members experimenting with this report that, after adjusting for bots, approximately 90% of hits to their public self-service content come from external search engines (like Google). Based on clickstream data and surveys, it appears a very large percentage of those hits are from customers who did not take the trouble to log into the support portal.
  • Sessions: We measure by sessions, but not all sessions are equal. We aim to measure sessions that provide value, not just the total quantity of sessions. This may mean figuring out how to remove non-valuable or ineligible sessions from your count (a minimal sketch of one approach follows this list).
  • Data sources: Measurement tools (Google Analytics versus SEMrush versus internal measures) may differ in visibility or measurement parameters.
  • Effort: What is a high-effort versus a low-effort visit? It's a subjective measurement that constantly evolves.
  • Goals: Measurement goals change based on user persona or use case.
  • Bounce Rate: Bounce is not a good qualitative measure because a single-page session can be successful, and bounce calculations can be skewed by event tracking. Better measures are time on page, scroll percentage, and other contextual signals.
  • Feedback: Limited value due to low participation (~1.5%). Article or generated answer feedback lacks user context. Are customers rating the solution quality or their overall experience regarding their issue?
  • Context: Not all metrics are actionable or have a clear 'why'. When self-service fails, did customers not find what they were looking for, or did they not understand the answer they received? Oftentimes, we have to drill down on supporting or related metrics to understand the context of the data.
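
Putting the session and bounce points together, here is a minimal sketch of filtering to valuable sessions using engagement signals instead of bounce; the thresholds and field names are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Session:
        duration_seconds: float
        max_scroll_pct: float  # 0-100, deepest scroll on the page
        is_bot: bool

    def is_engaged(s: Session, min_seconds: float = 30.0, min_scroll: float = 60.0) -> bool:
        """A single-page session still counts as engaged if the visitor
        spent real time on the page or scrolled most of it."""
        return s.duration_seconds >= min_seconds or s.max_scroll_pct >= min_scroll

    sessions = [
        Session(4.0, 10.0, False),   # quick bounce, likely not valuable
        Session(95.0, 85.0, False),  # single-page but clearly engaged
        Session(1.0, 100.0, True),   # bot traffic, ineligible
    ]

    eligible = [s for s in sessions if not s.is_bot]
    engaged = [s for s in eligible if is_engaged(s)]
    rate = len(engaged) / len(eligible) if eligible else 0.0
    print(f"engaged sessions: {len(engaged)}/{len(eligible)} ({rate:.0%})")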