Technique 6.3: Search Technology for KCS

Because searching is an integral part of the KCS Solve Loop, it's important to provide technology that allows users to search the knowledge base effectively.  Rightly or wrongly, users often blame their search technology for the difficulty they have finding relevant content.  If users aren't confident in search, they're less likely to seek to understand what we collectively know, less likely to review and improve content while using it, and more likely to capture duplicate knowledge.

Search engines are designed to return relevant content on the basis of a query.  They sort the list of documents they return based on how closely they calculate each document matches the query.  When search works well, the most relevant documents appear at the top of the results list.

In their simplest form, search engines look for literal matches between words in the query and words in the document.  Search engines can be made more sophisticated, for example by matching simple variations on terms (matching "run" with "running"), irregular variations (matching "run" with "ran"), synonyms (matching "run" with "jog"), or concepts ("run a program" matches "execute software" but not "a jogging fitness program").
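
As a minimal sketch of these layers of matching, assuming hand-built stem and synonym tables (everything here is illustrative, not any vendor's implementation):

```python
# A toy illustration of layered matching: literal terms, stemmed variants,
# and synonyms.  The stem and synonym tables are hand-built examples.

STEMS = {"running": "run", "runs": "run", "ran": "run"}   # regular and irregular variants
SYNONYMS = {"jog": "run", "jogging": "run"}               # synonym -> canonical term

def normalize(term: str) -> str:
    """Reduce a term to a canonical form: lowercase, stem, then map synonyms."""
    term = term.lower().strip(".,!?")
    term = STEMS.get(term, term)
    return SYNONYMS.get(term, term)

def matches(query: str, document: str) -> bool:
    """True if any normalized query term appears in the normalized document."""
    doc_terms = {normalize(t) for t in document.split()}
    return any(normalize(t) in doc_terms for t in query.split())

print(matches("run", "She went jogging yesterday"))  # True: synonym + stemming
print(matches("run", "Check the stock price"))       # False: no match on any level
```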

Sorting by relevance, or ranking, is very important because users rarely look at more than the first several results (or, at most, the first several pages of results).  So documents that are ranked low are effectively excluded from search results.

Relevance ranking can use many factors to assess the closeness of match between query and document.  For example (a simple scoring sketch follows this list):

  • How many of the query terms appear in the document
  • How often those terms appear
  • How rare or meaningful those words are in the documents being searched (e.g., "0x32565" is far rarer than "Error," so the query "Error 0x32565" is a closer match to "Code 0x32565" than to "Error -135")
  • How close together the query words appear in the document
  • The location of the words; for example, words in the title are presumed to be more meaningful than words buried in the text.  The Consortium has proposed that good practice may be to rank matches in the Issue and Environment sections higher than matches in the Resolution or Cause sections, because the user is presumed not yet to know the Resolution or Cause.
  • How closely the concepts (not just the words themselves) in the query match the concepts in the document
  • The presumed quality or reputation of the document, based on link counts, ratings, how recently it was viewed, or other similar factors
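
To make these factors concrete, here is a minimal scoring sketch that combines three of them: term frequency, term rarity (inverse document frequency), and field weighting.  The field names follow the KCS article structure, but the weights and the helper function are illustrative assumptions, not a vendor algorithm.

```python
import math

# Hypothetical field weights: Issue and Environment rank higher than
# Resolution and Cause, per the Consortium's suggestion above.
FIELD_WEIGHTS = {"title": 3.0, "issue": 2.0, "environment": 2.0,
                 "resolution": 1.0, "cause": 1.0}

def score(query_terms, article, corpus_size, doc_freq):
    """Sum, over query terms and fields: frequency * rarity (IDF) * field weight."""
    total = 0.0
    for field, text in article.items():
        words = text.lower().split()
        for term in query_terms:
            tf = words.count(term.lower())          # how often the term appears
            if tf == 0:
                continue
            # Rarer terms (low document frequency) score higher.
            idf = math.log(corpus_size / (1 + doc_freq.get(term, 0)))
            total += tf * idf * FIELD_WEIGHTS.get(field, 1.0)
    return total

article = {"title": "Error 0x32565 on startup",
           "issue": "Application reports error 0x32565 when launched",
           "resolution": "Reinstall the display driver"}
# Suppose "0x32565" appears in 2 of 1,000 articles and "error" in 600:
print(score(["error", "0x32565"], article, 1000, {"error": 600, "0x32565": 2}))
```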

Though there are as many algorithms as there are vendors, search quality must be measured by the success users have navigating through the knowledge base.

It is important to understand how the search engine works, so trainers and Coaches can advise all knowledge contributors and users on the best ways of using search.  For example, should we use many words or few?  Should we use sentences and natural language, or just keywords?  How sensitive is search to specific words, or are general concepts sufficient?  Coaches must be prepared to model, and provide feedback on, technology-specific aspects of search.

"Search" for Support:  What's Different

The nature of human languages—and especially English—makes search challenging in any domain.  For example, if we say "stock," are we asking about a financial instrument, part of a gun, or a soup base?  And is "running into the bank" a common errand, or a navigational error in a kayak?  Humans unconsciously disambiguate competing meanings based on context, but context is hard to program into machines.

Internet search engines leverage the structure of the web itself, and the behavior of users, to increase relevance.  With over 100 million websites and hundreds of millions of users searching every day, Internet search has an almost inconceivably large dataset to mine.  Unfortunately, KCS knowledge bases have neither the web's structure nor its volume of use, so Internet search approaches don't work well for them.  We often hear, "Can't search work just like Google?"  Because organizational knowledge bases do not have the volume of activity, our answer is "no."

If search is hard in general, search for support is doubly so.  Users know some symptoms of their problem, and they may know something about when and where the problem occurs, but they don't really know the answer they're looking for.  This is the basis for the Consortium's contention that search should look first in the Issue and Environment sections, at least for articles using the KCS proposed structure.  The search technology also needs to support people who do know something about the resolution or cause of an issue, allowing them the option to search the Resolution and Cause fields.
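
In an engine that indexes the KCS fields separately, this preference can be expressed as field boosts, while still letting a user deliberately scope a search to the Resolution and Cause fields.  A sketch, assuming an Elasticsearch-style index; the field names and boost values are illustrative assumptions:

```python
# Assumed: an Elasticsearch-style index whose documents carry the KCS fields
# below.  Field names and boost values are illustrative, not prescribed.
symptom_query = {
    "query": {
        "multi_match": {
            "query": "printer offline after update",
            # Boost Issue and Environment: requestors know symptoms, not answers.
            "fields": ["issue^3", "environment^3", "resolution", "cause"],
        }
    }
}

# A user who already knows part of the answer can scope the search explicitly:
resolution_query = {
    "query": {
        "multi_match": {
            "query": "reinstall print spooler driver",
            "fields": ["resolution^3", "cause^3"],
        }
    }
}
```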

The good news is that support domains are constrained.  People will ask about anything in Internet search, but in KCS knowledge bases, they're typically asking about exceptions that occur with a defined set of products and services.  This simplifies the "stock" problem, if the technology knows how to take advantage of it.

Key Considerations for Search Technology

The sophistication of search technology required for a sustainable KCS implementation varies with the size of the knowledge base, the complexity of the domain (i.e., how subtle the distinctions between non-duplicate articles can be), and the technical astuteness and persistence of our users.  Generally speaking, very simple technology often suffices for a knowledge base of fewer than 1,000 basic articles, while collections of over 100,000 articles in a deeply technical subject area strain the limits of current technology.

Here are some considerations for selecting search technology:

  • Is it important to be able to search other resources at the same time as the knowledge base?  In other words, should a single search return results from documentation, community forums, and defects?  (A federated search sketch follows this list.)
  • Will a simple keyword search suffice, or do we need to support synonyms or concept-based search?  Does the size and complexity of our domain require even more advanced approaches to finding results?
  • How much of a burden does the search technology impose on the content developer who is capturing, structuring, and improving content?  Must they enter careful metadata or keyword fields, or will search handle the content automatically?  Can knowledge be captured "at the speed of conversation"?
  • What reports are available to drive Evolve Loop content development, especially to fill self-service gaps?
  • What options does the KCS program team, or another team, have to tune and refine the search experience? What reports are available to help them do this?
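
On the first question, federated search is one common pattern: fan the query out to each repository, then merge the results into a single list.  A minimal sketch, with hypothetical per-source search functions:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-source search functions; each returns a list of
# (title, score, source) tuples.  Stubs stand in for real connectors.
def search_kb(query): ...
def search_docs(query): ...
def search_forums(query): ...
def search_defects(query): ...

SOURCES = [search_kb, search_docs, search_forums, search_defects]

def federated_search(query: str, limit: int = 20):
    """Query every source in parallel, then merge into one ranked list."""
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(lambda source: source(query) or [], SOURCES)
    merged = [hit for hits in result_sets for hit in hits]
    # Caveat: scores from different engines are rarely comparable; a real
    # implementation must normalize them or interleave results per source.
    return sorted(merged, key=lambda hit: hit[1], reverse=True)[:limit]
```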

Planning for the Ongoing Effort of Search Tuning

Sophisticated search tools may deliver excellent experiences, and in some cases, they're the only way to sustain KCS.  But they do require ongoing effort to maintain and tune.  As KCS content changes and evolves over time, so too must search.

Planning for this maintenance effort is a key component of the Process Integration practice in the Evolve Loop.  Generally, a person on, or working in partnership with, the KCS program team, coordinating closely with knowledge developers, should be responsible for this ongoing optimization of the end user search experience.  Failure to plan for this task can turn a "smart" search tool into a dumb one, indeed.

The following tasks should be performed in an ongoing cycle:

Identify Search Experience Weaknesses

Sources include:

  • Informal conversations with knowledge developers
  • Search analytics: looking for "no match found" queries (see the sketch after this list)
  • A formal hill-climbing process that evaluates the results of frequent requestor queries
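
As one example of mining search analytics, here is a sketch that surfaces the most frequent zero-result queries, assuming the engine writes a simple tab-separated query log (the log format here is a hypothetical):

```python
from collections import Counter

def no_match_report(log_path: str, top_n: int = 20):
    """Count queries that returned zero results; the most frequent ones point
    at knowledge gaps or vocabulary mismatches worth fixing first."""
    misses = Counter()
    with open(log_path) as log:
        for line in log:
            # Assumed log format: "<result_count>\t<query text>"
            count, _, query = line.rstrip("\n").partition("\t")
            if count == "0" and query:
                misses[query.lower()] += 1
    return misses.most_common(top_n)

for query, freq in no_match_report("search_queries.log"):
    print(f"{freq:5d}  {query}")
```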

Take Action

  • Is there a knowledge gap?  Let a Knowledge Domain Expert know.
  • Are multiple articles with different resolutions being returned for a set of symptoms?  This is usually because the environment statements do not include the characteristics that distinguish one article from another.  Use these as examples for the Coaches and KCS Publishers to highlight the importance of including the distinguishing characteristics in the article.
  • Is content difficult to read, or not in the requestor's context?  Diagnose why this isn't being fixed naturally in the Solve Loop, and take corrective action.  Also, consider revising the search engine's dictionary or concept map to bridge the gap between different users' terminology.
  • Are important or definitive articles not showing up at the top of results lists?  Implement search tuning options such as "best bets," "managed answers," or other ways of making important (generally Evolve Loop) content more prominent in results (see the sketch after this list).
  • Are requestors struggling to troubleshoot using search results in particular important areas?  Consider creating value-added Evolve Loop content such as multimedia, "active" content, or diagnostic KCS articles that link together into resolution paths.
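
A "best bets" mechanism can be as simple as a curated table that pins chosen articles above the organic results for matching queries.  A minimal sketch; the table, article IDs, and function are illustrative assumptions:

```python
# Curated "best bets": query phrase -> article IDs to pin at the top.
BEST_BETS = {
    "password reset": ["KB-1001"],
    "license activation": ["KB-2040", "KB-2041"],
}

def search_with_best_bets(query: str, organic_results: list[str]) -> list[str]:
    """Pin curated articles first, then append organic results without duplicates."""
    pinned = []
    for phrase, article_ids in BEST_BETS.items():
        if phrase in query.lower():
            pinned.extend(article_ids)
    return pinned + [a for a in organic_results if a not in pinned]

print(search_with_best_bets("How do I do a password reset?", ["KB-3300", "KB-1001"]))
# -> ['KB-1001', 'KB-3300']
```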

Evaluate the Effectiveness of Your Actions

  • Make sure the initial problem has been corrected, using the same methods used to identify the problem in the first place.