Because searching is an integral part of the KCS Solve Loop, it's important to provide technology that allows users and customers to search the knowledge base effectively. Rightly or wrongly, users often blame their search technology for the difficulty they have finding relevant content. If users aren't confident in search, they're less likely to seek to understand what we collectively know, less likely to review and improve content while using it, and more likely to capture duplicate knowledge.
Search engines are designed to return relevant content in response to a query. They sort the list of returned documents by how closely they calculate each document matches the query. When search works well, the most relevant documents appear at the top of the list of results.
In their simplest form, search engines look for literal matches between words in the query and words in the document. Search engines can be made more sophisticated, for example by matching simple variations on terms (matching "run" with "running"), irregular variations (matching "run" with "ran"), synonyms (matching "run" with "jog"), or concepts ("run a program" matches "execute software" but not "a jogging fitness program").
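To make the progression from literal matching to variation and synonym matching concrete, here is a minimal sketch in Python. The suffix rules and synonym table are illustrative assumptions only; real engines use proper stemmers or lemmatizers and much larger thesauri.

```python
# Minimal sketch: matching query terms to document terms with crude
# stemming and a synonym table. Not a production stemmer.

SYNONYMS = {"jog": "run", "jogging": "run", "execute": "run"}  # assumed table

def normalize(word: str) -> str:
    word = word.lower()
    word = SYNONYMS.get(word, word)
    # crude suffix stripping: "running" -> "run", "errors" -> "error"
    for suffix in ("ning", "ing", "ed", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def matches(query: str, document: str) -> bool:
    q_terms = {normalize(w) for w in query.split()}
    d_terms = {normalize(w) for w in document.split()}
    return bool(q_terms & d_terms)

print(matches("running", "I run daily"))   # True: stemmed variant matches
print(matches("jog", "run a program"))     # True: synonym matches
print(matches("soup", "run a program"))    # False: no shared terms
```

A literal-match engine would miss the first two examples; the normalization step is what makes the variations and synonyms described above line up.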
Sorting by relevance, or ranking, is very important because users rarely look at more than the first several results (or, at most, the first several pages of results). So documents that are ranked low are effectively excluded from search results.
Relevance ranking can use many factors to assess the closeness of match between query and document. For example:
How many of the query terms appear in the document
How often those terms appear
How rare or meaningful those words are in the documents being searched (e.g., "0x32565" is rarer and more distinctive than "Error," so the query "Error 0x32565" will be a closer match to "Code 0x32565" than to "Error -135")
How close together the words appear to each other
The location of the words; for example, words in the title are presumed to be more meaningful than words buried in the text. The Consortium has proposed that good practice may be to rank matches in the Issue and Environment section higher than matches in the Resolution or Cause section, because the user is presumed to not yet know the Resolution or Cause.
The closeness of match of concepts (not just the words themselves) contained in the query terms and documents
The presumed quality or reputation of the document, based on link counts, ratings, age of the last view, or other similar factors
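Several of these factors can be combined into a single score. The sketch below illustrates three of them: term frequency, term rarity (a simple inverse document frequency), and a boost for matches in the title. The weights, field names, and scoring formula are assumptions for illustration, not any vendor's actual algorithm.

```python
# Illustrative relevance scorer: term frequency x rarity x field boost.
import math
from collections import Counter

FIELD_BOOST = {"title": 3.0, "body": 1.0}  # assumed weights

def tokenize(text):
    return text.lower().split()

def score(query, doc, corpus):
    """doc is {"title": ..., "body": ...}; corpus is a list of such docs."""
    n_docs = len(corpus)
    total = 0.0
    for term in tokenize(query):
        # rarity: the fewer documents contain the term, the higher its weight
        df = sum(
            1 for d in corpus
            if term in tokenize(d["title"]) + tokenize(d["body"])
        )
        if df == 0:
            continue
        idf = math.log(1 + n_docs / df)
        for field, boost in FIELD_BOOST.items():
            tf = Counter(tokenize(doc[field]))[term]  # term frequency
            total += boost * tf * idf
    return total

corpus = [
    {"title": "Error 0x32565 on startup", "body": "Code 0x32565 appears."},
    {"title": "Error -135 on save", "body": "Disk full causes error -135."},
]
ranked = sorted(corpus, key=lambda d: score("error 0x32565", d, corpus),
                reverse=True)
print(ranked[0]["title"])  # the 0x32565 article ranks first
```

The distinctive token "0x32565" appears in fewer documents than "error," so it carries more weight, which is exactly the rarity effect described above.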
Though there are as many algorithms as there are vendors, search quality must ultimately be measured by how successfully users navigate the knowledge base.
It is important to understand how the search engine works, so trainers and Coaches can advise all knowledge contributors and users on the best ways of using search. For example, should we use many words or few? Should we use sentences and natural language, or just keywords? How sensitive is search to specific words, or are general concepts sufficient? Coaches must be prepared to model, and provide feedback on, technology-specific aspects of search.
The nature of human languages—and especially English—makes search challenging in any domain. For example, if we say "stock," are we asking about a financial instrument, part of a gun, or a soup base? And is "running into the bank" a common errand, or a navigational error in a kayak? Humans unconsciously disambiguate competing meanings based on context, but context is hard to program into machines.
Internet search engines like Google, Bing, and Yahoo! leverage the structure of the web itself, and the behavior of users, to increase relevance. With over 100 million websites and hundreds of millions of users searching every day, Internet search has an almost inconceivably large dataset to mine. Unfortunately, KCS knowledge bases have neither the web's structure nor its volume of use, so Internet search approaches don't work well for them. We often hear, "Can't search work just like Google?" Because support knowledge bases do not have the volume of activity, our answer is "no."
If search is hard in general, search for support is doubly so. Users know some symptoms of their problem, and they may know something about when and where the problem occurs, but they don't really know the answer they're looking for. This is the basis for the Consortium's contention that search should look first in the Issue and Environment sections, at least for articles using the KCS proposed structure. The search technology also needs to support people who know something about the resolution or cause of an issue and allow them the option to search the Resolution and Cause fields.
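The idea of searching some fields by default while letting users opt into others can be sketched as follows. The article schema and default field list here are assumptions based on the KCS article structure described above; a real engine would expose field weights and scopes through configuration rather than a loop like this.

```python
# Sketch of field-scoped search over KCS-structured articles.

def search(query, articles, fields=("issue", "environment")):
    """Return articles whose selected fields contain any query term."""
    terms = set(query.lower().split())
    hits = []
    for article in articles:
        text = " ".join(article.get(f, "") for f in fields).lower()
        if any(term in text.split() for term in terms):
            hits.append(article)
    return hits

articles = [
    {"issue": "printer offline after sleep",
     "environment": "Model X printer",
     "resolution": "reinstall the driver",
     "cause": "power management bug"},
]

# Default: a user describing symptoms searches Issue and Environment.
print(len(search("printer offline", articles)))                         # 1
# Option: someone who already knows the fix searches Resolution and Cause.
print(len(search("driver", articles, fields=("resolution", "cause"))))  # 1
```

Note that searching "driver" with the default fields finds nothing, which is the point: users who only know symptoms aren't distracted by resolution text, while those who know the fix can widen the scope.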
The good news is, support domains are constrained. People will ask about anything in Internet search, but in KCS knowledge bases, they're typically asking about exceptions that occur with a defined set of products and services. This simplifies the "stock" problem, if technology knows how to take advantage of it.
The sophistication of search technology required for a sustainable KCS implementation varies with the size of our knowledge base, the complexity of the domain (that is, how subtle the nuances between non-duplicate articles can be), and the technical astuteness and persistence of our users. Generally speaking, very simple technology often suffices for a knowledge base of fewer than 1,000 basic articles, while collections over 100,000 articles in a deeply technical subject area strain the limits of current technology.
Here are some considerations for selecting search technology:
Is it important to be able to search other resources at the same time as the knowledge base? In other words, should a single search return results from documentation, community forums, or defects as well?
Will a simple keyword search suffice, or do we need to support synonyms or concept-based search? Does the size and complexity of our domain require even more advanced approaches to finding results?
How much of a burden does the search technology impose on the content developer who is capturing, structuring, and improving content? Must they enter careful metadata or keyword fields, or will search handle the content automatically? Can knowledge be captured "at the speed of conversation?"
What reports are available to drive Evolve Loop content development, especially to fill customer self-service gaps?
What options does the KCS program team, or another team, have to tune and refine the search experience? (See below.) What reports are available to help them do this?
Sophisticated search tools may deliver excellent experiences, and in some cases, they're the only way to sustain KCS. But they do require ongoing effort to maintain and tune. Since KCS content changes and evolves over time, so too must search.
Planning for this maintenance effort is a key component of the Process Integration practice in the Evolve loop. Generally, a person on, or working in partnership with, the KCS program team, coordinating closely with knowledge developers, should be responsible for this ongoing customer experience optimization. Failure to plan for this task can turn a "smart" search tool into a dumb one, indeed.
The following tasks should be performed in an ongoing cycle: