I just read a post by Mary Ellen at Librarian of Fortune. It discusses the different perspectives of librarians and the general public on what makes good search results.
This question haunts anyone concerned with search engines and their quality. I want to share an insight that basically comes down to this surprising fact: a good search engine is more than a great algorithm. Half the work goes into the user interface, facets, autocomplete, the data structure of the index, and all the other features that support users in developing a good query.
Poor quality query + Great Search Algorithm = Poor Results
Great quality query + Poor Search Algorithm = Good Results
Needless to say, if both the quality of the query and the matching algorithm are good, results will probably be optimal.
Am I blaming poor search result quality on ignorant users who couldn't develop an advanced query like ["poor search * quality" -site:google.com -"marketing"] on their own? No, of course not. It's up to the software to make it easy to develop optimal queries, using suggestions, autocomplete, search facets, etc.
I have always been an avid searcher and loved doing research at the library during my college and grad school days in the late 80s and early 90s. Then I worked for Infoseek, Netscape, and a bunch of other web search engine companies in the 90s. I often had the exact same thought as Mary Ellen in her post.
The difference between the synthetic mind of the average search user and the analytic mind of the research pro is, I believe, the following: the average search user would rather receive two relevant hits for his query and nothing else. At least, that's what the marketers "monetizing" the search engines with ads at Google and other companies would like to think. He or she has little time and always wishes that the "magic of search" would guess the exact desired result even when the query doesn't actually express it very precisely. The pro user is different. He or she wants to use his or her brain to analyze and discover the boundary of the semantic field being searched. For example, a search for "Japanese Raw Fish Cooking" might lead to other words, like "sushi", which become part of the final, ideal advanced query such as ["Japanese Raw Fish Cooking" AND "Sushi"].
This concept is important because, now that I work for a company that builds custom search engines, we meet customers who want all sorts of precision. With our software, we can tune the "search corridor" to be rather wide or narrow. But beyond this balancing of the conflicting influences of noise (too many irrelevant results) and silence (too few results, or even zero results), what became obvious to me over the years is that, just as in the library, a good search session involves the careful development of one or several ideal queries. One often starts a session with only a vague idea of what the query should be. That's why I am a big fan of Google's instant search, its suggest tool, faceted search, and other such tools which, in essence, guide the user towards the ideal query. We have also developed such an autocomplete tool, which we even offer as a SaaS product for ecommerce (Exorbyte Commerce – sorry, shameless plug here).
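To make the idea concrete, here is a minimal sketch of the kind of prefix autocomplete that guides a user from a vague fragment toward a fuller query. This is an illustrative toy, not Exorbyte's actual implementation: real products layer on fuzzy matching, ranking, and popularity weighting. The class name, the vocabulary, and the `suggest` method are all hypothetical.

```python
import bisect


class Autocomplete:
    """Toy prefix autocomplete over a fixed vocabulary.

    Hypothetical sketch only; production systems add error tolerance,
    ranking, and query-log-based popularity weighting.
    """

    def __init__(self, terms):
        # Keep the vocabulary sorted so prefix lookups can use binary search.
        self._terms = sorted(t.lower() for t in terms)

    def suggest(self, prefix, limit=5):
        prefix = prefix.lower()
        # Jump to the first term >= prefix, then collect while it matches.
        start = bisect.bisect_left(self._terms, prefix)
        matches = []
        for term in self._terms[start:]:
            if not term.startswith(prefix):
                break
            matches.append(term)
            if len(matches) == limit:
                break
        return matches


ac = Autocomplete(["sashimi", "sushi", "sushi rice", "sushi rolls", "tempura"])
print(ac.suggest("sus"))  # → ['sushi', 'sushi rice', 'sushi rolls']
```

Even this crude version shows the principle: as the user types, the system surfaces the vocabulary of the semantic field, helping them discover terms (like "sushi") they might never have put in their initial query.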