
Ask the Expert: JP Sherman Answers Your Site Search Questions


Earlier this month, Red Hat’s Manager of Search & Findability, JP Sherman, joined our host and CAB Chairman Jon Myers on a DeepCrawl webinar to discuss all things internal site search. JP’s expert talk was packed with so much knowledge that we weren’t able to cover all of the questions submitted during the webinar itself. However, JP has kindly taken the time to answer all of your questions afterwards, and we’ve included his answers in this post.

If you want to learn even more about site search, you can read our recap of the webinar itself.
 

What is the next step after you fix all of your technical issues?

I’ll challenge the “fix all of your issues” benchmark with a bit of nuance. Once you’re confident that your data is reliable and reasonably accurate, I’d start experimenting with UI/UX and trying different ranking and relevancy models. For example, how much weight should a document’s title carry? How about keyword meta tags? What kind of SERP snippets can you design, and do they help with CTR?

I’d experiment with mobile as well. Do people look for different things on mobile vs. desktop? Should mobile have a unique ranking system? When I hear “technical issues”, I tend to think of analytics issues, indexing & ranking issues and UI/UX issues as the critical elements to begin with. So, once those are stable elements of the search function, look to improve the customer’s experience and look for ways to measure the efficacy of those improvements.
 

How do you check the performance of fixed issues on a website?

There are two basic ways I check the performance. First, I look at the click-through rate (CTR) as a whole. If there’s an issue with site search, you can generally see a change in the CTR. If you’re able to tie the fixed issues to a general type of query, such as “how to” or task-oriented queries, I create a keyword filter and add a good sample size of those query types so I can get an intent-based CTR.

The second way is to look at conversions. There will be times when CTR increases but conversions decrease. This is where I’d start looking at those pages and applying conversion rate optimization (CRO). I generally find that looking at the results of what you’ve changed will lead towards more specific questions.
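
To make the first approach concrete, here is a minimal sketch of what an intent-based CTR (and conversion) breakdown could look like. The intent keyword lists, field names and log format are hypothetical illustrations, not JP’s actual Adobe Analytics setup.

```python
# Bucket site-search queries by intent, then compare CTR and conversion rate per
# bucket before and after a fix. All field names here are assumptions.
INTENT_KEYWORDS = {
    "how_to": ["how to", "how do i", "guide"],
    "task": ["install", "configure", "upgrade", "renew"],
}

def intent_of(query: str) -> str:
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "other"

def intent_metrics(search_log):
    """search_log: iterable of dicts like
    {"query": "how to renew a subscription", "clicked": True, "converted": False}"""
    buckets = {}
    for row in search_log:
        b = buckets.setdefault(intent_of(row["query"]),
                               {"searches": 0, "clicks": 0, "conversions": 0})
        b["searches"] += 1
        b["clicks"] += bool(row["clicked"])
        b["conversions"] += bool(row["converted"])
    return {
        intent: {"ctr": b["clicks"] / b["searches"],
                 "conversion_rate": b["conversions"] / b["searches"]}
        for intent, b in buckets.items()
    }
```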
 

How much of a snippet should be shown?

There’s no hard standard on how much of a snippet should be shown. I’ve run experiments on snippet length that revealed that users want a different experience on desktop vs. mobile. Run a simple A/B test based on your experience and intuition, and collect data. Don’t forget mobile: I’ve run experiments where my desktop results improved and my mobile results crashed.
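
As a worked illustration of that kind of test, here is a minimal sketch that compares the CTR of two snippet-length variants with a two-proportion z-test, run separately for desktop and mobile. The traffic numbers are made up and purely illustrative.

```python
# Compare CTR of a control vs. a longer-snippet variant with a two-proportion
# z-test, per device class, since desktop and mobile can diverge.
from math import sqrt
from statistics import NormalDist

def ctr_ab_test(clicks_a, searches_a, clicks_b, searches_b):
    p_a, p_b = clicks_a / searches_a, clicks_b / searches_b
    p_pool = (clicks_a + clicks_b) / (searches_a + searches_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / searches_a + 1 / searches_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# Illustrative counts only: (control clicks, control searches, variant clicks, variant searches)
for device, data in {
    "desktop": (5200, 100_000, 5420, 100_000),
    "mobile":  (4100, 100_000, 3950, 100_000),
}.items():
    p_a, p_b, z, p = ctr_ab_test(*data)
    print(f"{device}: CTR {p_a:.3%} -> {p_b:.3%}, z={z:.2f}, p={p:.3f}")
```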
 

What tools do you use to help you with your findability measurement?

I use Adobe Analytics with dynamic tag management (DTM), and I’ve built custom reports using that data. Right now, I’m not aware of any third-party tools with a strong site-search focus that capture and display data from a marketing perspective. There are some very good third-party tools to monitor search technology, Coveo and Lucidworks being two good examples, but they are focused mainly on the enterprise space. I’d love to see a tool emerge that would help with site search for small to mid-size businesses.
 

How are you pulling data for click-through rates on results? Is there any way to do this via Google Analytics?

Using Adobe Analytics, I look at the number of instances (the number of times a search was performed, which translates to the number of times our search platform, Solr, received a request) and the number of clicks on the search results. Then I do a simple percentage calculation on those two metrics. This isn’t too dissimilar to how Google collects its data, and Google has some good documentation on site-search analytics, where it calculates the number of searches. In Google Analytics, I create a filter that shows me which pieces of content had the SERP as the prior page, which gives me the number of clicks. I then do the percentage calculation, and that’s how I get my CTR from Google Analytics. I haven’t used Google Analytics much at an enterprise level, so they may now have a calculated metric for search CTR.
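
Here is a minimal sketch of that calculation, assuming you can export pageview rows that carry a “previous page” dimension. The field names and the SERP path are assumptions for illustration, not a real Google Analytics or Adobe Analytics API call.

```python
# CTR = clicks on search results / number of searches, where a "click" is any
# pageview whose previous page was the internal search results page.
SERP_PATH = "/search"  # assumed path of the internal search results page

def site_search_ctr(search_instances: int, pageviews) -> float:
    """pageviews: iterable of dicts like
    {"page": "/docs/install", "previous_page": "/search?q=install"}"""
    clicks = sum(1 for pv in pageviews if pv["previous_page"].startswith(SERP_PATH))
    return clicks / search_instances if search_instances else 0.0

# e.g. 12,000 searches in the period, plus the exported pageview rows -> CTR
```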
 

What are your most trusted tools for measuring site search data?

Being perfectly honest, I haven’t found a good third-party tool to measure site search data, so I use Adobe Analytics, and I’m researching ways to build something to measure site search in Google Analytics.
 

What have you found that works on mobile that doesn’t on desktop?

This is a great question! We did some experimentation with the SERP UI. We tested two things:

The first was a thin line separator between SERP snippets; the second was a very light shade of grey on alternating snippets. The idea was to see if a subtle visual signal helped the user differentiate between snippets and whether that UI element influenced CTR. We tested each on desktop and mobile.

The results showed me that the alternating light grey was garbage on both desktop and mobile, the line between snippets had no effect on desktop, and the line on mobile showed a slight improvement. What I learned from this is that influencing CTR is really a game of fractions of a percent.

Looking at the trendlines of the control group and the experimental group showed that the slight line on mobile held up over time as a signal for separation. I’ve tested fonts, font colors, font size and even metadata like “document type”, “date”, “verified” and others to see if they influenced CTR. All of my successes gave a consistent lift of less than a percent in CTR. So, expect tiny results.
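
Those sub-percent lifts have a practical consequence: you need a lot of search traffic before a real effect separates from noise. As a rough back-of-the-envelope addition (not from the webinar), here is the standard two-proportion sample-size estimate for detecting a small absolute CTR lift.

```python
# Approximate searches needed per variant to detect an absolute CTR lift with a
# two-proportion test (alpha = 0.05 two-sided, 80% power). Illustrative only.
from math import ceil

def searches_per_variant(baseline_ctr, lift, z_alpha=1.96, z_beta=0.84):
    p1, p2 = baseline_ctr, baseline_ctr + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2) * variance / (lift ** 2))

# Detecting a 0.5 percentage-point lift on a 5% baseline CTR:
print(searches_per_variant(0.05, 0.005))  # roughly 31,000 searches per variant
```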
 

Users will be used to the way Google answers their questions – how do you try to achieve something like that with internal search?

We built a curated knowledge graph using JSON. Ok, so to be totally honest, it’s not a true knowledge graph, which I’d consider to be dynamically generated or built by machines. We use a curated knowledge graph that triggers off of a keyword. We built different graph templates like product, security and product suites. Each template is filled with curated information and lives in a JSON file that is triggered upon a user query.

We also built mobile designs for that knowledge graph. Once I’ve converted that into something more dynamic, I’ll be testing an answer function: for example, recognizing an error code in the search query and delivering an answer directly. So, I admit my solution doesn’t scale easily, but adding a few new ones a week is generally easy, doesn’t take much time, and the metrics show that they’re useful.
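
To illustrate the pattern JP describes, here is a minimal sketch of a hand-curated, keyword-triggered card stored as JSON. Every field name, trigger keyword and URL here is hypothetical; it is not Red Hat’s actual implementation.

```python
# A curated "knowledge graph" card lives in a JSON file and is rendered when a
# user query matches one of its trigger keywords. All content below is made up.
import json

PRODUCT_CARD = json.loads("""
{
  "template": "product",
  "trigger_keywords": ["example product", "exampleproduct 8"],
  "title": "Example Product 8",
  "summary": "One-line description of the product.",
  "links": [
    {"label": "Documentation", "url": "https://example.com/docs"},
    {"label": "Download", "url": "https://example.com/download"},
    {"label": "Known issues", "url": "https://example.com/errata"}
  ]
}
""")

def knowledge_card_for(query: str, cards):
    q = query.lower()
    for card in cards:
        if any(keyword in q for keyword in card["trigger_keywords"]):
            return card  # render this alongside the organic results
    return None

print(knowledge_card_for("install exampleproduct 8", [PRODUCT_CARD])["title"])
```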
 

Are there reports from site search appliances that would guide tuning for relevance?

That’s such a good question. It’s also one that exposes my ignorance. I hope that as I mature in this topic, I’ll be able to collaborate with a data scientist. All of my reports are quantitative and don’t provide any real prediction capability. Most of my relevance information comes from noticing something weird, wrong or broken, then working with subject matter experts and our search dev team to build an experiment.

The most interesting relevance tuning experience was a project where we looked at standard fields in a content template as well as the metadata and adjusted relevancy weights on those templates. By tuning each field in terms of weight and relevancy, we were able to improve our CTR.
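
Since Solr is the platform mentioned earlier, here is a minimal sketch of what per-field weight tuning can look like with Solr’s eDisMax query parser. The defType and qf parameters are standard Solr; the endpoint, field names and boost values are assumptions for illustration, not Red Hat’s actual configuration.

```python
# Per-field weight tuning against Solr's eDisMax parser: adjust the boosts in
# FIELD_WEIGHTS between test cells and compare CTR on the resulting SERPs.
import requests

SOLR_SELECT = "http://localhost:8983/solr/content/select"  # assumed endpoint

FIELD_WEIGHTS = {"title": 3.0, "keywords": 2.0, "body": 1.0}  # experiment values

def search(query: str, rows: int = 10):
    params = {
        "q": query,
        "defType": "edismax",
        "qf": " ".join(f"{field}^{boost}" for field, boost in FIELD_WEIGHTS.items()),
        "rows": rows,
        "wt": "json",
    }
    response = requests.get(SOLR_SELECT, params=params, timeout=10)
    return response.json()["response"]["docs"]
```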

Future work on that experiment will take into account things like the content type. For example, our solution content is good for troubleshooting, while our documentation content is good for accomplishing tasks. As we build a better query intent model, we can look at mapping an algorithm-identified query intent to a particular content type, then applying the tuning model specific to that type of content.
 

What do you think are the most important factors for ecommerce businesses (fashion) looking to optimize mainly for visual search?

I’d honestly defer this question to an expert in UI/UX, but from the conversations I’ve had with experts in that field, I’d focus on visual attributes like color, contrast and image size. I’d love to experiment with fashion images in a SERP against a white background for good contrast or a more emotionally engaging background like a beach.

I have had some fashion clients, and a commonality among them is that fashion is more aspirational and emotion-driven than feature-driven. So, I’d experiment with the imagery while, at the same time, designing the SERP snippet to support the image. In text search, people scan the text in the snippet for less than a second. I’d hypothesize that for fashion with an image search, people would linger more on the page, increasing dwell time. I’d even test some “in-snippet” features like color selection, sizing information or other attributes of the clothing that would be considered a facet.

Thanks again to JP for such an insightful webinar, and to everyone who attended and submitted brilliant questions.
 
