Tuesday, April 28, 2015

Search Trends: Are Compound Queries the Start of the Shift to Data-Driven Search?

Posted by Tom-Anthony

The Web is an ever-diminishing aspect of our online lives. We increasingly use apps, wearables, smart assistants (Google Now, Siri, Cortana), smart watches, and smart TVs for searches, and none of these are returning 10 blue links. In fact, we usually don't end up on a website at all.

Apps are the natural successor, and an increasing amount of time spent optimising search is going to be spent focusing on apps. However, whilst app search is going to be very important, I don't think it is where the trend stops.

This post is about where I think the trends take us—towards what I am calling "Data-Driven Search". Along the way I am going to highlight another phenomenon: "Compound Queries". I believe these changes will dramatically alter the way search and SEO work over the next 1-3 years, and it is important we begin now to think about how that future could look.

App indexing is just the beginning

With App Indexing Google is moving beyond the bounds of the web-search paradigm which made them famous. On Android, we are now seeing blue links which are not to web pages but are deep links to open specific pages within apps:
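As a rough sketch of how this works at the time of writing: a site owner annotates a web page with a deep link to the matching screen in their Android app, and Google can then surface the app link in mobile results. The package name and paths below are illustrative placeholders, not a real app:

```html
<!-- On the web page: declare which app screen mirrors this URL.
     "com.example.android" and the path are hypothetical examples. -->
<link rel="alternate"
      href="android-app://com.example.android/http/example.com/recipes/veggie-lasagne" />
```

The app side declares a matching intent filter in its manifest so that Android can route the deep link to the right screen.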


This is interesting in and of itself, but it is also part of a larger pattern which began with things like the answer box and the Knowledge Graph. With these, we saw Google shift away from sending you somewhere else and start to provide the answer you were looking for right there in the SERPs. App Indexing is the next step, which moves Google from simply providing answers to enabling actions—allowing you to do things.

App Indexing is going to be around for a while—but here I want to focus on this trend towards providing answers and enabling actions.

Notable technology trends

Google's mission is to build the "ultimate assistant"—something that anticipates your needs and facilitates fulfilling them. Google Now is just the beginning of what they are dreaming of.

So many of the projects and technologies that Google, and their competitors, are working on are converging with the trend towards "answers and actions", and I think this is going to lead to a really interesting evolution in searches—namely what I am calling "Data-Driven Search".

Let's look at some of the contributing technologies.

Compound queries: query revisions & chained queries

There is a lot of talk about conversational search at the moment, and it is fascinating for many reasons, but in this instance I am mostly interested in two specific facets:

  • Query revision
  • Chained queries

The current model for multiple queries looks like this:

You do one query (e.g. "recipe books") and then, after looking at the results of that search, you have a better sense of exactly what you are looking for, so you refine your query and run another search (e.g. "vegetarian recipe books"). Notice that you do two distinct searches, with the second one almost completely separate from the first.

Conversational search is moving us towards a new model which looks more like this, which I'm calling the Compound Query model:

In this instance, after evaluating the results I got, I don't make a new query but instead a Query Revision which relates back to that initial query. After searching "recipe books", I might follow up with "just show me the vegetarian ones". You can already do this with conversational search:

Example of a "Query Revision"—one type of Compound Query

Currently, we only see this intent-revision model working in conversational search, but I expect we will see it migrate into desktop search as well. There will be a new generation of searchers who won't have been "trained" to search in the unnatural, stilted, keyword-oriented way that we have. They'll be used to conversational search on their phones and will apply the same patterns on desktop machines. I suspect we'll also see other changes to desktop-based search which merge in other aspects of how conversational search results are presented. There are also other companies working on radical new interfaces, such as Scinet by Etsimo (their interface is quite radical, but the problems it addresses are ones Google will likely also be working on).

So many SEO paradigms don't begin to apply in this scenario; things like keyword research and rankings are not compatible with a query model that has multiple phases.

This new query model has a second application, namely Chained Queries, where you perform an initial query, and then on receiving a response you perform a second query on the same topic (the classic example is "How tall is Justin Bieber?" followed by "How old is he?"—the second query is dependent upon the first):

Example of a Chained Query—the second type of Compound Query

It might be that in the case of chained queries, the latter queries could be converted into standalone queries, so that they don't muddy the SEO waters quite as much as queries with revisions do. However, I'm not sure that this necessarily holds true, because every query in a chain adds context that makes it much easier for Google to accurately determine your intent in later queries.

If you are not convinced, consider that in examples like the one above (and the classic Justin Bieber example), it is usually clear from the formulation that the query is explicitly chained. However, there are chained queries where it is not clear that the current query is chained to the previous one. To illustrate this, I've borrowed an example which Behshad Behzadi, Director of Conversational Search at Google, showed at SMX Munich last month:

Example of a "hidden" Chained Query—it is not explicit that the last search refers to the previous one.

If you didn't see the first search for "pictures of mario" before the second and third examples, it might not be immediately obvious that the second "pictures of mario" query has taken into account the previous search. There are bound to be far more subtle examples than this.

New interfaces

The days of all Google searches coming solely via a desktop-based web browser are already long since dead, but mobile users using voice search are just the start of the change—there is an ongoing divergence of interfaces. I'm focusing here on the output interfaces—i.e., how we consume the results from a search on a specific device.

The primary device category that springs to mind is that of wearables and smart watches, which have a variety of ways in which they communicate with their users:

  • Compact screens—devices like the Apple Watch and Microsoft Band have compact form factor screens, which allow for visual results, but not in the same format as days gone by—a list of web links won't be helpful.
  • Audio—with Siri, Google Now, and Cortana all becoming available via wearable interfaces (that pair to smart phones) users can also consume results as voice.
  • Vibrations—the Apple Watch can give users directions using vibrations to signal left and right turns without needing to look or listen to the device. Getting directions already covers a number of searches, but you could imagine this also being useful for various yes/no queries (e.g. "is my train on time?").

Each of these methods is incompatible with the old "title & snippet" method that made up the 10 blue links, but furthermore they are also all different from one another.

What is clear is that there is going to need to be an increase in the forms in which search engines can respond to an identical query, with responses being adaptive to the way in which the user will consume their result.

We will also see queries where the query may be "handed off" to another device: imagine me doing a search for a location on my phone and then using my watch to get directions. Apple already has "Handoff", which does this in various contexts, and I expect we'll see the concept taken further.

This is related to Google increasingly providing us with encapsulated answers, rather than links to websites—especially true on wearables and smart devices. The interesting phenomenon here is that these answers don't specify a specific layout, like a webpage does. The data and the layout are separated.

Which leads us to...

Cards

Made popular by Google Now, cards are prevalent in both iOS and Android, as well as on social platforms. They are a growing facet of the mobile experience:

Cards provide small units of information in an accessible chunk, often with a link to dig deeper by flipping a card over or by linking through to an app.

Cards exactly fit into the paradigm above—they are more concerned with the data you will see and less so about the way in which you will see it. The same cards look different in different places.

Furthermore, we are reaching a point where you can do more and more from a card, rather than it leading you into an app. You can respond to messages, reply to tweets, like and re-share, and do all sorts of things from cards without opening an app; I highly recommend this blog post which explores the phenomenon.

It seems likely we'll see Google Now (and mobile search, as it becomes more like Google Now) allowing you to do more and more right from the cards themselves—many of these things will be actions facilitated by other parties (by way of APIs or schema.org actions). In this way, Google will become a "junction box" sitting between us and third parties who provide services: they'll find an API/service provider, return us a snippet of data showing our options, and then pass data representing our response back to the relevant API.
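As an illustrative sketch of what a third party might expose (the business name and URL template are made up), schema.org action markup like this could let a card render a "Reserve" button and post the user's choices straight back to the provider:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "potentialAction": {
    "@type": "ReserveAction",
    "target": {
      "@type": "EntryPoint",
      "urlTemplate": "https://example.com/reserve?date={date}&party={partySize}"
    }
  }
}
</script>
```

The key idea is the separation described above: the markup describes the data and the action, and the card decides how to present it on each device.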

Shared screens

The next piece of the puzzle is "shared screens", which covers several things. This starts with Google Chromecast, which has popularised the ability to "throw" things from one screen to another. At home, any guests I have over who join my wifi are able to "throw" a YouTube video from their mobile phone to my TV via the Chromecast. The same is true for people in the meeting rooms at Distilled offices and in a variety of other public spaces.

I can natively throw a variety of things: photos, YouTube videos, movies on Netflix, etc. How long until that includes searches? How long until I can throw the results of a search on an iPad on to the TV to show my wife the holiday options I'm looking at? Sure, we can do that by sharing the whole screen now, but how long until, like photos or YouTube videos, the search results I throw to the TV take on a new layout that is suitable for that larger screen?

You can immediately see how this links back to the concept of cards and interfaces outlined above; I'm moving data from screen to screen, and between devices that provide different interfaces.

These concepts are all very related to the concept of "fluid mobility" that Microsoft recently presented in their Productivity Future Vision released in February this year.

An evolution of this is if we reach the point that some people have envisioned, whereby many office workers, who don't require huge computational power, no longer have computers at their desks. Instead, their desks just house dumb terminals: a display, keyboard, and mouse which connect to the phone in their pockets, which provides the processing power.

In this scenario, it becomes even more common for people to switch interfaces "mid-task" (including during searches)—you do a search at your desk at work (powered by your phone), then continue to review the results on the train home on the phone itself, before browsing further on your TV at home.

Email structured markup

This deserves a quick mention—it is another data point in the trend towards "enabling action". It doesn't seem to be common knowledge that you can use schema.org structured markup in emails, which works in both Gmail and Google Inbox.

Editor's note: Stay tuned for more on this in tomorrow's post!

The main concepts they introduce are "highlights" and "actions"—sound familiar? You can define actions that become buttons in emails allowing people to confirm, save, review, RSVP, etc. with a single click right in the email.
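For example, a one-click confirm button in Gmail can be declared with JSON-LD in the email body, roughly like this (the URL, ID, and wording are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "EmailMessage",
  "description": "Approval request for an expense report",
  "potentialAction": {
    "@type": "ConfirmAction",
    "name": "Approve Expense",
    "handler": {
      "@type": "HttpActionHandler",
      "url": "https://example.com/approve?requestId=123"
    }
  }
}
</script>
```

When Gmail trusts the sender, the "Approve Expense" action appears as a button, and clicking it fires the request to the handler URL without the user ever opening the email body in full.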

Currently, you have to apply to Google to have the emails you send out whitelisted before they will mark them up, but I expect we'll see this roll out more and more. It may not seem directly search-related, but if you're building the "ultimate personal assistant", then merging products like Google Now and Google Inbox would be a good place to start.

The rise of data-driven search

There is a common theme running through all of the above technologies and trends, namely data:

  • We are increasingly requesting snippets of data from search engines, rather than links to strictly formatted web content.
  • We are increasingly being offered the option to act directly, by providing a snippet of data as our response/request, without going to an app or website at all.

I think in the next two years, small payloads of data will be the new currency of Google. Web search won't go away anytime soon, but large parts of it will be subsumed into the data-driven paradigm. Projects like Knowledge Vault, which aims to displace the Freebase/Wikipedia-powered (i.e. manually curated) Knowledge Graph by pulling facts directly from the text of all pages on the web, will make mining the web for parcels of data feasible at scale. This will mean that Google knows where to look for specific bits of data and can extract and return this data directly to the user.

How all this might change the way users and search engines interact:

  1. The move towards compound queries will mean it becomes more natural for people to use Google to "interact" with data in an iterative process; Google won't just send us to a set of data somewhere else but will help us sift through it all.
  2. Shared screens will mean that search results will need to be increasingly device-agnostic. The next generation of technologies such as Apple Handoff and Google Chromecast will mean we increasingly pass results between devices, where they may take on a new layout.
  3. Cards will be one part of making that possible, by ensuring that results can be rendered in various formats. Users will become more and more accustomed to interacting with sets of cards.
  4. The focus on actions will mean that Google plugs directly into APIs such that they can connect users with third party backends and enable that right there in their interface.

What we should be doing

I don't have a good answer to this—which is exactly why we need to talk about it more.

Firstly, what is obvious is that lots of the old facets of technical SEO are already breaking down. For example, as I mentioned above, things like keyword research and rankings don't fit well with a conversational search model where compound queries are prevalent. This will only become more the case as we go further down the rabbit hole. We need to educate clients and work out what new metrics help us establish how Google perceives us.

Secondly, I can't escape the feeling that APIs are not only going to increase further in importance, but also become more "mainstream". Think how over the years ownership of company websites started in the technical departments and migrated to marketing teams—I think we could see a similar pattern with more core teams being involved in APIs. If Google wants to connect to APIs to retrieve data and help users do things, then more teams within a business are going to want to weigh in on what it can do.

APIs might seem out of reach and unnecessary for many businesses (exactly as websites used to...), but structured markup and schema.org are like a "lite API"—enabling programmatic access to your data, and now even to actions available via your website. This will provide a nice stepping stone where needed (and might even be sufficient).
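For instance, a retailer with no public API can still expose machine-readable facts about a product through markup on its existing pages (every value below is invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Vegetarian Recipe Book",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "27"
  },
  "offers": {
    "@type": "Offer",
    "price": "12.99",
    "priceCurrency": "GBP",
    "availability": "http://schema.org/InStock"
  }
}
</script>
```

Ratings, review counts, price, and stock are exactly the sort of facets a data-driven search could filter on without ever rendering the page.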

Lastly, if this vision does play out, then much of our search behaviour could be imagined as a sophisticated take on faceted navigation—we do an initial search and then sift through and refine the data we get back to drill down to the exact morsels we were looking for. I could envision "Query Revision" queries where the initial search happens within Google's index ("science fiction books") but subsequent searches happen within someone else's "index", for example Amazon's ("show me just those with 5 stars and more than 10 reviews that were released in the last 5 years").

If that is the case, then what I will be doing is ensuring that Distilled's clients have a thorough and accurate "index", with plenty of supplementary information that users could find useful. A few years ago we started worrying about ensuring our clients' websites have plenty of unique content, and this would see us worrying about ensuring they have a thorough "index" for their product/service. We should be doing that already, but suddenly it isn't going to be just a conversion factor, but a ranking factor too (following the same trend as many other signals, in that regard).

Discussion

Please jump in the comments, or tweet me at @TomAnthonySEO, with your thoughts. I am sure many of the details of how I have envisioned this may not be perfectly accurate, but directionally I'm confident, and I want to hear others' ideas.

