Google Says it Has Now Tracked 4 Billion Store Visits From Ads

Onimod Global shares how the company says thousands more advertisers will gain access to store visits data as a result of improved measurement techniques and machine learning-powered modeling. Read more

Google Releases New Video on How to Hire an SEO Consultant

This 11-plus-minute video by Google's Maile Ohye explains what questions to ask and what to look for during the process of hiring an SEO consulting firm, including useful insights, red flags and more. Read more

Google Search Algorithm Update February 7th

Google appears to have pushed an algorithm update about a week ago, one that I believe was targeting spammy links. Now, a week later, around February 7th (yesterday), it seems there was another algorithm update. This update doesn't seem specific to links or spam; rather, it looks like a normal unconfirmed Google update where rankings shift because something changed at Google.

I do not believe it is related to the mobile bug because most of the automated tracking tools only track desktop search.

There is some chatter in the SEO community; it is not yet that heavy, but it might heat up throughout the day as people check their analytics and tools.

An ongoing WebmasterWorld thread has these posts:

SERPs movements again in our vertical. We’re seeing some recoveries from previous penguin casualties and some domain crowding. Spam STILL having a huge positive impact.

Yesterday (Tue 7th) I saw a huge spike in organic traffic, ~30% over avg, and 18% increase from previous record day in November. It's a Canadian financial-related site. Increases from both Google.ca as well as other search engines.

Here is a post on Twitter that even caught Gary Illyes' attention:


And here are the tracking tools showing changes on the 7th. Note that Mozcast runs well behind in its tracking, so its movement might be related to the link spam update we covered last week.

Mozcast:


SERP Metrics:


Algoroo:


Accuranker:


RankRanger:


Have you noticed any changes over the past 24 hours?

H/T: Search Engine Roundtable

Introducing the Mobile-Friendly Test API from Google

Google has finally released a Mobile-Friendly Test API for webmasters, so developers can now build their own tools around the mobile-friendly testing tool to check whether pages are mobile-friendly.

Google’s John Mueller said, “The API method runs all tests, and returns the same information — including a list of the blocked URLs — as the manual test.” He added, “The documentation includes simple samples to help get you started quickly.”

With so many users on mobile devices, having a mobile-friendly web is important to us all. The Mobile-Friendly Test is a great way to check individual pages manually, and Google is happy to announce that this test is now available via an API as well.

The Mobile-Friendly Test API lets you test URLs using automated tools. For example, you could use it to monitor important pages on your website in order to prevent accidental regressions in the templates that you use.

Google hopes this API makes it easier to check your pages for mobile-friendliness and to get any such issues resolved faster.
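As a rough sketch of the kind of tooling developers can now build, here is a minimal Python example that posts a URL to the API's run method. The endpoint and field names follow Google's public documentation for the URL Testing Tools API, but the API key and example URL are placeholders, and the exact response fields should be verified against the current docs:

```python
import json
import urllib.request

# Endpoint of Google's URL Testing Tools API (v1), which exposes the
# Mobile-Friendly Test programmatically.
ENDPOINT = ("https://searchconsole.googleapis.com/v1/"
            "urlTestingTools/mobileFriendlyTest:run")

def build_request(page_url, api_key):
    """Return the full request URL and JSON body for one test run."""
    return ENDPOINT + "?key=" + api_key, json.dumps({"url": page_url})

def run_test(page_url, api_key):
    """POST the test and return the verdict, e.g. "MOBILE_FRIENDLY"."""
    url, body = build_request(page_url, api_key)
    req = urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # When problems are found, the response also carries
    # mobileFriendlyIssues and resourceIssues (the blocked URLs).
    return result.get("mobileFriendliness")

# Example (needs a real API key and network access):
#   print(run_test("https://www.example.com/", "YOUR_API_KEY"))
```

Pointing a scheduled job at your key templates this way is exactly the kind of regression monitoring the announcement describes.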

We know the importance of mobile-friendliness: it is one of the most important SEO trends of 2017, and Google has given plenty of signs that it is critical for your website, for example Mobilegeddon, the mobile-first index, mobile-friendliness warnings, and so on.

So, if you still don't have a mobile-friendly website, the Mobile-Friendly Test API is one more reason to build one now. It shows that Google is serious about the mobile-friendliness of your website.

For more information on how we can improve your website to become more mobile-friendly, contact a Digital Marketing expert at Onimod Global today.

How Much Did Google’s Possum Update Affect Your Local SEO?

When Onimod Global noticed significant changes from Google's Possum update two months ago, we knew big changes were on the way for our clients' local search results. Read more

Official Google Webmaster Central Blog – Here’s to more HTTPS on the web!

Security has always been critical to the web, but challenges involved in site migration have inhibited HTTPS adoption for several years. In the interest of a safer web for all, Google has worked alongside many others across the online ecosystem to better understand and address these challenges, resulting in real change. A web with ubiquitous HTTPS is not the distant future. It's happening now, with secure browsing becoming standard for users of Chrome.

Today, they're adding a new section to the HTTPS Report Card in their Transparency Report that includes data about how HTTPS usage has been increasing over time. More than half of pages loaded, and two-thirds of total time spent, by Chrome desktop users occur via HTTPS, and Google expects these metrics to continue their strong upward trajectory.

Percentage of pages loaded over HTTPS in Chrome

As the remainder of the web transitions to HTTPS, Google will continue working to ensure that migrating to HTTPS is a no-brainer, providing business benefit beyond increased security. HTTPS currently enables the best performance the web offers and powerful features that benefit site conversions, including both new features such as service workers for offline support and web push notifications, and existing features such as credit card autofill and the HTML5 geolocation API that are too powerful to be used over non-secure HTTP. As with all major site migrations, there are certain steps webmasters should take to ensure that search ranking transitions are smooth when moving to HTTPS. To help with this, they've posted two FAQs to help sites transition correctly, and will continue to improve their web fundamentals guidance.

We’ve seen many sites successfully transition with negligible effect on their search ranking and traffic. Brian Wood, Director of Marketing SEO at Wayfair, a large retail site, commented: “We were able to migrate Wayfair.com to HTTPS with no meaningful impact to Google rankings or Google organic search traffic. We are very pleased to say that all Wayfair sites are now fully HTTPS.” CNET, a large tech news site, had a similar experience: “We successfully completed our move of CNET.com to HTTPS last month,” said John Sherwood, Vice President of Engineering & Technology at CNET. “Since then, there has been no change in our Google rankings or Google organic search traffic.”

Webmasters that include ads on their sites also should carefully monitor ad performance and revenue during large site migrations. The portion of Google ad traffic served over HTTPS has increased dramatically over the past 3 years. All ads that come from any Google source always support HTTPS, including AdWords, AdSense, or DoubleClick Ad Exchange; ads sold directly, such as those through DoubleClick for Publishers, still need to be designed to be HTTPS-friendly. This means there will be no change to the Google-sourced ads that appear on a site after migrating to HTTPS. Many publishing partners have seen this in practice after a successful HTTPS transition. Jason Tollestrup, Director of Programmatic Advertising for the Washington Post, “saw no material impact to AdX revenue with the transition to SSL.”

As migrating to HTTPS becomes even easier, Google will continue working towards a web that’s secure by default. Don’t hesitate to start planning your HTTPS migration today!

For more information on this topic or to answer any questions you may have, contact an Onimod Global digital marketing expert today.

How Google Search is Helping Users Easily Access Content on Mobile

In Google Search, their goal is to help users quickly find the best answers to their questions, regardless of the device they’re using. Today, they’re announcing two upcoming changes to mobile search results that make finding content easier for users.

Simplifying mobile search results

Two years ago, they added a mobile-friendly label to help users find pages where the text and content were readable without zooming and the tap targets were appropriately spaced. Since then, they've seen the ecosystem evolve and they recently found that 85% of all pages in the mobile search results now meet these criteria and show the mobile-friendly label. To keep search results uncluttered, they'll be removing the label, although the mobile-friendly criteria will continue to be a ranking signal. Google said "We'll continue providing the mobile usability report in Search Console and the mobile-friendly test to help webmasters evaluate the effect of the mobile-friendly signal on their pages."

Helping users find the content they’re looking for

Although the majority of pages now have text and content that is readable without zooming, Google has recently seen many examples where these pages show intrusive interstitials to users. While the underlying content is present on the page and available to be indexed by Google, it may be visually obscured by an interstitial. This can frustrate users because they are unable to easily access the content that they were expecting when they tapped on the search result.

Pages that show intrusive interstitials provide a poorer experience to users than other pages where content is immediately accessible. This can be problematic on mobile devices where screens are often smaller. To improve the mobile search experience, after January 10, 2017, pages where content is not easily accessible to a user on the transition from the mobile search results may not rank as highly.

Here are some examples of techniques that make content less accessible to a user:

  • Showing a popup that covers the main content, either immediately after the user navigates to a page from the search results, or while they are looking through the page.
  • Displaying a standalone interstitial that the user has to dismiss before accessing the main content.
  • Using a layout where the above-the-fold portion of the page appears similar to a standalone interstitial, but the original content has been inlined underneath the fold.

Examples of interstitials that make content less accessible


An example of an intrusive popup

An example of an intrusive standalone interstitial

Another example of an intrusive standalone interstitial

 

By contrast, here are some examples of techniques that, used responsibly, would not be affected by the new signal:

  • Interstitials that appear to be in response to a legal obligation, such as for cookie usage or for age verification.
  • Login dialogs on sites where content is not publicly indexable. For example, this would include private content such as email or unindexable content that is behind a paywall.
  • Banners that use a reasonable amount of screen space and are easily dismissible. For example, the app install banners provided by Safari and Chrome are examples of banners that use a reasonable amount of screen space.

Examples of interstitials that would not be affected by the new signal, if used responsibly


An example of an interstitial for cookie usage

An example of an interstitial for age verification

An example of a banner that uses a reasonable amount of screen space

 

Google previously explored a signal that checked for interstitials that ask a user to install a mobile app. As they continued their development efforts, they saw the need to broaden their focus to interstitials more generally. Accordingly, to avoid duplication in their signals, Google removed the check for app-install interstitials from the mobile-friendly test and incorporated it into this new signal in Search.

Remember, this new signal is just one of hundreds of signals that are used in ranking. The intent of the search query is still a very strong signal, so a page may still rank highly if it has great, relevant content.

H/T: Google Webmaster

All About the New Google RankBrain Algorithm

Yesterday, news emerged that Google was using a machine-learning artificial intelligence system called “RankBrain” to help sort through its search results. Wondering how that works and fits in with Google’s overall ranking system? Here’s what we know about RankBrain.

The information covered below comes from three original sources and has been updated over time, with notes where updates have happened. Here are those sources:

First is the Bloomberg story that broke the news about RankBrain yesterday. Second, additional information that Google has now provided directly to Search Engine Land. Third, our own knowledge and best assumptions in places where Google isn’t providing answers. We’ll make clear where these sources are used, when deemed necessary, apart from general background information.

What is RankBrain?

RankBrain is Google’s name for a machine-learning artificial intelligence system that’s used to help process its search results, as was reported by Bloomberg and also confirmed to us by Google.

What is machine learning?

Machine learning is where a computer teaches itself how to do something, rather than being taught by humans or following detailed programming.

What is artificial intelligence?

True artificial intelligence, or AI for short, is where a computer can be as smart as a human being, at least in the sense of acquiring knowledge both from being taught and from building on what it knows and making new connections.

True AI exists only in science fiction novels, of course. In practice, AI is used to refer to computer systems that are designed to learn and make connections.

How’s AI different from machine learning? In terms of RankBrain, it seems to us they’re fairly synonymous. You may hear them both used interchangeably, or you may hear machine learning used to describe the type of artificial intelligence approach being employed.

So RankBrain is the new way Google ranks search results?

No. RankBrain is part of Google’s overall search “algorithm,” a computer program that’s used to sort through the billions of pages it knows about and find the ones deemed most relevant for particular queries.

What’s the name of Google’s search algorithm?

http://searchengineland.com/figz/wp-content/seloads/2014/08/google-hummingbird1-ss-1920-800x450.jpg

It’s called Hummingbird, as we reported in the past. For years, the overall algorithm didn’t have a formal name. But in the middle of 2013, Google overhauled that algorithm and gave it a name, Hummingbird.

So RankBrain is part of Google’s Hummingbird search algorithm?

That’s our understanding. Hummingbird is the overall search algorithm, just like a car has an overall engine in it. The engine itself may be made up of various parts, such as an oil filter, a fuel pump, a radiator and so on. In the same way, Hummingbird encompasses various parts, with RankBrain being one of the newest.

In particular, we know RankBrain is part of the overall Hummingbird algorithm because the Bloomberg article makes clear that RankBrain doesn’t handle all searches, as only the overall algorithm would.

Hummingbird also contains other parts with names familiar to those in the SEO space, such as Panda, Penguin and Payday (designed to fight spam), Pigeon (designed to improve local results), Top Heavy (designed to demote ad-heavy pages), Mobile Friendly (designed to reward mobile-friendly pages) and Pirate (designed to fight copyright infringement).

I thought the Google algorithm was called “PageRank”

PageRank is part of the overall Hummingbird algorithm that covers a specific way of giving pages credit based on the links from other pages pointing at them.

PageRank is special because it’s the first name that Google ever gave to one of the parts of its ranking algorithm, way back at the time the search engine began, in 1998.

What about these “signals” that Google uses for ranking?

Signals are things Google uses to help determine how to rank webpages. For example, it will read the words on a webpage, so words are a signal. If some words are in bold, that might be another signal that’s noted. The calculations used as part of PageRank give a page a PageRank score that’s used as a signal. If a page is noted as being mobile-friendly, that’s another signal that’s registered.

All these signals get processed by various parts within the Hummingbird algorithm to figure out which pages Google shows in response to various searches.
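To make the idea concrete, here is a deliberately toy Python sketch of signals feeding into a single ranking score. The signal names, weights and linear combination are all invented for illustration; Google's actual algorithm is far more complex and undisclosed:

```python
# A toy illustration of "signals" feeding a ranking score. The signals,
# weights, and linear combination here are invented for illustration.
def score_page(signals, weights):
    """Combine named signal values into a single ranking score."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

pages = {
    "page-a": {"words_match": 0.9, "pagerank": 0.4, "mobile_friendly": 1.0},
    "page-b": {"words_match": 0.7, "pagerank": 0.8, "mobile_friendly": 0.0},
}
weights = {"words_match": 0.5, "pagerank": 0.4, "mobile_friendly": 0.1}

# Order pages by their combined signal score, best first.
ranked = sorted(pages, key=lambda p: score_page(pages[p], weights), reverse=True)
print(ranked)  # ['page-a', 'page-b']
```

The point of the sketch is simply that no single signal decides a ranking; each contributes a weighted share to the final ordering.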

How many signals are there?

Google has fairly consistently spoken of having more than 200 major ranking signals, which, in turn, might have up to 10,000 variations or sub-signals. It more typically just says "hundreds" of factors, as it did in yesterday's Bloomberg article.

If you want a more visual guide to ranking signals, see our Periodic Table Of SEO Success Factors:

http://searchengineland.com/figz/wp-content/seloads/2015/06/periodic-table-of-seo-2015-800x548.jpg

It’s a pretty good guide, we think, to general things that search engines like Google use to help rank webpages.

And RankBrain is the third-most important signal?

That’s right. From out of nowhere, this new system has become what Google says is the third-most important factor for ranking webpages. From the Bloomberg article:

RankBrain is one of the “hundreds” of signals that go into an algorithm that determines what results appear on a Google search page and where they are ranked, Corrado said. In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query, he said.

What are the first- and second-most important signals?

When this story was originally written, Google wouldn’t tell us. Our assumption was this:

My personal guess is that links remain the most important signal, the way that Google counts up those links in the form of votes. It’s also a terribly aging system, as I’ve covered in my Links: The Broken “Ballot Box” Used By Google & Bing article from the past.

As for the second-most important signal, I’d guess that would be “words,” where words would encompass everything from the words on the page to how Google’s interpreting the words people enter into the search box outside of RankBrain analysis.

That turned out to be pretty much right.

What exactly does RankBrain do?

From emailing with Google, I gather RankBrain is mainly used as a way to interpret the searches that people submit to find pages that might not have the exact words that were searched for.

Didn’t Google already have ways to find pages beyond the exact query entered?

Yes, Google has found pages beyond the exact terms someone enters for a very long time. For example, years and years ago, if you’d entered something like “shoe,” Google might not have found pages that said “shoes,” because those are technically two different words. But “stemming” allowed Google to get smarter, to understand that shoes is a variation of shoe, just like “running” is a variation of “run.”

Google also got synonym smarts, so that if you searched for “sneakers,” it might understand that you also meant “running shoes.” It even gained some conceptual smarts, to understand that there are pages about “Apple” the technology company versus “apple” the fruit.
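The stemming and synonym expansion described above can be sketched in a few lines of Python. The suffix list and synonym table here are invented for illustration; real stemmers handle many more cases, such as the consonant doubling in "running"/"run":

```python
# A deliberately simplified sketch of stemming plus synonym expansion.
# The suffix list and synonym table are invented for illustration.
SUFFIXES = ("ing", "s")
SYNONYMS = {"sneakers": {"running shoes"}}

def stem(word):
    """Strip a known suffix so that "shoes" and "shoe" map to one term."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def expand_query(query):
    """Return stemmed terms plus any synonym expansions for a query."""
    terms = set()
    for word in query.lower().split():
        terms.add(stem(word))
        terms.update(SYNONYMS.get(word, ()))
    return terms

print(sorted(expand_query("sneakers")))  # ['running shoes', 'sneaker']
```

With this kind of expansion, a search for "sneakers" can match pages that only say "running shoes", which is the behavior the paragraph above describes.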

What about the Knowledge Graph?

The Knowledge Graph, launched in 2012, was a way that Google grew even smarter about connections between words. More important, it learned how to search for “things not strings,” as Google has described it.

Strings means searching just for strings of letters, such as pages that match the spelling of “Obama.” Things means that instead, Google understands when someone searches for “Obama,” they probably mean US President Barack Obama, an actual person with connections to other people, places and things.

The Knowledge Graph is a database of facts about things in the world and the relationships between them. It's why you can do a search like "when was the wife of obama born" and get an answer about Michelle Obama as below, without ever using her name:

http://searchengineland.com/figz/wp-content/seloads/2015/10/when_was_the_wife_of_obama_born_-_Google_Search-800x573.png

How’s RankBrain helping refine queries?

The methods Google already uses to refine queries generally all flow back to some human being somewhere doing work, either having created stemming lists or synonym lists or making database connections between things. Sure, there’s some automation involved. But largely, it depends on human work.

The problem is that Google processes three billion searches per day. In 2007, Google said that 20 percent to 25 percent of those queries had never been seen before. In 2013, it brought that number down to 15 percent, which was used again in yesterday’s Bloomberg article and which Google reconfirmed to us. But 15 percent of three billion is still a huge number of queries never entered by any human searcher — 450 million per day.

Among those can be complex, multi-word queries, also called “long-tail” queries. RankBrain is designed to help better interpret those queries and effectively translate them, behind the scenes in a way, to find the best pages for the searcher.

As Google told us, it can see patterns between seemingly unconnected complex searches to understand how they’re actually similar to each other. This learning, in turn, allows it to better understand future complex searches and whether they’re related to particular topics. Most important, from what Google told us, it can then associate these groups of searches with results that it thinks searchers will like the most.

Google didn’t provide examples of groups of searches or give details on how RankBrain guesses at what are the best pages. But the latter is probably because if it can translate an ambiguous search into something more specific, it can then bring back better answers.

How about an example?

While Google didn’t give groups of searches, the Bloomberg article did have a single example of a search where RankBrain is supposedly helping. Here it is:

What’s the title of the consumer at the highest level of a food chain

To a layperson like myself, “consumer” sounds like a reference to someone who buys something. However, it’s also a scientific term for something that consumes food. There are also levels of consumers in a food chain. That consumer at the highest level? The title — the name — is “predator.”

Entering that query into Google provides good answers, even though the query itself sounds pretty odd:

http://searchengineland.com/figz/wp-content/seloads/2015/10/What%E2%80%99s_the_title_of_the_consumer_at_the_highest_level_of_a_food_chain_-_Google_Search-794x600.png

Now consider how similar the results are for a search like “top level of the food chain,” as shown below:

http://searchengineland.com/figz/wp-content/seloads/2015/10/top_level_of_the_food_chain_-_Google_Search-594x600.png

Imagine that RankBrain is connecting that original long and complicated query to this much shorter one, which is probably more commonly done. It understands that they are very similar. As a result, Google can leverage all it knows about getting answers for the more common query to help improve what it provides for the uncommon one.

Let me stress that I don't know that RankBrain is connecting these two searches. I only know that Google gave the first example. This is simply an illustration of how RankBrain may be used to connect an uncommon search to a common one as a way of improving things.
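One way to picture that kind of query-to-query connection is with word vectors, the mathematical representation discussed later in this piece. In this Python sketch the 2-D vectors are fabricated toy values; real systems learn vectors with hundreds of dimensions from large text corpora:

```python
import math

# Fabricated 2-D "word vectors" for illustration only.
VECTORS = {
    "top":      (0.9, 0.1),
    "level":    (0.8, 0.2),
    "food":     (0.1, 0.9),
    "chain":    (0.2, 0.8),
    "consumer": (0.3, 0.7),
    "highest":  (0.9, 0.2),
}

def query_vector(query):
    """Average the vectors of the known words in a query."""
    known = [VECTORS[w] for w in query.split() if w in VECTORS]
    n = len(known)
    return tuple(sum(v[i] for v in known) / n for i in range(2))

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

long_q = "consumer highest level food chain"
short_q = "top level food chain"
print(round(cosine(query_vector(long_q), query_vector(short_q)), 3))
```

The two queries score a cosine similarity close to 1, which is how a system might decide they are near-equivalent and serve the rarer query with results learned from the common one.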

Can Bing do this, too, with RankNet?

Back in 2005, Microsoft started using its own machine-learning system, called RankNet, as part of what became today's Bing search engine. In fact, the chief researcher and creator of RankNet was recently honored. But over the years, Microsoft has barely talked about RankNet.

You can bet that will likely change. It’s also interesting that when I put the search above into Bing, given as an example of how great Google’s RankBrain is, Bing gave me good results, including one listing that Google also returned:

http://searchengineland.com/figz/wp-content/seloads/2015/10/What%E2%80%99s_the_title_of_the_consumer_at_the_highest_level_of_a_food_chain_-_Bing-800x585.png

One query doesn’t mean that Bing’s RankNet is as good as Google’s RankBrain or vice versa. Unfortunately, it’s really difficult to come up with a list to do this type of comparison.

Any more examples?

Google did give us one fresh example: “How many tablespoons in a cup?” Google said that RankBrain favored different results in Australia versus the United States for that query because the measurements in each country are different, despite the similar names.

I tried to test this by searching at Google.com versus Google Australia. I didn’t see much difference, myself. Even without RankBrain, the results would often be different in this way just because of the “old-fashioned” means of favoring pages from known Australian sites for those searchers using Google Australia.

Does RankBrain really help?

Despite my two examples above being less than compelling as testimony to the greatness of RankBrain, I really do believe that it probably is making a big impact, as Google is claiming. The company is fairly conservative with what goes into its ranking algorithm. It does small tests all the time. But it only launches big changes when it has a great degree of confidence.

Integrating RankBrain, to the degree that it’s supposedly the third-most important signal, is a huge change. It’s not one that I think Google would do unless it really believed it was helping.

When did RankBrain start?

Google told us that there was a gradual rollout of RankBrain in early 2015 and that it’s been fully live and global for a few months now.

What queries are impacted?

In October 2015, Google told Bloomberg that a "very large fraction" of the 15 percent of queries it had never seen before were processed by RankBrain. In short, 15 percent or less.

In June 2016, news emerged that RankBrain was being used for every query that Google handles. See our story about that for more details.

Is RankBrain always learning?

All learning that RankBrain does is offline, Google told us. It’s given batches of historical searches and learns to make predictions from these.

Those predictions are tested, and if proven good, then the latest version of RankBrain goes live. Then the learn-offline-and-test cycle is repeated.

Does RankBrain do more than query refinement?

Typically, how a query is refined — be it through stemming, synonyms or now RankBrain — has not been considered a ranking factor or signal.

Signals are typically factors that are tied to content, such as the words on a page, the links pointing at a page, whether a page is on a secure server and so on. They can also be tied to a user, such as where a searcher is located or their search and browsing history.

So when Google talks about RankBrain as the third-most important signal, does it really mean as a ranking signal? Yes. Google reconfirmed to us that there is a component where RankBrain is directly contributing somehow to whether a page ranks.

How exactly? Is there some type of “RankBrain score” that might assess quality? Perhaps, but it seems much more likely that RankBrain is somehow helping Google better classify pages based on the content they contain. RankBrain might be able to better summarize what a page is about than Google’s existing systems have done.

Or not. Google isn’t saying anything other than there’s a ranking component involved.

How do I learn more about RankBrain?

Google told us people who want to learn about word “vectors” — the way words and phrases can be mathematically connected — should check out this blog post, which talks about how the system (which wasn’t named RankBrain in the post) learned the concept of capital cities of countries just by scanning news articles:

http://searchengineland.com/figz/wp-content/seloads/2015/10/image00-800x593.gif

There’s a longer research paper this is based on here. You can even play with your own machine learning project using Google’s word2vec tool. In addition, Google has an entire area with its AI and machine learning papers, as does Microsoft.
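The capital-city idea from that post can be mimicked with a tiny amount of vector arithmetic. The 2-D vectors below are fabricated so that the arithmetic works out; genuine word2vec vectors are learned from text and have hundreds of dimensions:

```python
# Fabricated 2-D vectors arranged so the "capital-of" offset is consistent.
VECTORS = {
    "France": (1.0, 1.0),
    "Paris":  (1.0, 2.0),
    "Italy":  (3.0, 1.0),
    "Rome":   (3.0, 2.0),
}

def analogy(a, b, c):
    """Return the word nearest to vec(b) - vec(a) + vec(c)."""
    target = tuple(VECTORS[b][i] - VECTORS[a][i] + VECTORS[c][i] for i in range(2))
    def dist(word):
        v = VECTORS[word]
        return (v[0] - target[0]) ** 2 + (v[1] - target[1]) ** 2
    # Exclude the three input words, then pick the closest remaining word.
    return min((w for w in VECTORS if w not in (a, b, c)), key=dist)

print(analogy("France", "Paris", "Italy"))  # Rome
```

In other words, the offset from "France" to "Paris" encodes "capital of", and adding that same offset to "Italy" lands near "Rome", which is the relationship the blog post's system learned from news articles.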

 

H/T: Search Engine Land.

Google News: Search at I/O 16 Recap: Eight things you don’t want to miss

Two weeks ago, over 7,000 developers descended upon Mountain View for this year's Google I/O, and the takeaway was that it's truly an exciting time for Search. People go to Google billions of times per day to fulfill their daily information needs, and Google is focused on creating features and tools that will help users and publishers make the most of Search in today's world. As Search continues to evolve and expand to new interfaces, such as the Google assistant and Google Home, they want to make it easy for publishers to integrate and grow with Google.

In case you didn’t have a chance to attend their sessions, we put together a recap of all the Search happenings at I/O.

1: Introducing rich cards

They announced rich cards, a new Search result format that builds on rich snippets, using schema.org markup to display content in an even more engaging and visual format. Rich cards are available in English for recipes and movies, and Google is excited to roll them out for more content categories soon. To learn more, browse the new gallery with screenshots and code samples of each markup type or watch the rich cards devByte.
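Rich cards are driven by schema.org structured data embedded in the page. As a hedged sketch (this is an illustrative subset of properties; the gallery mentioned above documents what each content category actually requires), here is how a minimal Recipe JSON-LD block might be generated:

```python
import json

# Minimal schema.org Recipe markup of the kind rich cards build on.
# Property names follow schema.org; the values are invented examples.
recipe = {
    "@context": "http://schema.org",
    "@type": "Recipe",
    "name": "Simple Pancakes",
    "image": "https://www.example.com/pancakes.jpg",
    "author": {"@type": "Person", "name": "Example Author"},
    "recipeIngredient": ["1 cup flour", "1 egg", "1 cup milk"],
}

# JSON-LD is embedded in the page inside a script tag.
markup = ('<script type="application/ld+json">'
          + json.dumps(recipe)
          + "</script>")
print(markup)
```

Running a page with markup like this through the Structured Data Testing Tool (covered below in item 5) is the usual way to confirm it is valid.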

2: New Search Console reports

They want to make it easy for webmasters and developers to track and measure their performance in search results. Google launched a new report in Search Console to help developers confirm that their rich card markup is valid. In the report, Google highlights "enhanceable cards," which are cards that can benefit from marking up more fields. The new Search Appearance filter also makes it easy for webmasters to filter their traffic by AMP and rich cards.

3: Real-time indexing

Users are searching for more than recipes and movies: they’re often coming to Search to find fresh information about what’s happening right now. This insight kickstarted their efforts to use real-time indexing to connect users searching for real-time events with fresh content. Instead of waiting for content to be crawled and indexed, publishers will be able to use the Google Indexing API to trigger the indexing of their content in real time. It’s still in its early days, but they’re excited to launch a pilot later this summer.

4: Getting up to speed with Accelerated Mobile Pages

Google provided an update on their use of AMP, an open source effort to speed up the mobile web. Google Search uses AMP to enable instant-loading content. Speed is important—over 40% of users abandon a page that takes more than three seconds to load. They announced that they're bringing AMPed news carousels to the iOS and Android Google apps, as well as experimenting with combining AMP and rich cards. Stay tuned for more via their blog and GitHub page.

In addition to the sessions, attendees could talk directly with Googlers at the Search & AMP sandbox.

 

5: A new and improved Structured Data Testing Tool

They updated the popular Structured Data Testing tool. The tool is now tightly integrated with the DevSite Search Gallery and the new Search Preview service, which lets you preview how your rich cards will look on the search results page.

6: App Indexing got a new home (and new features)

They announced App Indexing’s migration to Firebase, Google’s unified developer platform. Watch the session to learn how to grow your app with Firebase App Indexing.

7: App streaming

App streaming is a new way for Android users to try out games without having to download and install the app — and it’s already available in Google Search. Check out the session to learn more.

8: Revamped documentation

Google also revamped their developer documentation, organizing the docs around topical guides to make them easier to follow.

If you need any further updates on Google’s I/O 16 Recap, contact an Onimod Global specialist today.

 

 

Google Webmaster: Tie your sites together with property sets in Search Console

Mobile app, mobile website, desktop website — how do you track their combined visibility in search? Until now, you’ve had to track all of these statistics separately. Search Console is introducing the concept of “property sets,” which let you combine multiple properties (both apps and sites) into a single group to monitor the overall clicks and impressions in search within a single report.

It’s easy to get started:

  1. Create a property set
  2. Add the properties you’re interested in
  3. The data will start being collected within a few days
  4. Profit from the new insights in Search Analytics!

Property Sets will treat all URIs from the properties included as a single presence in the Search Analytics feature. This means that Search Analytics metrics aggregated by host will be aggregated across all properties included in the set. For example, at a glance you’ll get the clicks and impressions of any of the sites in the set for all queries.

This feature will work for any kind of property in Search Console. Use it to gain an overview of your international websites, of mixed HTTP / HTTPS sites, of different departments or brands that run separate websites, or monitor the Search Analytics of all your apps together: all of that’s possible with this feature.
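As a toy model of what the combined report does, the sketch below sums clicks and impressions across a few hypothetical properties; the figures and property names are invented sample data:

```python
# A toy model of a "property set": aggregate clicks and impressions
# across several Search Console properties into one combined view.
# All figures below are invented sample data.
properties = {
    "https://www.example.com":   {"clicks": 1200, "impressions": 40000},
    "https://m.example.com":     {"clicks": 800,  "impressions": 25000},
    "android-app://com.example": {"clicks": 300,  "impressions": 9000},
}

def combined_metrics(props):
    """Sum each metric across every property in the set."""
    totals = {"clicks": 0, "impressions": 0}
    for metrics in props.values():
        for name, value in metrics.items():
            totals[name] += value
    return totals

print(combined_metrics(properties))  # {'clicks': 2300, 'impressions': 74000}
```

That single combined figure per metric, rather than three separate reports, is exactly the "single presence" view property sets provide in Search Analytics.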

Don't just listen to us; here's what we heard from one of the beta-testers:

It was one of my most important demands since the beginning of Webmaster Tools / Search Console. And I love the way it is given to us. I see that the remarks of beta-testers have also been understood by Google engineers. So thank you so much! — Olivier Andrieu (Abondance)

Google will be rolling this out over the next couple of days. If you have multiple properties verified in Search Console, we hope this feature makes it easier for you to keep track. If you have any questions, feedback, or ideas, visit Google in the webmaster help forum, or read the help documentation for this new feature!