On February 19th, MozCast measured a dramatic decrease (40% day-to-day) in SERPs with featured snippets with no immediate signs of recovery. Here's a two-week view (February 10-23):
Here's a 60-day view highlighting that all-time low in our 10K keyword dataset:
I could trace the graph further back, but let's get down to business – this is the lowest featured snippet prevalence rate in our dataset since we started collecting reliable data in the summer of 2015.
Are we losing our minds?
After the year we've all had, it's always good to check our sanity. In this case, other data sets showed a drop on the same date, but the severity of the drop varied dramatically. So I checked our STAT data across desktop queries (US only) – over two million daily SERPs – and found the following:
STAT saw an 11% day-over-day decrease. Interestingly, if we factor in a second, smaller drop on February 13, featured snippet prevalence has declined 16% overall since February 10. While MozCast is desktop-only, STAT has access to mobile data. Here's the desktop/mobile comparison:
While mobile SERPs showed a higher overall prevalence in STAT, the pattern was very similar: a 9% day-over-day drop on February 19 and a decline of about 12% since February 10. Note that, while there is significant overlap, the desktop and mobile data sets may contain different search phrases. The desktop data set currently covers about 2.2 million daily SERPs, while the mobile set is closer to 1.7 million.
Note that the MozCast 10K keywords are (intentionally) skewed toward shorter, more competitive phrases, while STAT contains many more "long-tail" phrases. This explains STAT's higher overall prevalence, since longer phrases are more likely to be questions and other natural-language queries, which tend to produce featured snippets.
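The day-over-day comparisons above boil down to a simple calculation over daily crawl data. Here's a minimal sketch; the record format (`date`, `keyword`, `has_snippet`) and the sample data are hypothetical, not STAT's actual export format:

```python
# Compute featured snippet prevalence per day and the day-over-day change.
# Assumes hypothetical records like {"date": ..., "keyword": ..., "has_snippet": bool}.
from collections import defaultdict

def prevalence_by_day(records):
    """Return {date: fraction of SERPs showing a featured snippet}."""
    totals = defaultdict(int)
    with_snippet = defaultdict(int)
    for r in records:
        totals[r["date"]] += 1
        if r["has_snippet"]:
            with_snippet[r["date"]] += 1
    return {d: with_snippet[d] / totals[d] for d in totals}

def pct_change(prev, curr):
    """Relative change, e.g. -0.40 for a 40% day-over-day drop."""
    return (curr - prev) / prev

# Illustrative toy data, not real measurements:
records = [
    {"date": "2021-02-18", "keyword": "mutual funds", "has_snippet": True},
    {"date": "2021-02-18", "keyword": "roth ira", "has_snippet": True},
    {"date": "2021-02-19", "keyword": "mutual funds", "has_snippet": False},
    {"date": "2021-02-19", "keyword": "roth ira", "has_snippet": True},
]
prev = prevalence_by_day(records)
print(pct_change(prev["2021-02-18"], prev["2021-02-19"]))  # -0.5
```

The same keyword set has to be crawled on both days for the comparison to be meaningful, which is why the desktop and mobile numbers above are reported separately.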
Why the big difference?
What's driving MozCast's much larger 40% decline? Presumably, the competitiveness of its keywords. First things first: we hand-checked a number of these losses, and there is no evidence of measurement error. One helpful aspect of the MozCast 10K keyword set is that it's evenly divided across 20 historical Google Ads categories. While some changes affect industry categories fairly evenly, the featured snippet losses had a dramatically uneven impact:
Competitive health terms lost more than two-thirds of their featured snippets. It turns out that many of these terms displayed other prominent SERP features, such as medical knowledge panels. Here are some high-volume terms in the Health category that lost featured snippets:
While Finance started with a much lower prevalence of featured snippets, Finance SERPs also recorded massive losses on February 19. Some high-volume examples include:
- Risk management
- Mutual funds
- Roth IRA
Like the "Health" category, these terms have a knowledge panel in the right column of the desktop with some basic information (mainly from Wikipedia / Wikidata). Again, these are competitive "head" terms that Google displayed multiple SERP features before Feb 19th.
Both the health and finance search phrases map closely to what are known as YMYL ("Your Money or Your Life") content areas, which, in Google's own words, "could potentially impact a person's future happiness, health, financial stability, or safety." In these areas, Google is clearly concerned about the quality of the answers it provides.
What about indexing passages?
Could this be related to the "passage indexing" update that rolled out around February 10th? While we still don't know much about the impact of that update, it affected rankings – most likely organic results of all kinds – and there's no reason to believe it would change whether or not a featured snippet is displayed for any given query. While the timelines overlap slightly, these events are most likely separate.
Is the snippet sky falling?
While the 40% decrease in featured snippets in MozCast appears to be real, it mainly affected shorter, more competitive terms and certain industry categories. If you're in one of the YMYL categories, it certainly makes sense to assess the impact on your rankings and search traffic.
In general, this follows a common pattern with SERP features. Google dials a feature up over time, hits a threshold where quality begins to suffer, and then dials the volume back down. If Google gains more confidence in the quality of its featured snippet algorithms, the volume may be turned up again. I certainly don't expect featured snippets to go away anytime soon, and they're still very common on longer natural-language queries.
Also keep in mind that some of these featured snippets may just have been redundant. Before February 19th, someone who searched for "mutual funds" may have seen this featured snippet:
Google assumes a "What is / are …?" Question here, but "mutual funds" is a very ambiguous search that could have multiple intentions. At the same time, Google already showed a Knowledge Graph entity in the right column (on the desktop), presumably from trustworthy sources:
Why show both, especially when Google has quality concerns in a category where it is particularly sensitive to quality issues? While losing these featured snippets may sting a bit, consider whether they were really delivering for you. That ranking may be great for vanity, but how often do people at the very beginning of a search journey – who may not even know what a mutual fund is – convert into customers? In many cases, they may jump straight to the knowledge panel and never even consider the featured snippet.
For Moz Pro customers, remember that you can easily track featured snippets on the SERP Features page (under Rankings in the left-hand navigation) and filter your keywords by Featured Snippet. You'll get a report like this – look for the scissors icon to see where featured snippets appear and whether you (blue) or a competitor (red) are capturing them:
Regardless of the impact, one thing remains true: Google gives, and Google takes away. Unlike losing a ranking or a featured snippet to a competitor, there is very little you can do to reverse this kind of sweeping change. For sites in heavily affected industries, we can only monitor the situation and try to assess our new reality.
Update: Drop by Word Count
I realized that we could use word counts in the STAT data to test the theory that shorter searches (which tend to be both more competitive and more ambiguous) were hit harder by this update. Here is the breakdown across STAT's 2M desktop keywords (en-US) …
There's not much nuance here – 1-word queries got hit hard by this update, 2-word queries lost significantly more than the STAT average, and 3+ word queries were affected far less. Why Google targeted these queries isn't entirely clear, but the implications for very short queries certainly are.
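The word-count breakdown above can be sketched as a simple bucketing pass. This is a minimal illustration, assuming hypothetical before/after sets of keywords whose SERPs showed a featured snippet; the sample data is made up, not STAT's:

```python
def bucket(keyword):
    """Bucket a query by word count: '1', '2', or '3+'."""
    n = len(keyword.split())
    return "3+" if n >= 3 else str(n)

def loss_by_bucket(before, after):
    """Fractional featured-snippet loss per word-count bucket.

    `before` / `after` are sets of keywords whose SERPs showed a
    featured snippet before and after the change (hypothetical data).
    """
    totals, kept = {}, {}
    for kw in before:
        b = bucket(kw)
        totals[b] = totals.get(b, 0) + 1
        if kw in after:
            kept[b] = kept.get(b, 0) + 1
    return {b: 1 - kept.get(b, 0) / totals[b] for b in totals}

# Illustrative toy data only:
before = {"insurance", "mutual funds", "roth ira", "what is a mutual fund"}
after = {"roth ira", "what is a mutual fund"}
print(loss_by_bucket(before, after))  # 1-word: 1.0, 2-word: 0.5, 3+: 0.0
```

The toy output mirrors the pattern in the real data: the shorter the query, the larger the fractional loss.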