Five search engine marketers have given their opinion on Google's core update for December 2020. The observations provide interesting feedback on what may have happened.

In my opinion, Google updates have become less about individual ranking factors and more about improving how Google understands queries and websites.

Some have suggested that Google ranks search results at random in order to fool those trying to reverse engineer the Google algorithm.

I do not share this opinion.

Certain algorithm functions are difficult to see in search results. It's not easy to point to a search result and say that it has a ranking due to the BERT algorithm or neural matching.

However, it's easy to point to backlinks, E-A-T, or user experience to explain why a website is or isn't ranking when those factors stand out, even if the real reason is more related to something like BERT.

Hence, search engine results pages (SERPs) can seem confusing and random to those examining them for traditional, old-school ranking factors to explain why pages gained or lost rankings in an update.


Of course, Google updates can seem unfathomable. The reasons web pages rank have changed dramatically in recent years due to technologies such as natural language processing.

What if Google updates and no one sees what's changed?

In the past, Google changed something and the SEO community didn't notice.

For example, when Google added an algorithm like BERT, many couldn't see what had changed.

What if Google added something like the SMITH algorithm? How would the SEO community know?

SMITH is described in a Google research paper published in April 2020 and revised in October 2020. SMITH is designed to understand long documents and is reported to outperform BERT.

Here's what it says:

“In recent years, self-attention based models like Transformers and BERT have achieved state-of-the-art performance in the task of text matching.

However, these models are still limited to short text, such as a few sentences or one paragraph, due to the quadratic computational complexity of self-attention with respect to input text length.

In this paper, we address the issue by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) encoder for long-form document matching.

Our experimental results on several benchmark datasets for long-form document matching show that our proposed SMITH model outperforms the previous state-of-the-art models, including hierarchical attention, multi-depth attention-based hierarchical recurrent neural network, and BERT.

Compared to BERT-based baselines, our model can increase the maximum length of the input text from 512 to 2048.”
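
The quadratic cost mentioned in the quote is simple arithmetic: a self-attention layer compares every token with every other token, so the score matrix has n × n entries. A minimal sketch in plain Python (illustrative only, no ML framework assumed):

```python
def attention_matrix_entries(n_tokens: int) -> int:
    """Self-attention scores every token against every other token,
    so the score matrix has n * n entries -- quadratic in input length."""
    return n_tokens * n_tokens

# BERT-style input limit vs. the maximum length reported for SMITH
bert_cost = attention_matrix_entries(512)    # 262,144 entries
smith_cost = attention_matrix_entries(2048)  # 4,194,304 entries

# Quadrupling the input length (512 -> 2048) multiplies the
# attention cost by 16, which is why long documents are handled
# in blocks by hierarchical models such as SMITH.
print(smith_cost // bert_cost)  # -> 16
```

This is why simply raising BERT's input limit is expensive, and why the paper takes a hierarchical approach instead.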


I'm not saying that Google introduced the SMITH algorithm or that it is related to the passage indexing algorithm.

I want to point out that the December 2020 core update has the quality of seemingly unobservable changes.

If Google adds a new AI-based feature or updates an existing feature like BERT, can the search marketing community see it? Probably not.

And it is precisely this quality of unobservability that suggests the changes may have to do with how Google understands queries and web pages.

If so, it may be useful to take a step back and consider that possibility, rather than reaching for the usual easy-to-observe ranking factors (links from scraper sites, site speed, and so on) to explain what has changed.

Insights into the Google Core update from December 2020

I thank those who took the time to share their opinions. They have provided excellent insights that can help put the December core algorithm update in perspective.

Dave Davies (@oohloo)
Beanstalk Internet Marketing

Dave puts this update in the context of what Google has recently announced for the algorithm and how that could play a role in the fluctuations.

Dave offered:

“The December 2020 core update was unique. Many websites we work with started with losses and ended with gains, and vice versa.

So it clearly had something to do with a signal or signals that cascade. That is, the change produced one result, but once the new calculation worked its way through the system, it produced another. Much like recalculating PageRank, although it probably had nothing to do with PageRank.

Alternatively, Google may have made adjustments on the fly or made other changes during the rollout, but I find that less likely.

When we think about the timing and how it relates to the introduction of passage indexing and whether this is a core update, I suspect it is related to content interpretation systems and not links or signals in that direction.

We also know that Core Web Vitals enters the algorithm in May 2021, so there might be elements in this update to support that. But since Core Web Vitals should be technically inert as a signal at this stage, they would not explain the impact we have all seen; the update must contain more than that.

As for the general response from the community, it was difficult to gauge beyond “it was big.” As you'd expect in any zero-sum scenario, for every person complaining about a loss, someone else is sitting higher in the SERPs, smiling.

I suspect that before the end of January it will be clear exactly what they introduced and why. I think it has to do with future functions and capabilities, but I've been around long enough to know I could be wrong, so I'm watching carefully.”
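
Dave's analogy of a recalculation “cascading” through the system is easy to picture with classic PageRank-style power iteration, where scores keep shifting over several passes before settling. A minimal sketch on a hypothetical three-page link graph (illustrative only; this is the textbook algorithm, not Google's actual implementation):

```python
# Hypothetical three-page web: each page maps to the pages it links to.
links = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}

def pagerank(links, damping=0.85, iterations=20):
    """Classic power-iteration PageRank: scores shift on every pass
    until the recalculation has fully worked through the graph."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - damping) / len(pages) + damping * sum(
                rank[q] / len(links[q]) for q in pages if p in links[q]
            )
            for p in pages
        }
    return rank

early = pagerank(links, iterations=2)    # snapshot mid-cascade
settled = pagerank(links, iterations=50) # after the signal propagates
# Mid-cascade, the interim leader can differ from the one the
# scores eventually settle on -- early "winners" can end as "losers".
```

In this toy graph the top-scoring page after two iterations is not the same page that leads once the scores converge, which mirrors the pattern Dave describes of sites starting with losses and ending with gains.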


Steven Kang (@SEOSignalsLab)

Steven Kang, founder of the popular SEO Signals Lab Facebook group, notes that nothing stands out in terms of similarities or symptoms between winners and losers.

“This one seems difficult. I'm finding wins and losses. I'll have to wait and see on this one.”

Daniel K Cheung (@danielkcheung)
Team Leader, Prosperity Media

Daniel believes it helps to take a step back and view Google updates as the whole forest rather than the single tree of the latest update, putting those updates in the context of what we know about search.

One example is the apparent decline in reports of manual actions in Google Search Console. The implied question: does this mean Google has become better at ranking websites where they belong without resorting to manual penalties?

Here's how Daniel sees the latest update to Google's core algorithm:

“I think we, as search/discoverability people, need to stop looking at core updates as individual events and instead see them as a continuum of ongoing testing and ‘improvements’ to what we see in the SERPs.

So when I refer to the December Core Update, I want to emphasize that it is just one of many events.

For example, some affiliate marketers and analysts have found that sites previously “hit” by the May 2020 update recovered when the December update rolled out. However, this was not consistent.

Again, the problem is this: we can't talk about websites winning or losing, since what actually moves in the SERPs are individual URLs.

Looking only at the overall visibility of an entire website does not give us any real clues.

There are murmurs of 301 redirects, PBNs, low-quality backlinks, and poor content being the reasons some sites have moved from page 1 to pages 6–10 of the SERPs (practically invisible).

However, these practices have always been prone to the daily fluctuations of the algorithm.

What was really interesting over the course of 2020 is that there have been very few reports of manual penalties within GSC.

These have been eerily replaced by impression and click charts falling off a cliff without the site being deindexed.

In my humble opinion, core updates are less about targeting a specific set of practices and more about incrementally maturing the algorithm.

Now I'm not saying that Google gets it right 100% of the time – it clearly doesn't, and I don't think it ever will (due to human curiosity).”


Christoph Cemper (@cemper)
CEO, LinkResearchTools

Christoph Cemper sees the latest update as affecting a variety of factors.

He offered the following:

“At a high level, in core updates Google adjusts things that have a global impact.

These are:

a) Weightings for different types of links and their signals

I think the nofollow 2.0 rollout from September 2019 is still not complete, but it has been tweaked – that is, how much power each type of nofollow passes in which context.

b) Answer boxes – many more of them. Google is increasing its own real estate.

c) Mass devaluation of PBN link networks and very obvious footprints of outreach link building.

Just because someone sent an outreach email doesn't make a paid link more natural, even if it was paid for in ‘content’ or an ‘exchange of services.’”

Michael Martinez (@seo_theory)
Founder of SEOTheory

Michael Martinez offered these insights:

“Based on what I've seen in online discussions, people are confused and frustrated. They don't really know what happened and few seem to have theories about why things have changed.

In general, it seems to me that Google has rewritten a number of its quality-guideline enforcement algorithms.

Nothing specific comes to mind, but other people's websites I've looked at seemed fine to me – not great, just fine. Some of the sites in our portfolio have gone up, others have gone down.

Again, it just felt like it was about enforcement or algorithmic interpretation of signals associated with their guidelines.

It's not about punishing anything, but perhaps about trying different approaches to answering queries.”


What happened in the December 2020 Google core update?

Perspectives on Google's core algorithm update vary, but most observers seem to agree that there were no obvious factors or changes.

And that's an interesting observation, as it could mean that something related to AI or natural language processing has been refined or introduced. However, this is only speculation until Google specifically confirms or rules it out.

