What Is Keyword Cannibalization?

Keyword cannibalization refers to the situation where multiple pages or posts on a website target the same or very similar keywords, which can lead to several problems.

When this occurs, search engines like Google may have difficulty determining which page is the most relevant for a specific query, which can result in the individual pages competing against each other in search rankings.

This competition can have several negative effects:

  1. Diluted Page Authority: Instead of having one strong page that ranks well, a site might have multiple weaker pages. This division of ranking signals (like links and content depth) across multiple pages could lead to none of the pages achieving high rankings.
  2. Lowered Conversion Rates: If visitors land on a less relevant or less convincing page, they may be less likely to take desired actions, such as making a purchase or signing up for a newsletter.
  3. Confusion for Users and Search Engines: Multiple pages covering the same topic can confuse users who might not find the exact information they need, leading to a poor user experience. Similarly, search engines might struggle to identify the most appropriate page to display in search results.

To avoid keyword cannibalization, it’s crucial to have a clear SEO strategy that involves keyword mapping, ensuring that each important keyword or phrase is used optimally across the site’s content.

This typically involves planning which pages should target which keywords and ensuring that each page serves a unique and specific purpose.
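
As a concrete illustration, here is a minimal keyword-mapping sketch in TypeScript; the URLs and keywords are hypothetical, and a real map would come from your own content inventory:

```ts
// Minimal keyword-mapping sketch. The URL-to-keyword assignments below are
// hypothetical examples; build a real map from your own content inventory.
const keywordMap: Record<string, string[]> = {
  '/running-shoes': ['running shoes', 'best running shoes'],
  '/trail-shoes': ['trail running shoes'],
  '/blog/shoe-guide': ['best running shoes'], // overlaps with /running-shoes
};

// Invert the map and keep only keywords claimed by two or more URLs;
// these are the candidates for cannibalization.
function findCannibalizedKeywords(map: Record<string, string[]>): Map<string, string[]> {
  const byKeyword = new Map<string, string[]>();
  for (const [url, keywords] of Object.entries(map)) {
    for (const kw of keywords) {
      byKeyword.set(kw, [...(byKeyword.get(kw) ?? []), url]);
    }
  }
  return new Map([...byKeyword].filter(([, urls]) => urls.length > 1));
}

console.log(findCannibalizedKeywords(keywordMap));
// => Map { 'best running shoes' => ['/running-shoes', '/blog/shoe-guide'] }
```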

March 2024: Google core update. Prepare your website!

The March 2024 core update is more complex than Google’s usual core updates, involving changes to multiple core systems. It also marks an evolution in how Google identifies the helpfulness of content.

As this is a complex update, the rollout may take up to a month. It’s likely there will be more fluctuations in rankings than with a regular core update, as different systems get fully updated and reinforce each other.

New spam policies

Google’s spam policies are designed to address practices that can negatively impact the quality of its search results. Google is introducing three new spam policies against bad practices that have been growing in popularity: expired domain abuse, scaled content abuse, and site reputation abuse.

Expired domain abuse

Expired domain abuse is when an expired domain name is purchased and repurposed primarily to manipulate Search rankings by hosting content that provides little to no value to users. For example, someone might buy a domain previously used by a medical site and repurpose it to host low-quality casino-related content, hoping to rank in Search based on the domain’s reputation under its previous ownership.

Expired domain abuse isn’t something people do accidentally. It’s a practice employed by people who hope to rank well in Search with low-value content by trading on a domain name’s past reputation. These domains are generally not intended to be found by visitors in any way other than through search engines. Using an old domain name for a new, original site that’s designed to serve people first is fine.

Scaled content abuse

Scaled content abuse is when many pages are generated for the primary purpose of manipulating Search rankings and not helping users. This abusive practice is typically focused on creating large amounts of unoriginal content that provides little to no value to users, no matter how it’s created.

This new Google Search policy builds on the previous spam policy about automatically generated content, ensuring that Google can take action on scaled content abuse as needed, whether content is produced through automation, human effort, or some combination of the two.

Site reputation abuse

Site reputation abuse is when third-party pages are published with little or no first-party oversight or involvement, where the purpose is to manipulate Search rankings by taking advantage of the first-party site’s ranking signals.

Such third-party pages include sponsored, advertising, partner, or other third-party pages that are typically independent of a host site’s main purpose or produced without close oversight or involvement of the host site, and provide little to no value to users.

Google’s new policy doesn’t consider all third-party content to be a violation, only content that is hosted without close oversight and that is intended to manipulate Search rankings.

To allow time for site owners to prepare for this change, this new policy will take effect starting May 5, 2024.

Helpful content and Google Search results FAQ

Is there a single “helpful content system” that Google Search uses for ranking?

Our work to improve the helpfulness of content in search results began with what we called our “helpful content system” that was launched in 2022. Our processes have evolved since. There is no one system used for identifying helpful content. Instead, our core ranking systems use a variety of signals and systems.

How can I check if my content is helpful?

Our help page on how to create helpful, reliable people-first content has questions that you can use to self-assess your content.

Do Google’s core ranking systems assess the helpfulness of content on a page-level or site-wide basis?

Our core ranking systems are primarily designed to work on the page level, using a variety of signals and systems to understand the helpfulness of individual pages. We do have some site-wide signals that are also considered.

Will removing unhelpful content help my other content rank better?

Our systems work primarily at the page level to show the most helpful content we can, even if that content is on sites also hosting unhelpful content. This said, having relatively high amounts of unhelpful content might cause other content on the site to perform less well in Search, to a varying degree. Removing unhelpful content might contribute to your other pages performing better.

If I remove unhelpful content, do I have to wait for a core update to see potential ranking improvements?

Ranking changes can happen at any time for a variety of reasons. We regularly update our core ranking systems, and content across the open web changes, which our systems process. Because of this, there’s no set timeline for how long it might take for potential improvements to be reflected in rankings.

Bubble chart

A bubble chart can help you understand which queries are performing well for your site and which could be improved.

Analyzing Search performance data is always a challenge, but even more so when you have plenty of long-tail queries, which are harder to visualize and understand. A bubble chart helps uncover opportunities to optimize your site’s Google Search performance.
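
As a sketch of how that data can be shaped for a bubble chart (assuming a row format like a typical Search Console Performance export):

```ts
// Shape Search Console query data into bubble-chart dimensions:
// one bubble per query, with x = average position, y = CTR, size = impressions.
// The QueryRow shape mirrors a typical Performance report export (assumption).
interface QueryRow { query: string; clicks: number; impressions: number; position: number; }

interface Bubble { label: string; x: number; y: number; size: number; }

function toBubbles(rows: QueryRow[]): Bubble[] {
  return rows.map((r) => ({
    label: r.query,
    x: r.position,                                        // average ranking position
    y: r.impressions > 0 ? r.clicks / r.impressions : 0,  // click-through rate
    size: r.impressions,                                  // bubble area tracks visibility
  }));
}

// Queries with many impressions but a low CTR for their position are
// natural candidates for title and snippet optimization.
```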

Your website title on Google SERP

One of the primary ways people determine which search results might be relevant to their query is by reviewing the titles of listed web pages.

That’s why Google Search works hard to provide the best titles for documents in its results, connecting searchers with the content that creators, publishers, businesses, and others have produced.

How titles are generated 

Last week, Google introduced a new system for generating titles for web pages. Before this, titles might change based on the query issued; with the new system, this generally will no longer happen.

The new system produces titles that better describe what documents are about overall, regardless of the particular query.

Also, while Google has gone beyond HTML title text to create titles for over a decade, the new system makes even more use of such text.

In particular, it uses text that humans can visually see when they arrive at a web page: the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.

Other text contained in the page might also be considered, as might text within links that point at the page.

Why more than HTML title tags are used

Why not just always use the HTML title tag? Google explained that it began going beyond the tag back in 2012. HTML title tags don’t always describe a page well. In particular, title tags can sometimes be:

  • Very long.
  • “Stuffed” with keywords, because creators mistakenly think adding a bunch of words will increase the chances that a page will rank better.
  • Missing entirely, or filled with repetitive “boilerplate” language. For instance, home pages might simply be called “Home”, or all pages in a site might be called “Untitled” or simply carry the name of the site.

Overall, this update is designed to produce more readable and accessible titles for pages. In some cases, Google may add site names where that is seen as helpful.

In other instances, when encountering an extremely long title, Google might select the most relevant portion rather than starting at the beginning and truncating more useful parts.

A focus on good HTML title tags remains valid

Focus on creating great HTML title tags. Of all the ways that Google generates titles, content from HTML title tags is still by far the most likely to be used, more than 80% of the time.
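
As a rough self-check, here is a small TypeScript sketch that flags the three title problems described above; the 60-character threshold and the repetition heuristic are illustrative assumptions, not Google’s actual rules:

```ts
// Flag the three title-tag problems discussed above: missing/boilerplate,
// very long, and keyword-stuffed. Thresholds are illustrative assumptions.
function auditTitle(title: string | null): string[] {
  const issues: string[] = [];
  if (!title || ['home', 'untitled'].includes(title.trim().toLowerCase())) {
    issues.push('missing or boilerplate title');
    return issues;
  }
  if (title.length > 60) issues.push('very long (may be rewritten or truncated)');
  const words = title.toLowerCase().split(/\s+/);
  // Many repeated words relative to total length suggests keyword stuffing.
  if (new Set(words).size < words.length * 0.6) {
    issues.push('repetitive wording (possible keyword stuffing)');
  }
  return issues;
}

console.log(auditTitle('Cheap shoes cheap shoes buy cheap shoes cheap'));
// => ['repetitive wording (possible keyword stuffing)']
```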

But don’t forget, as with any system (like me!), Google’s search titles won’t always be perfect.

Page experience to show on Google Search

There has been a 70% increase in the number of users engaging with Lighthouse and PageSpeed Insights, and many site owners are using Search Console’s Core Web Vitals report to identify opportunities for improvement.

Beginning in May 2021, the new page experience signals will combine Core Web Vitals with Google’s existing search signals, including mobile-friendliness, safe browsing, HTTPS security, and intrusive interstitial guidelines.

A diagram illustrating the components of Search’s signal for page experience.

The change for non-AMP content to become eligible to appear in the mobile Top Stories feature in Search will also roll out in May 2021.

Any page that meets the Google News content policies will be eligible, and pages with great page experience will be prioritized in ranking, whether they’re implemented using AMP or any other web technology.

A visual indicator might highlight pages in search results that offer a great page experience.

A New Way of Highlighting Great Experiences in Google Search

Providing information about the quality of a web page’s experience can be helpful to users in choosing the search result that they want to visit.

Visual indicators on the results are another way to do the same, and Google is working on one that identifies pages that have met all of the page experience criteria.

Conclusion

At Google, the mission is to help users find the most relevant, highest-quality sites on the web. The goal with these updates is to highlight the best experiences and ensure that users can find the information they’re looking for.

My work is ongoing!

Web Vitals

Web Vitals is an initiative by Google to provide unified guidance for quality signals that are essential to delivering a great user experience on the web.

Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and will be surfaced across all Google tools. Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.

Largest Contentful Paint (LCP): measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
First Input Delay (FID): measures interactivity. To provide a good user experience, pages should have a FID of less than 100 milliseconds.
Cumulative Layout Shift (CLS): measures visual stability. To provide a good user experience, pages should maintain a CLS of less than 0.1.
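
To measure these metrics on your own pages in the field, a minimal sketch using Google’s web-vitals JavaScript library might look like this (function names as of v3 of the library; the /analytics endpoint is a placeholder):

```ts
// Field measurement sketch using Google's `web-vitals` library
// (npm install web-vitals). Function names match v3 of the library;
// earlier versions exported getCLS/getFID/getLCP instead.
import { onCLS, onFID, onLCP, type Metric } from 'web-vitals';

// Send each finalized metric to a placeholder analytics endpoint.
function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/analytics', body)) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics); // Cumulative Layout Shift (good: < 0.1)
onFID(sendToAnalytics); // First Input Delay (good: < 100 ms)
onLCP(sendToAnalytics); // Largest Contentful Paint (good: < 2.5 s)
```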

Tools

Google believes that the Core Web Vitals are critical to all web experiences. As a result, it is committed to surfacing these metrics in all of its popular tools. The following sections detail which tools support the Core Web Vitals.

Field tools to measure Core Web Vitals

The Chrome User Experience Report collects anonymized, real-user measurement data for each Core Web Vital. This data enables site owners to quickly assess their performance without having to manually instrument analytics on their pages, and it powers tools like PageSpeed Insights and Search Console’s Core Web Vitals report.
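
As an illustration, here is a hedged TypeScript sketch that queries the CrUX API for an origin’s field data; the endpoint and response shape follow the public CrUX API documentation, and the API key is a placeholder you would create in Google Cloud:

```ts
// Query the Chrome UX Report (CrUX) API for an origin's field data.
// Endpoint and response shape per the public CrUX API docs; the API key
// below is a placeholder you must create in Google Cloud.
const CRUX_API_KEY = 'YOUR_API_KEY';
const ENDPOINT = `https://chromeuserexperiencereport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`;

async function queryCrux(origin: string): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin, metrics: ['largest_contentful_paint'] }),
  });
  const data = await res.json();
  // Google assesses Core Web Vitals at the 75th percentile of real-user data.
  const p75 = data.record?.metrics?.largest_contentful_paint?.percentiles?.p75;
  console.log(`LCP p75 for ${origin}: ${p75} ms`);
}

queryCrux('https://example.com');
```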

Mobile-first indexing

Most sites shown in search results are good to go for mobile-first indexing, and 70% of those shown in Google’s search results have already shifted over.

Google will be switching to mobile-first indexing for all websites starting in September 2020. In the meantime, it continues to move sites to mobile-first indexing when its systems recognize that they’re ready.

Most crawling for Search will be done with the mobile smartphone user agent. The exact user agent name used will match the Chromium version used for rendering.
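
If you want to verify this in your own access logs, here is a small sketch that classifies Googlebot hits by user agent; note that strict verification would also confirm the requester’s IP via reverse DNS:

```ts
// Classify a request's user agent to see whether Googlebot crawled the
// page with its smartphone or desktop user agent. For strict verification
// you would also confirm the requester's IP via reverse DNS.
function classifyGooglebot(userAgent: string): 'smartphone' | 'desktop' | 'other' {
  if (!userAgent.includes('Googlebot')) return 'other';
  // The smartphone crawler's user agent contains "Mobile"; the desktop one does not.
  return userAgent.includes('Mobile') ? 'smartphone' : 'desktop';
}

// Example user agent string (the Chrome version varies over time).
const ua =
  'Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36 ' +
  '(compatible; Googlebot/2.1; +http://www.google.com/bot.html)';
console.log(classifyGooglebot(ua)); // => 'smartphone'
```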

Google’s guidance on making all websites work well for mobile-first indexing continues to be relevant for new and existing sites. In particular, it recommends making sure that the content shown is the same (including text, images, videos, and links), and that metadata (titles and descriptions, robots meta tags) and all structured data are the same.

Front-end developer

This is not a brand-new title or position, but its scope has certainly shifted over the years.

“Front-end” essentially means web browser. I consider myself a front-end developer. As a front-end developer, you work very closely with web browsers and write the code that runs in them, specifically HTML, CSS, JavaScript, and the handful of other languages that web browsers speak (for instance, media formats like SVG).

Browsers don’t exist alone, they run on a wide landscape of devices. We learned that through the era of responsive design. And most importantly: users use those browsers on those devices.

Nobody is closer to the user than front-end developers. So front-end developers write code for people using browsers that run on a wide variety of devices.

Just dealing with this huge landscape of users, devices, and browsers is a job unto itself!

All the while, the “front-end” is still just the browser. The browser’s languages, HTML, CSS, and JavaScript, are still the core technologies at play.

Being a front-end developer is still caring about users who use those browsers on those devices. Their experience is our job. The tooling just helps us do it, hopefully.

So what are you doing as a front-end developer?

  • You’re executing the design such that it looks good on any screen
  • You’re applying semantics to content
  • You’re building UI abstractly such that you can re-use parts and styles efficiently
  • You’re considering the accessibility of what renders in the browser
  • You’re concerned about the performance of the site, which means dealing with how big and how many resources the browser uses (a minimal measurement sketch follows below)

Those things have always been true, and always will be, since they are fundamentally browser-level concerns, and that’s what front-end is.
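
Here is a minimal sketch of that last point, using the browser’s standard Resource Timing API to count resources and their transfer weight:

```ts
// Count how many resources the page loaded and how many bytes they
// transferred, using the standard Resource Timing API.
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

// Note: transferSize is 0 for cross-origin resources that don't send a
// Timing-Allow-Origin header, so treat this total as a lower bound.
const totalBytes = resources.reduce((sum, r) => sum + r.transferSize, 0);
console.log(`${resources.length} resources, ~${(totalBytes / 1024).toFixed(1)} KiB transferred`);

// Group by initiator (script, css, img, ...) to see where the weight is.
const byType = new Map<string, number>();
for (const r of resources) {
  byType.set(r.initiatorType, (byType.get(r.initiatorType) ?? 0) + r.transferSize);
}
console.log(byType);
```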

What’s changing is that the browser is capable of more and more work.

There are all sorts of reasons for that, like browser APIs getting more capable, libraries getting fancier, and computers getting better, in general.

Offloading work from the server to the browser has made more and more sense over the years (single page apps!).

Front-end development these days might also include:

  • Architecting the entire site from the tiniest component to entire pages up to the URL level
  • Fetching your own data from APIs and manipulating it as needed for display (see the sketch after this list)
  • Dealing with the state of the site on your own
  • Mutating/changing data through user interaction and input, and persisting that data in state and back to the servers through APIs

Those are all things that can be done in the browser now, much to the widening eyes of this old developer. That’s a heck of a haystack of responsibility when you consider it’s on top of all the stuff you already have to do.
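
As a minimal sketch of that fetch-manipulate-persist loop (the /api/todos endpoint and data shape are hypothetical):

```ts
// Fetch data from an API, hold it in client-side state, and persist
// changes back. The endpoint and Todo shape are hypothetical examples.
interface Todo { id: number; title: string; done: boolean; }

let todos: Todo[] = []; // client-side state, owned by the front end

async function loadTodos(): Promise<void> {
  const res = await fetch('/api/todos'); // hypothetical endpoint
  todos = await res.json();
}

async function toggleTodo(id: number): Promise<void> {
  const todo = todos.find((t) => t.id === id);
  if (!todo) return;
  todo.done = !todo.done;                 // mutate local state...
  await fetch(`/api/todos/${id}`, {       // ...and persist it via the API
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ done: todo.done }),
  });
}
```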

While that haystack of jobs tends to grow over the years, the guiding light we have as front-end developers hasn’t changed all that much.

Our core responsibility is still taking care of users who use web browsers on devices.

User experience improvements with page speed in mobile search

For the slowest one-third of traffic, user-centric performance metrics improved by 15% to 20% in 2018.

When a page is slow to load, users are more likely to abandon the navigation.

These speed improvements led to a 20% reduction in the abandonment rate for navigations initiated from Search, a metric that site owners can now also measure via the Network Error Logging API available in Chrome.
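
For site owners who want to collect those reports, here is a hedged sketch of enabling Network Error Logging from an Express server; the header names and fields follow the NEL specification, and the reporting endpoint URL is a placeholder you would host yourself:

```ts
// Enable Network Error Logging (NEL) by sending the NEL and Report-To
// response headers. Header names and fields follow the NEL spec; the
// reporting endpoint URL is a placeholder you must host yourself.
import express from 'express';

const app = express();

app.use((_req, res, next) => {
  res.set('Report-To', JSON.stringify({
    group: 'network-errors',
    max_age: 2592000, // cache the endpoint config for 30 days
    endpoints: [{ url: 'https://example.com/reports' }], // placeholder endpoint
  }));
  res.set('NEL', JSON.stringify({ report_to: 'network-errors', max_age: 2592000 }));
  next();
});

app.listen(3000);
```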

In 2018, developers ran over a billion PageSpeed Insights audits to identify performance optimization opportunities for over 200 million unique URLs.