A bubble chart can help you understand which queries are performing well for your site, and which could be improved.
Analyzing Search performance data is always a challenge, but even more so when you have plenty of long-tail queries, which are harder to visualize and understand. A bubble chart helps uncover opportunities to optimize a site's Google Search performance.
One of the primary ways people determine which search results might be relevant to their query is by reviewing the titles of listed web pages.
That’s why Google Search works hard to provide the best titles for documents in its results, to connect searchers with the content that creators, publishers, businesses, and others have produced.
How titles are generated
Last week, Google introduced a new system of generating titles for web pages. Before this, titles might change based on the query issued. This generally will no longer happen with the new system.
The new system produces titles that work better for documents overall, describing what they are about regardless of the particular query.
Also, while Google has gone beyond HTML title tags to create titles for over a decade, the new system makes even more use of text from the page itself.
In particular, it uses text that humans can visually see when they arrive at a web page: the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.
Other text contained in the page might be considered, as might text within links that point at the page.
Why more than HTML title tags are used
Why not just always use the HTML title tag? Google explained that it began going beyond the tag significantly back in 2012, because HTML title tags don’t always describe a page well. In particular, title tags can sometimes be:
Very long.
“Stuffed” with keywords, because creators mistakenly think adding a bunch of words will increase the chances that a page will rank better.
Missing entirely, or filled with repetitive “boilerplate” language. For instance, home pages might simply be called “Home”; in other cases, all pages on a site might be called “Untitled” or simply carry the name of the site. (See the example after this list.)
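As a hypothetical illustration of the difference (the page and keywords below are made up), compare a keyword-stuffed, boilerplate title tag with a concise, descriptive one backed by a matching visible headline:

```html
<!-- Before: boilerplate plus keyword stuffing (hypothetical example) -->
<title>Home | cheap boots, buy boots, best boots online, boot store, boots</title>

<!-- After: a concise, descriptive title with a matching visible headline -->
<title>Handmade Leather Boots - Example Shoe Co.</title>
...
<h1>Handmade Leather Boots</h1>
```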
Overall, this update is designed to produce more readable and accessible titles for pages. In some cases, Google may add site names where that is seen as helpful.
In other instances, when encountering an extremely long title, Google might select the most relevant portion rather than starting at the beginning and truncating more useful parts.
A focus on good HTML title tags remains valid
Focus on creating great HTML title tags. Of all the ways that Google generates titles, content from HTML title tags is still by far the most likely to be used, more than 80% of the time.
But keep in mind that, as with any automated system, Google Search titles won’t always be perfect.
There has been a 70% increase in the number of users engaging with Lighthouse and PageSpeed Insights, and many site owners are using Search Console’s Core Web Vitals report to identify opportunities for improvement.
From May 2021, the new page experience signals combine Core Web Vitals with Google’s existing Search signals, including mobile-friendliness, safe browsing, HTTPS security, and intrusive interstitial guidelines.
A diagram illustrating the components of Search’s signal for page experience.
The change for non-AMP content to become eligible to appear in the mobile Top Stories feature in Search will also roll out in May 2021.
Any page that meets the Google News content policies will be eligible, and pages with a great page experience will be prioritized in ranking, whether they are implemented using AMP or any other web technology.
A visual indicator might highlight pages in search results that have a great page experience.
A New Way of Highlighting Great Experiences in Google Search
Providing information about the quality of a web page’s experience can be helpful to users in choosing the search result that they want to visit.
Visual indicators on the results are another way to do the same, and Google is working on one that identifies pages that have met all of the page experience criteria.
Conclusion
At Google, the mission is to help users find the most relevant and quality sites on the web. The goal with these updates is to highlight the best experiences and ensure that users can find the information they’re looking for.
Web Vitals is an initiative by Google to provide unified guidance for quality signals that are essential to delivering a great user experience on the web.
Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and will be surfaced across all Google tools. Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.
Largest Contentful Paint (LCP): measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
First Input Delay (FID): measures interactivity. To provide a good user experience, pages should have a FID of less than 100 milliseconds.
Cumulative Layout Shift (CLS): measures visual stability. To provide a good user experience, pages should maintain a CLS of less than 0.1.
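As a rough sketch of how these metrics can be observed in the field, the snippet below uses the browser's standard PerformanceObserver entry types; in practice, Google's web-vitals JavaScript library is the more robust option, since it handles edge cases such as backgrounded tabs. The thresholds in the comments are the ones listed above.

```js
// Largest Contentful Paint: the last candidate entry is the LCP (target <= 2500 ms).
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP (ms):', entries[entries.length - 1].startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First Input Delay: delay between the first interaction and its handler (target < 100 ms).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('FID (ms):', entry.processingStart - entry.startTime);
  }
}).observe({ type: 'first-input', buffered: true });

// Cumulative Layout Shift: running sum of unexpected shifts (target < 0.1).
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });
```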
Tools
Google believes that the Core Web Vitals are critical to all web experiences. As a result, it is committed to surfacing these metrics in all of its popular tools. The following sections detail which tools support the Core Web Vitals.
Field tools to measure Core Web Vitals
The Chrome User Experience Report collects anonymized, real-user measurement data for each Core Web Vital. This data enables site owners to quickly assess their performance without requiring them to manually instrument analytics on their pages, and it powers tools like PageSpeed Insights and Search Console’s Core Web Vitals report.
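As a hedged sketch, the same field data can be pulled programmatically from the PageSpeed Insights API, which reports Chrome UX Report metrics alongside the Lighthouse lab audit; the exact response field names should be checked against the current API documentation.

```js
// Query the public PageSpeed Insights v5 API for a URL (hypothetical site).
const pageUrl = 'https://www.example.com/';
const api = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=' +
  encodeURIComponent(pageUrl);

fetch(api)
  .then((res) => res.json())
  .then((data) => {
    // loadingExperience holds real-user (CrUX) field metrics when enough traffic exists.
    console.log(data.loadingExperience && data.loadingExperience.metrics);
    // lighthouseResult holds the lab audit for the same URL.
    console.log(data.lighthouseResult && data.lighthouseResult.categories);
  });
```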
Most sites shown in search results are ready for mobile-first indexing, and 70% of those shown have already shifted over.
Google will be switching to mobile-first indexing for all websites starting September 2020. In the meantime, they continue moving sites to mobile-first indexing when their systems recognize that they’re ready.
Most crawling for Search will be done with the mobile smartphone user agent. The exact user agent name used will match the Chromium version used for rendering.
Google’s guidance on making all websites work well for mobile-first indexing continues to be relevant, for new and existing sites. In particular, Google recommends making sure that the content shown is the same (including text, images, videos, and links), and that metadata (titles and descriptions, robots meta tags) and all structured data are the same.
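As a small, hypothetical example of what should stay identical across the mobile and desktop versions of a page (the product and values here are made up):

```html
<title>Handmade Leather Boots - Example Shoe Co.</title>
<meta name="description" content="Hand-stitched leather boots, made to order.">
<meta name="robots" content="index, follow">
<script type="application/ld+json">
  { "@context": "https://schema.org", "@type": "Product", "name": "Handmade Leather Boots" }
</script>
```

If the mobile version drops any of these, or trims the visible content, the mobile-first index will only see the reduced version.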
This is not a brand new title or position, but it’s certainly moved in scope over the years.
“Front-end” essentially means web browser. I consider myself a front-end developer. As a front-end developer, you work very closely with web browsers and write the code that runs in them, specifically HTML, CSS, JavaScript, and the handful of other languages that web browsers speak (for instance, media formats like SVG).
Browsers don’t exist alone; they run on a wide landscape of devices. We learned that through the era of responsive design. And most importantly: users use those browsers on those devices.
Nobody is closer to the user than front-end developers. So front-end developers write code for people using browsers that run on a wide variety of devices.
Just dealing with this huge landscape of users, devices, and browsers is a job unto itself!
All the while, the “front-end” is still just the browser. The browser’s languages, HTML, CSS, and JavaScript, are still the core technologies at play.
Being a front-end developer is still caring about users who use those browsers on those devices. Their experience is our job. The tooling just helps us do it, hopefully.
So what are you doing as a front-end developer?
You’re executing the design such that it looks good on any screen
You’re applying semantics to content
You’re building UI abstractly such that you can re-use parts and styles efficiently
You’re considering the accessibility of what renders in the browser
You’re concerned about the performance of the site, which means you’re dealing with how big and how many resources are being used by the browser
Those things have always been true, and always will be, since they are fundamentally browser-level concerns and that’s what front-end is.
What’s changing is that the browser is capable of more and more work.
There are all sorts of reasons for that, like browser APIs getting more capable, libraries getting fancier, and computers getting better, in general.
Offloading work from the server to the browser has made more and more sense over the years (single page apps!).
Front-end development these days might also include:
Architecting the entire site from the tiniest component to entire pages up to the URL level
Fetching your own data from APIs and manipulating the data as needed for display
Dealing with the state of the site on your own
Mutating/changing data through user interaction and input, and persisting that data in state and back to the servers through APIs (a minimal sketch follows this list)
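A minimal sketch of those duties, with a made-up endpoint and element ID, just to show the shape of the work:

```js
// Keep the site's state in the browser, fetch data from an API, and re-render on change.
const state = { items: [] };

async function loadItems() {
  const res = await fetch('/api/items');   // hypothetical API endpoint
  state.items = await res.json();          // hold the data in client-side state
  render();
}

function render() {
  document.querySelector('#list').innerHTML =
    state.items.map((item) => `<li>${item.name}</li>`).join('');
}

loadItems();
```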
Those are all things that can be done in the browser now, much to the widening eyes of this old developer. That’s a heck of a haystack of responsibility when you consider it’s on top of all the stuff you already have to do.
While that haystack of jobs tends to grow over the years, the guiding light we have as front-end developers hasn’t changed all that much.
Our core responsibility is still taking care of users who use web browsers on devices.
For the slowest one-third of traffic, user-centric performance metrics improved by 15% to 20% in 2018.
When a page is slow to load, users are more likely to abandon the navigation.
Speed improvements brought a 20% reduction in abandonment rate for navigations initiated from Search, a metric that site owners can now also measure via the Network Error Logging API available in Chrome.
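As a rough sketch, Network Error Logging is switched on with a pair of HTTP response headers along these lines; the reporting endpoint URL is hypothetical, and the exact fields should be checked against the current spec.

```
Report-To: { "group": "network-errors", "max_age": 2592000,
             "endpoints": [{ "url": "https://www.example.com/reports" }] }
NEL: { "report_to": "network-errors", "max_age": 2592000 }
```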
In 2018, developers ran over a billion PageSpeed Insights audits to identify performance optimization opportunities for over 200 million unique URLs.
WHY IS THE LOAD ORDER IMPORTANT?
First, it’s important that all elements in the <head> section are pre-loaded before the visitor sees anything in the browser, and that all subsequent elements are then ordered to load in a logical way. Any JavaScript inside the <head> section can slow down a page’s rendering.
ASYNCHRONOUS LOADING: DEFER AND ASYNC ATTRIBUTES
Asynchronous loading of JavaScript is a form of non-blocking loading. It means that your website loads in a multi-streamed way, without scripts holding up the rest of the page.
When the browser encounters a <script> tag, it stops building the DOM and CSSOM models while the JavaScript is downloaded and executed. This is why most JavaScript code is placed after the main HTML code.
Important! The defer and async attributes are available only for external scripts (those with a src="" attribute). If you try to use them on inline scripts, defer and async will be ignored.
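For illustration (the file names are hypothetical), here is how the three loading modes look in markup:

```html
<!-- Blocking: parsing stops here while the script downloads and runs -->
<script src="app.js"></script>

<!-- async: downloads in parallel, runs as soon as it arrives (order not guaranteed) -->
<script async src="analytics.js"></script>

<!-- defer: downloads in parallel, runs after HTML parsing finishes, in document order -->
<script defer src="app.js"></script>

<!-- async/defer are ignored on inline scripts like this one -->
<script>console.log('inline scripts always run where they appear');</script>
```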
EXCLUDE UNUSED COMPONENTS OF .JS LIBRARIES.
Most developers use libraries like jQuery UI or jQuery Mobile as is. This means that the code includes all possible components of each library, when you may only need two or three.
A similar situation occurs with other JavaScript libraries as well. If you can control which components are included in your build of a library, definitely do it. Your website will load much faster, and your visitors will get a better experience.
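One concrete illustration (using lodash, which ships per-method builds) of pulling in only the component you need instead of the whole library; jQuery UI offers a similar custom-build download where you pick only the widgets you use:

```js
import debounce from 'lodash/debounce';   // just one utility, a few kilobytes
// import _ from 'lodash';                // versus the entire library

window.addEventListener('resize', debounce(() => {
  console.log('resized');
}, 250));
```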
Google does not discourage publishing articles in cases where they inform users, educate another site’s audience, or bring awareness to your cause or company.
However, what does violate Google’s guidelines on link schemes is when the main intent is to build links in a large-scale way back to the author’s site.
Below are factors that, when taken to an extreme, can indicate when an article is in violation of these guidelines:
Stuffing keyword-rich links to your site in your articles
Having the articles published across many different sites; alternatively, having a large number of articles on a few large, different sites
Using or hiring article writers that aren’t knowledgeable about the topics they’re writing on
Using the same or similar content across these articles; alternatively, duplicating the full content of articles found on your own site (in which case use of rel="canonical", in addition to rel="nofollow", is advised, as in the example below)
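For instance (URLs hypothetical), a site publishing a contributed article could qualify promotional links and point duplicated content back to the original:

```html
<!-- Qualify links of questionable intent so they don't pass endorsement -->
<a href="https://author-site.example/product" rel="nofollow">the author's product</a>

<!-- If the article duplicates a post from the author's own site, point to the original -->
<link rel="canonical" href="https://author-site.example/original-article">
```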
When Google detects that a website is publishing articles that contain spammy links, this may change Google’s perception of the quality of the site and could affect its ranking.
Sites accepting and publishing such articles should carefully vet them, asking questions like:
Do I know this person?
Does this person’s message fit with my site’s audience?
Does the article contain useful content?
If there are links of questionable intent in the article, has the author used rel=”nofollow” on them?
When websites create articles made for links, Google takes action on this behavior because it’s bad for the Web as a whole.
When link building comes first, the quality of the articles can suffer and create a bad experience for users.
And lastly, if a link is a form of endorsement, and you’re the one creating most of the endorsements for your own site, is this putting forth the best impression of your site?
My advice in relation to link building is to focus on improving your site’s content.
How can URL parameters, like session IDs or tracking IDs, cause duplicate content?
When user and/or tracking information is stored through URL parameters, duplicate content can arise because the same page is accessible through numerous URLs. In the example below, URL parameters create three URLs which access the same product page.
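The original diagram isn't reproduced here, so as a hypothetical stand-in, all three of the URLs below would serve the same product page:

```
https://www.example.com/products/green-dress
https://www.example.com/products/green-dress?sessionid=1234
https://www.example.com/products/green-dress?trackingid=ad-campaign&affiliateid=99
```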
Why should you care?
Having multiple URLs can dilute link popularity. For example, in the scenario above, rather than 50 links pointing to your intended display URL, the 50 links may be divided three ways among the three distinct URLs.
Search results may display user-unfriendly URLs (long URLs with tracking IDs, session IDs)
If you find that you have duplicate content as mentioned above, you can help search engines understand your site by:
1. Removing unnecessary URL parameters — keep the URL as clean as possible.
2. Submitting a Sitemap with the canonical (i.e. representative) version of each URL.
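One common way to signal the representative URL on the page itself, in addition to listing it in the Sitemap, is a rel="canonical" link element (the URL below is hypothetical):

```html
<!-- Placed in the <head> of every parameterized variant of the page -->
<link rel="canonical" href="https://www.example.com/products/green-dress">
```

The Sitemap would then list only that clean URL, not the parameterized variants.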
How can you design your site to reduce duplicate content?
When tracking visitor information, use 301 redirects to redirect URLs with parameters such as affiliateID, trackingID, etc. to the canonical version.
Use a cookie to set the affiliateID and trackingID values.
Please be aware that if your site uses cookies, your content (such as product pages) should remain accessible with cookies disabled.
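A hedged Node/Express sketch of the two suggestions above: capture the tracking values in cookies, then 301-redirect to the clean URL. The parameter names follow the examples in the text; everything else is illustrative, and the page itself must still render for visitors who reject cookies.

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  const { affiliateID, trackingID } = req.query;
  if (affiliateID || trackingID) {
    if (affiliateID) res.cookie('affiliateID', affiliateID);
    if (trackingID) res.cookie('trackingID', trackingID);
    return res.redirect(301, req.path);   // canonical URL without the parameters
  }
  next();                                 // content stays accessible with cookies disabled
});
```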