WordPress Core Web Vitals Performance Strategies FAQ

Google’s new web performance metrics are on everyone’s mind these days. Earlier this year, we addressed this key topic during a webinar, Optimizing for Core Web Vitals.

Since that session, we’ve been answering a ton of thoughtful questions about Core Web Vitals—read the FAQ below.

We also held a live AMA on Twitter Spaces with web performance experts from Google, XWP, and WordPress VIP.

Listen to the recording here:

Frequently Asked Core Web Vitals Questions

Will Core Web Vitals metrics change in the future?


Google says it plans to update Core Web Vitals annually. That frequency strikes a good balance between keeping up with the latest and greatest in user experience, and giving website owners and businesses enough room to plan. 

Our site has good scores in Field Data, but our Lab Data isn’t impressive. Should we worry?


Field data, or Real User Metrics (RUM) data, is the more insightful and valuable data for site owners. It tells you how actual users are experiencing your site.

Field data can be affected by where the majority of your users are located, what time of day they access your site, whether they visit on weekends, and much more. Site owners are frequently surprised by factors they hadn’t considered in their test environments. How you build your site should be determined by how it’s actually experienced by real users.

I know Time To First Byte (TTFB) is important. What is the acceptable threshold? How do I improve Time to First Byte?


TTFB is the first and most fundamental of all the metrics: everything downstream, including the Core Web Vitals, can only happen after the first byte arrives. If your TTFB is slow, you have to compensate elsewhere to keep other metrics, including Core Web Vitals, in good shape.

Improving TTFB is mostly done at the platform or host level. It’s best to serve content as close to your users as possible, and only hosts with global networks of data centers can do that.

Additionally, you’ll want a reliable WordPress application built with performance in mind. Leveraging page and object caching, for example, can make complex requests feel snappy.

Good performance worldwide is important because Core Web Vitals are measured for the actual users your site has. If you have worldwide visitors, you are measured worldwide, not from some central performance testing location. You can see this data in the Field Data section of Google PageSpeed Insights and in the Core Web Vitals report in Google Search Console.

Google’s latest recommendations for TTFB are:

Good: < 500ms
Needs improvement: 500 – 1500ms
Bad: > 1500ms
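
If you want to check your own TTFB in the field, here is a minimal sketch using the standard Navigation Timing API; run it in the browser console (or fold it into your RUM script), keeping in mind that responseStart is measured relative to the start of navigation:

```js
// Read TTFB for the current page load from the Navigation Timing API.
// responseStart marks when the first byte of the response arrived.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log('TTFB (ms):', Math.round(nav.responseStart));
}
```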

How does website scaling work? Can website resources be increased automatically when traffic surges, to keep CWV scores high?


Because Core Web Vitals are based on Real User Metrics (RUM), if your website can’t handle traffic surges, you’ll likely see a drop in your scoring. That’s why it’s important to provide a consistently good experience to all site visitors.

Look for a website platform that autoscales to perform well, even on your best day of traffic.

My Largest Contentful Paint (LCP) scores are really high. How do I improve them?


First, identify which element is the LCP on your site. It might be a hero image, slider, large piece of text, video, or animation. Surprisingly, the culprit might be something unexpected, such as a cookie notice popup or an ad. To improve LCP delivery, ensure you’re working on the right element.
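
To see which element the browser actually reports, here is a minimal sketch using the standard PerformanceObserver API; paste it into the DevTools console and reload the page:

```js
// Log LCP candidates as they are reported; the last entry emitted
// before user input is the element the LCP metric is measured against.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate at', Math.round(entry.startTime), 'ms:', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```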

If your LCP element is a hero image, here are some tips:

Make sure it isn’t being lazy-loaded by a third-party script or by built-in browser lazy loading (use eager loading instead).

Preload it so it’s discovered earlier by the browser.

Make sure the meta viewport declaration precedes it. Because of pixel density and max-width versus max-device-width media queries, the wrong CSS hero image might otherwise be identified as the first one on the page.

Keep it under 200KB—use the appropriate dimensions and next-gen formats.

Investigate other elements of the critical path. They share bandwidth, competing with the hero image. Often, it makes sense to see what you can take out to free up bandwidth for above-the-fold content. For example, a large CSS file can be split in two, or fonts can be taken out of the critical path.

For LCP measurement, can webmasters determine what is the “most important content” on a page, rather than allowing Google to determine it?


LCP is based on when the largest element is rendered on screen, so it’s almost entirely dependent on size. Recently, it started to take into account an estimation of importance by ignoring full-width images set as backgrounds of the body element.

We reviewed our site and it looks like fonts are contributing to slowness, but our brands need this aesthetic. How do you balance creative needs with page speed performance?


Performance is a mindset: you should treat this as a long-term goal, not just a one-off project where you balance small decisions against each other. You must understand the complete picture of what else might be contributing to poor page performance. If you can’t identify all the contributing factors, you can’t make data-driven decisions on where to sacrifice performance for creative needs. 

For fonts, there is no silver bullet—start with Google’s best practices for fonts. You can also reference many other loading strategies.

Here are a few techniques to start with:

Start with a simple question: Do we use it? Make sure you don’t load fonts that aren’t used anywhere. This also applies to images, videos, stylesheets, scripts—get rid of anything you’re not using.

Find out which fonts are part of the critical load path—load them as early as possible (use preloading) and delay the load of others.

Aim to host all fonts locally (e.g. from your own domain).

Use the font-display: swap descriptor so local system fonts are displayed before custom/third-party fonts finish loading and text stays visible in the meantime. We recommend a tool like font-style-matcher to find a system font that is close to your custom font in kerning, spacing, and other characteristics. You can also use fontfaceobserver.js to leverage promises while loading your fonts (see the sketch after this list).

For icon fonts it depends on the situation: use block as the font-display value, create a subset font that contains only the icons you use, or even inline them as SVGs.
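
As one illustration of the promise-based approach mentioned above, here is a minimal sketch assuming fontfaceobserver.js is already loaded; the family name “Brand Sans” and the fonts-loaded class are hypothetical placeholders:

```js
// Wait for the custom font to load, then switch away from the
// system fallback by toggling a class your CSS keys off.
const font = new FontFaceObserver('Brand Sans', { weight: 400 });

font.load().then(() => {
  document.documentElement.classList.add('fonts-loaded');
}).catch(() => {
  // Keep the system fallback if the font fails to load or times out.
  console.warn('Brand Sans failed to load; keeping the fallback font.');
});
```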

For Cumulative Layout Shift (CLS), the thresholds range from 0.10 to 0.25. What do these numbers represent?


Cumulative Layout Shift measures the largest burst of unexpected layout shifts (summed together) that occurs throughout the lifespan of the page. A score of 0.10 or less is considered good, a score above 0.25 is considered poor, and anything in between needs improvement.

Images, ads, fonts, embeds, and more can shift the layout of your page as it loads. Developers need to make sure they haven’t missed adding explicit widths and heights to areas that are not part of the editorial content.

The CLS metric has been updated. Now, when you fix the largest shift, the metric’s score becomes the value of the next-largest burst. This can be pretty close to the previous one, so you won’t see significant improvement in scores until you fix all of the large jumps.
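
To find which elements are causing those bursts, here is a minimal sketch using the Layout Instability API (supported in Chromium-based browsers) that logs shift scores and the nodes that moved:

```js
// Log unexpected layout shifts and the elements that moved.
// Shifts caused by recent user input are excluded, as in the metric itself.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      const nodes = (entry.sources || []).map((source) => source.node);
      console.log('Layout shift:', entry.value.toFixed(4), nodes);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```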

Why does my site get a poor result in Google Search Console (GSC) and a great result in Lighthouse?


A good Lighthouse score usually means you are most of the way toward your goals: from a laboratory standpoint, under the specific conditions of the test, your site measures up.

But, as we noted, there are several aspects that can’t be measured during a lab test, such as FID and the CLS that accumulates during real user interaction. Additionally, factors like network latency, device hardware, and loading complexity mean that your real users might experience a slower site than your optimal one.

There are APIs available to help you send debug data to analytics, which can then be viewed in a Web Vitals Report. This will help you find what problems are causing the disparity between GSC and Lighthouse. 
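
For example, here is a minimal sketch using the open-source web-vitals library (v3+ API); the /analytics endpoint is a placeholder for wherever you collect RUM data:

```js
import { onCLS, onLCP, onTTFB } from 'web-vitals';

// Send each metric to your own endpoint; sendBeacon survives
// page unloads better than a regular fetch().
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
  });
  navigator.sendBeacon('/analytics', body);
}

onCLS(sendToAnalytics);
onLCP(sendToAnalytics);
onTTFB(sendToAnalytics);
```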

Google uses Chrome User Experience Report (CrUX) data inside GSC and on PageSpeed Insights (PSI). You should aim for “good” scores at the 75th percentile of all your users.
 
Given these variations, your users might actually experience conditions slower (or faster) than the lab, and you must use real user metrics (RUM) data to ensure your efforts are making an impact after you deploy them.

Why is WordPress deprecating the old version of jQuery? Is this in line with Core Web Vitals transitioning out of IE11 too? How about youmightnotneedjquery.com?


jQuery was pivotal in bringing advanced features to the web. However, it is a large library that often has major performance impacts on LCP, TTI, and FID (depending on when it’s loaded and what else is on the page). There are ongoing efforts to reduce its use across the web and the WordPress ecosystem, such as ensuring new themes don’t use it as a dependency and removing it from frequently used WordPress plugins like Jetpack.

Versioning isn’t much of a concern in this context, however. Performance-wise, the main issue is having to load jQuery in the critical path at all, not which particular version it happens to be.
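
In the spirit of youmightnotneedjquery.com, many common jQuery calls map directly to built-in DOM APIs. A small sketch, where the selectors and handler are hypothetical examples:

```js
// jQuery: $('.notice').addClass('is-visible');
document.querySelectorAll('.notice')
  .forEach((el) => el.classList.add('is-visible'));

// jQuery: $('#signup').on('click', handleClick);
const handleClick = () => console.log('clicked');
document.querySelector('#signup')?.addEventListener('click', handleClick);
```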

Our CWVs all look good. But after using performance plugins, our First Input Delay (FID) is still high. How do we improve that?


Fix what you have control over. That might include evaluating third-party plugins for value gained versus penalty incurred. 

Performance profiling in Chrome DevTools is your friend. Minimize the impact of script-intensive third parties by delaying their loading to periods of low CPU utilization. 

If the CPU isn’t overwhelmed trying to do 15 different tasks at once, the impact of plugins on website interactivity is minimized.
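
As a rough illustration of delaying a heavy third party, here is a minimal sketch; /third-party-widget.js is a hypothetical script URL, and the timings are assumptions to tune for your own site:

```js
// Inject a script-heavy third party only when the main thread is idle,
// with a window-load fallback for browsers without requestIdleCallback.
function loadWidget() {
  const script = document.createElement('script');
  script.src = '/third-party-widget.js';
  script.defer = true;
  document.body.appendChild(script);
}

if ('requestIdleCallback' in window) {
  requestIdleCallback(loadWidget, { timeout: 5000 });
} else {
  window.addEventListener('load', () => setTimeout(loadWidget, 2000));
}
```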

Google Tag Manager (GTM) and other third-party scripts hinder our Lighthouse reports. How can we improve our score and keep GTM?


Analytics scripts don’t usually impact website page speed significantly, but you can still benefit from optimizing them. Basically, load them when they’re needed, not earlier.

Follow this pattern:
1. Place the GTM script at the end of your HTML, not at the beginning.

2. Download and parse it after the HTML is processed (defer the script).

3. Execute it within 1-3 seconds after the Document Object Model (DOM) is loaded.

This way, GTM or any other analytics script won’t be a blocker, and you’ll still be able to collect needed data.
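
One way to apply that pattern is sketched below; GTM-XXXXXXX is a placeholder container ID, and the two-second delay is an assumption you should tune to your own site:

```js
// Initialize the dataLayer right away so events pushed before GTM
// loads are still queued, then inject gtm.js shortly after window load.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });

function loadGTM(containerId) {
  const script = document.createElement('script');
  script.async = true;
  script.src = 'https://www.googletagmanager.com/gtm.js?id=' + containerId;
  document.head.appendChild(script);
}

window.addEventListener('load', () => {
  setTimeout(() => loadGTM('GTM-XXXXXXX'), 2000);
});
```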

If you’re still getting a significant performance hit, dig beyond GTM—maybe it’s not the tag manager’s fault, but rather the way it’s used. There are good strategies in this talk: Deep dive into third-party performance. For example, you can take critical scripts out of GTM and defer its loading to late in the waterfall.

What recommendations do you have for websites that monetize via ads? Ad Manager, and ads in general (especially programmatic), can greatly impact TTI and TBT. Should we defer ad loading to post user interaction? How do we improve performance without losing opportunities?


This might seem like a choice as tough as balancing site speed against revenue, but it shouldn’t be.

The aim here is to find creative ways to make ads a part of the user experience, instead of an interruption. Then you don’t have to choose between revenue and performance.

Here are suggestions:

The expected content should always come first.

Ensure ad sections (and the way they appear) look organic.

Ensure ads don’t suddenly appear in the middle of the page, causing layout shifts and interrupting user interaction. Preallocate space for them early, for example by setting a minimum height on their wrappers with CSS or JavaScript.

Don’t load too many things at once. The main thread is overworked and underpaid, and can do work faster if it’s served in smaller chunks. We recommend two approaches: creative use of the requestIdleCallback method built into browsers, and waiting for the DOM to finish loading before loading more items (see the sketch after this list).

Loading ads only after a user starts interacting with the page doesn’t make much sense: it feels like visitors are being punished for engaging, which isn’t a great user experience. Loading ads after the content is served respects the reality of the browser’s main thread being single-threaded.
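
Here is a rough sketch of the “smaller chunks” idea: wait for the page to load, then initialize one batch of ad slots per idle period. The data-ad-slot attribute and initAdSlot() are hypothetical stand-ins for your ad library, and Safari needs a fallback or polyfill for requestIdleCallback:

```js
// Initialize ad slots in small chunks during idle time after load,
// so a single long task doesn't block the single-threaded main thread.
function initAdSlot(slotElement) {
  // Placeholder: call your ad library for this slot here.
  console.log('Initializing ad slot', slotElement.id);
}

window.addEventListener('load', () => {
  const slots = Array.from(document.querySelectorAll('[data-ad-slot]'));

  function initNext(deadline) {
    while (slots.length > 0 && deadline.timeRemaining() > 0) {
      initAdSlot(slots.shift());
    }
    if (slots.length > 0) requestIdleCallback(initNext);
  }

  requestIdleCallback(initNext);
});
```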

Rather than finding workarounds, implementing a solid strategy that always takes the user into account will keep you afloat long term, even as responsiveness metrics evolve.

We want to measure changes as we release new features on our site. Is there a path or mechanism to include these metrics as part of CI/CD systems like GitHub Actions?


Core Web Vitals aren’t well suited to lab testing, e.g., in a CI/CD system. It’s hard for a lab setting to reproduce the conditions that real users face, and these metrics are highly situational. 

That said, you have to use what you can, and in this case that’s Lighthouse. For integrating with GitHub Actions, there are several popular mechanisms already built and ready to use. Find them in the GitHub Actions marketplace.
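
If you want a starting point beyond the marketplace actions, here is a minimal Node sketch (ES modules), assuming the lighthouse and chrome-launcher npm packages; the URL and the score budget of 90 are placeholders:

```js
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Run a performance-only Lighthouse audit against a test URL and
// fail the CI job if the score drops below the chosen budget.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  onlyCategories: ['performance'],
  output: 'json',
});
await chrome.kill();

const score = Math.round(result.lhr.categories.performance.score * 100);
console.log('Lighthouse performance score:', score);
if (score < 90) process.exit(1);
```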

We have writers who aren’t experts in web performance. Can we have a checklist when uploading images that says, “Are you sure you want to upload a 2 MB PNG instead of a 100 kb JPG here?”


Having a checklist is one approach. Another approach is to have this automated. Some WordPress-specific platforms build in Adaptive Media features that automatically optimize media for loading speed, removing this QA task from the publishing workflow.

Learn more about CWV in our webinar, Optimizing for Core Web Vitals.
