Core Web Vitals explained beyond the basics. Learn the hidden performance traps, our FEEL Framework, and actionable fixes that actually move rankings.
The standard Core Web Vitals guide follows a predictable pattern: define LCP, INP, and CLS, tell you to compress images, defer JavaScript, and add a CDN, then call it done. That advice isn't wrong — it's just dangerously incomplete. First, most guides treat Core Web Vitals as a developer problem when it's actually a systems problem.
Your marketing team adding a new chat widget, your designer using a large hero video, your e-commerce manager installing a new review app — all of these can silently tank your scores between audits. Second, guides rarely address the mobile-versus-desktop split. Google's field data is heavily weighted toward mobile users, where CPU constraints, slower network conditions, and memory limitations mean the same page behaves very differently.
A site can look performant on desktop and be a poor experience on mobile — and the mobile score is what matters most for ranking. Third, nobody talks about performance regression. You fix your scores, celebrate, and six months later they've quietly degraded because new content, plugins, and third-party tools accumulated in the background.
Without a monitoring and governance system, every fix is temporary.
Core Web Vitals are a set of specific, measurable signals that Google uses to evaluate the quality of a user's page experience. As of 2024, there are three: Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). Understanding what each one actually measures — not just its name — is the foundation of fixing them correctly.
Largest Contentful Paint (LCP) measures how long it takes for the largest visible element on the page to finish rendering within the viewport. This is typically a hero image, a large text block, or a video thumbnail. The 'good' threshold is under 2.5 seconds. What's commonly misunderstood is that LCP is not a measurement of when your page starts loading — it's a measurement of when the dominant visual content is usable. A page can feel instantly responsive and still have a poor LCP if a large above-the-fold image loads late.
Interaction to Next Paint (INP) replaced First Input Delay (FID) as a Core Web Vital in March 2024. Where FID only measured the delay before the browser responded to the first user interaction, INP measures the full duration of all interactions throughout a page visit — from the moment input is received to when the next visual update is rendered. The threshold for 'good' is under 200 milliseconds. INP is a significantly stricter and more diagnostic metric because it captures interaction sluggishness throughout the entire session, not just on first click.
Cumulative Layout Shift (CLS) measures visual stability — how much the page's visible content unexpectedly moves during loading. If a button shifts position just as you're about to click it, or an image pushes down a paragraph you're reading, those are layout shifts. The 'good' threshold is a CLS score under 0.1. Despite the name, CLS is no longer a simple lifetime sum: unexpected shifts are grouped into short bursts, and the worst burst observed during the page's lifecycle determines the score.
Each metric maps to a distinct user frustration: LCP captures 'Is it loading?', INP captures 'Is it responsive?', and CLS captures 'Is it stable?'. They're not interchangeable, and fixing one has no guaranteed effect on the others. This is why a holistic diagnostic process — not a single fix — is required.
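To make the thresholds concrete, here is a small classifier using the official 'good' and 'poor' boundaries (2.5s/4s for LCP, 200ms/500ms for INP, 0.1/0.25 for CLS). The `rateVital` name is ours, and the commented usage assumes Google's open-source web-vitals library:

```javascript
// Classify a metric value against the Core Web Vitals thresholds.
// 'good' and 'poor' boundaries per metric; values between them
// fall into the 'needs-improvement' band.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200, poor: 500 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless score
};

function rateVital(metric, value) {
  const t = THRESHOLDS[metric];
  if (!t) throw new Error(`Unknown metric: ${metric}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

// In the browser, the same classification can be fed real field values
// via Google's open-source web-vitals library:
//   import { onLCP, onINP, onCLS } from 'web-vitals';
//   onLCP(({ value }) => console.log('LCP', rateVital('LCP', value)));
```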
In Google Search Console's Core Web Vitals report, click into 'Poor URLs' grouped by issue type. Each issue group tells you the specific metric and element causing the failure — this is far more actionable than a raw PageSpeed score.
Treating a green Lighthouse score as confirmation that your Core Web Vitals are passing. Lighthouse runs in a controlled lab environment. CrUX field data — which Google uses for ranking — reflects real device and network conditions. Always validate against Search Console field data, not just Lighthouse.
This is the most important distinction in all of Core Web Vitals, and almost every introductory guide skips past it with a single sentence. Let's spend real time here because this is where the gap between knowing about Core Web Vitals and actually improving them lives.
Lab data is what tools like Google Lighthouse, PageSpeed Insights (in its simulated mode), and WebPageTest generate. These tools load your page in a controlled environment — a specific device emulation, a specific network throttling profile, a specific CPU slowdown multiplier. Lab data is consistent, reproducible, and useful for debugging. But it is not what Google uses to evaluate your page experience for ranking.
Field data, also called real-user monitoring (RUM) data, is collected from actual Chrome browser sessions by real users on real devices with real network connections. This data is aggregated into the Chrome User Experience Report (CrUX) and is what populates Google Search Console's Core Web Vitals report. When Google says your page 'passes' or 'needs improvement,' it is referencing field data — not your Lighthouse run.
This creates a meaningful gap in practice. Consider: your development machine runs a fast processor and connects on high-speed broadband. Your Lighthouse score reflects that environment. But a significant portion of your real users might be on mid-range Android devices, on mobile networks, with multiple browser tabs open. Their experience of your page is dramatically different — and it's their sessions that populate your CrUX data.
We've audited pages that scored in the 90s on PageSpeed Insights but showed 'Poor' LCP in Search Console for months. The culprit was consistently heavy JavaScript that lab tools didn't fully penalise but real devices — with constrained CPU — couldn't parse quickly enough to render the LCP element on time.
What to do instead: Always start your Core Web Vitals investigation in Google Search Console. Look at the Core Web Vitals report, filter by device type (mobile and desktop separately), and identify which URLs are grouped under 'Poor' or 'Needs Improvement.' Only once you know your field data status should you move to lab tools like PageSpeed Insights or Lighthouse to diagnose the root cause. Field data tells you *what* is failing. Lab tools tell you *why*.
If a URL has insufficient traffic for individual CrUX data, Google may fall back to the origin-level dataset. You can query CrUX data directly via the CrUX API or the CrUX History API to see trends over time — useful for proving the impact of performance improvements without waiting for Search Console to update.
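A minimal sketch of querying the CrUX API, assuming a Node 18+ or browser environment with `fetch`. `YOUR_API_KEY` is a placeholder for a key from the Google Cloud console, and the helper name `buildCruxRequest` is ours:

```javascript
// Build a request for the CrUX API's queryRecord endpoint.
const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

function buildCruxRequest(pageUrl, formFactor = 'PHONE') {
  return {
    endpoint: `${CRUX_ENDPOINT}?key=YOUR_API_KEY`, // placeholder key
    body: {
      url: pageUrl,  // use `origin` instead for the origin-level fallback
      formFactor,    // 'PHONE', 'DESKTOP', or 'TABLET'
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    },
  };
}

// Usage:
//   const { endpoint, body } = buildCruxRequest('https://example.com/');
//   const res = await fetch(endpoint, { method: 'POST', body: JSON.stringify(body) });
//   const data = await res.json(); // histogram bins and percentiles per metric
```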
Celebrating a Lighthouse score improvement without verifying it has flowed through to Search Console field data. Field data in Search Console has a 28-day rolling window and updates with a delay — expect two to four weeks before a fix registers as an improvement in the report.
When operators hear 'improve your LCP,' the first instinct is almost always 'compress the hero image.' That's not wrong — but in our experience diagnosing LCP failures across many different site types, image file size is rarely the primary culprit. By leading with image compression, most guides send you to fix a symptom while the root cause continues to delay your LCP element.
LCP has four sub-parts that happen sequentially: Time to First Byte (TTFB), resource load delay, resource load time, and element render delay. Image compression only affects resource load time — one of the four. If your TTFB is slow (server takes too long to respond), or if render-blocking resources are delaying when the browser even starts requesting the LCP image, compressing that image will have a limited effect on your actual score.
The real LCP culprits, in order of frequency:
1. Slow TTFB. If your server responds in over 600ms, you're already behind before any resource loads. This is often caused by unoptimised hosting, lack of server-side caching, or a database query bottleneck. No amount of image compression recovers this time.
2. Render-blocking resources. JavaScript and CSS in the document head that must be fully parsed before the browser can render anything — including your LCP element. Identifying and deferring non-critical scripts is typically the highest-leverage LCP fix available.
3. LCP element not being preloaded. If your LCP element is a background image set in CSS, or an image loaded lazily, the browser doesn't discover it until late in the parsing process. Adding a preload hint (`<link rel='preload'>`) for the LCP resource tells the browser to fetch it at the highest priority, often reducing LCP by hundreds of milliseconds.
4. Image format and compression (the thing everyone talks about first). WebP and AVIF formats deliver significantly smaller file sizes at equivalent visual quality compared to JPEG or PNG. This matters, but only once the three issues above are resolved.
5. Missing CDN or CDN misconfiguration. Serving assets from a geographically distant origin server adds latency that a well-configured CDN eliminates by serving content from edge nodes close to the user.
Fix in this order. Diagnose TTFB, eliminate render-blocking resources, preload the LCP element, then optimise the image format. Doing it in reverse is the single most common reason LCP improvements underperform expectations.
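A sketch of the preload step in practice, assuming the LCP element is an `<img>` hero. The `/images/hero.webp` path and the dimensions are placeholders:

```html
<head>
  <!-- Tell the browser to fetch the LCP image immediately, at high priority -->
  <link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">
</head>
<body>
  <!-- No loading="lazy" here: the hero is above the fold -->
  <img src="/images/hero.webp" width="1200" height="600"
       alt="Hero" fetchpriority="high">
</body>
```

If the LCP image is a CSS background, the same preload hint applies; the key is that the browser discovers the resource from the HTML head rather than late in stylesheet parsing.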
Use the 'LCP element' section in PageSpeed Insights to identify exactly which HTML element is being measured as your LCP. Then trace its loading waterfall in the network panel to find where the delay is actually occurring — TTFB, discovery, or download.
Applying `loading='lazy'` to hero images to 'optimise performance.' Lazy loading defers image loading until the element is near the viewport, which directly delays LCP for above-the-fold images. Only use lazy loading on images below the fold.
INP is the metric that catches most experienced site owners off guard, because it requires abandoning the mindset that served you with FID (First Input Delay). FID was relatively forgiving — it only measured the delay before the browser acknowledged the first click or keypress. INP measures the full response latency for every interaction throughout a session, and it includes the time to visually update the page in response. That's a much harder bar to clear.
Why INP failures are hard to spot: Unlike LCP, which you can measure against a fixed element, INP failures are interaction-specific. A click on a navigation menu might respond instantly. A click on an 'Add to Cart' button that triggers JavaScript to recalculate prices, check inventory, and update a cart icon might take 800ms — a catastrophic INP event. Standard audits miss this because they don't simulate real interaction sequences.
The JavaScript Execution Budget Principle: We think about INP diagnostics using what we call the JavaScript Execution Budget Principle. Every interaction on a page 'spends' from a limited pool of main-thread time. Long tasks — JavaScript functions that run for more than 50ms without yielding — monopolise the main thread and prevent the browser from responding to user input. The diagnostic goal is to identify which JavaScript is creating long tasks during interactions and either break those tasks into smaller chunks, defer non-critical work, or eliminate it entirely.
Practical INP diagnostic steps:

- Open Chrome DevTools and run a Performance profile while simulating real user interactions (clicks, form inputs, scroll-triggered events).
- Look for long tasks (shown as red blocks) that coincide with user interactions.
- Identify which scripts are responsible — common culprits include analytics event handlers, third-party tag manager payloads, and synchronous JavaScript frameworks.
- Use the `scheduler.yield()` API or `setTimeout` with a 0ms delay to break long tasks and yield back to the browser between chunks.
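The task-splitting technique can be sketched as follows. `yieldToMain` and `processInChunks` are helper names we've chosen for illustration; the pattern prefers `scheduler.yield()` where the browser supports it and falls back to a zero-delay `setTimeout`:

```javascript
// Yield control back to the main thread so the browser can paint
// and handle pending input between chunks of work.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process a work queue without ever monopolising the main thread:
// handle a small chunk, yield, repeat.
async function processInChunks(items, handleItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    await yieldToMain(); // browser gets a chance to respond to input here
  }
  return results;
}
```

The same pattern applies inside event handlers: do the minimum work needed for the visual update, then defer the rest past the next paint.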
The hidden INP killers most sites ignore:

- Tag manager triggers that fire synchronous scripts on every click event
- React or Vue component re-renders that update large DOM trees on interaction
- Event listeners added to `document` or `window` that run on every interaction site-wide
- Third-party chat widgets and personalisation scripts that compete for main-thread time
INP is fundamentally a JavaScript architecture problem, not a page weight problem. Fixing it requires collaboration between SEO, development, and anyone managing third-party scripts.
Use the Chrome DevTools Performance Insights panel (separate from the standard Performance tab) to get interaction-specific breakdowns with INP-tagged events. It shows exactly which interaction caused a long task and which scripts were responsible — significantly faster than reading raw flame charts.
Assuming INP is improved by the same fixes as LCP. Minifying CSS and compressing images have no effect on INP. INP is almost exclusively a JavaScript execution problem that requires profiling, script auditing, and code architecture changes.
Cumulative Layout Shift is the most misattributed Core Web Vital. Every guide tells you to add width and height attributes to your images. That's correct and necessary — but in our experience auditing production sites, unspecified image dimensions are only the primary source of CLS failures on older or content-heavy sites. On modern sites, the real sources of CLS are almost always third-party scripts, dynamically injected content, and web fonts.
How CLS is calculated: Each layout shift event has a score based on two factors — the fraction of the viewport that moved (the impact fraction) and the distance the content moved (the distance fraction). Shifts are grouped into session windows (bursts of shifts less than one second apart, with each window capped at five seconds), and the reported CLS is the largest window observed during the page's lifecycle. 'Unexpected' is the key word: shifts that occur within 500ms of a user interaction (like a click) do not count toward CLS.
The third-party CLS trap: Analytics platforms, live chat widgets, cookie consent banners, advertising networks, and personalisation tools all inject content into your page after initial load. If they insert above or between existing content, they physically push content down, causing layout shifts that accumulate in your CLS score. You may have perfect CLS from your own code and a failing CLS score caused entirely by a script you've delegated to a vendor.
Diagnosing CLS origins: Use the Layout Instability API via a simple performance observer snippet to log CLS events and their source elements in the console. Alternatively, in Chrome DevTools, the Performance tab shows layout shift events as purple indicators on the timeline — clicking them reveals which elements shifted and by how much.
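Such an observer snippet might look like the following. The `computeCls` helper is a name we've chosen; it reproduces the session-window calculation (shifts under one second apart, windows capped at five seconds) on recorded entries, and the observer itself is guarded so the snippet is inert outside a browser:

```javascript
// Browser-only: log layout shifts and the elements responsible.
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) { // shifts near an interaction don't count
        console.log('shift', entry.value, entry.sources?.map(s => s.node));
      }
    }
  }).observe({ type: 'layout-shift', buffered: true });
}

// Given recorded entries ({ value, startTime, hadRecentInput }) sorted by
// time, return the worst session window: shifts < 1s apart, window <= 5s.
function computeCls(entries) {
  let worst = 0, windowValue = 0, windowStart = 0, prevTime = -Infinity;
  for (const e of entries) {
    if (e.hadRecentInput) continue;
    if (e.startTime - prevTime > 1000 || e.startTime - windowStart > 5000) {
      windowValue = 0;               // start a new session window
      windowStart = e.startTime;
    }
    windowValue += e.value;
    prevTime = e.startTime;
    worst = Math.max(worst, windowValue);
  }
  return worst;
}
```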
Fixing CLS systematically:

- Images and media: Always declare explicit `width` and `height` attributes or use CSS `aspect-ratio` to reserve space before load.
- Web fonts: Use `font-display: optional` or `font-display: swap` with appropriate fallback font sizing to minimise layout shift caused by font swapping. The `size-adjust` CSS descriptor is an underused tool that adjusts fallback font metrics to match the web font dimensions.
- Dynamically injected content: Reserve space for ads, banners, chat widgets, and cookie notices before they load. Use CSS skeleton placeholders or fixed-height containers.
- Animations: Ensure CSS animations only affect `transform` and `opacity` properties — animating `top`, `left`, `height`, or `width` triggers layout recalculation and can cause CLS.
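A sketch of the media and font fixes in CSS. The family names, file path, and override percentages are illustrative, not measured values; real `size-adjust` figures should be derived from the actual font metrics:

```css
/* Reserve the hero's box before the image arrives (16:9 is illustrative) */
.hero-image {
  width: 100%;
  aspect-ratio: 16 / 9;
}

/* Web font with a metrics-matched local fallback, so the swap doesn't
   reflow the text. Values below are placeholders. */
@font-face {
  font-family: 'BrandFont';
  src: url('/fonts/brand.woff2') format('woff2');
  font-display: swap;
}
@font-face {
  font-family: 'BrandFont-fallback';
  src: local('Arial');
  size-adjust: 105%;    /* scale fallback glyph widths toward the web font */
  ascent-override: 90%; /* align line boxes so line heights match */
}
body {
  font-family: 'BrandFont', 'BrandFont-fallback', sans-serif;
}
```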
The LOCK step: Once CLS is remediated, lock it. Add CLS monitoring to your CI/CD pipeline using tools like Lighthouse CI so that new code deploys that introduce layout shifts are caught before they reach production.
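A minimal `lighthouserc.json` sketch for that CI gate, assuming Lighthouse CI's assertion syntax (audit ids with `maxNumericValue` limits). The URL and thresholds are placeholders:

```json
{
  "ci": {
    "collect": {
      "url": ["https://example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }]
      }
    }
  }
}
```

With this in place, a deploy that introduces a layout shift fails the build rather than quietly degrading field data over the following weeks.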
After fixing CLS on key pages, test on a real mobile device with network throttling enabled. Many CLS issues are timing-dependent: on fast connections, injected content loads so quickly it shifts the layout before the user can see it — but on slower connections, users see the shift clearly, and it registers in field data.
Only adding image `width` and `height` attributes and declaring CLS fixed. On sites using tag managers, advertising, or any dynamically loaded UI elements, these are almost never the primary CLS source. Always profile a real session to find every contributing element before concluding the fix is complete.
One-time fixes are the enemy of sustainable performance. Every guide gives you a list of things to do once. Almost none of them give you a system for keeping performance healthy as your site evolves. We developed the FEEL Framework as a repeatable process that any team — technical or not — can run on a regular cadence to find, address, and prevent Core Web Vitals failures.
FEEL stands for: Find, Evaluate, Eliminate, Lock.
Find: Identify which URLs are failing and which metric is responsible. Start in Google Search Console's Core Web Vitals report. Filter by mobile and desktop separately. Export the list of 'Poor' and 'Needs Improvement' URLs grouped by issue type. Prioritise by traffic volume — start with high-traffic pages where improvements will have the greatest impact on user experience and organic visibility.
Evaluate: For each failing URL, run a field-data-aware diagnosis. Open the URL in PageSpeed Insights and review both field data (CrUX panel at the top) and lab data (Lighthouse results below). Use the Diagnostics section to identify the specific resources and scripts contributing to the failure. For INP, run a DevTools Performance profile with simulated interactions. For CLS, use the Layout Instability API to log shift sources.
Eliminate: Implement the fix with surgical precision. Don't make broad, sweeping changes that affect the whole site without validating impact. Fix the specific issue identified in the Evaluate step. Common high-impact fixes: preload the LCP resource, defer render-blocking scripts, reserve space for injected content, break long JavaScript tasks. After implementing, verify improvement in PageSpeed Insights lab data immediately. Then wait for field data to reflect the change in Search Console (typically two to four weeks).
Lock: Prevent regression. This is the step almost every guide omits entirely. Once a page passes, implement controls to keep it passing. Add Lighthouse CI to your deployment pipeline to flag performance regressions before they go live. Set up a monitoring schedule — monthly at minimum — to recheck all previously fixed URLs. Create a 'performance impact review' process for any new feature, plugin, or third-party tool before it's approved for production. Assign ownership of Core Web Vitals scores to a specific person or team.
The FEEL Framework turns Core Web Vitals from a project into a practice. The sites that maintain strong performance over time aren't the ones that did the most fixes — they're the ones with the best governance.
Create a simple Core Web Vitals status dashboard using Google Data Studio (Looker Studio) connected to the CrUX API. A visual dashboard that updates automatically makes performance health visible to the whole team — not just developers — and drives accountability across departments.
Treating Core Web Vitals as a one-time project with a clear end date. Performance is a living metric that degrades as new content, scripts, and design changes accumulate. Without the Lock step, most improvements erode within six to twelve months.
Here's a framing shift that changes how teams approach performance decisions: treat your page's performance capacity like a financial budget. Every element, script, font, and third-party tool added to a page spends from that budget. When the budget runs out, Core Web Vitals fail. When the budget is respected, they pass — and stay passing.
This framework — the Performance Budget Principle — moves performance from a reactive technical problem to a proactive product and design discipline. Instead of asking 'why did our score drop?' after the fact, you ask 'can we afford to add this?' before anything is deployed.
Setting a performance budget: Define your target thresholds based on your Core Web Vitals goals. A practical starting point:

- Total page weight: set a maximum for the main document and all critical-path resources
- JavaScript bundle size: total JS parsed on load (a common culprit in modern frameworks)
- Third-party request count: limit the number of external scripts loaded on any page
- LCP target: set a maximum acceptable LCP for high-traffic page templates
How to apply it operationally:

1. New feature requests: Before approving a new chat widget, analytics integration, or marketing tool, require a performance impact assessment. Estimate the script size, execution cost, and CLS risk before implementation.
2. Design reviews: During design sign-off, flag any above-the-fold elements that don't have explicit dimensions declared, any video auto-play features, or any font usage that hasn't been optimised.
3. Development pull requests: Use Lighthouse CI in your CI/CD pipeline to block merges that exceed defined performance budgets. This automates the enforcement of the budget without requiring manual audits.
4. Content publishing: For CMS-driven sites, educate content teams on image optimisation standards and enforce them via automated compression on upload.
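The pull-request gate can encode exactly these limits using Lighthouse's `budgets.json` format, where resource sizes are in kilobytes and timings in milliseconds. The numbers below are illustrative starting points, not recommendations, and timing budgets such as `largest-contentful-paint` assume a reasonably recent Lighthouse version:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1600 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 }
    ]
  }
]
```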
Why this works: Most performance regressions are not caused by one large decision — they're caused by many small decisions accumulating over time. A chat widget here, a new analytics tag there, an unoptimised video thumbnail added during a campaign. Each one seems harmless in isolation. The Performance Budget Principle makes the cumulative cost visible at the point of decision, not after the damage is done.
This is especially powerful for growing teams where multiple people make decisions that affect site performance. Without a shared budget framework, everyone optimises locally and the site degrades globally.
When pitching the Performance Budget Principle to non-technical stakeholders, frame it in revenue terms. A measurable improvement in LCP and CLS reduces bounce rates on high-intent pages — which has a direct relationship to conversion volume. Performance is not a technical luxury; it's a commercial lever.
Setting a performance budget without enforcement mechanisms. A budget that lives in a Google Doc and requires manual checking will be ignored under deadline pressure. The budget must be automated into the development workflow — Lighthouse CI, size-limit tools, or similar — to be effective.
The most honest thing we can say about Core Web Vitals as a ranking factor is this: they're a tiebreaker, not a primary driver — but the cost of failing them is higher than most people acknowledge, and the benefit of passing them compounds over time.
Google has been explicit that Core Web Vitals are a ranking signal within the page experience system. Pages that pass all three metrics at the 'good' threshold are eligible for a boost in rankings when content quality and relevance are otherwise equal. Pages that fail are penalised relative to passing competitors for the same queries.
Where Core Web Vitals have the biggest ranking impact:

- Competitive SERPs where content quality is roughly equivalent across the top results — here, page experience becomes a meaningful differentiator
- Mobile search results, where performance gaps between passing and failing pages are largest
- High-intent commercial and transactional queries, where poor page experience has both a ranking cost and a conversion cost
Where the impact is smaller:

- Queries where one page has dramatically superior content authority — content relevance still outweighs page experience signals
- Low-competition informational queries where few pages compete for the same rankings
The compounding benefit: Even where Core Web Vitals don't directly move your ranking, improving them affects metrics that influence ranking indirectly — specifically engagement signals. Pages that load quickly, respond to interactions reliably, and don't jump around as they load retain users longer, generate lower bounce rates in the session data Google collects, and earn more return visits. These engagement signals contribute to Google's understanding of page quality over time.
The EEAT connection: Core Web Vitals are part of a broader page experience evaluation that includes HTTPS, mobile-friendliness, and absence of intrusive interstitials. Taken together, these signals contribute to Google's assessment of whether a site is trustworthy and user-centric — which aligns directly with EEAT (Experience, Expertise, Authoritativeness, Trustworthiness) principles. A site that invests in genuine user experience signals across all these dimensions is building authority in a way that synthetic link-building cannot replicate.
After improving Core Web Vitals on key commercial pages, monitor organic click-through rate from Search Console alongside rankings for those pages. CTR improvements from better SERP positioning — and reduced pogo-sticking from better page experience — often show up in the data before explicit ranking changes do.
Obsessing over Core Web Vitals scores after you've already achieved 'good' thresholds across all three metrics. Once you're passing, the marginal SEO return from pushing scores higher is minimal. Time is better invested in content depth, topical authority, and link acquisition — the primary ranking drivers.
Run the FEEL Framework's Find phase. Open Google Search Console Core Web Vitals report, filter by mobile and desktop separately, export all Poor and Needs Improvement URLs grouped by metric and issue type. Rank by estimated traffic impact.
Expected Outcome
A prioritised list of failing URLs with the specific metric responsible for each failure, ordered by the pages where improvement will have the greatest user and ranking impact.
Evaluate your top five failing URLs using PageSpeed Insights (both field and lab data panels), Chrome DevTools Performance tab, and the Layout Instability API for CLS-related failures. Document the root cause for each.
Expected Outcome
A diagnosis document for each priority URL that identifies whether the failure is TTFB, render-blocking resources, JavaScript long tasks, layout-shifting injected content, or another specific cause.
Eliminate the identified issues on priority URLs. For LCP: address TTFB, remove render-blocking resources, add preload hints, then optimise image format. For INP: break long tasks, audit tag manager triggers, defer non-critical scripts. For CLS: reserve space for dynamic content, fix font swap issues, constrain third-party script injection.
Expected Outcome
Measurable improvement in lab data scores for priority URLs, validated in PageSpeed Insights. Begin the two-to-four week window for field data to reflect changes in Search Console.
Define your Performance Budget. Set thresholds for JavaScript bundle size, total page weight, third-party request count, and LCP targets by page template. Document these budgets and share with the development, design, and marketing teams.
Expected Outcome
A shared performance budget document that gives every team member a clear framework for evaluating the performance cost of new features, tools, and content before they're deployed.
Implement the Lock step. Set up Lighthouse CI in your deployment pipeline if technically feasible, or establish a manual monthly performance review cadence as a minimum. Create a pre-approval checklist for new third-party tools.
Expected Outcome
A governance system that prevents performance regression — either automated enforcement via CI or a scheduled manual review process with a named owner.
Set up a Core Web Vitals monitoring dashboard using Looker Studio connected to CrUX API data or Search Console data. Schedule a monthly review date on your team calendar. Document baseline scores for all priority URLs to measure future progress against.
Expected Outcome
A visible, shared performance monitoring system that makes Core Web Vitals health transparent to the whole team and creates accountability for maintaining the improvements made this month.