Most speed guides chase scores, not revenue. Learn the VITAL STACK framework to fix Core Web Vitals in a way that actually moves business metrics.
The most common advice you will find on this topic goes something like this: compress your images, enable lazy loading, use a CDN, minify your CSS. All of that is technically correct. None of it is sufficient.
The problem is that generic speed guides treat all pages equally, all metrics equally, and all websites equally. A SaaS product page with a conversion goal, a blog post targeting informational traffic, and an e-commerce category page have completely different performance profiles and completely different failure modes. Applying the same checklist to all three is like prescribing the same medication to three patients with different conditions.
What most guides also get wrong is the order of operations. They list fixes alphabetically, or by ease of implementation, rather than by the size of their impact on the specific Core Web Vital that is holding your site back. Fixing CLS issues when your LCP is critically failing is busywork dressed up as optimisation. The VITAL STACK framework we outline in this guide solves this by forcing you to sequence your fixes based on measured impact — not assumed importance.
Core Web Vitals — LCP, INP, and CLS — are not three independent scores you fix in isolation. They interact with each other in ways that most optimisation guides ignore entirely. When you understand how they connect, you stop wasting time on fixes that cancel each other out.
Largest Contentful Paint (LCP) measures how quickly the largest visible element on screen loads. For most business websites, this is a hero image, a heading, or a large block of text. A slow LCP almost always signals one of four problems: slow server response time (TTFB), render-blocking resources, slow resource load time for the LCP element itself, or client-side rendering delays.
Interaction to Next Paint (INP) replaced First Input Delay in 2024 and measures the full latency of all user interactions, not just the first one. This is where JavaScript-heavy sites typically suffer. Every unnecessary script that runs on the main thread is a potential INP problem waiting to reveal itself under real usage conditions.
Cumulative Layout Shift (CLS) measures visual stability — how much page elements move unexpectedly as the page loads. The irony of CLS is that many well-intentioned performance techniques actually make it worse. Lazy loading images without defined dimensions, dynamically injecting content above the fold, and loading web fonts without fallback strategies are all common CLS contributors.
The system-level insight is this: the fixes you apply for LCP can introduce new INP problems if you are not careful about JavaScript execution order. The fixes you apply for CLS can affect LCP if you change how images are prioritised. You have to treat these three metrics as levers in the same machine, not switches on separate panels.
The practical implication is that before you touch a single line of code, you need a diagnostic snapshot that shows you all three metrics together, segmented by device type (mobile vs desktop) and by page template (homepage, landing page, blog post, product page). Google Search Console's Core Web Vitals report grouped by URL pattern is the fastest way to build this snapshot. Start there, not with Lighthouse.
Pull your CrUX data via the PageSpeed Insights API for your top 20 landing pages and compare field data against lab scores. The pages where these diverge most significantly are where you will find the highest-leverage fixes — and the most common waste of effort.
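As a sketch, that field-versus-lab comparison can be scripted against the PageSpeed Insights v5 API. The response field names below follow the v5 shape as we understand it — verify them against the current API reference before relying on them, and note that the API-key handling is illustrative:

```javascript
// Compare CrUX field LCP against the Lighthouse lab LCP from one
// PageSpeed Insights v5 response object.
const PSI = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

function fieldVsLab(report) {
  // Field data: 75th-percentile LCP in ms from real Chrome users (CrUX).
  const fieldLcp =
    report.loadingExperience.metrics.LARGEST_CONTENTFUL_PAINT_MS.percentile;
  // Lab data: the Lighthouse audit's measured LCP in ms.
  const labLcp =
    report.lighthouseResult.audits['largest-contentful-paint'].numericValue;
  return { fieldLcpMs: fieldLcp, labLcpMs: labLcp, divergenceMs: fieldLcp - labLcp };
}

async function fetchReport(url, strategy = 'mobile', apiKey = '') {
  // Live call: needs network access; an API key raises the request quota.
  const query = `${PSI}?url=${encodeURIComponent(url)}&strategy=${strategy}` +
    (apiKey ? `&key=${apiKey}` : '');
  const resp = await fetch(query);
  return resp.json();
}
```

Run `fieldVsLab` over your top 20 URLs and sort by `divergenceMs` descending — the top of that list is where lab scores are most misleading.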
Common mistake: running Lighthouse on your homepage and using that single score to represent your entire site's performance. The homepage is often the most optimised page on the site and the least representative of where real users actually experience problems.
The VITAL STACK is the prioritisation framework we use internally to sequence page speed fixes. The name is an acronym that captures the five layers of performance intervention, ordered from highest to lowest revenue impact for most business sites:
V — Visibility Layer (LCP fixes for above-the-fold content)
I — Interaction Layer (INP fixes for JavaScript and main thread)
T — Transfer Layer (TTFB, CDN, and server response)
A — Asset Layer (image, font, and file optimisation)
L — Layout Layer (CLS and visual stability fixes)
The STACK part reminds you that these layers sit on top of each other. You cannot meaningfully fix the Asset Layer if the Transfer Layer is broken. A beautifully optimised image still loads slowly on a server with 900ms TTFB. Similarly, fixing Layout issues while the Interaction Layer is unresolved means your CLS score improves but users still leave because interactions feel sluggish.
Here is how to apply the VITAL STACK in practice. First, pull your field data from CrUX. Identify which of the three Core Web Vitals is failing most severely, and for which page templates. Then map that failing metric to its corresponding VITAL STACK layer. For most failing sites, LCP is the culprit, which maps to the Visibility and Transfer layers first.
For a real scenario: if your product pages are failing LCP at 4.8 seconds on mobile, the VITAL STACK tells you to investigate Visibility (is the hero image preloaded? is it the correct format and size?) and Transfer (what is your TTFB from your primary user geography?) before touching anything else. Jumping to the Asset Layer and compressing images would give you a marginal improvement but miss the structural problem.
The VITAL STACK also helps you communicate with developers and stakeholders. Instead of presenting a flat list of 20 fixes, you present a sequenced plan where each layer unlocks meaningful improvement before the next one begins. This reduces scope creep, focuses developer time, and produces measurable checkpoints you can report on.
When presenting the VITAL STACK to non-technical stakeholders, label each layer with its business analogy: Visibility is your storefront window, Transfer is your supply chain, Assets are your product packaging. It makes prioritisation decisions instinctive rather than technical debates.
Common mistake: starting with the Asset Layer (image compression, minification) because it feels safe and actionable. Asset optimisation is the most visible type of effort and often the least impactful when Transfer Layer and Visibility Layer problems are unaddressed.
LCP is the Core Web Vital with the clearest connection to user experience and business outcomes. When a page's largest content element takes more than 2.5 seconds to appear, users perceive the page as broken — not slow, broken. The psychological difference is significant. Slow pages get second chances. Broken pages get back buttons.
The most common LCP element on business websites is a hero image. The most common mistake is treating this image the same as every other image on the page. It should not be lazy loaded. It should be preloaded with a <link rel='preload'> tag in the document head. It should be served at the correct size for the viewport — not scaled down by CSS — and it should use a modern format like WebP or AVIF with appropriate fallbacks.
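A minimal sketch of that head and body markup, assuming a hypothetical `/images/hero.avif` path with responsive variants:

```html
<!-- In <head>: preload the LCP image at high priority (hypothetical paths) -->
<link rel="preload" as="image" href="/images/hero.avif"
      imagesrcset="/images/hero-800.avif 800w, /images/hero-1600.avif 1600w"
      imagesizes="100vw" fetchpriority="high">

<!-- In <body>: the hero image is NOT lazy loaded, and declares dimensions -->
<img src="/images/hero.avif"
     srcset="/images/hero-800.avif 800w, /images/hero-1600.avif 1600w"
     sizes="100vw" width="1600" height="900"
     fetchpriority="high" alt="Describe the hero image">
```

The `width` and `height` attributes also reserve layout space, which keeps this LCP fix from creating a new CLS problem.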
Beyond image handling, LCP is acutely sensitive to render-blocking resources. Every CSS file and synchronous JavaScript file loaded in the <head> before your LCP element delays its paint. Audit your critical rendering path by running a waterfall analysis in Chrome DevTools. Look for resources that sit between the HTML document response and the LCP element's load event. Each one is a candidate for deferral, async loading, or inlining if critical.
Server-side rendering and static generation make a meaningful difference for LCP on JavaScript-heavy sites. If your LCP element is being painted by client-side JavaScript, you are adding a full JavaScript parse and execution cycle to the user's wait time before they see anything meaningful. This is the hidden LCP cost of single-page application architectures that teams shipping purely client-side apps often underestimate.
One tactic we find consistently underused is resource hint optimisation — specifically preconnecting to the origin domains of your LCP element's hosting location. If your hero image lives on a CDN subdomain or a third-party image service, adding <link rel='preconnect'> for that domain eliminates the DNS lookup, TCP handshake, and TLS negotiation time that would otherwise occur mid-load. On mobile connections, this alone can reduce LCP by a noticeable margin.
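As an illustration, with a hypothetical CDN hostname:

```html
<!-- Open the connection to the image CDN early (hostname is illustrative) -->
<link rel="preconnect" href="https://images.example-cdn.com">
<!-- DNS-only fallback for browsers that ignore preconnect -->
<link rel="dns-prefetch" href="https://images.example-cdn.com">
```

Note that the `crossorigin` attribute should match how the resource is fetched: plain `<img>` requests do not need it, but font files (which are always fetched in CORS mode) do.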
Use the 'LCP sub-parts' breakdown in Chrome DevTools Performance panel to identify whether your LCP delay is in TTFB, resource load delay, resource load duration, or element render delay. Each sub-part has a different fix — treating them as one problem is why generic image compression advice so often fails to move the metric.
Common mistake: applying lazy loading to hero images because a blanket 'add lazy loading to all images' recommendation was followed site-wide. This is one of the most common causes of poor LCP scores we encounter on audited sites, and it is entirely self-inflicted.
INP is the Core Web Vital that most site owners understand least and fix last. That ordering is backwards. For sites with significant JavaScript, particularly third-party scripts for analytics, chat, advertising, and personalisation, INP is often the metric that fails most consistently in field data while appearing fine in lab tests.
Here is why: lab tests like Lighthouse do not simulate real user interaction patterns. They load the page in a controlled environment, measure a few predefined events, and report a score. Real users scroll, click, hover, and interact with your page in unpredictable sequences — often while JavaScript is still executing from initial page load. That overlap between JavaScript execution and user interaction is where INP failures live.
The Third-Party Script Audit is a structured method we use to identify and prioritise script-related INP problems. It works in three phases:
Phase 1 — Inventory: List every third-party script loading on your page. Use the Coverage tab in Chrome DevTools to see how much of each script is actually executed on load. Scripts that load 50KB of code but execute less than 20% of it on any given page visit are strong candidates for deferral or conditional loading.
Phase 2 — Attribution: For each script, measure its main thread blocking time using the Performance panel's bottom-up view filtered by domain. Attribute each block of main thread time to the script responsible. Many teams are genuinely surprised to discover that a single analytics or A/B testing script accounts for the majority of their INP failures.
Phase 3 — Triage: Categorise each script as Essential (cannot be deferred without breaking functionality), Deferrable (can load after user interaction is possible), or Removable (provides data or functionality nobody is actively using). The Removable category is almost always larger than teams expect.
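The attribution step in Phase 2 can be approximated in code using Long Tasks API entries. A sketch, with two caveats: attribution on long tasks is coarse (often only a frame or container `containerSrc` is reported), and the 50ms subtraction below follows the long-task threshold convention, so treat the output as a starting point rather than a verdict:

```javascript
// Sum main-thread blocking time (duration beyond 50ms) per attributed source.
function blockingTimeBySource(longTasks) {
  const totals = new Map();
  for (const task of longTasks) {
    const blocking = Math.max(0, task.duration - 50); // time over the 50ms budget
    const source = (task.attribution && task.attribution[0] &&
      task.attribution[0].containerSrc) || 'unattributed';
    totals.set(source, (totals.get(source) || 0) + blocking);
  }
  return totals;
}

// Browser-only wiring: collect long tasks, then inspect the totals at an
// idle moment with blockingTimeBySource(tasks).
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  const tasks = [];
  new PerformanceObserver((list) => tasks.push(...list.getEntries()))
    .observe({ type: 'longtask', buffered: true });
}
```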
Beyond third parties, long tasks in your own JavaScript are INP contributors. Break up any task exceeding 50ms using techniques like setTimeout with zero delay, scheduler.postTask in supported browsers, or Web Workers for computationally intensive operations that do not require DOM access.
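The task-splitting pattern can be sketched as follows — a simplified version that processes work in roughly 50ms slices, preferring the newer `scheduler.yield()` where available and falling back to `setTimeout`:

```javascript
// Yield control back to the main thread so queued input events can run.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process items under a ~50ms budget per slice, yielding between slices
// so a user interaction never waits behind the whole batch.
async function processInChunks(items, handler, budgetMs = 50) {
  const results = [];
  let deadline = Date.now() + budgetMs;
  for (const item of items) {
    if (Date.now() > deadline) {
      await yieldToMain();
      deadline = Date.now() + budgetMs;
    }
    results.push(handler(item));
  }
  return results;
}
```

For CPU-heavy work that touches no DOM, moving the whole `handler` into a Web Worker removes it from the main thread entirely rather than just interleaving it.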
Before removing any third-party script, document what business decision it was installed for and who owns it. Script removal is one of the highest-friction conversations in organisations because ownership is diffuse. Framing it as a revenue conversation — this script is measurably degrading user experience on your highest-converting pages — is far more effective than a technical argument.
Common mistake: deferring all scripts with a blanket async or defer attribute and assuming the INP problem is solved. Deferred scripts still execute and still block the main thread — they just execute later. If that later point coincides with a user interaction, the INP failure is simply moved, not fixed.
The Render Budget Method is the second proprietary framework we use, and it addresses a problem that is almost invisible until you name it: most pages load far more resources than necessary before the user can see anything useful, because no one ever decided what the budget was.
A render budget is a hard limit on the resources — bytes, requests, and render-blocking assets — permitted to load before the above-the-fold content is painted for the user. Setting an explicit budget forces every team member who touches the page (designers, developers, marketers) to make conscious trade-offs rather than defaulting to addition.
Here is how to set and enforce a Render Budget for your key pages:
Step 1 — Establish your baseline. Run a waterfall analysis and identify the exact moment your LCP element is painted. Note the total bytes transferred and total requests made before that paint event.
Step 2 — Set your target budget. For most landing pages, a reasonable render budget looks like this: under 50KB of CSS delivered to the browser, no synchronous JavaScript in the critical path, all LCP-critical images preloaded and under 120KB compressed, and TTFB under 600ms.
Step 3 — Audit every addition against the budget. Whenever a new resource is proposed — a new font, a new widget, a new tracking script — it must be assessed against its render budget impact before implementation. This is a process change, not just a technical one.
Step 4 — Enforce it with tooling. Integrate performance budgets into your CI/CD pipeline using tools like Lighthouse CI or custom size-limit configurations. When a pull request would breach the render budget, it fails the build. This moves performance from a retrospective audit to a proactive constraint.
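A Lighthouse budgets file matching the Step 2 targets might look like the sketch below. Sizes are in KB, timings in ms; check the current Lighthouse performance-budget documentation for the supported `resourceType` and timing keys, and note that TTFB itself is typically gated separately (for example, via a Lighthouse CI assertion on the server response audit) rather than in the budgets file:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "image", "budget": 120 },
      { "resourceType": "script", "budget": 150 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 }
    ]
  }
]
```

The script budget of 150KB above is illustrative — set it from your own Step 1 baseline, not from this sketch.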
The Render Budget Method is particularly powerful for marketing-heavy organisations where landing pages accumulate scripts and assets over time through incremental decisions that no single person owns. Making the budget explicit and visible converts an invisible debt into a manageable system.
Set your render budget slightly below your current baseline, not at it. A budget that requires zero change creates zero improvement. A budget that requires 10-15% reduction from current performance is challenging enough to force prioritisation decisions without being so aggressive that it stalls development.
Common mistake: treating the Render Budget as a one-time audit exercise rather than an ongoing system. Pages that pass today will fail in six months if new scripts and assets are added without budget accountability. The budget only works if it is enforced continuously.
Every guide on CLS tells you to add width and height attributes to your images. That is correct advice and it takes about an afternoon to implement site-wide. But if you have done that and your CLS score is still failing, you are dealing with one of the less-discussed causes — and they are far more common than most guides acknowledge.
Web font loading is one of the most significant and most overlooked CLS contributors. When a browser loads your page and the custom font has not arrived yet, it renders text in a fallback system font. When the custom font loads, it swaps in — and if the metrics of your custom font (character width, line height, spacing) differ from the fallback font, every text element on the page shifts. This is called flash of unstyled text (FOUT) and it registers directly in your CLS score.
The solution is font metric overrides. Using the size-adjust, ascent-override, and descent-override descriptors in a fallback @font-face rule, you can match the metrics of your fallback font to your web font, making the swap invisible to the user and invisible to CLS measurement. Combined with font-display: optional (which tells the browser to use the fallback if the web font does not load within the first render window), this eliminates font-related CLS without sacrificing your typography.
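A sketch of the fallback-matching pattern — the font names, paths, and override percentages below are illustrative, and real values should be computed for your specific web-font/fallback pairing:

```css
/* The web font itself (hypothetical name and path) */
@font-face {
  font-family: "BrandSans";
  src: url("/fonts/brand-sans.woff2") format("woff2");
  font-display: optional; /* keep the fallback if the font misses first render */
}

/* A local fallback adjusted to match the web font's metrics */
@font-face {
  font-family: "BrandSans Fallback";
  src: local("Arial");
  size-adjust: 105%;      /* match average character width */
  ascent-override: 92%;   /* match vertical metrics to prevent shift */
  descent-override: 24%;
  line-gap-override: 0%;
}

body {
  font-family: "BrandSans", "BrandSans Fallback", sans-serif;
}
```

When the swap happens, lines break in the same places and text occupies the same height, so no layout shift is recorded.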
Dynamically injected content is the second major hidden CLS culprit. Cookie consent banners, promotional notification bars, chat widgets, and personalised content blocks that are injected above the fold by JavaScript after initial paint all generate CLS. The fix is to reserve space for these elements before they load — either with CSS min-height on their container or by server-rendering them so they are present in the initial HTML.
Ad slots are the third category. Display advertising is one of the most CLS-intensive elements a page can carry. Ads load asynchronously from third-party servers with unpredictable response times, and unless their container has fixed dimensions that match the ad unit exactly, they shift surrounding content when they load. The solution is to set explicit container dimensions that match your ad unit sizes and to never allow ad units to expand beyond their declared container.
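In CSS, the space-reservation pattern for both cases might look like this sketch (class names and dimensions are illustrative):

```css
/* Reserve space for a JS-injected notification bar before it loads */
.notification-bar {
  min-height: 56px; /* matches the bar's rendered height */
}

/* Ad slot sized to its unit (a 300x250 MPU here) */
.ad-slot--mpu {
  width: 300px;
  height: 250px;
  overflow: hidden; /* never let the creative expand past its container */
}
```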
Use the Layout Instability API in JavaScript to log CLS events with attribution data in your real user monitoring setup. This tells you exactly which elements are shifting and when — far more actionable than a CLS score alone. Without attribution, CLS debugging is guesswork.
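A sketch of that attribution logging, assuming layout-shift entries with attribution (Chromium browsers). Note one simplification: the code credits an entry's full shift value to each of its source nodes, which is good enough for ranking culprits but not a precise per-element decomposition:

```javascript
// Aggregate layout-shift scores by the element reported as shifting.
function summarizeShifts(entries) {
  const byElement = new Map();
  for (const entry of entries) {
    if (entry.hadRecentInput) continue; // shifts after user input don't count
    for (const source of entry.sources || []) {
      const key = source.node || 'unknown';
      byElement.set(key, (byElement.get(key) || 0) + entry.value);
    }
  }
  return byElement;
}

// Browser-only wiring: log attributed shifts as they happen, e.g. to feed
// your RUM beacon instead of console.log in production.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    summarizeShifts(list.getEntries())
      .forEach((score, node) => console.log(node, score.toFixed(4)));
  }).observe({ type: 'layout-shift', buffered: true });
}
```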
Common mistake: focusing CLS remediation only on images and ignoring font loading behaviour. In our experience, web font-related CLS is responsible for a significant portion of mobile CLS failures on content-heavy sites, yet it receives a fraction of the attention that image sizing does.
Time to First Byte (TTFB) is not a Core Web Vital, but it is the foundational metric that determines the ceiling of everything else you do. A slow TTFB means your LCP cannot be fast. It means your browser cannot begin parsing HTML or discovering resources until after the server has responded. It is the first domino, and if it falls slowly, everything that follows is delayed.
Google considers a TTFB under 800ms as 'good' for the purposes of LCP diagnosis, but in practice, under 200ms is where you want to be for competitive markets. The gap between 800ms and 200ms represents a structural advantage that no amount of image compression or JavaScript optimisation can fully compensate for.
The most common TTFB problems we encounter fall into three categories. First, geographic distance: if your server is hosted in one region and a significant portion of your users are in another, the physical latency of data transmission is a hard floor on your TTFB. A CDN with edge caching for HTML documents (not just static assets) is the solution. Many teams configure CDNs to cache images and scripts but leave HTML uncached, meaning every page visit still makes a round trip to the origin server.
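As an illustration, an origin can opt its HTML into shared-cache (CDN edge) caching with a Cache-Control directive. The snippet below is an nginx sketch with illustrative TTLs — s-maxage applies to shared caches like CDNs while browsers get a fresh check, but confirm directive support and invalidation behaviour with your specific CDN:

```nginx
location / {
    # Edge caches may hold HTML for 5 minutes and serve stale for 10 more
    # while revalidating; browsers always revalidate (max-age=0).
    add_header Cache-Control "public, max-age=0, s-maxage=300, stale-while-revalidate=600";
}
```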
Second, dynamic page generation time: if your pages are generated server-side on every request (common in database-driven CMS platforms), the time it takes to query the database, process the template, and assemble the HTML response is added to TTFB on every visit. Full-page caching, object caching, and database query optimisation are the interventions here. For WordPress sites, eliminating plugin bloat is often as impactful as any hosting upgrade.
Third, SSL/TLS negotiation: on some hosting configurations, particularly older shared hosting setups, the TLS handshake adds meaningful latency before any content is transferred. Modern TLS 1.3 with 0-RTT resumption eliminates most of this overhead, but the configuration requires server-level access that shared hosting plans frequently do not provide.
The hosting conversation is often avoided because it involves cost decisions and vendor changes. But the cost of persistent TTFB problems — in lost rankings, reduced conversion, and diminished user experience — almost always exceeds the cost of upgrading infrastructure.
Measure your TTFB from multiple geographic locations using a tool that simulates connections from your actual user geography. Your local TTFB from the same city as your server will be misleadingly fast. The TTFB your users in other regions experience is the real number that matters for your rankings.
Common mistake: investing heavily in front-end performance optimisation while leaving a TTFB of over 1 second unaddressed. Every second of TTFB consumes 1 second of your LCP budget before the browser has even started parsing your HTML. No amount of asset optimisation recovers that time.
Page speed is not a project that ends. It is an ongoing discipline that degrades by default as new features, scripts, and content are added. Building a measurement and governance system is what separates sites that maintain good Core Web Vitals from sites that improve temporarily and then regress.
The measurement stack we recommend operates at two levels. At the field level, you need Real User Monitoring (RUM) data that captures actual user experiences segmented by device type, connection type, and page template. This is the ground truth. CrUX data from Google gives you aggregated field data, but RUM gives you the granularity to identify specific user cohorts who are experiencing poor performance that aggregate data masks.
At the lab level, Lighthouse CI integrated into your deployment pipeline gives you a regression gate — a check that prevents performance from degrading with each new release. Set your lab-level thresholds conservatively below your current best field-data performance to create a safety buffer for natural variance.
For reporting, track Core Web Vitals performance by page template (not individual URL) in a dashboard that shows field data trends over a 28-day rolling window — the same window Google uses for ranking signal assessment. Include TTFB as a supplementary metric alongside LCP, INP, and CLS, because TTFB degradation is the earliest warning sign of infrastructure problems.
For governance, establish a performance champion role — a person or small team responsible for reviewing performance impact of proposed changes before they ship. This is not about creating bureaucracy. It is about making performance part of the conversation at the proposal stage rather than the debugging stage. Sites that maintain strong Core Web Vitals have almost universally made this a process decision, not just a technical one.
Finally, reassess your VITAL STACK prioritisation every quarter. As your highest-priority failures are resolved, the next-priority layer becomes your focus. Performance optimisation is iterative by nature, and the returns from each layer compound over time when applied systematically.
When presenting performance progress to leadership, anchor the conversation in user experience metrics (percentage of page loads that meet 'good' thresholds) rather than raw scores. A change from 55 to 70 in a Lighthouse score is abstract. An increase in the share of user sessions experiencing 'good' LCP is tangible and tied to outcomes.
Common mistake: running a one-time performance audit and improvement sprint without implementing ongoing measurement and regression prevention. In our experience, sites that invest in a single optimisation push without governance systems return to their previous performance state within two to three development cycles.
Pull field data from CrUX via Google Search Console and PageSpeed Insights API for your top 20 landing pages. Document LCP, INP, and CLS scores segmented by mobile and desktop.
Expected Outcome
A clear diagnostic baseline that shows exactly which pages and metrics need priority attention — before any fix is written.
Apply the VITAL STACK framework to your diagnostic data. Identify which VITAL STACK layer is the entry point for your highest-priority failing pages.
Expected Outcome
A sequenced fix roadmap by page template, ordered by revenue impact rather than technical convenience.
Address Transfer Layer issues first: audit TTFB from your users' primary geographies, configure HTML edge caching on your CDN, and enable full-page caching if your CMS supports it.
Expected Outcome
TTFB improvements that raise the performance ceiling for all subsequent Visibility and Asset Layer fixes.
Address Visibility Layer (LCP) issues: implement preload for hero images, eliminate render-blocking resources from the critical path, and audit LCP sub-parts to identify the specific delay source.
Expected Outcome
Measurable LCP improvement on your highest-traffic landing pages, validated against field data rather than just lab scores.
Run the Third-Party Script Audit across your primary page templates. Categorise each script as Essential, Deferrable, or Removable and implement deferral or removal for the highest-impact INP contributors.
Expected Outcome
INP improvements on pages with significant JavaScript load, confirmed through Chrome DevTools Performance panel attribution.
Fix CLS issues using font metric overrides, pre-reserved space for dynamic elements, and fixed-dimension ad slot containers. Use Layout Instability API attribution to confirm which elements are causing shift.
Expected Outcome
CLS scores that pass 'good' thresholds across mobile and desktop, with a specific fix list tied to real user data rather than guesswork.
Set your Render Budget for key page templates and integrate Lighthouse CI into your deployment pipeline with budget enforcement.
Expected Outcome
A regression prevention system that maintains your improvements through future development cycles.
Build your 28-day rolling performance dashboard and establish the performance champion role and review process for your team.
Expected Outcome
An ongoing governance system that makes performance a proactive constraint rather than a retrospective firefight.