
How to Use Screaming Frog to Improve On-Page SEO (Without Wasting 6 Hours on the Wrong Reports)

Every guide tells you to 'check your title tags.' Here's what to actually do with Screaming Frog once you've crawled your site — and why most SEOs are reading the wrong columns.

13 min read · Updated March 1, 2026

Martial Notarangelo
Founder, Authority Specialist

Contents

  • 1. Why Your Crawl Setup Determines Whether Your Audit Is Worth Anything
  • 2. The CANOPY Framework: A Six-Layer On-Page Audit System
  • 3. Custom Extraction: The Feature That Turns Screaming Frog Into a Content Intelligence Tool
  • 4. The Crawl Depth Revenue Matrix: Connecting Technical Architecture to Business Impact
  • 5. How to Actually Fix Title Tags and Meta Descriptions (Not the Way Every Other Guide Says)
  • 6. Internal Linking: The On-Page SEO Lever That Screaming Frog Reveals Better Than Any Other Tool
  • 7. Redirects, Canonicals, and Index Coverage: How to Read the Technical Signals That Suppress Rankings
  • 8. Beyond the One-Time Audit: Using Screaming Frog as an Ongoing On-Page Health System

Here is the uncomfortable truth about Screaming Frog guides: most of them stop right at the point where the real work begins. They show you how to open the tool, run a crawl, and glance at the 'Page Titles' tab. Then they tell you to 'fix duplicates' and call it a day.

If that level of guidance actually moved rankings, every site that ran a free crawl would be on page one. They are not. When I started doing serious on-page audits, I made the same mistake — I ran crawls, exported spreadsheets, and sent clients colour-coded reports that looked impressive but rarely addressed root causes.

Rankings barely moved. What changed everything was reframing what Screaming Frog actually is: not a reporting tool, but a diagnostic engine. The outputs only have value when they are interpreted through a structured prioritisation system, connected to real traffic data, and actioned in the right sequence.

This guide gives you that system. You will get two proprietary frameworks — the CANOPY Audit Framework and the Crawl Depth Revenue Matrix — that we use on client sites to move from crawl data to ranked pages in a repeatable, logical order. No colour-coded spreadsheets for their own sake. No surface-level title tag advice. Just the Screaming Frog workflows that actually drive on-page SEO improvement.

Key Takeaways

  • 1. The Crawl-First, Fix-Second Rule: Run your crawl before touching a single page — sequence matters more than speed
  • 2. The CANOPY Framework: a six-layer audit system (Content, Architecture, Navigation, Orphans, Performance, Your On-Page Signals) for structured on-page reviews
  • 3. Title tag issues are the last thing you should fix — internal linking and crawl depth issues cause more ranking damage
  • 4. Custom extraction using XPath lets you audit structured data, heading hierarchies, and schema without a separate tool
  • 5. Screaming Frog's 'Crawl Analysis' feature reveals crawl depth problems that silently suppress deep pages from ranking
  • 6. The Thin Content Trap: word count filters paired with custom extraction expose content gaps your CMS hides from you
  • 7. Orphaned pages — discovered by comparing your sitemap against your crawled URLs — are often your fastest wins for organic traffic recovery
  • 8. The 'Response Codes' tab paired with Google Analytics data creates a prioritisation matrix no generic guide will show you
  • 9. Connecting Screaming Frog to Google Search Console unlocks impression and click data per URL — turning a technical audit into revenue intelligence
  • 10. On-page SEO is not a one-time crawl — set a crawl cadence and treat Screaming Frog as your site's health monitoring system

1. Why Your Crawl Setup Determines Whether Your Audit Is Worth Anything

Before you crawl a single URL, the configuration you choose will determine whether your audit data is accurate or dangerously misleading. This is the step that most tutorials rush past in two sentences. The default Screaming Frog settings are designed for general crawling — they are not optimised for an on-page SEO audit.

Here is how to configure a crawl that gives you clean, actionable data.

First, set your user agent to Googlebot. This matters because some sites serve different content to different user agents. If you crawl as Screaming Frog's default spider and your site's CDN or JavaScript rendering behaves differently for Googlebot, you are auditing a version of your site that Google never sees.

Go to Configuration > User-Agent and select Googlebot.

Second, enable JavaScript rendering if your site uses a JavaScript framework such as React, Vue, or Next.js. Go to Configuration > Spider > Rendering and switch from 'Text Only' to 'JavaScript'. This adds crawl time but gives you the rendered DOM — the actual content Google evaluates.

Auditing a JavaScript-heavy site without rendering enabled is one of the most common causes of 'I can't find any issues but rankings are stuck' situations.

Third, configure your crawl to respect or ignore canonicals depending on your audit goal. If you want to see what Google sees (de-duplicated), keep canonical respect enabled. If you want to find canonical misconfigurations, disable it temporarily and compare the two crawl outputs.

Fourth, connect your Google Search Console and Google Analytics integrations before you crawl. In Screaming Frog, go to Configuration > API Access. Connecting GSC pulls impression and click data per URL directly into your crawl export.

This is the single configuration change that transforms your audit from a technical exercise into a revenue intelligence report. You will be able to see which pages have high impressions but low clicks — a direct signal of title tag and meta description optimisation opportunity — and which pages rank for nothing despite being fully indexed.

Finally, set your crawl depth limit and page limit according to your site's architecture. For most sites under 10,000 pages, crawl everything. For larger sites, set a crawl depth of five to seven levels and prioritise the most commercially important sections first.

Set user agent to Googlebot to audit the version of your site Google actually crawls
Enable JavaScript rendering for any site built on a JS framework — skipping this gives you false positives and false negatives
Connect Google Search Console API before crawling to attach impression and click data to every URL
Decide upfront whether to respect or bypass canonicals based on your audit objective
For large sites, segment crawls by subfolder rather than crawling everything at once — it improves data accuracy and manageability
Save your configuration as a custom preset so every future crawl uses the same settings for consistent comparison

2. The CANOPY Framework: A Six-Layer On-Page Audit System

One of the core problems with standard Screaming Frog guides is that they give you a list of things to check without giving you an order of operations. The result is that most people fix what is easy — title tags, meta descriptions — and never address the issues causing the most ranking damage. The CANOPY Framework solves this by organising on-page audit findings into six layers, ordered by SEO impact and logical dependency.

CANOPY stands for: Content, Architecture, Navigation, Orphans, Performance, and Your On-Page Signals.

C — Content: Start here. Use Screaming Frog's 'Content' tab and enable custom extraction (covered in the next section) to identify thin content, duplicate body text, and pages with missing or inadequate heading structures. Pages that have fewer than 300 words on commercially important topics should be flagged for content expansion before any other on-page work is done.

Improving thin content gives Google more signal to work with and directly raises topical depth scores.

A — Architecture: Next, audit your site's internal linking structure and crawl depth. Use the 'Crawl Analysis' feature under Reports > Crawl Analysis to generate a crawl depth report. Any commercially important page sitting at crawl depth five or deeper is under-resourced from a crawl budget and internal authority perspective.

Move these pages shallower through internal linking before touching their on-page elements.

N — Navigation: Audit your navigation-level internal links. Export your inlink report and identify which pages receive the most internal links from your navigation and header elements. These pages signal topical authority to Google.

If your most internally-linked page is not your most strategically important page, you have a navigation structure problem — not an on-page problem.

O — Orphans: Orphaned pages are pages with no internal inlinks — Screaming Frog finds them through your sitemap, but not by following links. These pages are essentially invisible to Google. Export your sitemap URL list and your crawled URL list, then compare them.

Any URL in the sitemap but not found via crawl is orphaned. This is frequently where old blog posts and valuable resource pages go to die.
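
If you prefer to script that comparison, here is a minimal sketch in Python. It assumes you have exported the sitemap URL list and the crawled HTML list as CSVs (the file names are placeholders) with URLs in the usual 'Address' column.

```python
import csv

def load_urls(path, column="Address"):
    """Read one column of URLs from a Screaming Frog CSV export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[column].strip().rstrip("/") for row in csv.DictReader(f)}

# Placeholder file names; use your own export paths.
sitemap_urls = load_urls("sitemap_urls.csv")
crawled_urls = load_urls("internal_html.csv")

orphans = sorted(sitemap_urls - crawled_urls)
print(f"{len(orphans)} URLs in the sitemap but not reachable by crawling:")
for url in orphans:
    print(url)
```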

P — Performance: Use Screaming Frog's integration with PageSpeed Insights to pull Core Web Vitals data per URL. On-page SEO is not just about content — page speed and rendering performance affect how Google evaluates and ranks individual URLs.

Y — Your On-Page Signals: Finally, address title tags, meta descriptions, H1 and H2 structure, image alt attributes, and schema markup. This is the layer most guides start with. We do it last — because it only has full impact once the layers above it are resolved.

Content issues must be resolved before on-page signals — thin content limits how much title tag optimisation can help
Crawl depth is a structural issue, not a content issue — fix it through internal linking, not page rewrites
Orphaned pages are the fastest wins because the fix (adding internal links) requires no content creation
Navigation-level link equity distribution is more influential than individual page-level optimisation in most cases
Core Web Vitals data per URL, pulled through the PageSpeed API integration, enables page-level performance triage
Running the CANOPY layers in reverse order (signals before architecture) is the most common reason audits do not produce ranking movement

3Custom Extraction: The Feature That Turns Screaming Frog Into a Content Intelligence Tool

Custom extraction is the most underused feature in Screaming Frog, and arguably the most powerful for on-page SEO. It allows you to pull specific elements from every crawled page using CSS selectors, XPath, or regex — turning your crawl into a content audit at scale.

Here is how to access it: Configuration > Custom > Extraction. Custom extraction requires a paid licence, and you can run multiple extractors in a single crawl.

Use Case 1: Heading Hierarchy Audit
Create a custom extraction using the CSS selector `h1` to pull every H1 tag from every page. Export this alongside your page titles. Now you can check in a single spreadsheet whether every page has exactly one H1, whether the H1 and title tag are aligned (they should be related but not necessarily identical), and whether any pages are missing H1s entirely.

Sort by H1 count to instantly flag pages with multiple H1s — a common issue on templated CMS sites.
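
A quick way to run that spreadsheet check programmatically: a sketch assuming a crawl export with 'Address', 'Title 1', and H1 columns ('H1-1', 'H1-2', and so on; a custom extractor's columns take whatever name you gave it).

```python
import pandas as pd

# Assumes a crawl export with "Address", "Title 1", and H1 columns
# ("H1-1", "H1-2", ...); adjust the prefix to your extractor's name.
df = pd.read_csv("crawl_export.csv")
h1_cols = [c for c in df.columns
           if c.upper().startswith("H1-") and "length" not in c.lower()]

df["h1_count"] = df[h1_cols].notna().sum(axis=1)
missing = df[df["h1_count"] == 0]
multiple = df[df["h1_count"] > 1]
identical = df[df["Title 1"].notna() & (df["Title 1"] == df[h1_cols[0]])]

print(f"Missing H1: {len(missing)}")
print(f"Multiple H1s: {len(multiple)}")
print(f"H1 identical to title: {len(identical)}")
```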

Use Case 2: Schema Markup Detection
Create an extraction using XPath: `//script[@type='application/ld+json']`. This pulls every JSON-LD schema block from every page. Now you can identify which pages have schema, which do not, and whether schema types are consistent with content type.

A product page without Product schema is a missed structured data opportunity. A blog post with no Article schema is leaving rich result eligibility on the table.
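
To summarise schema coverage from the export, something like the following works. It assumes the extraction landed in a column named 'JSON-LD 1' (yours will match your extractor's name).

```python
import json
import pandas as pd

# Assumes the JSON-LD extraction is in a column named "JSON-LD 1".
df = pd.read_csv("crawl_export.csv")

def schema_types(raw):
    if pd.isna(raw) or not str(raw).strip():
        return "none"
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return "invalid JSON-LD"
    blocks = data if isinstance(data, list) else [data]
    types = [str(b.get("@type", "untyped")) for b in blocks if isinstance(b, dict)]
    return ", ".join(types) or "untyped"

df["schema_types"] = df["JSON-LD 1"].apply(schema_types)
print(df["schema_types"].value_counts())                 # coverage at a glance
print(df.loc[df["schema_types"] == "none", "Address"])   # pages with no schema
```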

Use Case 3: Word Count Auditing
Use `//body` as your XPath selector and extract the full body text. Screaming Frog also reports a native 'Word Count' column in the Internal tab, which is the faster route for simple thresholds; the body-text extraction is useful when you want to exclude boilerplate or apply your own counting logic in a spreadsheet.

Pages under a threshold you define (typically 300 words for informational content, 500 for commercial pages) should be flagged for content expansion.
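
A minimal sketch of that threshold filter, assuming an export with 'Address' and 'Word Count' plus a 'Page Type' column you add by hand:

```python
import pandas as pd

# Assumes "Address" and "Word Count" from the crawl export, plus a
# manual "Page Type" column ("informational" or "commercial").
df = pd.read_csv("crawl_export.csv")

thresholds = {"informational": 300, "commercial": 500}
df["threshold"] = df["Page Type"].map(thresholds).fillna(300)
thin = df[df["Word Count"] < df["threshold"]]

print(thin[["Address", "Page Type", "Word Count"]].sort_values("Word Count"))
```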

Use Case 4: CTA Presence Check
If you have a consistent call-to-action element with a class name across your site, you can extract it to verify it appears on every key commercial page. Use `.your-cta-class-name` as your CSS selector. Any page returning empty on this extraction is missing its primary conversion element.

Custom extraction essentially lets you build bespoke audit logic on top of Screaming Frog's crawl engine — without writing a custom crawler or using a separate tool.

CSS selectors and XPath are the two most useful extraction methods — learn both, they solve different problems
The H1 extraction audit takes under 10 minutes to configure and immediately surfaces heading hierarchy issues at scale
Schema extraction via JSON-LD XPath reveals structured data coverage gaps across your entire site in one crawl
Word count auditing through body text extraction helps prioritise content expansion across thin pages
CTA presence checks using class name selectors turn Screaming Frog into a conversion audit tool, not just an SEO tool
Save custom extraction configurations as presets — rebuilding them for every crawl wastes time and introduces inconsistency

4. The Crawl Depth Revenue Matrix: Connecting Technical Architecture to Business Impact

This is the framework I wish someone had given me earlier in my career. It reframes crawl depth — traditionally a dry technical metric — as a direct predictor of which pages are being suppressed from their ranking potential.

The Crawl Depth Revenue Matrix works like this: export your crawl depth report from Screaming Frog (Reports > Crawl Analysis > Crawl Depth), then merge it with your GSC data (clicks and impressions per URL) and your Google Analytics data (sessions and conversions per URL). The resulting matrix has three dimensions:

  • Crawl Depth (1–7+ levels from homepage)
  • Commercial Value (revenue attribution, conversion rate, or strategic priority)
  • Current Organic Visibility (GSC impressions and position)

Pages in the matrix quadrant of high commercial value + deep crawl depth + low organic visibility are your highest-priority internal linking opportunities. These are pages that likely rank poorly not because of poor on-page content, but because they are structurally under-resourced — Google is not crawling them frequently enough, and they receive too little internal link equity to compete.

The fix is not to rewrite the page. The fix is to add three to five contextually relevant internal links from shallower, higher-authority pages — and in many cases, to add the page to a relevant section of your main navigation or to a topic cluster hub page.

In our experience, this intervention alone — adding internal links from pages with strong crawl frequency to commercially important deep pages — produces measurable ranking improvement without any content changes. It works because you are solving the actual problem: crawl frequency and internal authority distribution.

How to build the matrix in Screaming Frog:

1. Run your crawl with the GSC and GA4 APIs connected before crawling
2. Go to Reports > Crawl Analysis and export the depth report
3. Export the full URL list with GSC data (clicks, impressions, position)
4. In your spreadsheet, create a column for crawl depth, one for impressions, one for commercial value (manual score or revenue data), and one for current average position
5. Colour-code by quadrant: green (shallow + visible), amber (shallow + invisible, or deep + visible), red (deep + invisible + commercially important)
6. Prioritise every red URL for internal linking sprint work before any other on-page intervention
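
If you would rather script step 5 than colour-code cells by hand, here is a minimal sketch. It assumes a merged CSV with 'Address', 'Crawl Depth', 'Impressions' (from the GSC integration), and a hand-scored 'Commercial Value' column; the thresholds are illustrative, not prescriptive.

```python
import pandas as pd

# Assumed columns: "Address", "Crawl Depth", "Impressions" (via the GSC
# integration) and a manual "Commercial Value" score (1-5).
df = pd.read_csv("crawl_with_gsc.csv")

def quadrant(row):
    deep = row["Crawl Depth"] >= 5           # illustrative thresholds,
    visible = row["Impressions"] >= 100      # tune to your site's scale
    important = row["Commercial Value"] >= 4
    if deep and not visible and important:
        return "red"        # structurally suppressed, commercially important
    if not deep and visible:
        return "green"
    return "amber"          # shallow + invisible, or deep + visible

df["quadrant"] = df.apply(quadrant, axis=1)
print(df.loc[df["quadrant"] == "red",
             ["Address", "Crawl Depth", "Impressions"]])
```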

Crawl depth directly affects crawl frequency — pages at depth six or deeper are crawled far less often than pages at depth two or three
Internal links from high-authority shallow pages to deep commercial pages transfer crawl priority and PageRank
The matrix reveals whether a page's ranking problem is structural (architecture) or content-based — two very different fixes
Adding contextual internal links with relevant anchor text is faster than content rewrites and often more impactful
Pages at depth one or two but with zero impressions likely have a content or canonical issue, not an architecture issue
Run the Crawl Depth Revenue Matrix quarterly — site architecture changes over time and new content silently pushes pages deeper

5. How to Actually Fix Title Tags and Meta Descriptions (Not the Way Every Other Guide Says)

Title tags and meta descriptions are the on-page elements that get the most attention in Screaming Frog guides — and the most superficial treatment. The standard advice is: fix duplicates, keep titles under 60 characters, include your keyword. That advice is not wrong, but it misses the nuance that separates average performance from measurable click-through rate improvement.

The Title Tag Audit Process in Screaming Frog
In the 'Page Titles' tab, Screaming Frog flags issues such as: Missing, Duplicate, Over 60 Characters, Below 200 Pixels, and Multiple. Start with missing titles — these are your most urgent fixes. A page without a title tag will have its title auto-generated by Google, often from the H1 or even from anchor text pointing to the page.

Next, look at duplicates. Export the duplicate title list and group by URL pattern. If you have 40 duplicate titles, they are almost certainly coming from one template — a category page, a tag archive, a product variant.

Fix the template, not the individual pages.

For title length, do not treat 60 characters as a hard rule. Screaming Frog flags titles over 60 characters, but what matters is pixel width, not character count. A title with narrow characters (i and l) at 65 characters may display fully in SERPs, while a title with wide characters (M and W) at 55 characters may get truncated.

The 'Title Pixel Width' column is more reliable than character count for this reason.

The CTR Optimisation Layer
Here is what most guides skip: once your GSC integration is active, filter your page title tab to show pages with more than 200 impressions and a click-through rate below your site average. These pages are indexed, visible, and failing to earn clicks. Their title tags (and meta descriptions) are not compelling enough for the query context.

For these pages, open your GSC data and find the top queries driving impressions. Does the current title tag align with the language of those queries? Often it does not — the page was written with one keyword intent in mind and is actually ranking for a related but slightly different query.

Rewriting the title to align with the actual ranking query language (not the intended target keyword) frequently produces click-through rate improvement within two to four weeks — a faster feedback loop than most on-page changes.
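
The same filter is easy to reproduce outside the tool. A sketch, assuming the GSC columns 'Clicks' and 'Impressions' are attached to your crawl export:

```python
import pandas as pd

# Assumes GSC columns "Clicks" and "Impressions" attached to the crawl
# export, plus "Address" and "Title 1".
df = pd.read_csv("crawl_with_gsc.csv")
df = df[df["Impressions"] > 0].copy()
df["ctr"] = df["Clicks"] / df["Impressions"]

site_avg = df["ctr"].mean()
rewrite_list = df[(df["Impressions"] > 200) & (df["ctr"] < site_avg)]

print(f"Site average CTR: {site_avg:.2%}")
print(rewrite_list[["Address", "Title 1", "Impressions", "ctr"]]
      .sort_values("Impressions", ascending=False))
```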

Meta descriptions do not directly affect rankings, but they directly affect clicks. Treat meta descriptions as ad copy. Every meta description should answer: what does this page give me that others do not?

Use pixel width, not character count, as your title tag length benchmark — Screaming Frog shows both
Duplicate titles are almost always template-level issues — fix the template, not individual pages
GSC-integrated CTR filtering reveals which titles are visible but not compelling — the highest-leverage fix list
Rewrite title tags to match actual ranking query language found in GSC, not just your intended target keyword
Meta descriptions are ad copy — answer 'what does this page give me that the alternatives do not'
Missing title tags are the highest priority — Google-generated titles frequently miss keyword intent and brand framing
Run title tag audits after connecting GSC — without click data, you are optimising blindly

6. Internal Linking: The On-Page SEO Lever That Screaming Frog Reveals Better Than Any Other Tool

Internal linking is consistently undervalued in on-page SEO — and Screaming Frog is one of the best tools in existence for auditing and improving it, yet almost no guide covers this in meaningful depth.

The internal linking picture in Screaming Frog lives in two places: the 'Inlinks' tab (accessible by clicking any URL and viewing its inlink data) and the bulk export under Reports > All Inlinks. It is the bulk export that gives you the full picture.

Auditing Internal Link Distribution
Export Reports > All Inlinks. This gives you every internal link on your site — source URL, destination URL, anchor text, and whether the link is follow or nofollow. Sort by destination URL and count inlinks per page.

This inlink count is a proxy for how much internal authority each page receives.

Now compare your inlink count per page against your GSC impressions per page. Pages with high impressions but low inlink counts are ranking despite limited internal support — they have organic merit but are being under-resourced. These are candidates for internal link building that will compound existing performance.

Pages with high inlink counts but low impressions have the opposite problem — they are internally well-resourced but failing on content or on-page signal quality. These need content work, not more internal links.
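
Here is a sketch of that cross-reference, assuming the All Inlinks export (with a 'Destination' column) and a GSC-integrated crawl export ('Address', 'Impressions'); the thresholds are illustrative.

```python
import pandas as pd

# Assumed inputs: the All Inlinks export and a GSC-integrated crawl export.
inlinks = pd.read_csv("all_inlinks.csv")
pages = pd.read_csv("crawl_with_gsc.csv")

counts = inlinks.groupby("Destination").size().rename("inlink_count")
merged = pages.merge(counts, left_on="Address", right_index=True, how="left")
merged["inlink_count"] = merged["inlink_count"].fillna(0).astype(int)

# Ranking despite weak internal support: compound these first.
under_resourced = merged[(merged["Impressions"] > 500) &
                         (merged["inlink_count"] < 5)]
# Well-linked but invisible: a content problem, not an architecture one.
under_performing = merged[(merged["Impressions"] < 50) &
                          (merged["inlink_count"] > 20)]

print(under_resourced[["Address", "Impressions", "inlink_count"]])
print(under_performing[["Address", "Impressions", "inlink_count"]])
```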

Anchor Text Audit
In the same export, analyse your anchor text distribution for key commercial pages. If your most important service page receives 20 internal links but 18 of them use the anchor text 'click here' or 'learn more', you are wasting significant internal linking opportunity. Contextual, keyword-relevant anchor text in internal links is a direct on-page SEO signal.

Re-anchor existing links to use descriptive, topically relevant phrases.
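
To see the anchor distribution for one page at a glance, here is a sketch assuming the export's anchor column is labelled 'Anchor' or 'Anchor Text' (it varies by version), with a placeholder target URL:

```python
import pandas as pd

# Assumes the All Inlinks export has "Destination" and an anchor column.
inlinks = pd.read_csv("all_inlinks.csv")
target = "https://www.example.com/key-service-page/"   # placeholder URL

anchor_col = "Anchor" if "Anchor" in inlinks.columns else "Anchor Text"
anchors = inlinks.loc[inlinks["Destination"] == target, anchor_col]
print(anchors.fillna("(empty)").str.lower().value_counts())
# A distribution dominated by 'click here' / 'learn more' means these
# links carry almost no topical signal for the target page.
```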

Finding Link Gap Opportunities
Screaming Frog's 'Link Opportunities' report (available in the paid version under Reports > Link Opportunities) cross-references your page content with the anchor text of other pages to suggest internal link placements. This is particularly useful on sites with hundreds of blog posts where manual discovery of link opportunities would take days.

For sites on the free version, export all crawled page titles and URLs, then use a content search to identify pages that mention a topic but do not link to your target page on that topic. These are your link gap pages.
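
A sketch of that gap check, assuming you can get body text per URL into a CSV (for example via a custom extraction on a licensed copy, or a CMS export) alongside the All Inlinks export; the target URL and topic phrase are placeholders.

```python
import pandas as pd

# Assumes body text per URL in a "Body Text 1" column, plus the
# All Inlinks export for the links that already exist.
pages = pd.read_csv("crawl_with_body.csv")
inlinks = pd.read_csv("all_inlinks.csv")

target = "https://www.example.com/technical-seo/"   # placeholder target page
topic = "technical seo"

already_linking = set(inlinks.loc[inlinks["Destination"] == target, "Source"])
mentions = pages[pages["Body Text 1"].str.contains(topic, case=False, na=False)]
gaps = mentions[~mentions["Address"].isin(already_linking)]

# Pages that mention the topic but never link to the page targeting it.
print(gaps["Address"].tolist())
```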

The 'All Inlinks' bulk export is the most actionable internal linking report Screaming Frog produces
Inlink count per page is a proxy for internal PageRank — cross-reference against GSC impressions to identify under-resourced pages
Anchor text quality matters in internal links — replace generic anchors with descriptive, keyword-relevant phrases on commercial pages
Pages with high inlinks but low impressions have content problems, not architecture problems — different fix
The Link Opportunities report automates gap analysis across large content libraries
Nofollow internal links waste internal link equity — audit nofollow usage in your internal link structure and remove it where unnecessary

7. Redirects, Canonicals, and Index Coverage: How to Read the Technical Signals That Suppress Rankings

On-page SEO does not exist in isolation from technical SEO — and Screaming Frog is the junction point between the two. Three technical areas have direct, measurable impact on on-page SEO performance: redirect chains, canonical misconfigurations, and indexation issues.

Redirect Chains
Go to Reports > Redirect Chains. Any URL that routes through more than one redirect before reaching its destination is a redirect chain. Google will follow redirect chains, but each hop dilutes the PageRank passed to the final destination and slows crawl response time.

More importantly, if a redirected URL has external backlinks pointing to it, those backlinks are losing their value through each additional hop.

For on-page SEO, the most impactful redirect chain fix is consolidating chains to single 301 redirects. This preserves more link equity on the final destination page and makes the value of any external links pointing to old URLs more effective.
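
For spot-checking individual chains outside a full crawl, a small standalone script can follow each hop manually; the URL below is a placeholder.

```python
import requests
from urllib.parse import urljoin

def redirect_chain(url, max_hops=10):
    """Follow redirects one hop at a time and return the full path."""
    hops = [url]
    for _ in range(max_hops):
        resp = requests.head(hops[-1], allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 307, 308):
            break
        location = resp.headers.get("Location")
        if location is None:
            break
        hops.append(urljoin(hops[-1], location))
    return hops

chain = redirect_chain("https://www.example.com/old-page/")  # placeholder
print(" -> ".join(chain))
print(f"{len(chain) - 1} hop(s); anything above 1 should be collapsed")
```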

Canonical Misconfigurations
In the 'Canonicals' tab, look for three states: pages with no canonical tag, pages with self-referencing canonicals (correct), and pages where the canonical points to a different URL. The third case is the most dangerous. If page A has a canonical pointing to page B, Google will consolidate ranking signals to page B — which means page A will not rank, regardless of how well-optimised its on-page content is.

A common misconfiguration: paginated pages (page-2, page-3 of a blog archive) canonical-ing back to the first page of the archive. This is usually intentional but sometimes applied to pages that should be independently indexed — such as paginated product category pages with unique products on each page.
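
A sketch for bucketing those three states from the export, assuming the canonical column is named 'Canonical Link Element 1' (the exact label can vary by version):

```python
import pandas as pd

# Assumes "Address" and "Canonical Link Element 1" from the Internal export.
df = pd.read_csv("crawl_export.csv")
canon = df["Canonical Link Element 1"].fillna("")

df["canonical_state"] = "canonicalised away"          # default: points elsewhere
df.loc[canon == "", "canonical_state"] = "missing"
df.loc[canon == df["Address"], "canonical_state"] = "self-referencing"

print(df["canonical_state"].value_counts())
# The dangerous bucket: well-optimised pages that will not rank.
print(df.loc[df["canonical_state"] == "canonicalised away",
             ["Address", "Canonical Link Element 1"]])
```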

Index Coverage via Response Codes
Screaming Frog's 'Response Codes' tab shows every URL and its HTTP response. Filter for non-200 responses. 404s on internally-linked pages mean you are wasting internal link equity on dead pages. 301s that cascade into other 301s create the chains discussed above. 5XX errors on pages with significant GSC impressions are a direct ranking emergency — these pages are periodically unavailable to Googlebot.

For every 404 found on a page with historical GSC impressions, either restore the page or redirect it to the most relevant live equivalent. Never let a page that once received clicks simply return a 404 without action.
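
A sketch of that triage, assuming 'Status Code' and GSC 'Impressions' sit in the same export:

```python
import pandas as pd

# Assumes "Address", "Status Code", and GSC "Impressions" in one export.
df = pd.read_csv("crawl_with_gsc.csv")

emergencies = df[(df["Status Code"] >= 500) & (df["Impressions"] > 0)]
dead_with_history = df[(df["Status Code"] == 404) & (df["Impressions"] > 0)]

print("5XX on pages Google still shows (fix first):")
print(emergencies[["Address", "Status Code", "Impressions"]])
print("404s that still earn impressions (restore or 301):")
print(dead_with_history[["Address", "Impressions"]])
```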

Cap redirect chains at two hops maximum — any chain longer than this should be collapsed to a single direct redirect
Canonical tags pointing to wrong destinations silently prevent pages from ranking regardless of content quality
Paginated pages need canonical strategy — independent indexation for unique content, consolidated canonicals for thin archive pages
404 errors on internally-linked pages waste internal link equity — fix or redirect every one
5XX errors on high-impression pages are ranking emergencies — prioritise above all other audit findings
A page with a self-referencing canonical is correctly configured — this is not a problem, it is best practice

8. Beyond the One-Time Audit: Using Screaming Frog as an Ongoing On-Page Health System

The biggest missed opportunity with Screaming Frog is treating it as a one-time audit tool. Most teams run a crawl, fix the findings, and revisit the tool six months later when someone notices rankings have dropped. By then, you are doing damage control, not proactive optimisation.

Building Screaming Frog into a continuous monitoring cadence transforms it from an audit tool into an SEO health system. Here is how to operationalise this.

Scheduled Crawl Cadence
In the paid version, Screaming Frog allows you to schedule crawls and export reports automatically. Set up a monthly crawl for small to medium sites (under 5,000 pages) and a bi-weekly crawl for large sites or high-velocity content operations. Save your configuration, including API connections, so every scheduled crawl pulls fresh GSC and GA4 data.

Crawl Comparison Reports
Screaming Frog's 'Crawl Comparison' feature (Mode > Compare) allows you to load two crawl files and compare them. This shows you what has changed between crawls — new pages added, pages that changed status codes, pages where title tags changed, new redirects introduced. This is invaluable for catching regressions introduced by CMS updates, developer deployments, or content team changes that inadvertently broke on-page elements.

Issue Velocity Tracking
Create a simple tracking sheet that records the count of key issue types after every crawl: missing title tags, broken inlinks, redirect chains, pages at depth five or deeper, orphaned pages. Plotting these numbers over time gives you an issue velocity metric — whether your on-page health is improving or degrading month-over-month. A rising orphan count, for example, indicates that content is being published without an internal linking strategy — a process problem, not just an SEO problem.
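
The tracking sheet can be as simple as a CSV you append to after every crawl. A minimal sketch, with illustrative field names and numbers:

```python
import csv
import os
from datetime import date

FIELDS = ["date", "missing_titles", "broken_inlinks",
          "redirect_chains", "depth_5_plus", "orphans"]

def log_crawl_issues(path, counts):
    """Append one row of issue counts; write the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **counts})

log_crawl_issues("issue_velocity.csv", {
    "missing_titles": 12, "broken_inlinks": 48, "redirect_chains": 7,
    "depth_5_plus": 230, "orphans": 19,   # illustrative numbers
})
```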

The Quarterly CANOPY Review
Every quarter, run the full CANOPY Framework (Content, Architecture, Navigation, Orphans, Performance, Your On-Page Signals) against your crawl data. The quarterly cadence aligns with most content publishing calendars and gives Google time to register changes from the previous quarter's fixes before you triage the next round.

The teams that see compounding SEO growth are the ones that treat on-page SEO as an operational system — not a project with a start and end date. Screaming Frog, run on a structured cadence with consistent configuration and benchmarked data, is the infrastructure for that system.

Scheduled crawls in Screaming Frog paid version enable automated, consistent monitoring without manual re-configuration
Crawl Comparison reports catch regressions introduced by CMS or developer changes before they affect rankings
Issue velocity tracking — counting key problem types per crawl — is a leading indicator of on-page SEO health trends
Quarterly CANOPY Reviews provide a structured, comprehensive audit without requiring full audit effort every month
Monthly mini-audits focused on response codes, orphans, and new title tag issues catch fast-moving problems between full reviews
Consistent crawl configuration across every scheduled crawl is essential — changing settings between crawls makes comparison data unreliable

Frequently Asked Questions

Is the free version of Screaming Frog enough for on-page SEO audits?

The free version of Screaming Frog allows you to crawl up to 500 URLs per crawl. For small websites, personal projects, or initial site health checks, this is often sufficient. However, for professional on-page SEO audits of business websites, the paid licence removes this limit and unlocks critical features: scheduled crawls, crawl comparison, Google Search Console and Analytics API integrations, and the Link Opportunities report.

The API integrations alone — which attach GSC impression and click data to every crawled URL — make the paid version essential for any audit where you need to prioritise by business impact rather than just technical severity.

In what order should I fix the issues a crawl uncovers?

Follow the CANOPY Framework order: Content first, then Architecture (crawl depth), then Navigation (internal link equity distribution), then Orphans (pages with no inlinks), then Performance (Core Web Vitals), and finally Your On-Page Signals (title tags, meta descriptions, headings, schema). Most guides start with title tags because they are easy to identify and explain — but title tag optimisation has limited impact when crawl architecture and internal linking problems are suppressing the page's visibility. Fixing the structural layers first means your on-page signal improvements have a full opportunity to drive ranking change, rather than being offset by underlying technical limitations.

Can Screaming Frog audit JavaScript-rendered websites?

Yes, but only if you enable JavaScript rendering before crawling. Go to Configuration > Spider > Rendering and select 'JavaScript'. Without this setting, Screaming Frog crawls the raw HTML — which for JavaScript-rendered sites means crawling an empty or near-empty page structure rather than the content Google evaluates after rendering.

This produces false-positive findings: the tool will flag hundreds of pages as having thin content or missing H1 tags when the content actually exists but is rendered by JavaScript. The JavaScript rendering mode is slower and more resource-intensive, but it is the only accurate way to audit a JS-heavy site.

How do I find orphaned pages with Screaming Frog?

Orphaned pages are pages that exist on your site (and may be in your sitemap) but receive no internal links, making them invisible to crawlers navigating via links. To find them, connect your XML sitemap to Screaming Frog before crawling (Configuration > Spider > Crawl > Crawl XML Sitemaps). After crawling, go to Reports > Sitemap and export the list.

Then compare this list against your crawled URLs. Any URL that appears in the sitemap but was not discovered through link-following is orphaned. Alternatively, go to the 'Sitemaps' tab and filter for pages that appear in your sitemap but have zero inlinks.

These are your orphans — and adding even a small number of contextually relevant internal links to these pages can restore their crawlability and ranking potential.

Can Screaming Frog help improve click-through rates, not just rankings?

Yes — and this is one of the most underutilised applications of the tool. When you connect the Google Search Console API (Configuration > API Access > Google Search Console), Screaming Frog pulls impression, click, and average position data per URL into your crawl export. This allows you to filter pages by impressions above a threshold and click-through rate below your site average — identifying pages that are visible in search results but failing to earn clicks.

These pages have title tag and meta description problems, not ranking problems. Rewriting the title and description of these pages to better match the intent of their ranking queries — and to communicate a clearer reason to click — typically produces measurable CTR improvement within weeks, often faster than ranking improvement from content changes.

How often should I crawl my site?

For most business websites under 5,000 pages, a monthly full crawl paired with a quarterly CANOPY Framework review is the right cadence. Monthly crawls catch fast-moving issues — new 404s introduced by content updates, CMS changes that break title tag templates, new pages published without internal links. The quarterly CANOPY review provides a comprehensive structured analysis that goes deeper than a monthly issue check.

For high-velocity sites publishing multiple pieces of content per week, or for ecommerce sites with frequently changing inventory, bi-weekly crawls are advisable. The key is consistency: running Screaming Frog on a regular schedule with the same saved configuration enables crawl comparison — which tells you what changed between crawls, not just what the current state of the site is.

What is the difference between a canonical tag and a redirect, and how do I audit each?

Both canonical tags and redirects tell Google which URL is the 'preferred' version of content, but they work differently and serve different purposes. A redirect (typically a 301) moves traffic and link equity from one URL to another — the old URL is no longer accessible. A canonical tag keeps both URLs accessible to browsers but signals to Google that ranking signals should be consolidated on the canonical URL.

In Screaming Frog, audit both in separate passes. The 'Canonicals' tab shows canonical configurations and flags misconfigurations where the canonical points to an unexpected or incorrect URL. The 'Response Codes' tab and redirect chain report show redirect behaviours and chain lengths.

For on-page SEO, canonical misconfigurations are often more damaging than redirect chains because they silently suppress pages from ranking while those pages remain fully accessible and seemingly functional.
