Here is the advice almost every keyword research guide gives you: find keywords with high volume and low difficulty, create content targeting those keywords, and watch the rankings roll in. It sounds logical. It is also, in practice, consistently wrong — or at least dangerously incomplete.
When I started doing keyword research at scale, I made the same mistake. I optimised for the number. Volume looked like opportunity.
Low KD looked like an open door. What I consistently found instead was that the content we built around those 'perfect' metrics either attracted the wrong audience, failed to convert, or got swallowed by a competitor with a stronger domain the moment they bothered to publish.
The problem is not keyword research as a discipline. The problem is that most guides teach you how to use tools — not how to think about search intent, competitive positioning, and the underlying buyer psychology that makes one keyword worth ten times another.
This guide is different in three specific ways. First, it introduces two frameworks — the Intent Depth Score and the Competitor Leak Method — that we developed to solve real ranking and conversion problems, not to fill a blog post. Second, it deals honestly with what AI search is doing to keyword strategy right now, including which types of keywords are becoming less valuable and which are becoming more defensible.
Third, it gives you a sequenced 30-day plan so the tactics don't sit in your browser history unused.
If you have done basic keyword research before and want to go deeper — this is where you go next.
Key Takeaways
1. High search volume is a trap — the 'Intent Depth Score' framework reveals which low-volume terms drive the highest-value conversions
2. The 'Competitor Leak Method' uncovers keywords your rivals rank for accidentally — terms they never intentionally targeted but still pull traffic from
3. Semantic clustering is not the same as topic clustering — learn the difference before you build another content silo
4. Search intent has four tiers, but there's a hidden fifth tier most SEOs never map: the 'transition moment' where buyers shift from research to decision
5. Your best keywords are often inside your own analytics — the 'Internal Search Mining' tactic extracts gold you already own
6. Modifier stacking — combining qualifier types systematically — is the fastest way to uncover non-obvious long-tail clusters at scale
7. AI Overviews are reshaping which keywords are worth targeting; use the 'SGE Displacement Test' to filter your list before you write a single word
8. Entity-first keyword research changes how you build topical authority — starting with entities instead of phrases accelerates your ranking timeline significantly
9. The 30-day action plan inside this guide gives you a sequenced, week-by-week process to go from blank sheet to fully mapped keyword architecture
1. The Intent Depth Score: Why Not All Low-Difficulty Keywords Are Equal
The Intent Depth Score (IDS) is a framework we use internally to rank keyword opportunities not by volume or difficulty alone, but by the combination of specificity, commercial signal, and transition proximity — how close the searcher is to making a decision.
Here is how it works. Every keyword sits somewhere on a spectrum from pure awareness (the searcher is learning a concept exists) to pure transaction (the searcher has a credit card ready). Most keyword tools show you this as 'informational' versus 'commercial' versus 'transactional' intent.
That is a start, but it is not granular enough to be actionable.
IDS adds a third axis: transition proximity. A keyword like 'what is CRM software' is informational. A keyword like 'CRM software comparison' is commercial.
But a keyword like 'switching from spreadsheets to CRM software' is something different entirely — it is a transition-moment keyword, where the searcher is actively moving from one state to another. These keywords are often lower in volume, routinely overlooked in standard research, and disproportionately high in conversion value.
To calculate a rough IDS for any keyword, ask three questions:
1. How specific is the query? Generic queries score low; queries with explicit modifiers score high.
2. Does the query contain a commercial or decision signal? Words like 'best', 'vs', 'alternative', 'for [specific use case]' score high.
3. Does the query imply a state change? If the searcher is describing a problem they want to leave or a goal they are moving toward, score it high.
A keyword that scores high on all three axes is almost always more valuable than a high-volume keyword that scores low on all three — regardless of what the difficulty metric says.
In practice, this means scanning your keyword lists not just for volume thresholds, but for the presence of transition language. Words like 'switch', 'replace', 'migrate', 'instead of', 'without', 'before I', and 'how to stop' are reliable transition-moment indicators. Build a filter for these in your keyword tool of choice and you will uncover a layer of opportunity that most of your competitors are not deliberately targeting.
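If you work from spreadsheet exports, this filter is easy to automate. Here is a minimal Python sketch of a rough IDS pass; the signal lists, word-count threshold, and plain substring matching are illustrative assumptions to adapt to your own niche and tool output.

```python
# Illustrative signal lists: extend these for your own niche.
TRANSITION_TERMS = ["switch", "replace", "migrate", "instead of", "without",
                    "before i", "how to stop"]
COMMERCIAL_TERMS = ["best", "vs", "alternative", "comparison", "for "]

def rough_ids(keyword: str) -> int:
    """Score a keyword 0-3 on specificity, commercial signal, and transition signal."""
    kw = keyword.lower()
    score = 0
    # 1. Specificity: modifier-heavy queries tend to be longer.
    if len(kw.split()) >= 5:
        score += 1
    # 2. Commercial or decision signal (plain substring match; rough by design).
    if any(term in kw for term in COMMERCIAL_TERMS):
        score += 1
    # 3. Transition language implies a state change.
    if any(term in kw for term in TRANSITION_TERMS):
        score += 1
    return score

keywords = [
    "what is crm software",
    "crm software comparison",
    "switching from spreadsheets to crm software",
]
for kw in sorted(keywords, key=rough_ids, reverse=True):
    print(rough_ids(kw), kw)
```

The transition-moment keyword rises to the top of the list even though it would sit near the bottom of any volume-sorted export.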
2. The Competitor Leak Method: Mining Accidental Rankings for Deliberate Strategy
Every established website in your niche ranks for keywords it never intentionally targeted. These are accidental rankings — pages that picked up search traffic because the content happened to use relevant language, because a linking site used anchor text that matched a search query, or because the page had sufficient authority to rank for related terms despite no on-page optimisation.
The Competitor Leak Method is a systematic process for identifying these accidental rankings, evaluating whether the underlying search intent is genuinely valuable, and then creating content that targets that intent deliberately — with full on-page and structural optimisation that the accidental ranking page lacks.
Here is the three-step process:
Step 1: Export the full keyword ranking data for your top two or three competitors. You want every keyword they rank for, not just the ones driving the most traffic. Most crawl tools will let you export this at scale.
Step 2: Filter for keywords where your competitor ranks in position 8-20 and the ranking page is not the homepage, not a category page, and not a page that appears to have been written with SEO intent. You are looking for blog posts that mention a topic in passing, product pages that include a phrase incidentally, or older content that was never updated. These are your leaks — positions held by weak pages.
Step 3: For each identified leak, check the search intent. Does the current ranking page actually satisfy what the searcher wants? If the answer is 'partially' or 'not really', you have found an opportunity.
You can create a dedicated, intent-matched piece of content, build relevant internal links to it, and with a moderate amount of authority, outrank a page that was never trying to rank in the first place.
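At the scale of a full export, Step 2 is worth scripting. Here is a minimal pandas sketch assuming a CSV export with 'keyword', 'position', and 'url' columns; column names vary by tool, so rename to match yours, and the 'written with SEO intent' judgment still needs a manual pass.

```python
from urllib.parse import urlparse

import pandas as pd

# Assumed columns: 'keyword', 'position', 'url'. Rename to match your tool's export.
df = pd.read_csv("competitor_rankings.csv")

def looks_accidental(url: str) -> bool:
    """Crude heuristic: exclude the homepage and obvious structural pages."""
    path = urlparse(url).path.strip("/")
    if not path:
        return False                                  # homepage
    if path.startswith(("category", "tag")):
        return False                                  # category / hub pages
    return True

leaks = df[
    df["position"].between(8, 20)                     # weakly held positions
    & df["url"].astype(str).map(looks_accidental)
]
leaks.to_csv("candidate_leaks.csv", index=False)      # review these manually for intent
print(f"{len(leaks)} candidate leak keywords to review")
```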
The reason this works so consistently is that accidental rankings represent undefended territory. Your competitor either does not know they hold these positions or does not value them enough to reinforce them. Either way, they are unlikely to react quickly when you move in.
We have used this method to identify entire content clusters that were essentially uncontested — not because no one cared about the topic, but because no one had thought to look for them this way.
3. Semantic Clustering vs. Topic Clustering: A Distinction That Changes Your Architecture
Most SEO practitioners use the terms 'topic cluster' and 'semantic cluster' interchangeably. They are not the same thing, and confusing them leads to content architecture that looks logical in a spreadsheet but underperforms in search.
A topic cluster is an organisational structure. You have a pillar page covering a broad subject, and supporting pages covering subtopics. The logic is hierarchical and editorial: this page is about email marketing broadly, and these six pages cover specific aspects of it.
A semantic cluster is a ranking structure. It is built around how search engines understand relationships between terms — not how humans organise information into categories. Two pages can cover completely different topics in a human editorial sense and belong to the same semantic cluster from a ranking perspective, because the underlying entities, co-occurring terms, and search journeys connect them.
Here is a practical example. 'How to reduce churn in SaaS' and 'customer success metrics for B2B software' might live in different topic clusters in your content plan — one in 'retention', one in 'analytics'. But semantically, they share entities (SaaS, churn, customer success) and they are often consumed by the same searcher in the same research session. Building them to support each other, cross-referencing them, and structuring your internal linking to reflect their semantic relationship will do more for their rankings than assigning them to separate pillar structures.
For advanced keyword research, this means that when you group keywords, you should group them by shared entities and co-occurring search terms — not just by subject matter as you would organise a textbook. Tools that show you related keywords based on SERP overlap (which pages rank for multiple keywords in a group simultaneously) are reflecting semantic clusters, not topic clusters. That SERP-overlap grouping is the one worth building around.
The practical implication: before you finalise your content architecture from any keyword research exercise, run a SERP overlap analysis on your grouped keywords. If the same pages keep appearing across different groups, those groups belong in the same semantic cluster — regardless of how different they look editorially.
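One way to run that check, assuming you can export the top-ranking URLs for each keyword from your rank tracker or a SERP data source, is a simple pairwise overlap score. A sketch with made-up data; the keywords, URLs, and threshold are illustrative.

```python
from itertools import combinations

# Maps each keyword to the URLs ranking in its top results.
# In practice you would populate this from a rank-tracking or SERP export.
serp_results = {
    "how to reduce churn in saas": {"a.com/churn", "b.com/retention", "c.com/cs-metrics"},
    "customer success metrics for b2b software": {"c.com/cs-metrics", "b.com/retention", "d.com/kpis"},
    "email subject line examples": {"e.com/subjects", "f.com/copy"},
}

def serp_overlap(kw_a: str, kw_b: str) -> float:
    """Jaccard similarity of the two keywords' top-ranking URL sets."""
    a, b = serp_results[kw_a], serp_results[kw_b]
    return len(a & b) / len(a | b)

# Keywords whose SERPs share pages belong in the same semantic cluster.
THRESHOLD = 0.3
for kw_a, kw_b in combinations(serp_results, 2):
    if serp_overlap(kw_a, kw_b) >= THRESHOLD:
        print(f"cluster together: '{kw_a}' + '{kw_b}'")
```

The churn and customer-success keywords cluster together here despite living in different editorial categories, which is exactly the signal you are looking for.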
4. The SGE Displacement Test: Filtering Your Keyword List for AI-Search Survival
AI-generated search overviews are changing which keywords are worth targeting — and if you are building a keyword strategy without accounting for this, you risk investing months of content effort into terms where organic click-through has been structurally reduced.
The [SGE Displacement Test](/guides/can-seo-and-geo-strategies-work-together-for-better-results) is a pre-production filter we apply to every keyword list before committing to content creation. The goal is to identify which keywords are likely to trigger an AI overview that satisfies the search intent without requiring a click, and deprioritise those in favour of keywords where a click is still the primary way the searcher gets what they need.
Here is how to run the test:
First, check the live SERP for each keyword. If an AI overview is already present and it fully answers the query in the snippet — definitions, step-by-step processes, comparison tables — the click-through rate to organic results is likely lower than historical averages for that query type. Informational keywords with simple, definitive answers are most vulnerable.
Second, assess what the searcher needs to do after getting the answer. If the answer is the destination (e.g., 'what does CAC stand for'), AI displacement is high. If the answer is a step toward something the searcher still needs to do — evaluate a tool, read a detailed guide, apply a process to their specific context — the keyword has click-pull despite the overview.
These are your safer investments.
Third, identify keyword types that AI overviews structurally cannot satisfy: keywords with strong personal context ('best CRM for a two-person sales team in professional services'), keywords requiring recent or proprietary data, keywords where the searcher needs to compare specific vendor options, and keywords where trust and credibility are part of what the searcher is evaluating. These are increasingly valuable precisely because AI cannot commoditise them.
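You can encode the verdict logic so the filter is applied consistently across a large list. A minimal sketch; the three flags are judgments you record while reviewing each live SERP, not fields any API provides.

```python
from dataclasses import dataclass

@dataclass
class KeywordCheck:
    keyword: str
    ai_overview_present: bool      # check 1: does the live SERP show an overview?
    answer_is_destination: bool    # check 2: does the snippet fully satisfy the intent?
    needs_context_or_trust: bool   # check 3: personal context, fresh data, vendor comparison, trust?

def displacement_verdict(k: KeywordCheck) -> str:
    if k.needs_context_or_trust:
        return "defensible"        # overviews struggle to satisfy this intent
    if k.ai_overview_present and k.answer_is_destination:
        return "deprioritise"      # click-through likely structurally reduced
    return "monitor"               # re-check the SERP before committing to content

checks = [
    KeywordCheck("what does cac stand for", True, True, False),
    KeywordCheck("best crm for a two-person sales team", True, False, True),
]
for c in checks:
    print(displacement_verdict(c), "-", c.keyword)
```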
The net result of running this filter is a keyword list that is more defensible, not just more rankable in the short term. This matters particularly for founders and operators investing in SEO as a long-term channel — you want traffic that compounds, not traffic that gets displaced in the next model update.
5. Entity-First Keyword Research: How to Build Topical Authority Faster
Traditional keyword research starts with phrases. You enter a seed term, get a list of related queries, group them, and build content. Entity-first keyword research starts one layer deeper — with the concepts, people, places, products, and organisations that Google's Knowledge Graph uses to understand what topics mean.
Starting with entities rather than phrases changes what you find and how you structure what you build.
Here is the practical process. For any niche you want to build authority in, identify the core entities that define the space. For a B2B SaaS brand focused on project management, those entities might include specific methodologies (Agile, Scrum, Kanban), specific roles (project manager, scrum master, ops lead), specific outcomes (sprint planning, resource allocation, project retrospectives), and specific tool categories (project tracking software, task management platforms).
Once you have mapped the entity landscape, use your keyword tool to find search queries that contain or closely relate to those entities. You are not looking for the phrase — you are looking for which queries have that entity as their core subject. This surfaces keyword clusters that are genuinely distinct and comprehensive, rather than keyword clusters that are just phrasing variations of the same thing.
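A simple way to apply this at list scale is to tag each query with the entities it mentions and group on the entity rather than the phrasing. A rough sketch, with an illustrative entity map you would replace with your own.

```python
from collections import defaultdict

# Illustrative entity map for a project-management niche: entity -> surface forms.
ENTITIES = {
    "agile": ["agile"],
    "scrum": ["scrum", "scrum master"],
    "kanban": ["kanban"],
    "sprint planning": ["sprint planning", "sprint plan"],
}

queries = [
    "how long should sprint planning take",
    "scrum master vs project manager",
    "kanban board examples for ops teams",
]

clusters = defaultdict(list)
for q in queries:
    for entity, surface_forms in ENTITIES.items():
        if any(form in q.lower() for form in surface_forms):
            clusters[entity].append(q)      # group by core entity, not by phrasing

for entity, grouped in clusters.items():
    print(entity, "->", grouped)
```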
The ranking benefit is significant. When Google can identify that your site covers a defined set of entities with depth and consistency, it assigns topical authority to your domain for those entities — which means pages you publish later rank faster and with less link acquisition required. This is the compounding mechanism that makes SEO genuinely scalable.
Entity-first research also surfaces keywords you would not find through standard seed-term expansion. Entities that are adjacent to your core topic — related but not obvious — often carry commercial intent that your competitors have not mapped. Finding these adjacencies is one of the more reliable ways to build a content moat that is difficult to replicate quickly.
6. Internal Search Mining: The Keyword Gold You Already Own
One of the consistently underused sources of keyword intelligence is the search data your own site is already generating — and this is especially true for businesses that have been operating online for more than a year.
If your site has an internal search function, every search query a visitor types is a direct signal of what they wanted but could not find through your navigation or existing content. This is not keyword research in the conventional sense — it is demand intelligence, and it is the most contextually accurate signal you have access to because it comes from people who are already qualified enough to be on your site.
Here is how to extract value from it. First, pull your internal site search data from your analytics platform. Look for queries that appear repeatedly, even if individually they seem low-frequency.
Recurring internal searches indicate recurring unmet needs — and unmet needs are content opportunities.
Second, cross-reference those internal search terms against your existing content inventory. If a term is being searched internally and you have no page targeting it, that is an immediate content gap to fill. If you have a page that nominally covers the topic but is not appearing in internal search results as a satisfying destination, the page may be failing on relevance or structure.
Third, take your most-searched internal terms and run them through your external keyword tool. You will often find that what your existing audience is searching for internally maps directly to external search queries — which means you can build or optimise content that serves both your existing visitors and new organic searchers simultaneously.
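If your analytics platform lets you export internal search terms, the cross-reference is a few lines of pandas. A sketch assuming two illustrative CSVs, one of internal searches and one of your content inventory with the keyword each page targets; the file names, columns, and recurrence threshold are assumptions.

```python
import pandas as pd

searches = pd.read_csv("internal_site_search.csv")   # assumed column: 'search_term'
inventory = pd.read_csv("content_inventory.csv")     # assumed column: 'target_keyword'

# Step 1: recurring internal searches, even if individually low-frequency.
counts = (searches["search_term"].str.lower().str.strip()
          .value_counts()
          .loc[lambda s: s >= 3])                     # recurrence threshold: tune to your traffic

# Step 2: cross-reference against what you already target.
covered = set(inventory["target_keyword"].str.lower().str.strip())
gaps = counts[~counts.index.isin(covered)]

print("Recurring internal searches with no targeting page:")
print(gaps.head(20))
```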
The method also works in reverse. Pages with unusually high internal search departure rates — where visitors land, then immediately use internal search to find something else — are pages that are not fully satisfying their searchers. These are your clearest on-site signals of intent mismatch, and fixing them is often faster than building net-new content.
7. Modifier Stacking: The Systematic Method for Uncovering Long-Tail Clusters at Scale
Long-tail keyword research is usually described as 'adding qualifiers' to a head term. That description, while accurate, obscures a more systematic approach that can surface dozens of non-obvious clusters in a fraction of the time.
Modifier stacking is the process of combining multiple qualifier types simultaneously against a core term or entity, then filtering the resulting keyword set for intent and commercial value. Instead of adding one qualifier and seeing what the tool returns, you build a matrix of qualifier types and run them in combination.
The qualifier types worth stacking fall into five categories:
1. Role qualifiers: Who is the searcher? ('for founders', 'for enterprise teams', 'for freelancers')
2. Stage qualifiers: Where are they in their journey? ('beginner', 'advanced', 'when to', 'before you')
3. Constraint qualifiers: What limitation are they working within? ('without a budget', 'in under an hour', 'with no team')
4. Outcome qualifiers: What result do they want? ('that converts', 'that ranks', 'that scales')
5. Comparison qualifiers: What are they evaluating against? ('vs', 'alternative to', 'instead of', 'better than')
When you stack two qualifiers from different categories against a core term, you find keywords that are extremely specific, often have very low competition because no one has deliberately targeted them, and tend to attract searchers who are precisely in the decision-making phase you want to reach.
For example, instead of 'keyword research tools', try 'keyword research tools for founders without a dedicated SEO team' — or break that into individually targetable modifiers: 'keyword research for founders', 'keyword research without SEO experience', 'keyword research on a limited budget'. Each of these is a distinct, targetable query cluster with intent that is far sharper than the head term.
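Generating the stacked combinations is mechanical once the matrix exists. A small sketch, with an illustrative qualifier matrix you would replace with your own; the output is a candidate list to validate in your keyword tool, not a finished target list.

```python
from itertools import combinations, product

CORE_TERM = "keyword research"

# Illustrative qualifier matrix: category -> example qualifiers.
QUALIFIERS = {
    "role":       ["for founders", "for freelancers"],
    "stage":      ["for beginners", "before you hire an agency"],
    "constraint": ["without an seo team", "on a limited budget"],
    "outcome":    ["that actually converts"],
    "comparison": ["vs paid ads"],
}

stacked = []
# Pair qualifier categories, then combine one qualifier from each against the core term.
for cat_a, cat_b in combinations(QUALIFIERS, 2):
    for qual_a, qual_b in product(QUALIFIERS[cat_a], QUALIFIERS[cat_b]):
        stacked.append(f"{CORE_TERM} {qual_a} {qual_b}")

print(len(stacked), "stacked candidates, for example:")
for candidate in stacked[:5]:
    print("-", candidate)
```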
The method scales because once you have built your qualifier matrix for one core term, you can apply the same matrix to every core term in your niche. This is how you go from a keyword list of 200 terms to a keyword architecture of 2,000 terms — systematically, not randomly.
