Relevance Reporting measures how well your page content matches search queries. Rather than checking whether the words appear on the page, it compares the meaning of your content to the meaning of the query.
Search engines and AI systems (including Google, ChatGPT, and Perplexity) use this same approach, called semantic similarity, to decide which pages best answer a given query. Relevance Reporting applies that method to your crawl data, giving you a score for each query and page pair that reflects how closely your content aligns with what someone is actually searching for.
This is a diagnostic tool. It helps you answer questions like:
- Do my most important pages match the queries I want to rank for?
- Which page elements (title, H1, description, or body content) are pulling the relevance score down?
- Are there queries in my GSC data where rankings exist but the page content is not a strong match?
It is designed to surface mismatches and content gaps, not to provide a step-by-step optimisation workflow.
How relevance scores work
Modern search systems do not look for keywords. They evaluate meaning. To do this, they convert text into a numerical representation called an embedding: a set of values that captures the semantic content of a word, phrase, or passage. Two pieces of text with similar meanings will produce similar embeddings, even if they use different words entirely.
Relevance Reporting works the same way. When a crawl runs, Lumar generates embeddings for each query and for each page element being scored. It then measures how closely those embeddings align, a value known as cosine similarity. A page about "large-scale document organisation" can score well against a query for "enterprise content management" because the meaning is close, even if the exact words are not shared.
For short queries or near-exact string matches, a secondary string-similarity method is applied alongside the embedding comparison, as pure semantic models can be less reliable on very short inputs. The two signals are combined into a single score between 0 and 100.
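The scoring approach described above can be sketched in a few lines. This is an illustrative reconstruction, not Lumar's actual implementation: the blend weight, the use of `difflib` for string similarity, and the clamping of negative cosine values are all assumptions.

```python
import math
from difflib import SequenceMatcher

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (-1 to 1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def relevance_score(query_emb, page_emb, query_text, page_text, blend=0.8):
    """Blend semantic and string similarity into a 0-100 score.
    The 80/20 blend weight is an illustrative assumption."""
    semantic = cosine_similarity(query_emb, page_emb)
    # Simple character-level string similarity as the secondary signal
    lexical = SequenceMatcher(None, query_text.lower(), page_text.lower()).ratio()
    combined = blend * max(semantic, 0.0) + (1 - blend) * lexical
    return round(combined * 100)
```

Two texts with identical embeddings and identical wording would score 100; unrelated embeddings and unrelated wording would score 0, with real pages falling somewhere between.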
Score tiers
| Tier | Score range |
|---|---|
| Low | Below 55 |
| Moderate | 55 to 70 |
| High | Above 70 |
These tiers appear throughout the reports as colour-coded indicators: red for Low, amber for Moderate, and green for High.
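If you are processing exported scores yourself, the tier boundaries above map to a simple function (boundary handling here follows the table: 55 and 70 both fall in Moderate):

```python
def score_tier(score):
    """Map a 0-100 relevance score to its reporting tier."""
    if score < 55:
        return "Low"
    if score <= 70:
        return "Moderate"
    return "High"
```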
What gets scored
Relevance is measured independently for four page elements:
- Page title — the `<title>` tag
- H1 — the primary heading on the page
- Meta description — the page's description tag
- Main content — the body text, extracted by removing navigation, ads, and boilerplate
The main content extract is particularly important: it is the text actually sent to the embedding model, and you can view the exact extract on the query detail page. If a score looks unexpected, checking the extract is a good first step.
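To make the idea of boilerplate removal concrete, here is a deliberately naive sketch that keeps text outside navigation and chrome elements. Lumar's real extractor is more sophisticated; the set of skipped tags here is an illustrative assumption.

```python
from html.parser import HTMLParser

# Tags treated as boilerplate in this sketch (an assumption, not Lumar's list)
SKIP = {"nav", "header", "footer", "aside", "script", "style"}

class MainContentExtractor(HTMLParser):
    """Collect text that appears outside boilerplate tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside skipped tags
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.parts.append(data.strip())

def extract_main_content(html):
    parser = MainContentExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

The point of the sketch is diagnostic: if navigation or promotional text survives extraction, it dilutes the embedding, which is exactly why checking the extract is the recommended first step when a score looks off.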
Enabling Relevance Reporting
Relevance Reporting is available as a project extension. Scores are generated during each crawl once the extension is enabled.
- From your project, click Settings in the left sidebar.
- Navigate to Step 3: Extensions.
- Open the Semantic Relevance Reporting card.
- Configure your options (see below) and click Save Changes.
- Run a new crawl.
Setting up Target Search Queries
In the extension settings, you can upload a list of queries you want your site to rank for. These sit alongside any query data pulled from Google Search Console and are scored against your crawled pages on the next crawl.
You can upload a CSV or plain text file with one query per line, or paste a comma-separated list directly into the input field. A sample file is available to download from the settings panel. The maximum is 2,000 queries.
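If you are preparing a query list programmatically, the upload rules above (one query per line, or a comma-separated list, up to 2,000 queries) suggest a validator along these lines. The deduplication behaviour is an assumption on my part, not documented product behaviour:

```python
MAX_QUERIES = 2000  # upload limit stated in the settings panel

def parse_target_queries(raw, max_queries=MAX_QUERIES):
    """Parse a query upload: one query per line, or a single
    comma-separated line. Deduplicates case-insensitively and
    enforces the upload limit."""
    lines = [line.strip() for line in raw.splitlines() if line.strip()]
    if len(lines) == 1 and "," in lines[0]:
        lines = [q.strip() for q in lines[0].split(",") if q.strip()]
    seen, queries = set(), []
    for q in lines:
        key = q.lower()
        if key not in seen:
            seen.add(key)
            queries.append(q)
    if len(queries) > max_queries:
        raise ValueError(f"Too many queries: {len(queries)} > {max_queries}")
    return queries
```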
Google Search Console data
Use the Search Queries data toggle to enable or disable GSC query data. When enabled, Lumar pulls observed queries from your connected GSC account and scores them in the same way as your uploaded target queries.
The three reporting sections
Data is organised into three sections, accessible from the Relevance heading in the left sidebar.
Target Search Queries
This section covers the queries you have uploaded, the ones you are actively trying to rank for. It shows whether those queries have well-matched landing pages on your site, and where the gaps are.
Because these queries come from your own list rather than from GSC, traffic data (clicks, impressions, CTR, position) will not be populated unless a target query also appears in your GSC data.
Observed Search Queries
This section covers queries pulled from Google Search Console, the queries Google already associates with your site. Because these come from GSC, traffic data is available alongside the relevance scores.
That combination is useful for prioritisation. A query that drives meaningful traffic to a page with a Low relevance score is worth investigating: it suggests the page may be ranking for reasons other than content alignment, which is an unstable position.
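That prioritisation logic is easy to apply to exported data. The field names and thresholds below are illustrative assumptions, not product defaults:

```python
def priority_queries(rows, min_clicks=100, low_threshold=55):
    """Flag queries worth investigating: meaningful traffic landing on
    a page whose relevance score falls in the Low tier. The click and
    score thresholds are illustrative, not product defaults."""
    flagged = [
        r for r in rows
        if r["clicks"] >= min_clicks and r["score"] < low_threshold
    ]
    # Highest-traffic mismatches first
    return sorted(flagged, key=lambda r: r["clicks"], reverse=True)
```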
Page Relevance
Rather than starting from a query, this section starts from the page. It shows how your crawled pages score across five relevance dimensions (title, H1, description, content, and query relevance) and is useful for auditing specific pages or identifying pages that score poorly across the board.
Reading the reports
Dashboards
Each section opens on a dashboard with an overall relevance score: the average relevance of top landing pages to their associated queries. Below that you will find a distribution chart showing the proportion of queries in each tier, a trend chart tracking changes across crawls, and a summary of flagged issues.
Reports and errors
The All Reports tab lists every report available in the section. The Errors tab shows the subset of reports that flag potential issues. For Target Search Queries this covers Low Relevance landing pages. For Observed Search Queries there are additional types including Poorly Matched Landing Pages, Multiple Landing Pages, and Cannibalizing Pages.
Data table
The main data table lists all queries in the section. Each row shows the query, any available traffic data, the number of associated landing pages, and a nested table of those pages with their relevance scores. Clicking a query opens the query detail page.
Query detail page
The query detail page is where the diagnostic value is clearest. It shows a side-by-side comparison of two landing pages for the selected query, with element-level scores for each page element and the full main content extract used for scoring.
A page with a strong content score but a weak title score suggests the body addresses the topic but the title does not signal it clearly to search engines. This kind of breakdown helps you understand not just that a page scores poorly, but why.
From the query detail page, clicking any page URL takes you to the Resource Detail view for that page, where you can access Content Suggestions — AI-generated alternative copy for your title, H1, and description, scored against your queries.
For a full walkthrough of the query detail page and Content Suggestions, see Relevance Reporting — Query Detail and Content Suggestions.
The Is Target Search Query metric
Each query record includes a metric called Is Target Search Query, set to Yes for uploaded queries and No for GSC queries. You can use this to filter or segment data in custom reports.
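When working with exported query records, that flag gives a clean partition between the two query sources. The record structure below is a hypothetical sketch:

```python
def split_by_source(records):
    """Partition query records using the Is Target Search Query flag.
    Record shape is an assumption for illustration."""
    target = [r for r in records if r["is_target_search_query"]]
    observed = [r for r in records if not r["is_target_search_query"]]
    return target, observed
```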
Frequently asked questions
Why is traffic data showing as dashes for my target queries? Traffic data comes from Google Search Console. Queries you have uploaded will not have traffic data unless the same query also appears in your GSC account.
How many target queries can I upload? Up to 2,000 queries per project.
Why does the main content score look different from what I expected? The score is based on the text extract Lumar uses for the embedding, not the full page source. You can view the exact extract on the query detail page. If it contains irrelevant content or looks incomplete, that will affect the score.
What does a Low relevance score mean? A score below 55 indicates that the page element and the query are not closely aligned semantically. It is a signal worth investigating, not a definitive verdict. Consider the page's purpose and the intent behind the query before drawing conclusions.
Why is search volume empty for some queries? Search volume data comes from a third-party provider and reflects global monthly averages. Empty cells mean the query did not meet the provider's minimum volume threshold. This is expected for niche or long-tail queries and does not affect relevance scoring.
Will improving my relevance scores improve my rankings? Relevance Reporting uses the same fundamental approach that search engines and AI systems use to evaluate content. However, rankings are influenced by many factors beyond content alignment. These scores are a useful diagnostic input, but there is no direct or guaranteed relationship between score changes and ranking changes.
Can I get suggestions for how to improve my page copy? Yes. From the query detail page, clicking a page URL opens the Resource Detail view, which includes a Content Suggestions section. This generates AI-powered alternative copy for your title, H1, and description based on your search queries. See Relevance Reporting — Query Detail and Content Suggestions for more detail.