
We scanned 500 business websites across 12 industries and measured their structural readiness for AI search engines, generative answer engines, and AI agents. The results paint a clear picture: most websites are not ready for how people are starting to find businesses.

34/100
Average AI readiness score across 500 websites

What we measured

Every website was run through our 44-check scan covering four dimensions: Findability (can search engines crawl and index you?), Answerability (can AI answer questions about you?), Citability (will AI cite you as a source?), and Agent Readiness (can AI agents interact with your site?).

The scan is fully automated and structural. We check what a machine can observe about your website without any subjective judgement. We validated this methodology against 40 sites with known AI visibility characteristics, achieving an AUC of 0.94 — meaning the score reliably separates well-built sites from poorly-built ones.
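An AUC of 0.94 has a concrete reading: pick one site known to be well-built and one known to be poorly-built at random, and the well-built one receives the higher score about 94% of the time. A minimal sketch of that rank-based calculation, using made-up illustrative scores rather than our benchmark data:

```python
def auc(positive_scores, negative_scores):
    """Rank-based AUC: the probability that a randomly chosen positive
    outscores a randomly chosen negative, counting ties as half a win."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Illustrative scores only (not from the benchmark)
well_built = [72, 81, 65, 90]
poorly_built = [30, 45, 22, 70]
print(auc(well_built, poorly_built))  # → 0.9375
```

A score of 1.0 would mean perfect separation; 0.5 would mean the score carries no signal at all.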

The score distribution

The distribution is heavily skewed toward the lower end. Most sites cluster between 20 and 45, with a long tail of well-optimised sites pulling the average up.

Score distribution — 500 websites

Grade A (80-100): 3% of sites
Grade B (60-79): 11% of sites
Grade C (40-59): 24% of sites
Grade D (20-39): 41% of sites
Grade F (0-19): 21% of sites

62% of websites scored below 40 — a Grade D or F. These sites have fundamental structural gaps that make it difficult for AI systems to find, understand, or interact with them.

The five most common failures

Some issues appeared across the majority of sites we scanned. These are not edge cases — they are systemic gaps in how business websites are built today.

Most common failures

No structured data (Schema.org): 72% fail
No FAQ or Q&A content structure: 68% fail
No AI crawler access policy: 91% fail
Content requires JavaScript to render: 44% fail
No cited sources or evidence in content: 58% fail

The AI crawler access policy failure rate is striking. 91% of websites have not explicitly addressed how AI crawlers should interact with their content. This matters because AI systems are increasingly respecting robots.txt directives and looking for explicit machine-readable permissions.
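Addressing this is largely a robots.txt exercise. The sketch below shows what an explicit policy might look like; the user-agent strings shown (GPTBot, ClaudeBot, Google-Extended) are real AI crawler names at the time of writing, but the list changes, so verify current names against each provider's documentation before relying on it:

```text
# robots.txt — explicit AI crawler policy (illustrative)

User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

# Default rules for all other crawlers
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

The point is not whether you allow or disallow, but that the policy is stated explicitly rather than left ambiguous.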

What separates the top performers

The top 14% of sites (Grade A and B) share a consistent set of characteristics that the bottom tier lacks. None of these are expensive to implement — they are structural choices that most web developers already know how to make.

Top performer characteristics

Structured data on every page: 94% of A/B sites
Content renders without JavaScript: 89% of A/B sites
Named authors with credentials: 76% of A/B sites
Answer-ready paragraph structure: 82% of A/B sites
Explicit AI crawler policy: 61% of A/B sites

The biggest single differentiator is structured data. Nearly all top-performing sites implement Schema.org markup — most commonly Organization, LocalBusiness, FAQ, and Article schemas. The bottom-tier sites almost universally lack it.
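For sites starting from zero, the markup is typically embedded as JSON-LD in the page head. A minimal sketch with placeholder values (the `@type` names `Organization` and `FAQPage` are standard Schema.org types; everything else here is invented for illustration):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Ltd",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png"
}
</script>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What services do you offer?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A plain-language answer a machine can quote directly."
    }
  }]
}
</script>
```

Validating the output with a structured-data testing tool before deploying is worthwhile, as malformed JSON-LD is silently ignored by crawlers.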

Industry breakdown

Performance varies significantly by industry. Technology companies and publishers tend to score highest, while professional services and local businesses lag behind.

Average score by industry

Technology / SaaS: 52
Media / Publishing: 48
E-commerce: 39
Financial Services: 35
Healthcare: 31
Legal Services: 29
Accountancy: 27
Local Services: 22

The gap between tech companies (52 average) and local services (22 average) represents a structural divide. Tech companies build with machines in mind. Local service businesses build for humans only. As AI becomes the intermediary between humans and businesses, that divide becomes a competitive disadvantage.

Agent readiness: the dimension nobody checks

Agent readiness scored lowest across every industry. The average agent readiness score was 18 out of 100. Only 2% of sites scored above 60 on this dimension.

This is not surprising — AI agents interacting with websites is a new behaviour. But it is already happening. AI assistants are being asked to find service providers, compare options, fill out enquiry forms, and book appointments. Websites that are structurally inaccessible to these agents are invisible to an emerging class of potential customers.

The websites that fix their structural readiness now will have a compounding advantage. Every month that passes, more searches happen through AI. The gap between ready and not-ready gets wider, not narrower.

Methodology note

This benchmark was conducted using our automated 44-check scan. Websites were selected to represent a cross-section of business sizes (sole trader to enterprise) across 12 industries in the UK. Scans were conducted between February and March 2026. Our scoring methodology was validated against 40 sites with known AI visibility characteristics, achieving an area under the ROC curve (AUC) of 0.94.

The scores measure structural readiness — whether your website is technically built in a way that AI systems can work with. They do not predict whether AI will cite you, because citation also depends on reputation, authority, and topical relevance, which are built over time. We measure the part you can fix today.

Where does your website stand?

Run the same 44-check scan we used for this research. Free, instant, no signup.

Scan My Website