WebAIM Million 2026: Same Six Failures, Worse Numbers Tell the Real Story
David · AI Research Engine
Analytical lens: Balanced
Higher education, transit, historic buildings
Generated by AI · Editorially reviewed

56,114,377 Errors and the One Number That Explains the Rest
The 2026 WebAIM Million report detected 56,114,377 distinct accessibility errors across the home pages of the top one million websites. That's an average of 56.1 errors per page — a 10.1% increase from 51 errors per page in 2025, reversing a multi-year downward trend. 95.9% of home pages now contain WCAG 2 A/AA failures detectable by automated tools, up from 94.8% last year.
The same six failure categories continue to dominate, accounting for 96% of all errors detected. Addressing just those six categories would meaningfully improve the experience for millions of disabled users.
But there's a buried finding in this year's report that explains why the numbers keep getting worse — and it's not about the six failures at all.
The ARIA Paradox
WebAIM's data shows that pages using ARIA contain an average of 59.1 errors, while pages without any ARIA average just 42 errors. That is not a typo. Adding ARIA correlates with 17 more accessibility barriers per page, not fewer.
ARIA usage has exploded. The average page now contains 133+ ARIA attributes, up from 22 in 2019 — a six-fold increase in seven years. The single-year jump from 2025 to 2026 was 27%. Pages now average 23.3 instances of aria-hidden="true" alone, up 30% from 2025 and over 250% since 2020. The tabindex attribute appears 30.4 times per page, up 17% in one year and nearly 300% since 2020.
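The scale of that growth is easy to picture with a crude scan. A minimal sketch below tallies ARIA and tabindex attributes in an HTML string; it is illustrative only, since WebAIM's WAVE engine analyzes the rendered DOM rather than raw markup, and a regex misses attributes set by script:

```javascript
// Rough tally of ARIA and tabindex usage in a raw HTML string.
// Illustrative sketch only: real crawlers parse the rendered DOM, and this
// regex approach cannot see attributes added dynamically by JavaScript.
function tallyAriaUsage(html) {
  const aria = html.match(/\saria-[a-z]+\s*=/gi) ?? [];
  const hidden = html.match(/\saria-hidden\s*=\s*["']?true/gi) ?? [];
  const tabindex = html.match(/\stabindex\s*=/gi) ?? [];
  return { aria: aria.length, ariaHiddenTrue: hidden.length, tabindex: tabindex.length };
}

const sample = `
  <div aria-hidden="true"><span aria-label="close">×</span></div>
  <div role="dialog" aria-modal="true" tabindex="-1">…</div>`;
// tallyAriaUsage(sample) → { aria: 3, ariaHiddenTrue: 1, tabindex: 1 }
```

Run over a top-million home page, a counter like this would report the 133-attribute averages the report describes.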
ARIA was designed to bridge gaps where native HTML couldn't express semantics. In practice, it has become a way to wallpaper over inaccessible custom components — and the wallpaper itself is causing harm. WebAIM's own 2026 conclusion: "Pages utilizing ARIA were associated with significantly more errors than those that did not."
This is the systemic story of the 2026 report. Organizations are layering more accessibility code onto pages without building the discipline to use it correctly.
The Six Failures: Some Better, Most Worse
The categories themselves are not exotic. They are the same patterns that have dominated the WebAIM Million for eight consecutive years.
| Failure Type | 2026 | 2025 | Direction |
|---|---|---|---|
| Low contrast text | 83.9% | 79.1% | ⬆ Worse |
| Missing alt text | 53.1% | 55.5% | ⬇ Better |
| Missing form labels | 51.0% | 48.2% | ⬆ Worse |
| Empty links | 46.3% | 45.4% | ⬆ Worse |
| Empty buttons | 30.6% | 29.6% | ⬆ Worse |
| Missing document language | 13.5% | 15.8% | ⬇ Better |
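The last row, missing document language, is the cheapest failure in the table to detect and fix: a single attribute on the root element. A simplified sketch of the check, assuming static markup rather than a parsed DOM:

```javascript
// Detects the "missing document language" failure: no lang attribute on <html>.
// Simplified sketch; real checkers also validate the language subtag itself.
function hasDocumentLanguage(html) {
  const openTag = html.match(/<html\b[^>]*>/i);
  return !!openTag && /\slang\s*=\s*["']?[a-z]{2}/i.test(openTag[0]);
}

hasDocumentLanguage(`<html lang="en"><head>`); // true
hasDocumentLanguage(`<html><head>`);           // false, the 13.5% failure
```

That this single-attribute fix still fails on 13.5% of pages says more about workflow than difficulty.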
Five of six categories got worse year-over-year. Low contrast — already the most prevalent failure — climbed nearly 5 percentage points and now appears on 83.9% of pages. The average affected page has 34 distinct low-contrast text instances, a 15% jump from 2025.
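The contrast failure itself is a pure formula: WCAG 2 derives a contrast ratio from the relative luminance of the foreground and background colors. A self-contained sketch of that computation, assuming six-digit hex colors and no transparency:

```javascript
// WCAG 2 relative luminance: linearize each sRGB channel, then weight
// as 0.2126 R + 0.7152 G + 0.0722 B.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), from 1:1 up to 21:1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio("#000000", "#ffffff"); // 21, the maximum
contrastRatio("#767676", "#ffffff"); // ≈ 4.54, just clearing the 4.5:1 AA bar
```

The formula is deterministic and trivially automatable, which is what makes an 83.9% failure rate so damning: every one of those instances was detectable before it shipped.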
The two categories that improved share something in common: alt text and document language are both server-renderable, single-decision fixes that benefit from CMS automation. The categories that got worse — contrast, form labels, empty links, empty buttons — are all properties of dynamic, interactive components. They reflect what teams ship, not what content authors enter.
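Empty links illustrate why these failures cling to interactive components. A deliberately naive detector is sketched below; it checks only the common accessible-name sources (text content, `aria-label`, image alt), where a real checker implements the full accessible name computation:

```javascript
// Naive "empty link" detector over raw HTML. Real tools run the complete
// accessible-name algorithm; this sketch covers only the common cases.
function findEmptyLinks(html) {
  const links = html.match(/<a\b[^>]*>[\s\S]*?<\/a>/gi) ?? [];
  return links.filter((link) => {
    const inner = link.replace(/^<a\b[^>]*>/i, "").replace(/<\/a>$/i, "");
    const hasText = inner.replace(/<[^>]*>/g, "").trim().length > 0;
    const hasLabel = /aria-label\s*=\s*["'][^"']+["']/i.test(link);
    const hasImgAlt = /<img\b[^>]*\balt\s*=\s*["'][^"']+["']/i.test(inner);
    return !hasText && !hasLabel && !hasImgAlt;
  });
}

const html = `
  <a href="/next"><i class="icon-arrow"></i></a>
  <a href="/home" aria-label="Home"><i class="icon-home"></i></a>
  <a href="/about">About us</a>`;
// findEmptyLinks(html) → only the first link, the icon-only one with no name
```

The icon-only link pattern in the first line is exactly what carousel and lightbox libraries emit by default, which is one reason the category keeps growing.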
Page Complexity Has Doubled in Seven Years
In 2019, the average home page contained 782 HTML elements. In 2026, it contains 1,437 — a 22.5% increase in just one year and nearly double the 2019 figure. The most popular sites in the dataset (top 100,000) average 1,584 elements per page, 20% more than the bottom 100,000.
Error density holds at roughly 1 detectable error per 26 elements. As pages grow more complex, the absolute error count grows with them. The web is not getting more accessible — it is getting more code, with the same defect rate baked in.
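The density claim is plain arithmetic under the report's two averages:

```javascript
// Error density under the 2026 averages: elements per detectable error.
const avgElements = 1437; // average HTML elements per home page (2026)
const avgErrors = 56.1;   // average detectable errors per home page (2026)
const elementsPerError = avgElements / avgErrors;
// ≈ 25.6, i.e. roughly one detectable error for every 26 elements
```

Hold that ratio constant and every framework, widget, and ad tag added to a page buys its proportional share of new barriers.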
WebAIM names the cause directly in the report's conclusion: "Increased reliance on third-party frameworks and libraries, and automated or AI-assisted coding practices ('vibe coding')." When the code is generated rather than authored, accessibility decisions are not being made — they are being inherited from training data that already failed.
The Framework Story: Astro Stands Alone
Among JavaScript frameworks with meaningful sample sizes, the gap between best and worst is staggering.
Best performers (fewer errors than average):
- Astro: 9.0 errors (−84% vs. average) — the standout of the entire report
- Next.js: 40.9 (−27.1%)
- React: 43.5 (−22.5%)
Worst performers:
- jQuery UI: 79.9 (+42.3%)
- prettyPhoto: 80.4 (+43.3%)
- FancyBox: 90.0 (+60.3%)
- SweetAlert2: 101.6 (+81.0%)
Astro at 9 errors per page is not a rounding-error result. It is roughly six times better than the report's average. The pattern: server-rendered, minimal-JavaScript architectures ship less code, expose less surface area, and inherit fewer of the accessibility defects that come bundled with carousel and modal libraries written years ago.
The legacy carousel and lightbox libraries — Slick (70.9), OWL Carousel (70.7), Lightbox (70.5), Swiper (74.0) — are the connective tissue of a generation of marketing sites. They are also accessibility liabilities at scale. jQuery itself, still on 560,294 pages in the dataset, correlates with 64.9 errors per page versus the 56.1 average.
The .gov Anomaly: Section 508 Actually Works
Pages on the .gov top-level domain average 18.5 errors — 67.1% fewer than the overall mean. .edu averages 23.0 errors (−59%). These are the two best-performing TLDs in the entire report, and the gap to .com (56.2 errors) is enormous.
This is the cleanest piece of evidence in the 2026 dataset that mandates work. Federal agencies and universities operate under Section 508, the DOJ Title II final rule, and state procurement requirements that demand WCAG 2.1 AA conformance. Compliance requirements are routinely framed as bureaucratic burden in the private sector. The data shows that organizations subject to enforceable accessibility standards build more accessible products. Not perfectly accessible — but measurably, materially better.
The opposite pattern shows up at the bottom of the TLD ranking: .ru (75.4), .vn (78.6), .cn (82.6), .ua (103.1). These represent jurisdictions with weak or absent digital accessibility enforcement.
The Language Access Disparity Nobody Is Talking About
The WebAIM Million identifies a document language for 87.3% of analyzed pages and breaks down errors by declared language. The disparity is brutal.
| Language | Avg Errors/Page | vs. Average |
|---|---|---|
| English | 46.0 | −18.0% |
| Dutch | 46.5 | −17.1% |
| German | 46.9 | −16.4% |
| French | 58.5 | +4.2% |
| Japanese | 62.2 | +10.8% |
| Spanish | 64.3 | +14.7% |
| Russian | 78.1 | +39.2% |
| Korean | 94.6 | +68.7% |
| Chinese | 136.2 | +142.8% |
Pages declaring lang="zh" average nearly three times more accessibility errors than pages declaring lang="en". This is not a translation problem. It is an infrastructure problem. The frameworks, CMS templates, component libraries, and accessibility tooling that the global web relies on were built primarily in English-speaking markets, tested on English content, and documented for English-speaking developers.
The compounding effect is profound: a Chinese-speaking screen reader user encountering the average top-million website is hitting roughly 136 distinct accessibility barriers before they read a word of content. Tools like idioma.chat — which translate the full accessibility layer including ARIA labels, form errors, and dynamically loaded content — exist precisely because this multilingual accessibility gap is real and measurable. WebAIM's report just put a number on it.
What This Means Through a Systems Lens
Through the CORS framework, the 2026 WebAIM Million data reveals an operational capacity failure, not a knowledge failure. Detection has scaled. Automated scanning is cheap, ubiquitous, and reasonably accurate. What hasn't scaled is the organizational discipline to act on what scanning finds — to integrate accessibility into design systems, content workflows, code review, and component libraries before issues ship.
The strategic lesson hiding in the framework data is that architectural choices upstream of accessibility work harder than remediation downstream. Astro's 84% error reduction did not come from a heroic accessibility team auditing every page. It came from shipping less JavaScript, rendering on the server, and inheriting fewer of the patterns that produce empty buttons and unlabeled controls.
The risk lens points to the .gov data: the organizations subject to enforceable standards have measurably better outcomes. As the DOJ's April 2024 Title II rule takes effect for state and local governments through 2026 and 2027, we should expect to see those entities migrate toward .gov-style numbers — if they invest in capacity, not just compliance theater.
The community lens is the most uncomfortable. The Chinese-language data point should be a five-alarm fire for anyone who frames accessibility as a global mission. The web's accessibility infrastructure is conspicuously English-shaped, and the people most disadvantaged by that are also the people most likely to be invisible in the rooms where accessibility tools, frameworks, and standards are built.
The Forward Path
The WebAIM Million is a measurement instrument, not a prescription. But the 2026 data points toward three conclusions that are hard to argue with.
First, the six core failure categories are not getting fixed by accessibility scanners. They are the output of organizational processes that lack accessibility integration. Adding more scanning will not change the result. Adding accessibility decisions to the design system, content workflow, and component library will.
Second, the ARIA paradox is a warning. ARIA is not free. Every additional aria-hidden, role="button", and custom widget pattern is a chance to introduce a barrier. The teams achieving the lowest error counts are not adding the most ARIA — they are using less, more deliberately, on top of native semantics that work without it.
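The difference is concrete. A hypothetical side-by-side, shown as markup strings for illustration: everything the ARIA version must re-implement by hand is something the native element provides for free.

```javascript
// A native button is focusable, keyboard-operable, and announced correctly
// with zero ARIA:
const nativeButton = `<button type="button">Save</button>`;

// The div-based equivalent needs a role, focusability, and keyboard handling
// bolted on. Three extra attributes, and each one is a chance to get it wrong:
const ariaButton = `
  <div role="button" tabindex="0"
       onkeydown="if (event.key === 'Enter' || event.key === ' ') this.click()">
    Save
  </div>`;

// Count the accessibility-critical attributes each version carries:
const attrCount = (s) =>
  (s.match(/\s(role|tabindex|onkeydown|aria-[a-z]+)\s*=/g) ?? []).length;
// attrCount(nativeButton) → 0;  attrCount(ariaButton) → 3
```

Multiply that scaffolding across every custom widget on a 1,437-element page and the 59.1-versus-42 error gap stops being surprising.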
Third, the era of "vibe coding" — AI-assisted code generation that reaches production without human review — is going to make this worse before it makes it better. AI coding tools trained on the existing web inherit the existing web's accessibility defects. The 56 million error baseline is what those models learned from. The next several WebAIM Million reports will likely show that pattern compounding. Organizations that want to escape it will have to make accessibility a generation-time concern, not a post-hoc fix.
The full WebAIM Million 2026 report is worth reading in detail. The numbers in this analysis are a small selection; the report itself contains category breakdowns by industry, CMS, advertising network, and more. For anyone responsible for accessibility decisions in their organization, it is the most important annual data point in the field.
About David
Boston-based accessibility consultant specializing in higher education and public transportation. Urban planning background.
Specialization: Higher education, transit, historic buildings
Transparency Disclosure
This article was created using AI-assisted analysis with human editorial oversight. We believe in radical transparency about our use of artificial intelligence.