Accessibility's observer paradox
Last week, WebAIM published a damning accessibility analysis of the top 1,000,000 homepages, which I interpret as accessibility's "observer paradox."
By deliberately improving the accessibility of your website, you increase the likelihood of accessibility errors.
Home pages with ARIA present averaged 11.2 more detectable errors than pages without ARIA. An increase in the number of ARIA attributes also had a moderate correlation with increased errors. In other words, the more ARIA in use, the higher the detectable errors. This does not necessarily mean that ARIA introduced these errors (it's likely these pages are simply more complex), but pages typically have more errors when ARIA is present, and even more so with higher ARIA usage.
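The kind of error that travels with ARIA is easy to picture. Here's a hypothetical sketch of a pattern automated checkers routinely flag: a div given role="button" promises button behavior it never delivers, while the native element gets that behavior for free.

```html
<!-- Hypothetical example: ARIA that promises behavior the markup
     doesn't deliver. -->

<!-- Flagged: role="button" with no keyboard focus and no accessible name -->
<div role="button" onclick="openMenu()">
  <img src="menu.svg">
</div>

<!-- Better: a real button is focusable and announced by default -->
<button type="button" aria-label="Open menu" onclick="openMenu()">
  <img src="menu.svg" alt="">
</button>
```

The names here (openMenu, menu.svg) are invented for illustration; the point is that adding ARIA raises the bar for what the markup must actually do, which is one plausible reason more ARIA correlates with more detectable errors.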
Pages with a valid HTML5 doctype had significantly more page elements (average of 844 vs. 605) and errors (average of 61.9 vs. 53.3) than those with other doctypes.
Home pages in the sample that utilize the popular Bootstrap framework had 1.3 million more accessibility errors than pages that did not utilize Bootstrap.
Sure, maybe, but I interpret these a little more softly: many people choose these frameworks because they include -- either as part of the package or with an easy add-on -- a robust layer of accessibility options. Bootstrap, by itself, is a pretty smart choice for the accessibility-conscious designer.
I prefer to think there is more to it than will or ignorance.
The availability of more tools to assess accessibility and thus customize the experience does not really correlate with doing it well. It definitely correlates, however, with a more complex experience with more room for error, and thus a need for better testing -- especially, you know, the in-person human type of testing.
In my own usability tests with blind users, I had attempted to improve the search results of a popular library discovery service with additional screen-reader readouts and context I thought (and hoped) would be useful. I was totally surprised to find I'd only added to the confusion.
This is a phenomenon I've noticed that exposes the gulf between technically accessible and accessibly usable: by designing to avoid the red flags thrown up by accessibility scanners like WAVE or aXe, we skip the question of usability and, subsequently, the need to test with the actual users of the interface.
The assumptions "able" designers make about what is accessible are probably wrong -- not out of malice or ill intent, but out of an ignorance only absolved by a sufficiently diverse team or sufficient user testing (or both).