My WebAIM 10th SR User Survey Takeaways

A rambling collection of thoughts from reading through the WebAIM Screen Reader User Survey #10 Results. Most of this was in a Masto thread, but I opted to post it here so I can laugh at myself later.

Disability

This opening nugget is important for understanding some of my commentary:

Do you use a screen reader due to a disability?
Response    # of Respondents    % of Respondents
Yes         1372                89.9%
No          154                 10.1%

I will be mostly quoting the narrative content after the charts and tables, however, since that is where some of the nuance becomes clear. Like this:

Responses are predominantly very similar for respondents with and without disabilities. Any notable differences are indicated below to highlight differences in practices or perceptions between disabled and non-disabled respondents.

I am not a screen reader user. I fire one up every day and am comfortable using screen readers, but this is not the same. I would fall in the 10%. Much of this post is about that 10%, which probably maps to digital accessibility practitioners.

It’s also important to note that not all disabled screen reader users are blind. This survey continues to confirm that.

Proficiency

Those who use screen readers due to a disability reported themselves as more proficient with screen readers—63% of those with disabilities considered their proficiency to be “Advanced” compared to only 18.2% of those without disabilities.

To me, this should serve as a reminder that non-disabled screen reader users are probably not the best ones to dictate how disabled screen reader users should consume (or be forced to consume) content and patterns. This includes assertions about usability or better experiences (such as pronunciation or hints).

For context, I do not consider myself an advanced screen reader user, and I am alarmed at how much more I know than the developers and accessibility testers I encounter who cavalierly assert what is and is not good for SR users.

Primary

Respondents with disabilities are more likely to use JAWS and NVDA and less likely to use VoiceOver as their primary screen reader than respondents without disabilities. 8.2% of respondents with disabilities primarily use VoiceOver (up from 5.5% in 2021), compared to 23.7% of respondents without disabilities.

When I see a pattern or widget or article, especially one that asserts its accessibility features, show screen reader screenshots and recordings only from macOS VoiceOver, that is a bit of a red flag for me.

This preference also leads to developers who think VoiceOver quirks are genuine bugs in their software (though, really, their egos lead them to think VO quirks are bugs in all screen readers rather than in their own software).

Primary usage varied greatly by region. JAWS usage was higher than NVDA in North America (55.5% vs. 24.0%) and Australia (45.8% vs. 37.5%), though JAWS usage was lower than NVDA in Europe (29.7% vs. 37.2%), Africa/Middle East (23.3% vs. 69.9%), and Asia (22.9% vs. 70.8%).

Like, have you tried to get a JAWS license outside the US? Freedom Scientific has known about this for years, but here we still are.

This assumes the cost is not a barrier, that is.

Browser

There are many combinations of browsers and screen readers in use, with JAWS with Chrome the most common.

Firefox is the second most common browser in use across screen readers, though with a paltry 13.5%. Safari comes in at a paltrier 7% but is only paired with VoiceOver for what I hope are obvious reasons.

Anyway, perhaps one takeaway here is to target Firefox before Safari in your unnecessary browser targeting? Though VoiceOver is known to work poorly with Firefox, so, yeah, start with standards first.

OS

Respondents without disabilities were nearly 3 times more likely to use Mac OS than respondents with disabilities.

I feel this kinda reinforces my points about developers not being aligned with screen reader users.

Granted, I always developed on the same platform I was targeting (which is why I am also a terrible user of AmigaOS and Mac System 9 & below, had a failed dalliance with BeOS, and surfed in IE daily, ugh).

Anyway, Windows users are at 86.1%, macOS at 9.6%, and Linux at 2.9%. Linux usage has doubled since the last survey (or over the last 10 years).

That might also explain why Orca made its first-ever appearance as a primary screen reader, coming in at an impressive 2.4% (compared to 0.7% for Narrator). Which also reinforces my decision to include Orca in more of my tests.

Reason

“Existing comfort” and “Features” went down slightly, and “Availability”, “Cost”, and “Support” went up slightly.

Screen reader users largely stick with their primary screen reader because they are familiar with it and its capabilities. That momentum holds for all users of all software; there is a tangible cost in time and energy to switch one's primary tool. Just like any other user (or developer).

Satisfaction

Respondents indicating that they are very or somewhat satisfied by their primary screen reader:

  • NVDA – 97.6%
  • JAWS – 95.6%
  • VoiceOver – 92.4%
  • Narrator – 88.9%

Given that screen reader users are also largely satisfied with their screen reader, it’s not too surprising that they aren’t clamoring to switch even if cost increases and support gets poorer.

Braille

Because it would not generally be expected that users without disabilities would use Braille, they have been omitted from these data. Braille usage at 38% is up slightly from 33.3% in 2017 and 27.7% in 2012. 54.2% of VoiceOver users used Braille compared to 42.4% of NVDA users and 35.1% of JAWS users.

One thing I find accessibility testers and developers fail to consider is how Braille displays present their work. The best approach is to recruit Braille display users. A less ideal approach is to at least test your work in a Braille emulator (minimally, the live regions).
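For example, a status message in a live region gets spoken aloud, but on a Braille display it typically shows as a brief flash message that is easy to miss. A minimal sketch of the kind of pattern worth checking (the markup and message are my own hypothetical example, not from the survey):

  <button type="button" id="save">Save draft</button>
  <p id="status" role="status"></p>

  <script>
    // role="status" creates an implicit polite live region; updating its
    // text triggers the screen reader announcement, which a Braille display
    // typically renders as a transient flash message.
    document.getElementById('save').addEventListener('click', function () {
      document.getElementById('status').textContent = 'Draft saved.';
    });
  </script>

If you lack hardware, NVDA's built-in Braille Viewer will show on screen what a refreshable display would receive, including how briefly that message is available.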

Mobile

91.3% of respondents report using a screen reader on a mobile device. Respondents with disabilities (93.6%) are more likely to use a mobile screen reader than respondents without disabilities (70.4%).

That differential should be concerning. It suggests developers and testers are not spending as much effort cross-testing on mobile devices.

Platform

Respondents with disabilities (72.4%) used iOS devices at a higher rate than those without disabilities (56%). Usage of iOS devices was significantly higher in North America (84%), Australia (75%), and Europe/UK (72%) than in Asia (40%), Africa/Middle East (34%), and South America (31%). Respondents with more advanced screen reader and internet proficiency were much more likely to use iOS over Android.

I have had folks assert they don’t need to test with VoiceOver on iOS because they already tested with VoiceOver on macOS. They clearly have no idea how divergent VoiceOver is on those two platforms. This data does not say that, of course, but it reminds me of those interactions, where people try to justify not spending money on an iDevice for testing.

Impacts

Over time, more respondents have answered “better web sites” to this question—68.6% of respondents in October 2009, 75.8% in December 2010, 81.3% in January 2014, 85.3% in 2021, and now 85.9% on this survey. This change may reflect improvements in assistive technology. It certainly indicates that users expect more accessible web sites.

When asked whether better assistive technology or better sites will have a greater impact on accessibility, 85.9% of respondents said better sites. I am curious how much that differs for only non-disabled respondents.

I think this suggests that users know efforts to auto-fix accessibility problems (either in browsers or screen readers) aren’t necessarily going to work.

Problematic

In order, the most problematic items are:

  1. CAPTCHA – images presenting text used to verify that you are a human user
  2. Interactive elements like menus, tabs, and dialogs do not behave as expected
  3. Links or buttons that do not make sense
  4. Screens or parts of screens that change unexpectedly
  5. Lack of keyboard accessibility
  6. Images with missing or improper descriptions (alt text)
  7. Complex or difficult forms
  8. Missing or improper headings
  9. Too many links or navigation items
  10. Complex data tables
  11. Inaccessible or missing search functionality
  12. Lack of “skip to main content” or “skip navigation” links

The order and indicated difficulty for the items in this list are largely unchanged over the last 14 years. CAPTCHA remains the most (by a notable margin) problematic item indicated by respondents. Respondents with disabilities were twice as likely to rank CAPTCHA as a problematic item than respondents without disabilities.

Not at all surprising.

All those end-of-year posts about the biggest accessibility barriers of the prior year are simply regurgitations of what users (and capable practitioners) have known for a couple decades or more. Contrast is the only one missing from this list, perhaps because the majority of participants are unaffected by it.

Similarly, the only people still implementing CAPTCHAs are those who don’t have to use CAPTCHAs. Sure, that is a really broad statement that I cannot prove, but someone recently asked me for richly sourced research (without consideration for research bias) before they would take users’ word for it.

Wrap-up

These are all my interpretations of the results. They are almost certainly clouded by my biases given my work. Since I was mostly curious what the non-disabled users reported, along with my own experience doing the work, my notes reflect that particular exploration.

This was all from a self-selected uncontrolled survey. If a particular screen reader vendor (Freedom Scientific / Vispero) once again prompted its customers to fill it out, that could have skewed some of the results.

Update: Karl’s 10 Takeaways

Karl also gathered his thoughts in 10 takeaways from the WebAIM Screenreader Survey #10.

7 Comments

Jared Smith

When I write up the results article I try not to provide too much commentary or interpretation which would rightfully be seen as biased, so you providing this type of commentary is very insightful. Thank you!

You indicated that you are curious if responses about the impacts on accessibility were different for non-disabled respondents. There was a slight difference that I did not note because it is very minor – 86.2% of respondents with disabilities indicated “more accessible websites” compared to 82.7% for respondents without disabilities.

In response to Jared Smith.

Jared, thanks for that note. I figured the difference wouldn’t be much or else you would have noted it. But I am not surprised that non-disabled users would put less blame on sites — even if only a little less. Thanks!


It’s a great resource Jared and the team have put out there again!
And thank you for sharing your observations.

One of the key worries I have with this survey, though, is that it might be skewed towards more expert screen reader users and accessibility professionals out of the gate.

63% considering themselves advanced users does not correlate with our findings in user research sessions. There, we see screen reader users can often get around, but you’d be hard-pressed to even consider them intermediate users, having only a basic understanding of the controls and options available to them.

That in and of itself is already enough reason to think about simplifying things and not relying on fancy tricks we as accessibility professionals might know about, of course.

But those users are also not the people who are aware of this survey and filling it in. How reflective of the actual average user is it in the end? Maybe Jared has some insights into that?

Jared Smith

“How reflective of the actual average user is it in the end?”

We can’t know for sure. Certainly those responding to the survey are those aware of it – and these would tend to be those more connected to social media, mailing lists, etc. We can’t (and don’t) state that the survey is representative of all screen reader users, but it is representative of those that respond – and 1500+ respondents is a fairly large sample considering the relatively low prevalence of screen reader use.

We’d love to conduct pure research on a true representative sample of screen reader users, but would need a non-trivial pile of cash to do so. In the meantime, we hope the survey results are informative.

Andrew Gorrill

Great thoughts – I had very similar ones, as an a11y-focussed developer, not an actual screen reader user.

I totally understand the limitations Jared mentioned around doing a representative survey (though, depending on how non-trivial the pile of cash would be, I wonder if a kickstarter or open-collective or anything would be at all useful…).

But, assuming that the means to do the survey don’t change soon, one thing that would be nice would be some more info in the survey preamble around how the survey was advertised/how respondents were recruited.

I’ve also always been curious, as I currently develop for an app and test against mobile screen readers, if the prevalence of patterns such as navigation-by-heading are the same for desktop vs mobile screen reader use, as the methods available for each are different (though of course there’s overlap).

Sandrina Pereira

Hi! When you say “They clearly have no idea how divergent VoiceOver is on those two platforms” (desktop and mobile), could you point us to a place to learn more about it?

In response to Sandrina Pereira.

As in a single source? No, not really. This is experience from testing. I reference these mismatches in posts whenever they come up, such as in my support tables for Exposing Field Errors or in the narrative verdict at the end of Switch Role Support. But broadly this is not tracked anywhere. It just comes from doing the work.
