I've been brainstorming sentences for my portfolio: "I do software blah blah blah... especially web things yada yada ya..." Every sentence fell flat, except for one cheeky line: "I can even deploy a website with my eyes closed."
It seemed like a fun challenge to take literally, so I followed the fun and learned to deploy a website blindfolded. You can watch (and listen to) my performance above.
DISCLAIMER: SIMPLE DEPLOYMENT
Deployments can be very simple or very complicated. Sometimes you just throw HTML onto a server, and other times you push complicated code onto platforms.
I chose a simple interpretation of deployment: hosting a static site on GitHub with a build step and a domain name. I think it was a good choice:
- for me, needing a reasonable goal to accomplish blindfolded
- for you, dear viewer, trying to keep the screencast short and watchable
- for the indie web, showing that publishing a website with a domain name is not too difficult (jump to indie-web tangent)
- for portfolio purposes, showing that I can fiddle with web technologies
So no provisioning servers today, and no mucking around in the terminal. No one-click solutions either, like drag-and-drop deployments or fully-automated scripts. This is primarily a screen reader learning challenge.
The first task: how do I turn this thing on?
LEARNING A SCREEN READER
Win + Ctrl + Enter opens Narrator, the built-in Windows screen reader. It's not the most popular screen reader, but it was the most convenient starting point.
The most popular screen readers are JAWS and NVDA. I would've used JAWS if it didn't cost hundreds of dollars or if the free trial lasted longer than 40 minutes.
If I could do it all over again, I would choose NVDA since it's free and open-source.
I finished the tutorial and glanced through the list of commands (Caps + F1). Then I practiced navigating websites. I'd like to share a few mental models that helped me, in case you want to learn a screen reader too:
1) WEBPAGES ARE SEQUENTIAL
Screen readers strip a webpage down to a series of footholds. You jump between elements hoping to land on something helpful. It's a lot different from the sighted experience, where you can get anywhere on the two-dimensional screen in one click.
This was the mental map I developed: sorta like the DOM, if you know HTML; or like the accessibility tree, if you dabble with devtools; but trimmed to only a sequence of tabbable items and landmarks.
"skip to content" button → header landmark → nav landmark → link → link → link → search landmark → main landmark → h1 → h2 → link → button → h2 → link → text input → footer landmark → link → link
Note: I was navigating GitHub, a webapp with hundreds of interactive elements. I tabbed through elements looking for the right button. If I were browsing a simple website, the mental map might be a little different. "Task mode" versus "exploration mode", as Jason puts it.
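To make that mental map concrete, here is a rough HTML skeleton (a sketch of my own, not GitHub's actual markup) whose landmarks and tab stops would read out in roughly that order:

```html
<body>
  <a href="#main">Skip to content</a>            <!-- "skip to content" button -->

  <header>                                       <!-- header landmark (role: banner) -->
    <nav>                                        <!-- nav landmark -->
      <a href="/pulls">Pull requests</a>
      <a href="/issues">Issues</a>
      <a href="/explore">Explore</a>
    </nav>
    <form role="search">                         <!-- search landmark -->
      <input type="search" aria-label="Search">
    </form>
  </header>

  <main id="main">                               <!-- main landmark -->
    <h1>Repository name</h1>
    <h2>About</h2>
    <a href="/releases">Releases</a>
    <button type="button">Watch</button>
    <h2>Clone</h2>
    <a href="/archive">Download ZIP</a>
    <input type="text" aria-label="Clone URL">   <!-- text input -->
  </main>

  <footer>                                       <!-- footer landmark (role: contentinfo) -->
    <a href="/terms">Terms</a>
    <a href="/privacy">Privacy</a>
  </footer>
</body>
```

A screen reader can jump landmark to landmark (header, nav, search, main, footer) or tab stop to tab stop (links, buttons, inputs), but either way it's the same one-dimensional path.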
2) SCREEN READERS ARE LIKE VIM
The tools are surprisingly similar.
Modal. Screen readers have two modes: scan mode for moving around the page, like vim's normal mode; and input mode for filling out form fields, like vim's insert mode. Caps + Space toggles between scan mode and input mode.
Units of motion. Some screen reader movements cover large areas, like between landmarks or headings. Other movements are smaller, like between paragraphs or words. It reminds me of how vim motions act on differently sized text objects.
Leader key. Many screen reader keybinds are prefixed with Caps. It's not customizable like vim's leader key, but it feels like a similar convention.
Learning curve. Learning a screen reader is like learning vim: first adjusting to a new paradigm, then learning advanced commands over time. I was able to brute-force my challenge with only a few basic commands (d for next landmark, Tab for next tabbable element). But I'm sure there are more commands that would've made my life easier.
TAKEAWAYS AS A SCREEN READER USER
I never expected to feel so exhausted using a screen reader. Even after several practice runs on a site I've visited thousands of times, I still struggled.
Imagining a page's layout took a lot of effort. I often found myself surprised by elements or a lack of elements. It kinda felt like programming, where you build up the logic in your head and reconcile it with the computer's feedback. It's constant problem solving.
Getting unstuck was an important skill. For example, I was stuck from 5:40 to 6:40 in the screencast.mp4 because I expected to land on a new page or to hear that a live region updated. I should've reset the cursor to the top of the page and reoriented myself. And on some other websites, I found myself stuck in keyboard traps, having to refresh the page.
It's humbling to recognize the difficulties that screen reader users encounter. Kudos to people who experience screen readers not as an alternative interface, but as a regular fact of life.
Of course, take everything here with a grain of salt. My experience does not represent the experiences of people who actually rely on screen readers. I'll point you to some of their eye-opening videos:
- a screencast from expert screen reader user Léonie Watson
- one video series of people with disabilities using assistive technologies, and another series of everyone benefiting from web accessibility
If you prefer articles over videos, I found a couple resources:
- a brief interview with Victoria Chan, who is a blind user of JAWS and iOS Braille
- the latest screen reader user survey from WebAIM
TAKEAWAYS AS A DEVELOPER
Originally, I was going to share some web accessibility tips and ARIA reference material here. But that didn't feel right because:
- that's not the point of this post; I'd recommend that developers try a screen reader before reading more accessibility tips
- I went too deep into the accessibility rabbit hole, and now I have a headache
- I'm still learning a lot; better to listen to industry experts with expert advice
- any leftover accessibility tips will be posted at my /nuggets
Instead, I'll just share what I learned about a few accessibility questions. I don't have all the answers, or even a complete picture. I'm just sharing the little bit of research that made sense to my brain.
WHY THE DISCONNECT BETWEEN ARIA AND HTML?
This addresses niggles like "why haven't I learned about landmarks until now?" and "why is the footer's role called contentinfo?"
The ARIA spec is not part of the HTML spec. ARIA was shoehorned onto HTML around 10 years ago when websites were becoming more interactive and therefore more inaccessible to screen readers. Different specs, different goals, different timelines, one compromise. Bruce explained in 2013:
[ARIA] attributes are from the Accessible Rich Internet Applications (WAI-ARIA) spec, and not part of HTML(5), although they’re allowed in pages. They’re developed by different groups, and for different reasons. ARIA is a bridging technology for any markup language – HTML4, SVG or HTML5 to "plugin" accessibility information that isn’t part of the host language.
...
It seems that the name <footer> was adopted as it was the most common class name found in a billion web pages analysed in 2005 by Ian Hickson, HTML5 editor. Arguably, contentinfo is a better "semantic" name (after all, information about content doesn’t have to be below the content it refers to, which is what "footer" implies), but "footer" is what people were already using. Anyway, the naming of the new HTML5 elements is done now.
HTML5 elements carry a lot of implicit ARIA roles behind the scenes. That's fine for simple websites relying on native elements. But with the rise of custom components, developers are increasingly creating interactive widgets that do not account for ARIA or for screen readers, and they don't realize they're victims of a leaky abstraction.
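A small sketch of the leak, with made-up markup: the native elements get their roles for free, while a hand-rolled widget has to rebuild everything the browser would have provided.

```html
<!-- Native elements: implicit ARIA roles come for free -->
<footer>About this site</footer>      <!-- exposed to screen readers as role "contentinfo" -->
<button type="button">Star</button>   <!-- role "button", focusable, Enter and Space just work -->

<!-- Custom widget: it looks like a button, but a screen reader sees a plain div
     unless the developer adds the role and the tab stop explicitly -->
<div class="btn" role="button" tabindex="0">Star</div>
<!-- ...and it still needs JavaScript to respond to Enter and Space like a real button -->
```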
IS TRADITIONAL ACCESSIBILITY ADVICE EFFECTIVE?
Common web accessibility advice includes fixing alt text, color contrast, heading structure, form labels, and keyboard navigation. That advice has been around for nearly 20 years, and it helps, but most websites still need to fix the basics. That advice also comes from a simpler time, before highly interactive websites became popular. I would add one tip: try a screen reader.
Once I used a screen reader, I realized the value of other accessibility features like landmarks, alt text for CSS content, pronunciation-friendly prose, careful keypress handling, and announcing live region updates. Now I can anticipate issues while I write code. "Will this abbreviation sound confusing to a screen reader? Will this keypress listener interfere with screen reader commands? How will a blind person know this part of the page changed?"
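As an example of that last question, here's a minimal live-region sketch (the IDs and the "save" flow are made up) showing how a page can tell a screen reader that something changed:

```html
<!-- Text injected into this element is announced by screen readers
     without moving the user's focus; "polite" waits for a pause in speech -->
<div id="save-status" role="status" aria-live="polite"></div>

<button type="button" id="save-button">Save</button>

<script>
  // Hypothetical save flow: once the work finishes, update the live region
  // so a screen reader user hears "Changes saved" instead of silence.
  document.getElementById("save-button").addEventListener("click", () => {
    // ...do the actual saving...
    document.getElementById("save-status").textContent = "Changes saved";
  });
</script>
```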
I also developed some curiosity and empathy after trying a screen reader. "Out of sight, out of mind" is sad but true. If you've never used assistive technology or seen someone else using it, you probably won't think about it very much. That's why I shared the videos in the last section and why I'm sharing this post.
WHY IS IT HARD TO TEST WEBSITES FOR SCREEN READERS?
Rob explains why screen reader behavior varies so much. Browsers vary, so accessibility trees vary; operating system accessibility APIs vary; and screen readers themselves vary. The result? A matrix of manual testing environments: JAWS on Chrome, JAWS on Firefox, JAWS on Edge, VoiceOver on iOS, VoiceOver on macOS, NVDA on Chrome, and so on. To be fair, testing with one screen reader is usually good enough. But a developer does not truly know how their website behaves unless they manually try multiple combinations.
This is not news to front-end developers, of course. To create a robust web application, one must think about a spectrum of screen sizes, magnifications, browser vendors, browser versions, assistive technologies, progressive enhancement, and other configurations. This requires a lot of manual work or a lot of satisficing.
I try not to be overwhelmed by this. I have to remind myself "progress over perfection". As long as users have an easy path to accomplish basic tasks, that should make me happy.
CAN TOOLING HELP DEVELOPERS CATCH ACCESSIBILITY ISSUES EARLIER?
I know there are online checkers like wave.webaim.org, but they're too slow and manual. I've seen continuous integration checks like a11ywatch/github-action, but they're slow too. I've used built-in browser tools like Lighthouse, which is great, but still slow and manual. How about tools that can validate my code before I go to the browser?
Linters like axe-linter or the jsx-a11y ESLint plugin might help. But static linting can only catch so many accessibility issues in JSX components. How about checking rendered pages for issues without opening the browser?
Maybe command-line tools like a11ywatch_cli or evaluatory would help — tools that work in a fast feedback loop and that give helpful error messages. Like Brian, I'm inspired by the Elm compiler and the Rust compiler as educational tools. They teach a language while validating it. Maybe there's room for something similar, something like SuperHTML, a tool to teach accessible HTML while writing it. Detecting accessibility issues early would leave more time and headspace for manual testing.
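To give a feel for the low-hanging fruit such tools catch, here's a made-up before-and-after; the exact rules and error messages depend on the tool:

```html
<!-- Before: three findings an automated checker would typically flag -->
<img src="chart.png">                        <!-- image missing alt text -->
<div onclick="openMenu()">Menu</div>         <!-- clickable div: no role, no keyboard access -->
<input type="email" placeholder="Email">     <!-- placeholder is not a label -->

<!-- After -->
<img src="chart.png" alt="Monthly traffic chart">
<button type="button" onclick="openMenu()">Menu</button>
<label>Email <input type="email"></label>
```

Automated checks stop at the mechanical stuff, though; whether the alt text is actually useful still takes a human.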
TAKEAWAYS AS AN INDIE WEB RESIDENT
One final note, not about accessibility, but about deploying personal websites with a domain name:
Owning the URL to your website is a power move. "Of the billions of pages on the internet, this is mine, down to the letters in the address bar." Leave the walled gardens! Make your own website! Claim your territory!
Owning a domain is more than hubris; it's preparing for platform collapse. I understand the convenience of website publishing platforms. Maybe you don't want to manage a server, or fiddle with static site generators, or learn deployment commands. There's a lot of platforms that solve these problems for you. blot.im, neocities.org, bearblog.dev, and yay.boo are good. Just do yourself and your readers a favor: own your content and URLs.
All platforms, no matter how promising they appear, will eventually betray you — with pricing increases, policy changes, bankruptcy, sunsetting servers — and they'll drag your website to rot alongside them in the grave.
This is a moment of silence for cohost.org/victims and victims.glitch.me, and a plea to ur-next.neocities.org and ur-next.github.io.
Save yourself before it's too late.
If you control your content and URLs, at least you'll be able to port your website to another platform when the time comes.
If you're still hesitating to purchase a domain name, thinking "I won't buy one unless I know how to use it", "what happens behind the paywall?", or "yeah I have my HTML but how do I show it to the world?", then maybe the brief demonstration in the screencast.mp4 will give you some confidence.
I didn't show myself buying a domain ($12 USD per year), but I showed myself making a DNS record around 0:35. And I typed the custom domain in GitHub at 3:20. It's like the written instructions for GitHub Pages but in video form. Other publishing platforms have similar instructions. If I can do it with my eyes closed, then you can do it with your eyes open. I believe in you.
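For reference, the DNS part of the video boils down to a handful of records. This is a sketch with placeholder names (example.com and username); check GitHub's Pages documentation for the current official values:

```
; apex domain pointing at GitHub Pages
example.com.       A      185.199.108.153
example.com.       A      185.199.109.153
example.com.       A      185.199.110.153
example.com.       A      185.199.111.153

; www subdomain pointing at the GitHub Pages hostname
www.example.com.   CNAME  username.github.io.
```

Typing the custom domain into the repository's Pages settings (the 3:20 step) is what tells GitHub to serve the site at that name.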