Who is this session for?
Engineers and product owners
Session description
Promoting accessibility at a project and organizational level is critical not only for achieving compliance, but also for opening access to information for all users, no matter how they need to reach your content.
Accessibility testing is typically divided into two buckets: things we can automate, and things we have to do manually. Traditionally, testing interaction has been a largely manual process because our out-of-the-box automated testing tools can't open menus, check ARIA states, click around, or test general UI and movement. Because of this, as sites age, they can quickly fall out of compliance and slowly become inaccessible.
In this session we'll talk about how to create a long-term accessibility roadmap and how to turn some of those manual testing processes into lightweight automated scripts that can keep an eye on your project for years to come and ensure your users can always access the information and content they need.
Key takeaways:
- How to create a long-term accessibility testing roadmap
- Basics of automated accessibility testing (and when to use it)
- How to automate manual/interactive testing at a component level (and when to use it)
Materials
Presenter
Tim Wright
Tim has been building and designing for the Web since 2004, when he got his first job creating sites for a small community college in Virginia. Since then he's led great teams at companies like 10up, Fresh Tilled Soil, Boston University, University of Southern California, and NC State.
His writing has been featured on sites such as A List Apart, Smashing Magazine, InformIT, and SitePoint, on topics ranging from basic UX to advanced JavaScript APIs. In 2012 he also wrote his first book, Learning JavaScript, with Pearson Education.
Sessions
- General Lecture Session: End to End Accessibility Testing
Session video
Session transcript
Tim (Presenter): Thank you to everyone who showed up today. Accessibility has been a passion of mine for a long time. I am always excited when I get a chance to talk about it. Accessibility testing has been an extra nerdy thing for me as of late, so I am very excited to have a chance to talk about it. Testing is not always the hip and cool thing. It is not a fun thing that everyone is working on, but it is always there and we have to do it. It spans all levels of our work, from design to development. We have lots of different types of testing. We have functional testing, unit testing, even code reviews are a form of testing, collaborative design reviews, testing wireframes, spell checking, and tools like Grammarly. We are always inundated by this testing. We are looking at direct E2E testing. We were using it to test colors in design systems. I wanted to focus on more of a full spectrum of testing, from what we do in the beginning to the end. We will hit a lot of different points. We will talk about some things that I discovered on my journey.
It comes down to this: there are a lot of different types. What are the benefits of each, and how far do you want to take it? Not everything calls for a unit test. How much code coverage do you want to have? Do you need 100% code coverage? Probably not. It is looking at each individual type of testing and deciding how far you want to go with it. Do you need 100% of the templates and pages covered? Probably not. There is a subset; it is about finding the balance.
Why is testing so important? Who does accessibility testing help? What are the types of testing I have come across? How do we scale it to larger projects? If you are at a university or a big company, there is a much more difficult challenge taking place.
I am not just promoting automated testing here. We are taking an even look at both manual and automated testing, and we will see what comes out of it.
Why is accessibility testing so important? I think it is generally a good part of quality assurance. People make errors and mistakes. People forget stuff. There will be gaps in knowledge. That is why we have QA in place and why we have it in place for accessibility. Defending this stuff can become difficult depending on who you are talking to. I have been in the fortunate position for 16 years now of talking about accessibility a lot. I have developed a bit of a framework for talking to someone about the benefits of accessibility testing. First and foremost, we take the altruistic approach of open access.
When we build stuff, anyone who wants to access information at any point should have the opportunity to do so, and we should not be putting up barriers for them. Whether that is a tweet or an article, whatever it may be, you should be able to access it. That is what we are trying to do. With testing, we are making sure it is getting out there.
From there, we fall back on opening up market share. There are a lot of people with disabilities. If you cut off that portion of the market, you won't get in there. There is also better SEO, and that is more money and traffic coming into a company. If all else fails, we fall back on reduced risk of litigation. As we step through, that is the one that always opens the eyes wide: "I didn't realize that I could get in trouble for this stuff." The end goal is to make sure information is accessible to everybody. How you get there, and how you convince people, can change with who you are interacting with. Overall, the way we talk about testing encompasses all these things.
What is our goal? Our ultimate goal is to help the user. The goal in what we are talking about today is making helping users easier. Helping people is kind of hard sometimes, especially if you have knowledge gaps you don't know about. Putting these systems of checks and balances in place, like QA, helps people make fewer mistakes and helps people help other people. We are making that process easier. There are knowledge gaps. People make errors because they are human. The human error rate is around 4%. We put checks in place to try and minimize that.
The ultimate goal of an automated test is not to replace humans, especially in accessibility. There is a lot of subjectivity in the guidelines. Rather, it is to augment the portions of the system that can be automated. Things that are repetitive and predictable. Things like checking focus trapping and things that have specific rules, where you can tell a machine exactly what to look for. Anyone that has done code reviews with accessibility in mind, pointing out that this element needs an ARIA state, can understand the value of teaching a machine to say that instead of a person saying it. It saves time.
I have had this quote in my head for a long time. It is by a friend of mine. We were building a DNA analysis tool in Django at the time. He was doing something with the proper cropping of images. He said, "If you can teach a human to do it, you can teach a robot to do it." That is not always true. But the heart of it is: if you can set a clear rule and communicate it clearly, that is a situation where you can teach a system to do the same thing. We are pulling barriers away so humans can focus on making a better experience. This is the same reason that I don't need to light a fire in my kitchen if I want to make toast. We made a machine called a toaster that can heat up the bread and make the toast for you.
As humans, we can work on making the best toast instead of just heating it up.
It reminds me of a time when I was working in Boston and we were doing a project for the Massachusetts Bay Transportation Authority, and we had a partnership with blind and low vision users using the site. We knew it would be a disaster, and we had them run through it. They flagged accessibility issues with the website. We went off, fixed them, and retested. Because we fixed those issues directly, all of those barriers were removed. The blind and low vision users were then experiencing only the same problems that a sighted user would encounter. That is what I mean by pulling away these barriers. We moved along with the project and we were able to make it better.
We are really looking to help out two groups of people: the users, which is the primary group, and the editorial or product team. The editorial team is whoever is updating content, and the product team is whoever is designing and building.
When we look at the editorial team, what I have found in product work is that the most common accessibility problems we see as a project matures are in the editorial experience. As product team members, we build the site and the navigation and we do our best to make that accessible. But the content is most of the site, and it changes all the time. We need to put in safeguards to help the people doing this work. They are not accessibility experts. They are experts in the content they are creating.
We do education, which is our first line of defense, and we install inline guidance.
Let's look at education. If you have sat through a previous talk of mine, you will notice that I always sneak education into all my talks. It is so important. This is our first line of defense in getting people to do the right thing. If they don't know what the right thing is, it will be nearly impossible for them to do it. We are teaching people to do their own mental tests while writing content. It is a form of internal manual testing. When we are teaching people, we are looking at the more common editorial issues. They don't need to know everything, only the things relevant to the content they are putting in. Presentation versus structure, so they are not choosing the wrong heading just to bump the font size up and set it to bold, and they understand what that means. Writing meaningful links, so we don't just have "read more" everywhere. Writing meaningful link text is very important. These are things the editorial teams need to understand and know to make sure the content they are producing does not make the site fall out of compliance. It changes so much.
We are also leaving artifacts behind, like articles and links and such, for anyone that will be a new editor on the site, so they can go through and understand this, and it creates a spiderweb of teaching. We install these mental triggers for when someone is entering content. Did I remember to add alternative text to that image? Should this be an H2 or an H3? These are the mental triggers we are hoping to install. We are not making accessibility experts, but helping them identify things that can improve the experience.
Even so, humans forget and bad things happen all the time. So we have this extra step of inline guidance. It is a really cool thing we have done. The editor is essentially a giant form; it is debatable whether it is still a form because of Gutenberg. We can add validation against the content output of those elements. That is what we did. We built a preflight accessibility check plugin that hooks into the admin experience. What happens is you will be writing your content and you can run an accessibility scan. It takes the preview of the page, submits it to a Node server, runs over the content, and returns the results. You can see where your accessibility errors might be based on those results before publishing. When we combine that with education, we can prevent and reduce these editorial issues.
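To make that flow concrete, here is a minimal sketch of what such a scan endpoint could look like, assuming a Node server running Pa11y; the route name and payload shape are illustrative, not the actual plugin described in the talk.

```js
// A sketch only: an Express endpoint that accepts a preview URL and returns Pa11y results.
const express = require('express');
const pa11y = require('pa11y');

const app = express();
app.use(express.json());

app.post('/scan', async (req, res) => {
  try {
    // req.body.previewUrl is assumed to be the draft page the editor is working on.
    const results = await pa11y(req.body.previewUrl, { standard: 'WCAG2AA' });
    // Each issue carries the guideline code, a message, and a selector,
    // which is what lets the editor see where the problem is before publishing.
    res.json(results.issues);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);
```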
Why don't I just use the plugin? The combination of the plugin and the education gives context to these issues. If you see something like "this heading should be an H3 instead of an H5," you will change it, but unless you understand why the error is happening, it will keep happening. The education gives them the why behind the errors. The plugin is our first form of checking. You could block publishing until all these checks are green. We usually don't do that, since it can return some false positives. But having these checks in place lets them know there is nothing left that the scanner can catch. It can be our front line of defense.
The problem with this is that it only covers the content. The UI is different; it involves the product team. This is where the more traditional types of testing that we think of happen. Everybody wants to do this stuff correctly. We deal with testing and collaboration, and then we share these solutions throughout the product team so we can keep doing this sort of thing. I do want to run through some common types of testing that I have come across and things I have done up until now.
We have basic automated testing, advanced automated testing (which is really cool), manual testing (always present), and then design and development collaboration in there as well. That is more of a conversational checks and balances. Let's run through them.
Basic testing. You will have manual basic testing and automated basic testing. Manual testing will be testing with screen readers, browser tests, user tests, inspecting HTML, code reviews, and keyboard tests (using the site without a mouse) and making sure the interface is usable. For automated testing, you will have axe, and even linting. There are good linters that can run through the markup for you. Linting in general can be really good to help. We have our own JavaScript linting, and we ask ourselves what can be automated and what should be automated. There is a lot of basic stuff and generalized best practice in those files, and beyond those basic things we looked at the content of the code reviews we were getting. What were the more common things we saw that we could automate? Things like using color variables, nesting, tabs versus spaces and that sort of thing. All those things can be automated. There is no reason a human needs to be there to say "use a color variable" or something like that. Our automated processes can handle that. You still need humans in for the higher-level accessibility things. You can take the things that can be automated and set them aside, so you don't need to spend time doing the things that the machine can do. You don't need to light the fire to make toast; you can use the toaster.
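The talk doesn't name specific linters, but as a rough illustration, a config like the one below (assuming a React codebase and the eslint-plugin-jsx-a11y package) automates some of the markup-level checks a reviewer would otherwise flag by hand.

```js
// .eslintrc.js — a minimal sketch; the plugin choice and rules here are assumptions.
module.exports = {
  plugins: ['jsx-a11y'],
  extends: ['plugin:jsx-a11y/recommended'],
  rules: {
    // Flag images without alt text automatically instead of in code review.
    'jsx-a11y/alt-text': 'error',
    // Require anchors to have meaningful content rather than being empty.
    'jsx-a11y/anchor-has-content': 'error',
  },
};
```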
As far as automation goes, this is my automation of choice. I know a lot of people like Lighthouse and axe as well. This is a bookmarklet I use that runs on Pa11y. It lists the guideline for each error; it will tell you which guideline you are violating. It is good for colors, general HTML scanning, a snapshot of what is happening on the page, and alt text. It is really good at finding that stuff. It is not good at interactivity, responsive behaviors, or anything subjective. It is just a scanner. It won't check everything. If you have been deep into the world of accessibility testing, you will know you won't find a scanner that will return 100% of the errors for you. But there is still value in this. You can scan the page and look through the errors, warnings, and notices and see what is going on.
A pro for this is that it is really quick. It is a single click, looking at the report, and pulling out what is valid and not valid. A con is that false positives do happen. It is just a scan tool. It is only as smart as you make it. It doesn't really scale, since you need to click a button. You can run it in CI if you want to, but mostly this sort of thing will not be used like that.
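For those who do want to wire a scan like this into a pipeline, here is a minimal sketch using Pa11y programmatically; the URL and the pass/fail rule are placeholders, not something from the talk.

```js
// scan.js — a sketch of failing a CI build when the scanner reports errors.
const pa11y = require('pa11y');

(async () => {
  const { issues } = await pa11y('https://example.com/', { standard: 'WCAG2AA' });
  const errors = issues.filter((issue) => issue.type === 'error');
  errors.forEach((issue) => console.error(`${issue.code}: ${issue.message}`));
  // Warnings and notices pass; errors break the pipeline.
  process.exit(errors.length > 0 ? 1 : 0);
})();
```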
We also have manual testing. This is an example of the manual testing spreadsheet that we use. It has all the guidelines and links to descriptions, plus pass/fail and compliance level. It is really good for keyboard tests, looking around, and anything you would find with screen readers and the like.
It is very thorough, and the results will all be valid. A con is that it takes forever. It is 30 minutes to an hour per template you want to scan. There is a separate tab for global elements, things like the header and footer and components that occur frequently throughout the site. We do that in a separate tab so that if there is something wrong with the navigation, it does not appear in every single tab. These results then get transferred into tracking software.
Ultimately, you can find a balance of manual and automated testing that is best for your product. It is a hard thing. It takes a lot of time and effort. But we can also take the parts of this manual test and see what is repeatable, what happens a lot, and what can be automated. Kind of like what we did when we built up the linting rules. That is when we get into the nitty gritty of functional testing. This is automated functional testing.
We will ask ourselves three questions. Are there things we do manually that are repeatable enough to automate? Are the questions straightforward enough that a machine can answer yes or no? Is there a technology that will allow me to do this? When I started down this path, it was on the Angular project I mentioned earlier. We were building out tabs. I thought, why don't we put a bunch of accessibility tests on here? We are already looking at the browser. It worked! Basically, what we need to do these tests is a web browser and a tester, something that is pretending that it is me and clicking around. The technology I rely on is Puppeteer; it is a library that controls headless Chromium. Jest is usually paired with React, but you can use any assertion library to test the results. It is good for compliance testing if you have specific rules and components that you need to test against. You can watch a video in the browser and then check for anything that might cause a seizure in that video. Anything you can sit down and have a human do with a yes/no answer, you can do with these tests. You are asking specifically objective questions.
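As a minimal sketch of that setup (not the presenter's actual suite), a Jest test driving Puppeteer against a hypothetical local tabs demo might look like this; the URL and selectors are assumptions.

```js
// tabs.a11y.test.js — Jest + Puppeteer checking ARIA wiring on a tab component.
const puppeteer = require('puppeteer');

let browser;
let page;

beforeAll(async () => {
  browser = await puppeteer.launch();
  page = await browser.newPage();
  await page.goto('http://localhost:3000/tabs');
});

afterAll(async () => {
  await browser.close();
});

test('tabs expose the expected ARIA roles and states', async () => {
  // A tab list should exist, and exactly one tab should be selected.
  const tablist = await page.$('[role="tablist"]');
  expect(tablist).not.toBeNull();

  const panelId = await page.$eval(
    '[role="tab"][aria-selected="true"]',
    (tab) => tab.getAttribute('aria-controls')
  );
  // The selected tab should point at a tab panel that actually exists.
  const panel = await page.$(`#${panelId}[role="tabpanel"]`);
  expect(panel).not.toBeNull();
});
```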
Things like: do all the correct ARIA roles exist? You can check the HTML for roles. You can look for window roles and such. Some of these will be applicable for applications. You can check for their presence when they are needed. Are the keyboard bindings correct? You can check all of that: what happens when I close a panel? What happens when I expand a panel? Does focus move properly throughout this component? These are all things you can get a yes/no response to, so you can wire them up to a component.
Can I navigate without a mouse? Do all my sub-menus open and close correctly? Can I access my sub-menus at all? Can I go from the top to the bottom of this page without being trapped? Is focus always visible? These are all very straightforward yes or no questions you can check for. Is focus properly trapped in this dialog? Does it return focus correctly when it is closed? You can ask the system these questions. This is the output of one of the tests that we wrote for an accordion. You can see it opens and closes without a mouse. It is up to spec. If you ask a specific question, you can get a specific answer. You can test these over and over throughout the life cycle of a project. You can run them in the CI pipeline to see if some code got pushed that broke the accessibility tests. You can even run them on third-party code if they are general enough; you can change them a little bit. Most of these things are compliance issues. If you are pulling in a third-party modal, you can hook this test up to it and it will tell you whether it is working up to spec or not. If you have an environment where people bring in their own stuff, you can tell them this plugin does not meet our accessibility compliance. It is good for compliance. You still need screen reader and human testing to improve it. These tests are not 100% foolproof. You won't catch everything with them.
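Reusing the same Puppeteer/Jest setup as the tab sketch above, the dialog questions might translate into something like this; the trigger id and page URL are made up for illustration.

```js
test('dialog traps focus and returns it on close', async () => {
  await page.goto('http://localhost:3000/modal');

  // Open the dialog from its trigger button, no mouse involved.
  await page.focus('#open-modal');
  await page.keyboard.press('Enter');

  // Tab repeatedly; focus should stay inside the dialog while it is open.
  for (let i = 0; i < 10; i += 1) {
    await page.keyboard.press('Tab');
    const insideDialog = await page.evaluate(
      () => Boolean(document.activeElement.closest('[role="dialog"]'))
    );
    expect(insideDialog).toBe(true);
  }

  // Escape should close the dialog and hand focus back to the trigger.
  await page.keyboard.press('Escape');
  const focusedId = await page.evaluate(() => document.activeElement.id);
  expect(focusedId).toBe('open-modal');
});
```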
Does this YouTube video have captions? I think you would have trouble testing that one. Anything that is subjective would be hard. You can look at reading level and such, but you still need a human to test those things. That is not to say there aren't some things we can offload to these more custom automated tests.
The question after that is: how do I scale this? This is all great. You have your inline guidance and training. You have your Pa11y. You have your compliance checks in your functional testing, and manual testing for everything else, for one website. You are exhausted and you don't feel like doing this anymore. Scaling this is challenging, but you can scale it up to a larger organization. How do we scale this? You have editorial tests looking after your content and functional tests looking after the UI. People want their work to be accessible. If you give them a way to easily do it, they will utilize it. We have prebuilt design elements, distribute them using npm, and pair them with accessibility tests. We have done this in our component library called Baseline. It is filled with npm packages that we use. They come with accessibility tests, and we can distribute them across projects. We can fix a package and deploy it. We can run the accessibility tests against other websites. It is kind of mimicking that setup; the way we do it is more project to project, but it is the same pattern if you wanted to do this for a distributed component library across a larger organization. You distribute these components packaged up with an accessibility test. When something deploys, you can run the accessibility tests and make sure things are working how they should be. What I think is neat about Baseline is that it is very project based. If we run into something we think is a challenge in a project, it will be extracted to be reused. We have countdown timers, scrolling indicators, and more traditional components as well, all paired with accessibility tests.
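One way to picture the pattern (a sketch only; the package layout and function name are made up, not Baseline internals) is a component package that exports its accessibility check alongside the component, so any consuming project can run the same test pre-deploy.

```js
// Shipped inside the npm package, next to the component itself.
const puppeteer = require('puppeteer');

// Consuming projects point this at any page that uses the component.
async function runAccordionA11yTest(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // The same yes/no question every time: does each trigger's aria-controls
  // point at a panel that actually exists in the document?
  const failures = await page.$$eval('.accordion [aria-expanded]', (triggers) =>
    triggers
      .filter((t) => !document.getElementById(t.getAttribute('aria-controls')))
      .map((t) => t.outerHTML)
  );

  await browser.close();
  return failures; // An empty array means the component is wired up to spec.
}

module.exports = { runAccordionA11yTest };
```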
If you want to look into one of them, this is our animation library. It is a collection of classes. The accessibility test that runs here makes sure that all of the available classes support reduced motion. With Puppeteer and Jest we are emulating reduced motion in Chrome, running the media query, and running through the animations in the demo. It spins up the site, applies them, and does that over and over again. That way we can make sure that if anyone ever pushes a new animation to this, the test will run; if it is not hooked up right, the test will fail and you will have to go back and fix it.
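A minimal sketch of that kind of check, again assuming the same Jest/Puppeteer setup and a hypothetical demo page for the animation classes:

```js
test('animation classes respect prefers-reduced-motion', async () => {
  // Emulate the user's "reduce motion" preference in Chromium.
  await page.emulateMediaFeatures([
    { name: 'prefers-reduced-motion', value: 'reduce' },
  ]);
  await page.goto('http://localhost:3000/animations');

  // With the preference on, no animated element should still be running an animation.
  const stillAnimating = await page.$$eval('[data-animation]', (els) =>
    els.filter((el) => getComputedStyle(el).animationName !== 'none').length
  );
  expect(stillAnimating).toBe(0);
});
```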
These can be run pre-deploy. We don't have to worry about certain components; if we are using this library, we know that all these animations will support reduced motion. We know the accordion will pass compliance. That is not to say every individual design built with it will pass, but we know we came with a good starting point. This even works with design systems. You can check if your branding colors are being used correctly. Are they using a good typeface? There is a lot we can do with these automated tests beyond the scope of what is reasonable to ask a human to do. If you have an organization with 300 websites, it is not reasonable for someone to spot-check those sites and see if all the branding colors or font sizes are correct.
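That kind of brand audit might look roughly like this; the allowed colors and font name are placeholders, not anyone's actual brand rules.

```js
test('headings use approved brand colors and fonts', async () => {
  await page.goto('https://example.com/');

  const offenders = await page.$$eval('h1, h2, h3', (headings) => {
    const allowedColors = ['rgb(204, 0, 0)', 'rgb(51, 51, 51)'];
    return headings.filter((h) => {
      const style = getComputedStyle(h);
      return (
        !allowedColors.includes(style.color) ||
        !style.fontFamily.toLowerCase().includes('helvetica')
      );
    }).length;
  });

  // Every heading on the page should match the brand palette and font stack.
  expect(offenders).toBe(0);
});
```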
It is important to remember that we are not really striving for or guaranteeing 100% of anything. It is like security. You can have the best security protocols applied to an organization and you can still get hacked. It is the same thing with accessibility. You can have all these checks in place and still find things that don't quite work for accessibility. Striving for 100% is great, but it is extremely difficult. These things we talked about are steps to help people access information. It is testing, education, scaling, and making sure that everybody who wants that information can get that information regardless of who they are, where they are, or any disabilities they may have. Keep pushing hard. But give yourself a break. It will be okay.
It is a big responsibility. Information access is very often access to truth, and it is really important to get it out there. But take it one step at a time. There are a lot of steps in the process. You don't have to make a giant leap to the end. Remember you are doing it to help people in the first place. You are helping people have understanding of and access to this information. I think you will come to the same conclusion that I came to: it is worth all the effort and time put into it in the end.
That is all I have.
Feel free to reach out to me. You can email me or find me on Twitter. The feedback link for my presentation is here. We don't often get feedback as speakers. I would love that. I hope to hear some feedback. I think we have some time for questions. I don't have access to the questions log. If we have any, I am happy to answer them.
(The following had to be trimmed from the video.)
Rachel (Room Host): Thank you, Tim! We do have two questions. The number one question, with four votes, is: is the plugin you shared publicly available?
Tim: Not currently. But I do believe there are plans to make it public. We are working on it.
Rachel: Cool. The other question was: will the spreadsheet be available as a template anywhere?
Tim: I think I can do that. I will send the link. I will confirm. I am assuming that if I get the thumbs-up, I can make that available.
Rachel: Awesome. I am also a big fan of Pa11y. I have used it. They have a nice suite of tools. You don't need to have too many tech chops to work with them. For example, the dashboard and being able to set up your own basic scanning system and such. It is cool to hear from a fellow Pa11y fan.
Tim: I don't think it gets enough press.
Rachel: Awesome. They still can't hear me. We may have had our own conversation that they saw in captions. That's interesting.
Tim: Okay.
Rachel: Hold on. I am going to double check. Apparently I have been muted this whole time. You can hear me.
Tim: I can hear you, yes.
Rachel: Yes, they couldn't hear me but they were reading the captions. Accessibility is exciting. I will figure out how to fix that for the next session. Let me see if I can get into my microphone really quick and see if that helps before we sign off. I am transitioning. There we go. I am going to mute myself on Skype to make sure there is no echo. Sorry for the audio issues. I thought my audio was coming through Skype but I guess it was not. I was excited to see that everyone was taking advantage of our captions. The captioners could hear me clearly.
Tim, thank you for that amazing presentation.