A week at the CSUN Assistive Technology conference in San Diego leaves the mind reeling. So much new information! So much of it related to learning!
A few presentations made this connection explicit, directly correlating learning and accessibility. What I’d really like to talk about, though, is the way that accessibility, the process and principle of making content available to those with disabilities, relates to elearning. Drawing from various presentations, I’d suggest that accessibility can inform elearning in four areas in particular: the design process, content development, course production, and module testing.
What Does It Mean to Be Accessible?
When you make elearning accessible, you make it available to all learners. You leverage approaches and techniques so that those who may have a disability related to hearing, vision, mobility, or cognition (among others) have equal access and opportunity to experience the content and instruction of your course.
(As a side note, elearning developers already use many techniques to make their courses accessible, although we may not think of it in those terms. We strive to use good visual design principles so that users will be able to focus on what they need to see. We aim for clear and consistent navigation, so that users won’t be distracted by confusing directions. We deconstruct expert knowledge so that novices can comprehend and assimilate what they need to do. When incorporating accessibility into our design process, we’re merely stretching the boundaries to include additional techniques.)
And How Do You Make Elearning Accessible?
As you might imagine, there’s a lot that goes into making your course accessible—just as there’s a lot that goes into making your course aesthetically pleasing, easily navigable, and understandable by novices. How do you know what you need to do? And how do you know when you’ve been successful?
Perhaps the most common guide is the Web Content Accessibility Guidelines (WCAG), version 2.0. There are three levels of accessibility (Level A, AA, and AAA); most organizations aim for Level A and AA. The guidelines state that content should be perceivable (presented to users in ways that they can actually see, hear, or otherwise access), operable (interface components and navigation must be able to be manipulated), understandable (content has to be readable and designed so that users can comprehend it), and robust (the content has to be able to be interpreted by devices such as assistive technologies).
WCAG 2.0 sets out a series of measurable criteria against which you can judge your course to see whether the course meets the guidelines.
The Design Process
So what did CSUN have to say about making elearning perceivable, operable, understandable, and robust?
As in any other software development, efforts to make elearning accessible should start early in the process. Too often, accessibility is something thought of and checked for at the last minute. As accessibility consultant Ryan Stunk and product designer Joe Lonsky noted, the cost of fixing a bug increases from 1× in design, to 6.5× in development, to 15× in testing, to 100× in the wild. Much, much better to catch it earlier.
Bill Tyler, digital accessibility engineer at Optum Technology, favors a role-based approach, where testing rests with the appropriate decision-making role instead of the quality assurance team (which usually enters at the end of the development process). For example, should decisions about color contrast be made at the quality assurance stage? Tyler argues that the primary owner of color contrast is the visual designer (although the business owner may have input through a marketing style guide). If the visual designer checks for color contrast issues, errors will be caught and fixes will be implemented much earlier. Tyler recommends making sure that knowledge about accessibility and tools for remediation are placed within the roles that have responsibility for making those decisions.
Should considerations of accessibility influence the content of your course? Absolutely.
We’ve hinted at color contrast, but color affects so much more than visual design. Accessibility solutions engineer Crystal Baker took a deep dive into color. People who have a variation of color blindness won’t be able to understand directions if the only indicator is color (the green circle indicates this value; the red circle represents the opposite value). She suggested principles to follow (keep color schemes conservative, conventional, and simple) and gave guidance on tools (notably The Paciello Group’s color contrast analyzer—which we use at Microassist—and WebAIM’s color contrast checker).
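The checks those tools perform boil down to a formula defined in WCAG 2.0 itself: each color is converted to a relative luminance, and the ratio of the lighter to the darker (each offset by 0.05) must meet the threshold for the target level. As a rough illustration only, not a replacement for the tools above, here is a minimal Python sketch of that calculation:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB color, per the WCAG 2.0 definition."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum possible ratio, 21:1.
ratio = contrast_ratio((0, 0, 0), (255, 255, 255))
print(round(ratio, 1))  # 21.0
# WCAG 2.0 Level AA requires at least 4.5:1 for normal-size text.
print(ratio >= 4.5)  # True
```

Real checkers also account for text size (large text only needs 3:1 at Level AA) and for the stricter Level AAA thresholds, which is why a dedicated tool is still the better choice.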
Jeff Witt of Fidelity Investments took color in a different direction, noting that, essentially, color is weird, and that you shouldn’t underestimate its influence in charts and graphs. While his presentation isn’t available as of this posting, his work in “Practical Guidelines for Accessibility and Usability of Charts” was riveting (as odd as that may sound). In it, he dissected the role that color contrast theory played in creating the palette that Fidelity uses in its charts and graphs. He noted other considerations, including how charts might be read by a screen reader (most charts translate very well into a table; a pie chart, however, works best as a list). And he suggested, rather provocatively, that table, graph, and chart summaries aren’t particularly helpful. It’s the viewer of the chart that constructs the meaning of the information, not the chart’s author; and how is the author to know what an accurate summary might be? Provide the data and let the user determine its meaning. (I detected a hint of reader-response theory influencing his approach, but didn’t get a chance to talk about it further.)
The intersection of cognition and accessibility was the subject of several sessions. In addition to her work on color, Crystal Baker presented on the art of language in accessibility. She showed the necessity of using plain language (including how to use the Flesch-Kincaid readability score) in your content. Krista Greer, assistant director of disability resources for students at the University of Washington, shared a fascinating presentation that uncovered the difficulties in reconciling the cognitive theory of multimedia learning with the lived experience of presenting information in an accessible manner. Ashley Bischoff of The Paciello Group called attention to the often unintentional ways in which language stigmatizes those with disabilities and called for a more careful approach to avoid ableist language.
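The Flesch-Kincaid grade level itself is a simple weighted formula over sentence length and syllable count. The Python sketch below shows the idea; the syllable counter is a naive vowel-group heuristic of my own (real scorers use pronunciation dictionaries), so treat the scores as illustrative:

```python
import re

def count_syllables(word):
    """Naive heuristic: count runs of vowels. Approximate only."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59.
    Lower grades indicate plainer language."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

plain = "The cat sat on the mat."
dense = "Organizational accessibility considerations necessitate comprehensive evaluation."
print(flesch_kincaid_grade(plain) < flesch_kincaid_grade(dense))  # True
```

The practical takeaway from Baker's session holds regardless of the scoring details: shorter sentences and shorter words move content toward plain language.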
Often, participating in an elearning course means leaving with various handouts, job aids, and other takeaways. Like the course itself, these need to be accessible. Karen McCall of Karlen Communications led several sessions on the accessibility of PDFs. She shared the results of her survey of those using assistive technology to access PDFs and described how to optimally organize the PDF tag tree to create maximum accessibility. On her website, she provides the results of the PDF surveys for 2015 and 2016, as well as additional resources, including links to handouts and webinars. (Consider whether you can reach out to her and help her spread the word about her PDF survey—the more people who respond, the more useful the results.)
Once the course is designed, how might concerns about accessibility affect building the course in an authoring tool? Authoring tools come in many varieties, from SCORM-producing rapid authoring tools such as Articulate Storyline, Trivantis Lectora, and Adobe Captivate, to massive open online course development tools such as edX and Coursera, to learning management systems such as Moodle and Canvas.
CSUN didn’t have much to say about rapid authoring tools, although in his presentation on accessibility at the Census Bureau, Lawrence Malakhoff noted that Captivate courses had difficulty passing his accessibility checks.
Several people commented on the possibilities of making MOOCs (massive open online courses) and LMSs (learning management systems) accessible. Mark Sadecki and Christina Roberts shared their experience in building the question-creation tool in edX, where course authors generally lacked experience in accessible design. As authors built their own quizzes, they pushed the tool in directions that it wasn’t meant to go, breaking accessibility features.
Sadecki and Roberts redesigned the interface so that the content would follow a logical sequence, independent of the way that the author created the question, which allowed assistive technology to operate effectively. They included instructions so that test takers would be informed of how the question would function. And, crucially, they ran the redesigned system through a series of user tests to make sure that it worked.
Other sessions explored the accessibility of the learning management systems Moodle and Canvas (for Canvas, the app version was explored this year; last year, the same team looked at the learning management system as a whole). While these tools have made significant strides, they still have a way to go before they are fully accessible.
Once created, the accessibility of elearning needs to be verified through testing (in the same way that navigation, quizzes, feedback, etc. need to be tested). Rich Boys of Deque pointed out that automated testing catches a limited number of accessibility errors (most sources say from 20%–40%). Manual testing, on the other hand, can lead to widely divergent results. Boys described a study where eight programmers were given five hours to find as many bugs as they could, and the results were all over the map—some took a little time and found lots of errors; others took a lot of time and found only a few—there wasn’t any consistency.
How do you ensure consistency? Intopia’s Sarah Pulis described the development of reusable success criteria as a way to provide a standard approach to testing. Essentially, develop a list of criteria that states given this, when I do that, this happens. For example, given that there are checkboxes presented as a logical group with a group label, when I press the tab key, focus moves to the first checkbox in the group and I hear the label for the group of checkboxes. The test cases can be cross-referenced against the WCAG 2.0 Level AA success criteria noted above.
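Given/when/then criteria like Pulis describes map naturally onto a small data structure that a test plan can reuse. The sketch below is a hypothetical illustration; the field names, the example values, and the WCAG cross-reference are my own, not taken from her checklist:

```python
from dataclasses import dataclass

@dataclass
class SuccessCriterion:
    """One reusable given/when/then test case for manual accessibility testing."""
    given: str
    when: str
    then: str
    wcag_ref: str  # the WCAG 2.0 success criterion it cross-references

# Illustrative example, modeled on the checkbox case described above.
checkbox_focus = SuccessCriterion(
    given="a group of checkboxes presented with a group label",
    when="I press the Tab key",
    then=("focus moves to the first checkbox in the group "
          "and I hear the label for the group"),
    wcag_ref="2.4.3 Focus Order",  # assumed mapping, for illustration
)

def describe(criterion: SuccessCriterion) -> str:
    """Render the criterion as a single testable statement for the test plan."""
    return (f"Given {criterion.given}, when {criterion.when}, "
            f"then {criterion.then}. [WCAG {criterion.wcag_ref}]")

print(describe(checkbox_focus))
```

Because every tester works from the same rendered statements, results become comparable across testers, which is exactly the consistency problem Boys identified.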
While the process doesn’t cover all cases, it can help bring uniformity. The Paciello Group’s Ian Pouncey described a similar process used to help Skillsoft make its online courses accessible, drawing on the Agile practice of a “definition of done” to define success criteria.
The team in charge of designing an accessible equation editor at Pearson Education showed that it is absolutely essential for quality assurance teams to include members who themselves have the disability being tested for. As quality engineers, Su Park, Cricket Biddleman, and Edgar Lozano, each of whom has a visual impairment, gave essential feedback to the developers and guided Pearson in developing a robust and accessible equation editor.
As with other forms of electronic and information technology, individuals and organizations are coming to realize that in order to reach all learners, elearning needs to be accessible. Building in accessibility throughout the process will ensure that your efforts to make elearning accessible to learners with disabilities are successful. Once again, the CSUN Assistive Technology Conference offered great insight into how to make that happen.
Until next time,
P.S. Note that the presentations listed here, as well as several others and additional information related to CSUN, can be found on Microassist’s CSUN curated backchannel.
More on Accessible Elearning
At Microassist, accessibility is an essential priority as we develop elearning courses. We value education and strive to make our elearning accessible to everyone, offering Section 508 and WCAG AA accessible course development. For more on our accessible elearning development services, contact our accessibility team.
Image credit: © Can Stock Photo / destina