Agile and the automated testing of accessibility


1 Agile and the automated testing of accessibility

2 Agile has changed a lot since it was first introduced last decade, with frameworks like the Scaled Agile Framework covering all aspects of how agile interacts with the entire organization. © All Rights Reserved

3 However, if you boil it down to its core, agile is about a simple iteration process: a backlog that is groomed and broken down into sprints (or iterations). These iterations focus on team communication and adaptation to learning. They generally contain daily feedback loops to facilitate collaboration and coordination, and then feedback that improves the overall process.

4 Impact of Agile Agile has driven a lot of changes, and many of these show up in the tools we use. These tools in turn have changed agile. Tools like GitHub have become the center of the coding portions of agile.

5 Agile Tools Agile has also resulted in new tools that help us manage the agile process itself, and these tools in turn have changed agile. Tools like GitHub and JIRA have become the center of the entire process, with other tools integrating into these hubs to facilitate various portions of it.

6 Shift Left Testing Catching a defect in the design and requirements phase can reduce the cost to fix by up to 100:1 (Software Defect Reduction Top 10 List, Barry Boehm and Victor R. Basili, January 2001). The result of this agility is that testing has become more and more important. As we have learned more about software development, we have realized that catching and correcting defects early can save huge amounts of effort later in the process.

7 Rise of Continuous
Lag between committing code and getting feedback allows damage to magnify
Costs rise
Velocity decreases
Continuous Integration
Continuous Deployment
This is not only true for defects themselves; the impact extends to personal relationships too. The end result of late feedback is rising costs and lower value provided by development teams. This has led to the rise of continuous feedback. Continuous integration allows feedback on code quality and regression testing. Continuous deployment allows feedback on customer adoption of new features, which feeds the backlog.

8 Automation This continuous revolution can only work if the repetitive portions of the process can be automated. So continuous integration has led to high levels of unit testing, and to high levels of automated integration, performance, security and acceptance testing. This has led to the widespread adoption of technologies like Selenium, QUnit, Jasmine, Mocha, Cucumber, etc.

9 Accessibility Testing’s Image
We are now asked to introduce accessibility into this environment, and when we do, the need for manual testing in accessibility means that we are viewed as the "stick in the mud" in the process. How do we change this? If we cannot change this, we make the widespread adoption of accessibility difficult.

10 Aim of Software Testing
Reduce the risk of the existence of a serious problem on a supported configuration to an acceptable level
Define "serious problem"
Define "supported configuration"
Define "acceptable level"
First we need to be clear about what software testing is for. What is the aim? The aim of software testing is not to eliminate defects altogether; there is no software product without defects. The aim is to optimize the quality of the product given the resources available. To this end, we find it useful to define the aim of software testing as the reduction of the risk of the existence of a serious problem on a supported configuration to an acceptable level. This means that we need to define the term "serious problem", we need to define what a "supported configuration" is, and we need to define what an "acceptable level" of risk is. There is no single answer to any of these questions. It will depend on the resources of the team, the demographics of the users and the importance of the application.

11 So, even though this may differ for each specific organization, let's look at the general market in which we find ourselves and how it is changing over time. In 2009, Windows had the majority of the market with approximately 70%, Apple and BlackBerry had about 6%, and other operating systems about 20%. Fast forward to 2013 and the picture has changed dramatically: Android already had over 50% of the market, with Windows at about 25%, Apple at 20% and the rest vanishingly small.

12 Android Fragmentation
Looking just at the largest segment of that market, Android, what does it look like? Well, it is tremendously fragmented.

13 Android Fragmentation
We have seen 24,093 distinct devices download our app in the past few months. In our report last year we saw 18,796. In 2013 we saw 11,868. (OpenSignal, 2015) The Android fragmentation can be represented in numbers too, as pointed out by a report from OpenSignal.

14 Combinations
4 operating systems, 2 supported versions
5 assistive technologies
4 browsers, 2 supported versions
3 responsive breakpoints
216 combinations, excluding device combinations
If you then ignore devices and simply look at the software components like the assistive technologies, browsers and operating systems, and combine that with your own software variants like responsive breakpoints, then you still arrive at a very large number of combinations: 216 combinations using the numbers presented here, with 4 operating systems supporting 2 versions each, 5 assistive technologies, 4 browsers with 2 supported versions, and 3 responsive breakpoints.

15 Most of our customers have trouble testing a single browser-AT combination, so how do we get our arms around this problem?

16 Sparse Testing Matrix
Browser/AT: Chrome KB, IE KB, Firefox KB, Safari KB, Dragon IE, NVDA FF, JAWS IE, VO Safari, TB FF, Orca FF
Operating System: Windows, OS X, iOS, Android, Linux
One of the ways to do this is to look at this set of combinations and, using the process of reducing risk to an acceptable level, start to eliminate combinations that do not help us reduce that risk by a significant amount. We then combine that with our user demographics. For example, we may have an application that is only accessed on mobile platforms. In this case, we can ignore all the desktop operating systems, and this simplifies our picture significantly. Another example of reduction can be done in the area of keyboard testing. Most keyboard platforms are exactly the same, so it becomes unnecessary to test all of the keyboard combinations. Furthermore, we can reduce testing requirements by defining our supported configurations. It is totally acceptable, for example, to say that we will only support the most recent version of iOS, because 90% of iOS users upgrade their operating system immediately. When you do that, you will arrive at a much smaller number of platforms. Furthermore, when you look at the functionality of your application, you will realize that you do not need to test all of the functionality in order to cover the areas of risk for, say, keyboard testing. This methodology will result in a much smaller set of tests that need to be performed to reach an acceptable level of risk.
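One way to manage such a matrix in code is to enumerate the full cross product and filter it down to the supported configurations. A minimal sketch, where the browser, AT, and pairing names are illustrative assumptions rather than a recommendation:

```javascript
// Sketch: prune a full browser/AT test matrix down to a supported
// subset. The names and pairing rules here are illustrative
// assumptions, not a recommended support policy.
const browsers = ['Chrome', 'IE', 'Firefox', 'Safari'];
const ats = ['JAWS', 'NVDA', 'VoiceOver', 'Dragon', 'Orca'];

// Only certain browser/AT pairings are realistic to support.
const supportedPairs = new Set([
  'JAWS+IE',
  'NVDA+Firefox',
  'VoiceOver+Safari',
  'Dragon+IE',
  'Orca+Firefox',
]);

// Full cross product: every AT against every browser.
const fullMatrix = [];
for (const b of browsers) {
  for (const at of ats) {
    fullMatrix.push(`${at}+${b}`);
  }
}

// Sparse matrix: only the combinations our support policy keeps.
const sparseMatrix = fullMatrix.filter((combo) => supportedPairs.has(combo));

console.log(fullMatrix.length);   // 20 combinations before pruning
console.log(sparseMatrix.length); // 5 combinations after pruning
```

The same filtering idea extends to operating systems, versions, and breakpoints: each support-policy decision removes whole slices of the cross product.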

17 The other key to dealing with the need for manual testing is to leverage automation wherever possible. There are two types of automation when it comes to accessibility. The first type is to create specific automation for testing accessibility. This is what I call "Accessibility Acceptance Automation".

18 Accessibility “Acceptance Automation”
Here is an example of this acceptance testing automation. This is testing an ARIA menu. As you can see in the example, the test code is simulating key presses. For example, the first action sets the focus into the menu. The next action simulates pressing the LEFT arrow key. This should cause the focus to shift to the other end of the menu. Then the test asserts that this has happened. The rest of the test carries on in the same vein.

19 Acceptance Automation
Comes from the "requirements"
Designers mark up their wireframes and comps
Stories must contain specific accessibility acceptance criteria
Can go to any level of detail
Test the specific alt attributes on images
Test the DOM order specified in the wireframe
Etc.
The important thing is that the tests test the requirements created during the design and story-writing process. It is best done by having these requirements specified by the UX designers, marked up in the design comps, and then translated into acceptance criteria and unit tests. The testing can be taken to any level of detail, even to the point of validating the value of alt attributes, the reading order of the DOM content, etc. Once again, you use a process of analyzing the acceptable risk level versus the required effort tradeoff, as well as evaluating the immediate versus the future effort.

20 Open Source Rules Engine
The second type of automated testing is "generic" automated testing. There are many different technologies that can be used for this. The one I will be showing is aXe, Deque's open source rules engine.

21 aXe Manifesto
Must have a zero false positive rate
Must be lightweight and fast
Must work in all modern browsers on all platforms
Must be tested automatically
There are a lot of things we have learned over the years about what developers require from an automation library. The primary concern is that when the library finds a problem, it had better be a real problem. Developers hate it when their CI builds break, so they especially hate it when builds break for no good reason. It is this learning that has led us to the aXe manifesto.

22 aXe-Core
Small: ~100KB
Fast: no network connection, no new browser start
Intelligent: supports cross-domain iframes; turns document-level rules on/off automatically
Unlimited: no throttling, no server limits
Reliable: no double-execution or missing-CSS problems, no network connectivity problems, no server capacity problems
Secure: no server access across the wire, no sending data to a server
Mozilla Public License 2.0
Available via GitHub, npm and Bower

23 aXe Chrome extension

24 aXe Firefox extension

25 aXe extension
Developers: use when coding
Testers: use during functional testing
Accessibility Experts: use for validation

26 aXe API
Two major calls:
axe.getRules() – returns a list of the rules loaded
axe.a11yCheck() – analyzes a page or a portion of a page
Getting started:
Install the package: bower install --save-dev axe-core
Include the library: <script src="(path_to)/axe.js"></script>
Call the function: axe.a11yCheck(target, options, callback);

27 aXe API demo

28 Integrations
Java Selenium
JavaScript Selenium (axe-webdriverjs)
Grunt (grunt-axe-webdriver)
Cucumber and RSpec (axe-matchers)
Firefox Dev Tools
Chrome Dev Tools
Ember-aXe (ember-axe)
more…

29 Summary
Manage your matrix
Optimize for resources and risk
Leverage automation
Generic automation
Specific automation
Optimize for up-front vs. ongoing effort

30 Questions
Firefox: https://addons.mozilla.org/firefox/addon/axe-devtools/
Chrome: search Google for "chrome axe extension"

