18 January 2023

Our everyday lives have grown increasingly dependent on digital tools. This has impacted the ways in which we take in media, access services, shop, learn and communicate. But we do not all experience digitalization in the same way. There is a significant risk that the rapid digitalization of our lives can exclude those who are unable to keep up. This is why we need websites and apps to be accessible for all.

Photo: Malin Rygg giving a speech at the NLDL conference in a packed hall at the University of Tromsø, showing her presentation on a big screen behind her.

As a government body, the Authority for Universal Design of ICT has a mandate to assist businesses in ensuring that their digital platforms are accessible. We do this through expertise sharing and guidance, but also through monitoring, control testing, and sanctioning those who do not follow the law. Today there are more than 100 000 websites and apps in Norway, and the number grows every year. How can we use Artificial Intelligence (AI) to monitor these platforms efficiently and help close the accessibility gap?

When we talk about monitoring of websites and apps, we essentially mean testing – a lot of testing. We test applications, websites, and documents to check whether they comply with the WCAG 2.1 success criteria. Testing can be automated or manual, but the majority of web solutions are tested manually or semi-manually. As a result, the testing process is time-consuming and resource-intensive. With the implementation of the EU Web Accessibility Directive (WAD) in 2023, testing demands will increase, which has EU and EEA countries scrambling to find the best automated testing technology to use.

However, automating accessibility compliance testing is complicated, very complicated. In fact, it is so complicated that only 20-30% of success criteria can be fully automated as of today. Therefore, we face a serious issue – how can we keep pace with the increased demands of compliance testing and monitoring in the EU and EEA regions?

Let’s daydream for a moment: wouldn’t it be wonderful if you could feed your web solution into an AI-powered engine and have it spit out a finished accessibility compliance report? With the advances in AI made by major tech companies like Facebook and Google, we believe this technology has the potential to revolutionize the way we approach monitoring and testing of web solutions. But several challenges remain, some of which can be easily remedied and others that require more thought.

Challenges with AI-powered accessibility compliance testing

WCAG (Web Content Accessibility Guidelines) is a set of guidelines developed by the World Wide Web Consortium (W3C) to make web content more accessible to people with disabilities. Several challenges can arise when testing web content for WCAG compliance using AI-powered systems. In this blog post we have chosen to focus on four of the big ones: false positives and negatives, complex websites, understanding users’ needs, and data bias. It is important to note that these challenges are not show-stoppers for AI-powered accessibility compliance testing, but they need to be addressed to develop a sustainable solution.

False positives and negatives

False positives and negatives are not only a major problem for conventional automated testing; they can also affect AI-powered accessibility compliance testing. A false positive occurs when an AI-powered system identifies an accessibility issue that does not actually exist. This can be caused by several factors, such as a lack of understanding of the context of the web page, or by a system that is overly sensitive to certain issues. False positives are a problem because they lead developers to waste time and resources trying to fix issues that do not actually exist.

On the other hand, a false negative occurs when an AI-powered system fails to identify an accessibility issue that is present on the website. This can be caused by a lack of understanding of the context of the web page or by a system that is not sensitive enough to certain issues. False negatives can be a problem because they can lead to accessibility issues being overlooked.

These false positives and negatives can make it difficult to trust the results of the testing, and it can be hard to determine which reported issues are real and which are not. Trust in the test results is crucial for us as an Authority making decisions such as issuing fines for non-compliance.
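
To make that trust measurable, automated findings can be compared against manual expert assessments of the same pages. The sketch below is a minimal, hypothetical illustration in Python of how false positive and false negative rates (and the related precision and recall figures) could be computed from such a comparison; the element names and outcomes are made up.

```python
# Minimal sketch: comparing an automated checker's findings with expert
# (manual) assessments of the same elements, to estimate false positive
# and false negative rates. All data below is hypothetical.

# Each entry: (element, tool flagged an issue?, expert confirmed an issue?)
results = [
    ("img#logo",    True,  True),   # true positive
    ("a#skip-link", True,  False),  # false positive: flagged, but not a real issue
    ("form#search", False, True),   # false negative: real issue missed by the tool
    ("nav#main",    False, False),  # true negative
]

tp = sum(1 for _, tool, expert in results if tool and expert)
fp = sum(1 for _, tool, expert in results if tool and not expert)
fn = sum(1 for _, tool, expert in results if not tool and expert)

precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flagged issues that are real
recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real issues that were found

print(f"False positives: {fp}, false negatives: {fn}")
print(f"Precision: {precision:.2f}, recall: {recall:.2f}")
```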

Complex websites

Large, complex websites can present several challenges for AI-powered systems when it comes to testing accessibility. One of the main challenges is that these websites often use dynamic content or JavaScript that changes frequently. This can make it difficult for AI-powered systems to keep up with the constantly changing nature of the website, as the system needs to re-evaluate the website for accessibility each time it changes.
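
One practical way to cope with this, at least in part, is to re-run an automated scan after each significant change to the page. The sketch below illustrates the idea using Playwright together with the open-source axe-core engine loaded from a CDN; the CDN URL, version, target URL, and selector are assumptions for illustration, not a description of the Authority’s own tooling.

```python
# Illustrative sketch: re-running an axe-core scan after dynamic content changes.
# Assumes `pip install playwright` (plus `playwright install chromium`) and
# network access to an axe-core CDN. Not an official monitoring tool.
from playwright.sync_api import sync_playwright

# Assumed CDN location and version of axe-core.
AXE_CDN = "https://cdnjs.cloudflare.com/ajax/libs/axe-core/4.8.2/axe.min.js"

def scan(page):
    """Inject axe-core and return the list of violations for the current DOM."""
    page.add_script_tag(url=AXE_CDN)
    return page.evaluate("async () => (await axe.run()).violations")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL

    print("Initial page:", len(scan(page)), "violations")

    # Trigger dynamic content (hypothetical selector), then scan again,
    # because the updated DOM may introduce new accessibility issues.
    # page.click("#load-more")
    # page.wait_for_load_state("networkidle")
    # print("After update:", len(scan(page)), "violations")

    browser.close()
```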

Additionally, these websites often have multiple pages and a large amount of content, which can make it difficult for AI-powered systems to test all the content accurately. With large and complex websites, it is crucial to ensure that all the pages and functionalities are tested, which can be a very time-consuming task for human testers, and even more so for AI-powered systems.

Moreover, these websites often use sophisticated features and advanced technologies, such as JavaScript, AJAX, and dynamic forms, which can add more complexity to the testing process and might require additional expertise and knowledge to be tested properly.

Another important factor is that complex websites may have a more dynamic user interface, which can make it harder for AI-powered systems to navigate and understand, and thus to test all potential user interactions.

In conclusion, testing the accessibility of complex websites can be a difficult task for AI-powered systems, as these systems may not be able to keep up with the changing nature of the website and the sophisticated features it might contain. It is important to note that even with AI technology, human expertise, an understanding of user needs, and a thorough testing process with real users are still necessary to ensure that complex websites are fully accessible.

Understanding the users’ needs

One of the key challenges with using AI-powered systems to test WCAG compliance is their limited ability to understand users’ needs, preferences, and abilities.

AI-powered systems struggle to account for the varied needs, preferences, and abilities of users with disabilities, as well as the wide range of assistive technologies they may use. For example, a website that is fully accessible to a user with a visual impairment using a screen reader may not be fully accessible to a user with a motor impairment using a keyboard.

Additionally, AI-powered systems cannot understand the preferences of individual users with disabilities, such as font size, color contrast, and layout. These preferences differ from user to user, and not taking them into account can lead to false negative results.

Because AI-powered systems lack this understanding of users’ needs, preferences, and abilities, they can produce false negative results and fail to address real user needs. That is why it is crucial to involve real users with disabilities in the testing process, through user testing and feedback, to ensure that the website is truly accessible to all.

It is important to note that WCAG is a set of guidelines, not a set of strict rules. AI-powered systems will not fully replace human evaluation and understanding. WCAG compliance will still require the involvement of human experts in evaluating and testing, alongside AI-powered tools that automate the process and identify potential issues.

Data bias

Data bias is not a problem specific to accessibility compliance testing, but it is a problem when developing AI in general.

Data bias based on disability refers to the tendency for a dataset used for training a machine learning model to be skewed or unrepresentative of the real-world population of people with disabilities. This type of bias may occur if the dataset used to train the model does not include enough examples of users with disabilities or if the examples that are included do not accurately represent the diversity of users with disabilities. For example, if the dataset used to train the model only includes examples of users with visual impairments, the model may not be able to accurately identify accessibility issues for users with other types of disabilities such as cognitive, hearing, or motor impairments.
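
A first, very simple safeguard is to measure how the different groups are actually represented in the training data before the model is trained. The sketch below uses hypothetical counts purely for illustration; the 15 % threshold is an arbitrary example, not a recommendation.

```python
# Minimal sketch: checking how well different disability groups are represented
# in a (hypothetical) training dataset before using it to train a model.
from collections import Counter

# Hypothetical group labels attached to each training example.
examples = (
    ["visual"] * 700 + ["hearing"] * 120 + ["motor"] * 100 + ["cognitive"] * 80
)

counts = Counter(examples)
total = sum(counts.values())

for group, n in counts.most_common():
    share = n / total
    flag = "  <-- possibly under-represented" if share < 0.15 else ""
    print(f"{group:10s} {n:5d} ({share:5.1%}){flag}")
```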

AI systems have the potential to make many decisions more efficiently, accurately, and consistently than humans. Their ability to process large amounts of data and learn from it enables them to understand human language and perception, and to translate from one language or sensory mode to another. But people with disabilities are a minority in society and a group that often experiences discrimination and prejudice, and the biases found in society at large spill over into the digital world and the data within it. This means that AI systems can perpetuate and even amplify the biases and discrimination found in the data they are trained on.

AI for monitoring and beyond

Thus far we have discussed the limitations and challenges of using AI-powered tools to test accessibility compliance. But what can AI-powered tools help us with?

WCAG

AI-powered tools can automatically check for missing alt text on images and provide suggestions for appropriate alt text. The challenge here is to analyze the intent behind the image and provide a good text alternative to it. These tools can also automatically check for proper use of heading tags, which today takes a lot of time to do manually. Furthermore, they can identify other issues such as poor keyboard navigation, missing skip links, and more.
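
As a simple illustration of the rule-based end of such checks, the sketch below flags images that lack an alt attribute and heading levels that skip a step in a static HTML snippet. It assumes the third-party BeautifulSoup library and hypothetical markup; a real page with dynamic content would first need to be rendered in a browser.

```python
# Minimal sketch: flagging images without an alt attribute and heading levels
# that skip a step. Static HTML only; the markup below is a hypothetical example.
from bs4 import BeautifulSoup

html = """
<h1>Welcome</h1>
<img src="logo.png">
<h3>Our services</h3>
<img src="photo.jpg" alt="Advisers testing a website with a screen reader">
"""

soup = BeautifulSoup(html, "html.parser")

# Images missing an alt attribute entirely (alt="" can be valid for decoration).
for img in soup.find_all("img"):
    if not img.has_attr("alt"):
        print("Missing alt text:", img)

# Heading levels that skip a step, e.g. an h1 followed directly by an h3.
levels = [int(h.name[1]) for h in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
for prev, cur in zip(levels, levels[1:]):
    if cur > prev + 1:
        print(f"Heading level jumps from h{prev} to h{cur}")
```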

Color contrast

These tools can automatically check for color contrast issues and provide suggestions for appropriate color combinations. The challenge today is the difference between computed contrast and true contrast. For example, the use of overlays often confuses current color contrast analysis tools, because they analyze the contrast computed from the code rather than the contrast that is actually rendered on screen.
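
For reference, the computed contrast that such tools rely on comes from the WCAG 2.1 formulas for relative luminance and contrast ratio, sketched below in Python. As noted above, overlays and background images can make the contrast that is actually rendered differ from this computed value.

```python
# Worked example: the WCAG 2.1 contrast-ratio formula applied to two sRGB colors.
# This is the *computed* contrast; the rendered contrast may differ (overlays etc.).

def relative_luminance(rgb):
    """Relative luminance per WCAG 2.1, with rgb given as 0-255 integers."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # grey text on white
print(f"Contrast ratio: {ratio:.2f}:1")
print("Meets WCAG AA for normal text (>= 4.5:1):", ratio >= 4.5)
```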

ARIA and clear language

AI-powered tools can also be very effective in identifying missing accessibility information on websites and applications. For example, they can detect missing or incorrect ARIA (Accessible Rich Internet Applications) attributes, which are used to provide additional information to assistive technologies, such as screen readers. This can include information about the role, state, and properties of elements on a webpage.
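
A small illustration of this kind of check: the sketch below flags buttons and links that have no accessible name, meaning no text content, aria-label, or aria-labelledby. The markup is a hypothetical example, and the check is deliberately simplified compared with the full accessible-name computation used by real tools.

```python
# Minimal sketch: flagging buttons and links with no accessible name.
# Static HTML only; the markup is a hypothetical example.
from bs4 import BeautifulSoup

html = """
<button><svg aria-hidden="true"></svg></button>
<button aria-label="Search"></button>
<a href="/contact">Contact us</a>
"""

soup = BeautifulSoup(html, "html.parser")

for el in soup.find_all(["button", "a"]):
    has_text = bool(el.get_text(strip=True))
    has_label = el.has_attr("aria-label") or el.has_attr("aria-labelledby")
    if not has_text and not has_label:
        print("No accessible name:", el)
```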

Additionally, these tools can use natural language processing (NLP) to interpret the contents of the webpage and identify if the content is written clearly and in an accessible manner, providing suggestions for improvement.
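
As a rough illustration of what such a language check might build on, the sketch below uses the third-party textstat package to compute standard readability scores for a piece of text. This is only one possible signal; clear-language evaluation in practice needs far more than a readability formula, and the example text and threshold are assumptions.

```python
# Minimal sketch: a rough readability signal using the third-party `textstat`
# package (one of many possible approaches, not a full clear-language check).
import textstat

text = (
    "To apply for a parking permit, fill in the form and attach a copy of "
    "your driving licence. We will answer within three weeks."
)

score = textstat.flesch_reading_ease(text)   # higher = easier to read
grade = textstat.flesch_kincaid_grade(text)  # approximate school grade level

print(f"Flesch reading ease: {score:.0f}")
print(f"Flesch-Kincaid grade level: {grade:.1f}")
if score < 60:  # arbitrary example threshold
    print("Consider simplifying the language.")
```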

Overall, AI-powered tools can help developers and testers ensure that the website they build or test is fully accessible to screen readers and other assistive technologies, and that it meets the requirements of WCAG.

But wait, there is more!

Now that we have identified areas where AI-powered tools can assist in accessibility compliance monitoring, what other accessibility areas might AI be able to help with?

We were recently at the Northern Lights Deep Learning Conference 2023 and got the opportunity to talk to different experts in the field. Through these discussions we gained insights into other things we can do with AI to increase accessibility, such as advanced accessibility checks, continuous accessibility testing, and personalization.

Advanced accessibility checks

Some AI-powered tools can perform advanced accessibility checks that would be difficult for humans to do manually. For example, certain AI-powered tools can check for keyboard accessibility by simulating a user using a keyboard to navigate a website and identify issues.
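
The sketch below illustrates the idea: it uses Playwright to press Tab repeatedly and record which element receives focus, which can reveal controls that are unreachable by keyboard or loops that may indicate a focus trap. The URL and the fixed number of key presses are placeholders for illustration.

```python
# Illustrative sketch: simulating keyboard navigation by pressing Tab and
# recording the focused element. Assumes Playwright; not an official tool.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")  # placeholder URL

    focus_order = []
    for _ in range(20):  # press Tab a fixed number of times
        page.keyboard.press("Tab")
        focused = page.evaluate(
            "() => document.activeElement ? document.activeElement.outerHTML.slice(0, 80) : null"
        )
        if focused in focus_order:  # focus returned to an earlier element; may indicate a loop
            break
        focus_order.append(focused)

    for i, el in enumerate(focus_order, 1):
        print(i, el)

    browser.close()
```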

Accessibility statement

With the WAD coming into force, owners of web solutions also have to publish an accessibility statement for their web solutions, which must be updated at least annually. AI can be used for continuous testing by integrating AI-powered systems into the development process so that they run on a continuous basis. This can help identify and fix accessibility issues early in the development process, rather than waiting until the end of the development cycle.
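
One way to picture such continuous testing is an accessibility check written as an ordinary automated test that runs in the build pipeline on every commit. The sketch below frames this as a pytest test using Playwright and axe-core; the staging URL, CDN link, and severity threshold are assumptions for illustration.

```python
# Minimal sketch: an accessibility check written as a pytest test so it can run
# in a CI pipeline on every commit. Assumes Playwright and axe-core from a CDN;
# illustrative only.
from playwright.sync_api import sync_playwright

AXE_CDN = "https://cdnjs.cloudflare.com/ajax/libs/axe-core/4.8.2/axe.min.js"  # assumed CDN
STAGING_URL = "https://staging.example.com"  # hypothetical test environment

def test_no_serious_accessibility_violations():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(STAGING_URL)
        page.add_script_tag(url=AXE_CDN)
        violations = page.evaluate("async () => (await axe.run()).violations")
        browser.close()

    # Fail the build only on the most severe issues (example threshold).
    serious = [v for v in violations if v["impact"] in ("serious", "critical")]
    assert not serious, f"{len(serious)} serious accessibility violations found"
```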

Personalization

Personalization in testing refers to the customization of the test-taking experience for individual users based on their specific needs or characteristics. AI-powered technology can be used to analyze data on a user’s abilities, preferences, and past performance, to adapt the test to their individual needs. This can include adjusting the difficulty level, format, or type of questions, as well as providing accommodations such as text-to-speech or additional time.
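
The sketch below is a deliberately simple, hypothetical illustration of this idea: a user profile drives a few session settings such as text-to-speech and an extended time limit. The profile fields and the doubling of the time limit are invented for the example.

```python
# Minimal sketch: adapting a test session to a (hypothetical) user profile,
# e.g. enabling text-to-speech or extending the time limit. Purely illustrative.
from dataclasses import dataclass

@dataclass
class UserProfile:
    uses_screen_reader: bool = False
    needs_extra_time: bool = False
    preferred_font_size: int = 16

def configure_session(profile: UserProfile, base_time_limit_min: int = 30) -> dict:
    """Return session settings adapted to the user's needs and preferences."""
    return {
        "text_to_speech": profile.uses_screen_reader,
        "time_limit_min": base_time_limit_min * 2 if profile.needs_extra_time else base_time_limit_min,
        "font_size_px": max(profile.preferred_font_size, 16),
    }

print(configure_session(UserProfile(uses_screen_reader=True, needs_extra_time=True)))
```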

Road ahead

When working with AI-based technology for testing WCAG, it is important to recognize that these tools are still in the early stages of development and may have limitations in their capabilities. One major limitation is that a fully AI-powered automated testing solution, capable of accurately and reliably monitoring a large number of solutions, has yet to be developed.

As we have observed, AI-powered accessibility compliance testing tools face several challenges that must be overcome for them to be useful for monitoring and testing web content. While these challenges can be significant, the potential benefits of such tools are enormous. Imagine the convenience of being able to feed a web solution into an AI-powered engine and receive a finished accessibility compliance report in return. This vision is achievable through the power of AI, but we are not quite there yet. Given the pressing need for more efficient testing of accessibility compliance in the EU and EEA, it may be worth exploring the possibility of using AI-powered solutions to help solve this problem. Maybe this should even be an intergovernmental project for the EU/EEA countries?

Jan Beniamin Kwiek
Adviser, Norwegian Digitalisation Agency (Digitaliseringsdirektoratet)

Jan Beniamin Kwiek is a technologist at the Authority for Universal Design of ICT in the Norwegian Digitalisation Agency (Digitaliseringsdirektoratet).

Malin Rygg
Head of Department, Norwegian Digitalisation Agency (Digitaliseringsdirektoratet)

Malin Rygg is director of the Authority for Universal Design of ICT in the Norwegian Digitalisation Agency (Digitaliseringsdirektoratet). Malin is a lawyer with experience from, among others, the Norwegian Competition Authority, and has previously worked as an attorney and a judge.


Comments (1)

Jan Lundin

19 January 2023

Good insights on using AI. At Accessible Cloud we have already tried new concepts for testing websites using AI and machine learning in combination with computer vision. Using powerful computing from SAS Institute, we managed to build a solution that was able to detect accessibility problems in forms on websites.

Read the included article written by SAS Institute. As a bonus, we also won SAS Institute's Global Hackathon 2022 for the solution.

https://www.linkedin.com/safety/go?url=https%3A%2F%2Fcuriosity.sas.com%…