W3C Accessibility Guidelines (WCAG) 3.0

W3C Working Draft 24 July 2023

More details about this document
This version:
https://www.w3.org/TR/2023/WD-wcag-3.0-20230724/
Latest published version:
https://www.w3.org/TR/wcag-3.0/
Latest editor's draft:
https://w3c.github.io/silver/guidelines/
History:
https://www.w3.org/standards/history/wcag-3.0/
Commit history
Editors:
Jeanne Spellman ( TetraLogical )
Rachael Bradley Montgomery ( Library of Congress )
Shawn Lauriat ( Google, Inc. )
Michael Cooper ( W3C Invited Expert )
Chuck Adams ( Oracle )
Alastair Campbell ( Nomensa )
Project Manager: Wilco Fiers ( Deque Systems, Inc. )
Feedback:
GitHub w3c/silver ( pull requests , new issue , open issues )

Abstract

The W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other web of things devices. They apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. The guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple outcomes to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each outcome.

This specification is expected to be updated regularly to keep pace with changing technology by updating and adding methods, outcomes, and guidelines to address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.

W3C Accessibility Guidelines 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [ WCAG22 ] and previous versions, but does not deprecate those versions. WCAG 3.0 will incorporate content from and partially extend User Agent Accessibility Guidelines 2.0 [ UAAG20 ] and Authoring Tool Accessibility Guidelines 2.0 [ ATAG20 ]. While there is a lot of overlap between WCAG 2.X and WCAG 3.0, WCAG 3.0 includes additional tests and different scoring mechanisms. As a result, WCAG 3.0 is not backwards compatible with WCAG 2.X; rather, it is an alternative set of guidelines that does not supersede WCAG 2.2 and previous versions. Once these guidelines become a W3C Recommendation, the W3C will advise developers, content creators, and policy makers to use WCAG 3.0 in order to maximize the future applicability of accessibility efforts. However, content that conforms to earlier versions of WCAG continues to conform to those versions.

See WCAG 3 Introduction for an introduction and links to WCAG technical and educational material.

Status of This Document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This is a Working Draft of W3C Accessibility Guidelines (WCAG) 3.0 by the Accessibility Guidelines Working Group together with the Silver Task Force and Silver Community Group . The first Working Draft of WCAG 3.0 was published on 21 January 2021. This version addresses editorial fixes from comments received on that draft. Not all comments on the previous draft have been processed yet. In particular, testing and conformance have received many comments which are being actively explored, but the group has not yet adopted updated content for those sections. The group will continue processing comments from the previous draft as well as on this draft.

The Working Group seeks input on this draft, in particular on the questions raised in the editor's notes throughout the document.

To comment, file an issue in the W3C silver GitHub repository . The Working Group requests that public comments be filed as new issues, one issue per discrete comment. Creating a GitHub account to file issues is free. If filing issues in GitHub is not feasible, send email to public-agwg-comments@w3.org ( comment archive ). The Working Group requests that comments on this draft be sent by 9 July 2021 . In-progress updates to the guidelines can be viewed in the public editors' draft .

This document was published by the Accessibility Guidelines Working Group as a Working Draft using the Recommendation track .

Publication as a Working Draft does not imply endorsement by W3C and its Members.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 1 August 2017 W3C Patent Policy . W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy .

This document is governed by the 12 June 2023 W3C Process Document .

1. Introduction

This section (with its subsections) provides advice only and does not specify guidelines, meaning it is informative or non-normative.

Plain language summary of Introduction

The W3C Accessibility Guidelines (WCAG) 3.0 show ways to make web content and apps accessible to people with disabilities. WCAG 3.0 is a newer standard than the Web Content Accessibility Guidelines (WCAG) 2. WCAG 3.0 doesn't replace WCAG 2. WCAG 2 is used around the world and will still be required by different countries for a long time to come. Meeting WCAG 2 at the AA level means you will be close to meeting WCAG 3.0, but there may be differences.


End of summary for Introduction


1.1 About WCAG 3.0

This introduction provides a brief background to WCAG 3.0. Detailed information about the structure of the guidelines and inputs into their development is available in the Explainer for W3C Accessibility Guidelines (WCAG) 3.0 . That document is recommended reading for anyone new to WCAG 3.

This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. The W3C Accessibility Guidelines (WCAG) 3.0 support a wide set of user needs, use new approaches to testing, and allow frequent maintenance of guidelines and related content to keep pace with accelerating technology change. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by outcomes and technology-specific methods to meet those needs.

Following these guidelines will make content more accessible to people with a wide range of disabilities, including accommodations for blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. Following these guidelines will also often make content more usable to users in general as well as accessible to people with disabilities.

WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [ WCAG22 ] and previous versions, but does not deprecate WCAG 2.X. It will also incorporate content from and partially extend User Agent Accessibility Guidelines 2.0 [ UAAG20 ] and Authoring Tool Accessibility Guidelines 2.0 [ ATAG20 ]. These earlier versions provided a flexible model that kept them relevant for over 10 years. However, changing technology and the changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.

There are many differences between WCAG 2.X and WCAG 3.0. Content that conforms to WCAG 2.2 A and AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2.X, while others may wish to migrate to the new standard. For those that wish to migrate, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.

1.2 Relationship to other W3C guidelines

The Web Content Accessibility Guidelines (WCAG) 2.0 [ WCAG20 ] were designed to be technology neutral, and have stayed relevant for over 10 years. The Authoring Tool Accessibility Guidelines ( ATAG ) 2.0 [ ATAG20 ] provide guidance for the various types of software that assist people in writing accessible content. User Agent Accessibility Guidelines ( UAAG ) 2.0 [ UAAG20 ] offers useful guidance to user agent developers and has been implemented on an individual success criterion basis.

These guidelines have normative guidance for content and helpful implementation advice for authoring tools, user agents, and assistive technologies.

For more details about differences from previous guidelines, see Appendix: Differences From WCAG 2 .

Editor's note

This version of the guidelines includes an example method for ATAG ( Author control of text alternatives ) and UAAG ( Reflow of captions and other text in context ). Future drafts of the guidelines will include additional examples of ATAG - and UAAG -related content.


1.3 Goals and Requirements

The goal of WCAG 3.0 and supporting documents is to make digital products including web, ePub, PDF, applications, mobile apps, and other emerging technologies more accessible and usable to people with disabilities. It is the intention for WCAG 3.0 to meet this goal by supporting a wider set of user needs, using new approaches to testing, and allowing more frequent maintenance of guidelines to keep pace with accelerating technology change. The hope is that WCAG 3.0 will make it significantly easier for both beginners and experts to create accessible digital products that support the needs of people with disabilities.

Research and design work performed by the Silver Task Force identified key requirements needed to improve upon the existing WCAG 2.X structure. These requirements, presented in the Requirements for Silver document, shaped the guidelines that follow and should be taken into account when evaluating and updating the guidelines.

Editor's note

While the majority of guidelines are still to be written and we continue to explore additional ways of validating conformance, we seek wider public review on the approach presented here.


2. Normative requirements

Plain language summary of Normative requirements

There are two types of content in this document:

  • Normative: what you must do to meet the guidelines.
  • Non-normative: advice to help you meet the guidelines. This is also called informative .

End of summary for Normative requirements

In addition to this section, the Guidelines , Testing , and Conformance sections in WCAG 3.0 provide normative content and define requirements that impact conformance claims. Introductory material, appendices, sections marked as non-normative , diagrams, examples, and notes are informative (non-normative). Non-normative material provides advisory information to help interpret the guidelines but does not create requirements that impact a conformance claim.

The key words MAY , MUST , MUST NOT , NOT RECOMMENDED , RECOMMENDED , SHOULD , and SHOULD NOT are to be interpreted as described in [ RFC2119 ].

Editor's note

Outcomes are normative. The working group is looking for feedback on whether the following should be normative or informative: guidelines, methods, critical errors, and outcome ratings.

3. Guidelines

Plain language summary of Guidelines

The following five guideline examples show different features of WCAG 3.0.

End of summary for Guidelines

Editor's note

The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. In order to meet the varying needs of this audience, several layers of guidance are provided, including functional categories of disabilities, general guidelines, outcomes that can be tested, a rich collection of methods, resource links, and code samples.

The guidelines included in this draft have been selected to show different types of content.

Editor's note

The following early drafts of guidelines are included to serve as examples . They are used to illustrate what WCAG 3.0 could look like and to test the process of writing content. These guideline drafts should not be considered as final content of WCAG 3.0; they are included to show how the structure would work. As this draft matures, numbering of individual guidelines will be removed to improve the overall usability of the guidelines in response to public requests. WCAG 2.x success criteria will be migrated to this new structure before WCAG 3.0 moves to Candidate Recommendation.

To see the overall plan for migrating content from WCAG 2.1 to WCAG 3.0, see the WCAG to Silver Outline Map .

End of note

3.1 Text alternatives


Guideline: Provide text alternative for non-text content. Text alternatives how-to


Text alternative available (outcome for Text alternatives )


Provides text alternatives for non-text content for user agents and assistive technologies. This allows users who are unable to perceive and / or understand the non-text content to determine its meaning.

Outcome, details, and methods for Text alternative available

Functional categories for Text alternative available

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Mental Health
  • Cognitive & Sensory Intersections
Critical errors for Text alternative available
  • Any image of text without an appropriate text alternative needed to complete a process.
Rating for Text alternative available
Rating scale for "Text alternative available"
Rating Criteria
Rating 0 Less than 60% of all images have appropriate text alternatives OR there is a critical error in the process
Rating 1 60%-69% of all images have appropriate text alternatives AND no critical errors in the process
Rating 2 70%-79% of all images have appropriate text alternatives AND no critical errors in the process
Rating 3 80%-94% of all images have appropriate text alternatives AND no critical errors in the process
Rating 4 95%-100% of all images have appropriate text alternatives AND no critical errors in the process
Editor's note

We selected the Text Alternatives guideline to illustrate how WCAG 2.2 success criteria can be moved to WCAG 3.0 with minimal changes. Most of the material was directly copied from W3C sources such as WCAG 2.1, Web Accessibility Tutorials, and HTML 5.3 examples.

There are subtleties to the scoring of the methods that should be noted in this guideline. We have included four different methods for different types of images in HTML:

  • functional images;
  • informative images;
  • images of text; and
  • decorative images.

The scoring is set up to work across all types of images to make it easier for automated tools. The automated tool does not need to know the type of image and can give you a score of the number of images and the number of images passed. The tester reviewing the path that a user would use to accomplish a task can identify whether the lack of a text alternative is a critical error that would stop a user from completing a task. This allows an automated tool to do the heavy lifting for identifying all the text alternatives while still allowing a knowledgeable tester to identify and evaluate the images that are necessary to complete a task.

This guideline also illustrates an example of critical errors along a path. Organizations with large numbers of images often have a missing text alternative on an image as a bug. They need to know when that missing text alternative is critical to fix, and when it is a lower priority. This critical error example shows how an image without alternative text that is crucial for completing the task gives a rating of zero. An image without alternative text that is not crucial, such as an image in the footer, does not block the organization from receiving the score the rest of the images deserve. This makes it possible for very large web sites or apps to conform even if they have a small number of bugs, without losing sight of the critical needs of people with disabilities.
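
The following non-normative TypeScript sketch illustrates how the rating bands above could be combined with the critical error check when an automated tool counts images and a tester reviews the process. The function and property names are hypothetical and not part of WCAG 3.0.

interface TextAlternativeResult {
  totalImages: number;              // images found by the automated tool
  imagesWithAppropriateAlt: number; // images judged to have an appropriate text alternative
  criticalErrorInProcess: boolean;  // a missing text alternative blocks completion of the tested process
}

function textAlternativeRating(result: TextAlternativeResult): number {
  // A critical error along the tested process forces a rating of 0.
  if (result.criticalErrorInProcess) return 0;
  // Assumption: content with no images is treated as fully passing in this sketch.
  if (result.totalImages === 0) return 4;
  const percent = (result.imagesWithAppropriateAlt / result.totalImages) * 100;
  if (percent >= 95) return 4;
  if (percent >= 80) return 3;
  if (percent >= 70) return 2;
  if (percent >= 60) return 1;
  return 0;
}

// Example: 170 of 200 images (85%) have appropriate text alternatives and no
// blocking image lacks one, so the content rates 3 under the bands above.
console.log(textAlternativeRating({
  totalImages: 200,
  imagesWithAppropriateAlt: 170,
  criticalErrorInProcess: false,
}));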

We are interested in your feedback on this approach to testing and scoring. Does this approach help large organizations conform even if their site is not 100% perfect? Do you think that organizations will interpret that they only need 95% of text alternatives for images and then stop adding alternative text? Are the bands of numbers for the different ratings correct? Do people with disabilities in particular feel that this approach will meet their needs?

For this First Public Working Draft, we included HTML methods. This will be expanded in future drafts. We have also included a method, Author Control of Text Alternatives ( ATAG ), that demonstrates how requirements from the Authoring Tool Accessibility Guidelines ( ATAG ) 2.0 can be included as methods.


3.2 Clear words


Guideline: Use common clear words. Clear words how-to



Common clear words (outcome for Clear words )


Uses common words to reduce confusion and improve understanding.


Outcome, details, and methods for Common clear words

Functional categories for Common clear words

This outcome relates to the following functional categories:

  • Speech
  • Cognitive - Attention
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive - Mental Health
  • Cognitive & Sensory Intersections
  • Independence
Critical errors for Common clear words
  • None.
Rating for Common clear words
Rating scale for "Common clear words"
Rating Criteria
Not Applicable If this outcome does not apply to the technology or content being scored, do not score it.
Rating 0 Average score below 1
Rating 1 Not used in this outcome
Rating 2 Average score of 1-1.6 rounded to one decimal place (significant figure)
Rating 3 Not used in this outcome
Rating 4 Average score of 1.7 or above rounded to one decimal place (significant figure)
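
As a non-normative sketch of the scale above, assuming the tester assigns each evaluated unit (a paragraph, a webpage, or a section of instructions) a score from 0 to 2, the scores could be averaged and banded as follows; all names are illustrative.

// Hypothetical per-unit scores from a manual review (0 = unclear, 1 = partly clear, 2 = clear).
function commonClearWordsRating(unitScores: number[]): number | "Not Applicable" {
  if (unitScores.length === 0) return "Not Applicable"; // the outcome does not apply to this content
  const sum = unitScores.reduce((total, score) => total + score, 0);
  // Average rounded to one decimal place, as the rating scale specifies.
  const average = Math.round((sum / unitScores.length) * 10) / 10;
  if (average >= 1.7) return 4;
  if (average >= 1) return 2;
  return 0;
}

// Example: three paragraphs scored 2, 1, and 2 average to 1.7, which rates 4.
console.log(commonClearWordsRating([2, 1, 2]));
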
Editor's note

We selected Use Clear Words to show that the new WCAG3 structure can include accessibility guidance that does not fit into the WCAG 2.x structure. In the research phase of this project, we identified user needs from the Cognitive Accessibility Task Force and the Low Vision Accessibility Task Force that could not be addressed by a true/false success criterion in WCAG 2.1. We wanted to select one of those user needs and include it in the first draft of WCAG3 to show that more complex user needs can be included and still be testable and scored.

Use Clear Words is a new guideline proposed by the Cognitive Accessibility Task Force (COGA) and includes research, documents and comments from COGA. The selection of user needs and the outcomes necessary to address them is aligned with the new COGA publication, Making content usable for people with cognitive and learning disabilities [coga-usable] .

The clear words guideline was included to illustrate that the proposed WCAG 3.0 scoring and structure can be used in non-binary testing. The clear words guideline uses a rating scale with flexible units of measure. For example, testing could be done by a webpage, a paragraph, a section of instructions on an application, or another unit. A manual tester evaluates the paragraph, webpage, or section on a rating scale. While we do not know of any mainstream accessibility tool that measures common words, there are some working prototypes of tools developed outside the W3C . We are interested in feedback on testing this guideline and its scoring.

There are a number of exceptions to this guideline. We are interested in feedback on where to put that information for ease of use.

This category of new guideline needs further development. It is included to show that it could work, not necessarily that this is the shape of the final guideline.


3.3 Captions


Guideline: Provide captions and associated metadata for audio content. Captions how-to


Translates speech and non-speech audio (outcome for Captions )


Translates speech and non-speech audio into alternative formats (e.g. captions) so media can be understood when sound is unavailable or limited. User agents and APIs support the display and control of captions.

Outcome, details, and methods for Translates speech and non-speech audio

Functional categories for Translates speech and non-speech audio

This outcome relates to the following functional categories:

  • Sensory - Hearing & Auditory
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors for Translates speech and non-speech audio
  • Any video without captioning that is needed to complete a process.
    For example, an education site with a video that a student will be tested on, or a shopping experience previewing movies. If they do not have captioning (closed or open captioning), they fail.
Rating for Translates speech and non-speech audio
Rating scale for "Translates speech and non-speech audio"
Rating Criteria
Rating 0 A critical error or an average score 0-0.7 rounded to one decimal place (significant figure)
Rating 1 Not applicable
Rating 2 No critical errors and an average score 0.8-1.5 rounded to one decimal place (significant figure)
Rating 3 Not applicable
Rating 4 No critical errors and an average score 1.6-2 rounded to one decimal place (significant figure)

Conveys information about the sound (outcome for Captions )

Conveys information about the sound in addition to the text of the sound (for example, sound source, duration, and direction) so users know the necessary information about the context of the sound in relation to the environment it is situated in.

Outcome, details, and methods for Conveys information about the sound

Functional categories for Conveys information about the sound

This outcome relates to the following functional categories:

  • Sensory - Hearing & Auditory
  • Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors for Conveys information about the sound
  • None.
Rating for Conveys information about the sound
  • Is meta-data directionality essential to this experience?
  • Can a user orientate themselves to the sound with/without any additional interface?
Rating scale for "Conveys information about the sound"
Rating Criteria
Rating 0 No meta-data
Rating 1 Sound visually indicates the direction of origin in 2D space
Rating 2 Not applicable
Rating 3 Meta-data includes the location the sound originates in 3D space
Rating 4 Meta-data includes the location the sound originates and the direction of the sound.
Editor's note

This guideline demonstrates how the WCAG3 structure can be used with emerging technologies such as virtual reality, augmented reality, and other immersive web technologies (XR). Research in this area is ongoing and we expect to complete more details in future drafts.

The Silver XR group has been working closely with other groups within the W3C as well as researchers in the area of captioning in immersive technologies. This is a rapidly developing field, and the recommendations listed are more exploratory. They are included as an example that WCAG3 can be used with emerging technologies. We hope that including this guideline will help inspire more research in this area.

Because this guideline was included to demonstrate emerging technology, there is little guidance included on traditional captions. Future drafts will include more traditional caption guidance.

We are looking for feedback on the scoring of captions. Media that is essential to accomplishing the task and does not have captions is a critical error and automatically fails (a 0 rating). Examples include educational videos, entertainment site previews, or directions for installing a product. Other videos without captions that are not essential to the task, such as advertising and promotional videos that are not essential to a shopping experience, are not automatically failed, but the cumulative lack of captioning reduces the score. We want feedback on this approach.

We want public feedback about whether Open Captions (burned in captions) should be considered as equivalent to Closed Captions. Closed captions are text that can be customized to meet user needs, for example, a hard of hearing person with low vision (like a lot of aging people). Open captions are burned in and cannot be customized. They can't be adapted to other languages. If closed captions are added, then they are overlaid on the Open Captions and are hard to read. If we receive sufficient feedback to leave captions as they are today (both closed and open are equally acceptable), then we will use a simple scoring rating. If we decide to not accept open captions as equivalent to closed captions, then we will give more points to closed captions than open.

Note that the advanced XR outcomes and metadata do not have critical errors. This is a way that accessibility best practices can be included so that they are not punitive, but could give extra points that an organization who implements them could use to potentially raise their score. We are interested in your feedback about this approach.

3.4 Structured content

Guideline: Use sections, headings, and sub-headings to organize content. Structured content how-to

Headings organize content (outcome for Structured content )

Organizes content into logical blocks with headings relevant to the subsequent content. This makes locating and navigating information easier and faster.

Outcome, details, and methods for Headings organize content

Functional categories for Headings organize content

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Physical & Sensory Intersections
  • Cognitive - Attention
  • Cognitive - Language & Literacy
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive & Sensory Intersections
Critical errors for Headings organize content
  • One or more headings necessary to locate the content needed to complete a process are missing.
Rating for Headings organize content
Rating scale for "Headings organize content"
Rating Criteria
Rating 0 25% or less of expected headings are present and describe the content contained in the section OR there is a critical error in the process
Rating 1 26-50% of expected headings are present and describe the content contained in the section AND no critical errors in the process
Rating 2 51-80% of expected headings are present and describe the content contained in the section AND no critical errors in the process
Rating 3 81-95% of expected headings are present and describe the content contained in the section AND no critical errors in the process
Rating 4 96-100% of expected headings are present and describe the content contained in the section AND no critical errors in the process
Uses visually distinct headings (outcome for Structured content )

Uses visually distinct headings so sighted readers can determine the structure.

Outcome, details, and methods for Uses visually distinct headings

Functional categories for Uses visually distinct headings

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Cognitive - Language & Literacy
  • Cognitive - Learning
  • Cognitive - Memory
  • Cognitive - Executive
  • Cognitive & Sensory Intersections
Critical errors for Uses visually distinct headings
  • One or more headings necessary to locate the content needed to complete a process are not visually distinct.
Rating for Uses visually distinct headings
Rating scale for "Uses visually distinct headings"
Rating Criteria
Rating 0 25% or less of headings are visually distinct OR there is a critical error in the process
Rating 1 26-50% of headings are visually distinct AND no critical errors in the process
Rating 2 51-75% of headings are visually distinct AND no critical errors in the process
Rating 3 76-95% of headings are visually distinct AND no critical errors in the process
Rating 4 96-100% of headings are visually distinct AND no critical errors in the process

Conveys hierarchy with semantic structure (outcome for Structured content )

Provides semantic structure that conveys the hierarchy of the content to help users explore and navigate it.

Outcome, details, and methods for Conveys hierarchy with semantic structure

Functional categories for Conveys hierarchy with semantic structure

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
  • Sensory Intersections
  • Physical & Sensory Intersections
  • Cognitive - Language & Literacy
  • Cognitive & Sensory Intersections
Critical errors for Conveys hierarchy with semantic structure
  • One or more headings necessary to locate the content needed to complete a process are not coded as headings.
Rating for Conveys hierarchy with semantic structure
Rating scale for "Conveys hierarchy with semantic structure"
Rating Criteria
Rating 0 25% or less of the visual headings are correctly semantically coded (including level) OR there is a critical error in the process
Rating 1 26-50% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
Rating 2 51-80% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
Rating 3 81-95% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
Rating 4 96-100% of the visual headings are correctly semantically coded (including level) AND no critical errors in the process
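
A non-normative sketch of how this rating scale could be applied to a tester's inventory of visual headings follows; the data shape and names are hypothetical, not part of WCAG 3.0.

interface VisualHeading {
  text: string;
  correctlySemanticallyCoded: boolean; // marked up as a heading element at the correct level
  neededToCompleteProcess: boolean;    // locating this content is required to complete the tested process
}

function semanticStructureRating(visualHeadings: VisualHeading[]): number {
  // A heading needed to complete the process that is not coded as a heading is a critical error.
  const criticalError = visualHeadings.some(
    h => h.neededToCompleteProcess && !h.correctlySemanticallyCoded,
  );
  if (criticalError) return 0;
  if (visualHeadings.length === 0) return 4; // assumption: nothing to fail in this sketch
  const percent =
    (visualHeadings.filter(h => h.correctlySemanticallyCoded).length / visualHeadings.length) * 100;
  if (percent >= 96) return 4;
  if (percent >= 81) return 3;
  if (percent >= 51) return 2;
  if (percent >= 26) return 1;
  return 0;
}

// Example: 9 of 10 visual headings are correctly coded and the miscoded one is not
// needed to complete the process, so the content rates 3.
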
Editor's note

We included the structured content guideline as an example of an "easy" guideline that was well understood and addressed diverse disability needs. While WCAG2 addresses headings from the semantic needs of screenreader users, little has been done to directly address the lived experience needs of people with cognitive disabilities around headings. This guideline shows how a well-known area of accessibility can address more user needs of different groups of people with disabilities. The structured content guideline has multiple outcomes working together to cover the different aspects of accessibility needed for different categories of people with disabilities.

The structured content guideline began as a guideline on the use of headings. Going through the content development process, we realized that it was a broader topic than simply headings, but there is little content developed beyond headings. Note that this guideline is used for prototyping, and is the most uneven in style of content. Additional outcomes and content will be added in future drafts to make this guideline more complete.

Editor's note

The Structured content guideline also shows how several WCAG 2.1 success criteria can be re-combined and include AAA level success criteria such as 2.4.10 Section Headings. The scoring shows how the rating can be improved by including all headings, but does not fail the lack of section headings unless a section heading is essential to accomplishing a task. We think this will allow organizations to continually improve their use of headings without failing them for what was formerly required by an AAA success criterion.

We are looking for feedback on using scoring as a way to encourage adoption of AAA success criteria without failures. Do you like the inclusion of broader needs for structured content than providing semantics for screenreader users? Do you think this should be a separate guideline, or do you like having multiple, testable outcomes supporting the guideline? Do you like the approach of merging WCAG2 success criteria with related user needs?

End of note

3.5 Visual contrast of text


Guideline: Provide sufficient contrast between foreground text and its background. Visual contrast of text how-to

Luminance contrast between background and text (outcome for Visual contrast of text )

Provides adequate luminance contrast between background and text colors to make the text easy to read.

Outcome, details, and methods for Luminance contrast between background and text

Functional categories for Luminance contrast between background and text

This outcome relates to the following functional categories:

  • Sensory - Vision & Visual
Critical errors for Luminance contrast between background and text
  • None.
Rating for Luminance contrast between background and text
Rating scale for "Luminance contrast between background and text"
Rating Criteria
Rating 0 Any failures on the Advanced Perceptual Contrast Algorithm (APCA) lookup table or the lowest APCA value is more than 15% below the values on the APCA lookup table
Rating 1 The lowest APCA value is 10-15% below the values on the APCA lookup table
Rating 2 The lowest APCA value is 5-9% below the values on the APCA lookup table
Rating 3 The lowest APCA value is 1-4% below the values on the APCA lookup table
Rating 4 All reading text meets or exceeds the values on the APCA lookup table
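
A non-normative sketch of the percentage bands above follows. The APCA lookup table itself is not reproduced here, so each text sample carries the minimum value the table would require for its font size and weight; all names are illustrative.

interface TextContrastSample {
  measuredApca: number; // absolute APCA contrast value (Lc) measured for this text
  requiredApca: number; // minimum Lc required by the APCA lookup table for this size and weight
}

function visualContrastRating(samples: TextContrastSample[]): number {
  // Largest percentage shortfall below the required lookup value (0 when all text passes).
  const worstShortfall = Math.max(
    0,
    ...samples.map(s => ((s.requiredApca - s.measuredApca) / s.requiredApca) * 100),
  );
  if (worstShortfall === 0) return 4;  // all reading text meets or exceeds the lookup values
  if (worstShortfall <= 4) return 3;   // lowest value is 1-4% below the lookup value
  if (worstShortfall <= 9) return 2;   // 5-9% below
  if (worstShortfall <= 15) return 1;  // 10-15% below
  return 0;                            // more than 15% below
}
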
Editor's note

Visual Contrast is a migration from WCAG 2.1 with significant updates:

  • New calculations of contrast based on more modern research on color perception.
  • Merging the 1.4.3 AA and 1.4.6 AAA levels into one guideline.
  • New test types and terminology of text contrast.
  • At this time, it only includes textual visual contrast.

We propose changing the names of Contrast (Minimum) and Contrast (Enhanced) to Visual Contrast of Text as a signal of a paradigm change from one about color to one about perception of light intensity . The reason for this change is that the understanding of contrast has matured, and the available research and body of knowledge has made breakthroughs in advancing the understanding of visual contrast .

The proposed new guidance more accurately models current research in human visual perception of contrast and light intensity. The goal is to improve understanding of the functional needs of all users, and more effectively match the needs of those who face barriers accessing content. This new perception-based model is more context dependent than a strict light ratio measurement; results can, for example, vary with the size of text and the darkness of the colors or background.

This model is more responsive to user needs and allows designers more choice in visual presentation. It does this by including multi-factor assessment tests which integrate contrast with inter-related elements of visual readability, such as font features. It includes tests to determine an upper limit of contrast, where elevated contrast may impact usability.

This outcome will eventually include a second rating approach based on the mean average APCA value for all text in a process and view, based on a character count.

4. Testing

Plain language summary of Testing - What types of tests are used?

WCAG 3.0 includes two types of tests: atomic tests and holistic tests.

Although content may satisfy all outcomes using atomic tests, the content may still not be usable by people with a wide variety of disabilities. Holistic tests can help you fix that.

End of summary for Testing

Editor's note
The model presented provides a structure for testing that can be built upon to better accommodate dynamic or very frequently updated content than WCAG 2.X. We are exploring additional approaches to testing using holistic tests, sampling, and/or other alternatives for reaching conformance in situations where testing all content is not possible. We also plan to include a concept for substantially conforming in order to address the potential difficulties presented when testing all content in large digital products and 3rd party content.


WCAG 3.0 scores outcomes . Outcomes are written as testable criteria that allow testers to objectively determine if the content they are evaluating satisfies the criteria.

WCAG 3.0 uses both views and processes to define what is being tested. Views include all content visually and programmatically available without a substantive change. Conceptually, a view corresponds to the definition of a web page as used in WCAG 2.X, but is not restricted to content meeting that definition. For example, a view could be considered a "screen" in a mobile app.

Processes are a sequence of steps that need to be completed in order to accomplish an activity or task from end to end. When testing processes, the content used to complete the process, as well as all of the associated views, needs to be included in the test.

A process is comprised of one or more views or subsets of views. Only the parts of the views that support the process are included in a test of the process.
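
A non-normative sketch of how a view and a process could be represented when planning a test follows; the type names and the example process are hypothetical, not WCAG 3.0 terms.

interface View {
  name: string;            // e.g. a web page, a "screen" in a mobile app, or a modal layer
  testedContent: string[]; // the parts of the view the process actually uses
}

interface Process {
  name: string;  // the end-to-end activity or task
  steps: View[]; // every view, or subset of a view, needed to complete the activity
}

// When testing a process, all of its associated views are included in the test.
const purchaseProcess: Process = {
  name: "Purchase a product",
  steps: [
    { name: "Product page", testedContent: ["product description", "add-to-cart control"] },
    { name: "Cart view", testedContent: ["cart summary", "checkout control"] },
    { name: "Checkout view", testedContent: ["payment form", "confirm-order control"] },
  ],
};
console.log(purchaseProcess.steps.map(view => view.name));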

4.1 Types of tests

WCAG 3.0 includes two types of tests: atomic tests and holistic tests . Testing the outcomes using the atomic tests might involve a combination of automated evaluation , semi-automated evaluation , and human evaluation .

Although content may satisfy all outcomes using the atomic tests, the content may not always be usable by people with a wide variety of disabilities. The holistic tests address this gap by evaluating more of the user experience than atomic testing.

Editor's note

We are looking for more appropriate terms to distinguish between these two types of tests and welcome suggestions.

4.1.1 Atomic tests

Atomic tests evaluate content, often at an object level, for accessibility. Atomic tests include the existing tests that support A, AA, and AAA success criteria in WCAG 2.X. They also include tests beyond those that fit within the WCAG 2.X structure. In WCAG 3.0, atomic tests are used to test both processes and views. Test results are then aggregated across the selected views. Critical errors within selected processes are also totaled. Successful results of the atomic tests are used to reach a Bronze rating.

Atomic tests may be automated or manual . Automated evaluation can be completed without human assistance. These tests allow for a larger scope to be tested, but automated evaluation alone cannot determine accessibility. Over time, the number of accessibility tests that can be automated is increasing, but manual testing is still required to evaluate most methods at this time.
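
For example, one atomic test that can run without human assistance is flagging img elements that have no alt attribute at all; whether a supplied text alternative is appropriate still requires human evaluation. A minimal, non-normative sketch in TypeScript, assuming a browser context with DOM typings:

// Images with no alt attribute cannot expose a text alternative, nor are they
// marked as decorative (alt=""), so they are reliable candidates for automated flagging.
function imagesWithoutAltAttribute(doc: Document): HTMLImageElement[] {
  return Array.from(doc.querySelectorAll("img")).filter(img => !img.hasAttribute("alt"));
}

// Example usage in a browser: count the images the automated check flags.
// console.log(imagesWithoutAltAttribute(document).length);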

4.1.2 Holistic tests

Holistic tests include assistive technology testing, user-centered design methods, and both user and expert usability testing. Holistic testing applies to the entire declared scope and often uses the declared processes to guide the tests selected. Successful results of holistic tests are used to reach a silver or gold rating.

Editor's note

Future drafts will further explore holistic tests and provide examples as well as detail how to apply them.

4.2 Technology specific testing

Developing

Each outcome includes methods associated with different technologies. Each method contains tests and techniques for satisfying the outcome. The outcome is written so that testers can test the accessibility of new and emerging technologies that do not have related methods based solely on the outcome.

Editor's note

We continue to test this approach and others for validity, reliability, sensitivity, adequacy, and complexity. Alternatives that we are exploring are noted as separate editor's notes where applicable. We welcome suggestions on ways to improve the scoring to better meet these criteria.

5. Scoring

Plain language summary of Scoring - How are tests scored?

Besides true/false scoring methods, we’ve included testing options for new guidance, such as rating scales.

Each outcome has a section that shows how it is scored.

End of summary for Scoring

Editor's note

One of the goals of WCAG 3.0 is to expand scoring of methods beyond a binary true/false choice at the page level. We have included tests within the sample outcomes that demonstrate alternatives such as rubrics and scales . We are also exploring integrating these options into the Accessibility Conformance Testing format. We will include example tests in a future draft. Our intent is to include detailed tests to support each outcome within the WCAG 3.0 model.

5.1 Scoring atomic tests

In most cases, testing individual objects will result in a binary, pass/fail outcome for each element. This leads to either a pass/fail result or a percentage rating, depending on the test. A rating scale may be provided for some tests to allow the tester to assign a quality judgement to an element or block of content. Whether scoring is binary (pass/fail) or uses rating scales will depend on the method, outcome, and technology. Binary scoring works well when the unit being tested has clear boundaries and pass/fail conditions. Rating scales work better when the unit being tested does not have clear boundaries, when evaluating success requires a quality judgement, or when the test includes gradations of quality. Each of these results can then be assigned a percentage or averaged to inform the overall score of an outcome.

Test results for views are aggregated across the selected views. In addition, critical errors within selected processes will be identified and totaled. Any critical errors will result in a score of Very Poor (0).
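As a non-normative illustration of the aggregation described above, the sketch below assumes element-level results have been recorded in a simple, hypothetical structure, computes the percentage of passing results for an outcome across the selected views, and counts critical errors so that the score can be forced to Very Poor (0).

    // Illustrative sketch only; the result shape below is an assumption,
    // not a format defined by WCAG 3.0.
    interface AtomicResult {
      outcomeId: string;      // hypothetical outcome identifier
      passed: boolean;        // binary element-level result
      criticalError: boolean; // true when the failure is a critical error
    }

    // Percentage of passing element-level results for one outcome,
    // aggregated across all selected views.
    function percentPassed(results: AtomicResult[], outcomeId: string): number {
      const relevant = results.filter((r) => r.outcomeId === outcomeId);
      if (relevant.length === 0) return 100; // nothing to test for this outcome
      const passed = relevant.filter((r) => r.passed).length;
      return (passed / relevant.length) * 100;
    }

    // Total critical errors recorded for one outcome.
    function criticalErrorCount(results: AtomicResult[], outcomeId: string): number {
      return results.filter((r) => r.outcomeId === outcomeId && r.criticalError).length;
    }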

5.2 Scoring outcomes

The results from the atomic tests are aggregated across views and used, along with the number of critical errors, to assign an adjectival rating to the outcome. Testers will then use the guidance provided in the outcome, along with reasonable judgement of the context in which the errors occur, to assign an accessibility score to the outcome.

Potential thresholds for adjectival ratings of test results:

Very Poor (0)
Any critical errors, or less than 50% of related tests pass
Poor (1)
No critical errors, approx. 50% to 79% of related tests pass
Fair (2)
No critical errors, approx. 80% to 89% of related tests pass
Good (3)
No critical errors, approx. 90% to 98% of related tests pass
Excellent (4)
No critical errors, approx. 99% to 100% of related tests pass
Note

The thresholds are different for different outcomes.

These thresholds are still being tested and adjusted. They are included as examples to gather feedback on this scoring approach.
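Read as a mapping from test results to the 0-4 scale, the example thresholds above might be sketched as follows. This is illustrative only: the cut-off percentages are the provisional examples from this draft and, as the note says, are expected to differ by outcome.

    // Illustrative sketch of the example thresholds above (0-4 adjectival scale).
    // The cut-off percentages are the provisional examples from this draft.
    type AdjectivalRating = 0 | 1 | 2 | 3 | 4;

    function adjectivalRating(criticalErrors: number, percentPassed: number): AdjectivalRating {
      if (criticalErrors > 0 || percentPassed < 50) return 0; // Very Poor
      if (percentPassed < 80) return 1;                       // Poor
      if (percentPassed < 90) return 2;                       // Fair
      if (percentPassed < 99) return 3;                       // Good
      return 4;                                               // Excellent
    }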

5.3 Overall scores

After all outcomes have been scored, the ratings are averaged for a total score and for a score by the functional category(ies) they support. Conformance at the bronze level requires no critical errors, a total score of at least 3.5, and a score of at least 3.5 within each functional category.
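A minimal sketch of that averaging step and the bronze check follows. The data shape, including the mapping from outcomes to functional categories, is an assumption made for illustration and is not defined by these guidelines.

    // Illustrative sketch only. Ratings are the 0-4 adjectival scores from 5.2;
    // the functional category mapping is a hypothetical input.
    interface ScoredOutcome {
      rating: number;                 // 0-4 adjectival rating
      functionalCategories: string[]; // categories the outcome supports
      criticalErrors: number;         // critical errors found for this outcome
    }

    function average(values: number[]): number {
      return values.reduce((sum, v) => sum + v, 0) / values.length;
    }

    function meetsBronze(outcomes: ScoredOutcome[]): boolean {
      if (outcomes.length === 0) return false;
      if (outcomes.some((o) => o.criticalErrors > 0)) return false; // no critical errors

      const total = average(outcomes.map((o) => o.rating));
      if (total < 3.5) return false;                                // total score >= 3.5

      // Group ratings by functional category and check each category average.
      const byCategory = new Map<string, number[]>();
      for (const o of outcomes) {
        for (const c of o.functionalCategories) {
          const list = byCategory.get(c) ?? [];
          list.push(o.rating);
          byCategory.set(c, list);
        }
      }
      return Array.from(byCategory.values()).every((ratings) => average(ratings) >= 3.5);
    }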

Editor's note

This approach, which allows the tester some flexibility in assigning scores, has the advantage of simplicity while allowing a tester to take context into account beyond simple percentages. The second option we are exploring is to carry percentages from tests through to a final score. In this case a bronze rating would require a total score of at least 90% and at least 90% within each functional category. This number would likely shift as we continue testing. We invite comment on these options as well as suggestions for an alternative solution.

5.4 Scoring holistic tests

The points from holistic tests do not affect the scores of atomic tests . Rather, a minimum number of holistic tests will need to be met in order to reach a silver rating, and additional holistic tests will be needed to reach a gold rating. Getting a silver or gold rating requires a bronze rating.
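Because the draft does not yet say how many holistic tests are required, any worked example has to invent thresholds. The sketch below does exactly that, purely to illustrate how holistic results sit on top of a bronze result; the two constants are placeholders, not proposed values.

    // Illustrative sketch only. The draft does not define how many holistic
    // tests are required, so these thresholds are placeholders.
    const SILVER_HOLISTIC_MINIMUM = 3; // hypothetical
    const GOLD_HOLISTIC_MINIMUM = 6;   // hypothetical

    type Level = "none" | "bronze" | "silver" | "gold";

    function conformanceLevel(meetsBronze: boolean, holisticTestsMet: number): Level {
      if (!meetsBronze) return "none"; // silver and gold both require bronze
      if (holisticTestsMet >= GOLD_HOLISTIC_MINIMUM) return "gold";
      if (holisticTestsMet >= SILVER_HOLISTIC_MINIMUM) return "silver";
      return "bronze";
    }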

Editor's note

We continue to work on the scoring of holistic tests and will provide more details in a future iteration of this document.

5.5 Assertions and procedures

Developing

Editor's note

As we continue developing this content, we seek input on the following:

  • Can assertions be used to record accessibility work that is not required in the guidelines? This could include advance work on guidance not yet added to the guidelines.
  • What optional supporting documentation should organizations provide with an assertion?
  • Is there a need for WCAG 3 to require proof of an assertion, and if so, what documentation should be required as proof?
  • Should assertions be dated, expire, or be reviewed on a regular basis?
  • Can steps in a procedure duplicate other parts of the guidelines? If so, how should those be handled?
  • Can assertions exist outside of conformance? For example, can they be used as an internal benchmark rather than a claim of conformance?
  • Can assertions be used at the most basic level of conformance? If so, how?
  • How can small organizations use assertions without unrealistic burden?
  • As written, outcomes and assertions are at the same level. Would moving assertions to the test level be more effective?

The AGWG is considering whether and how assertions can be applied to the Bronze level, and what will qualify as a procedure in WCAG 3.

5.5.1 Assertions

An assertion is a formal claim of fact, attributed to a person or organization. In WCAG 3, an assertion is an attributable and documented statement of fact regarding procedures practiced in the development and maintenance of the content or product to improve accessibility.

5.5.2 Using assertions

Assertions may supplement methods in one or more outcomes. Assertions should only be used on outcomes and guidelines that allow assertions. Organizations can make an assertion that they followed a procedure to claim conformance. Results when testing assertions are true/false: the organization making the assertion either provided the required documentation or it did not. Procedures used in assertions may be implemented at the organization level, during design and development, or during testing.

Examples of procedures that may be used during implementation might include:

  • Training;
  • FTE (Full Time Equivalent) assignments;
  • Skills testing;
  • Coordination and documentation of accessibility processes; or
  • Setting the priority for remediation.

Examples of procedures that may be used to evaluate accessibility might include:

  • Usability testing;
  • Heuristic evaluation; or
  • Assistive technology testing.

5.5.3 Documenting assertions

Assertions must be documented as part of the conformance claim process. The required information may also be made available through the web site. Assertions might include the following information:

  • The statement being asserted;
  • The date of the assertion;
  • The date or date range the procedure was completed;
  • The scope of the assertion;
  • Contact information for the person or group making the assertion; and
  • The outcome(s) or guideline(s) supported by the assertion.

Editor's note

An alternative to specifying assertions at the outcome or guideline level might be to require that the assertion apply to the scope of the conformance claim.

5.5.4 Supporting documentation for assertions

WCAG recommends maintaining additional information that an organization can use to improve or validate procedures and assertions. WCAG will not require organizations to provide supporting documentation to conform.

5.5.5 Testing assertions

The quality of an assertion can be tested based on how well the assertion meets the documentation requirements for assertions (see Documenting assertions). Conforming to WCAG does not require testing supporting documentation; however, organizations may decide to adopt additional documentation requirements based on the procedure being asserted.

6. Conformance

Exploratory
Plain language summary of Conformance

You might want to make a claim that your content or product meets the WCAG 3.0 outcomes. If it does meet the outcomes, we call this “conformance.” To conform to WCAG 3.0, your test results must show that your project is accessible.

If you want to make a conformance claim, you must use the process described in this document. Your content can conform to WCAG 3.0 even if you don’t want to make a claim. You can still use this process to test your project’s accessibility.

End of summary for Conformance

WCAG 3 defines three levels of conformance: bronze, silver, and gold. While it is easy to replicate the WCAG 2 A, AA, and AAA levels by renaming them, there is an opportunity to improve accessibility for people with disabilities by using a more advanced approach. Bronze is the minimum conformance level. To reach the bronze level, the scope claimed in the conformance statement must pass a subset of outcomes and assertions; the subset will require enough outcomes and assertions to improve equity across functional needs .

Silver incentivizes organizations to go further to improve accessibility. One possibility that we are examining is that silver level points can accumulate even prior to completing bronze, but are not usable until bronze is achieved. The goal is to encourage organizations to go beyond the minimum, especially where organizations want to be recognized for their efforts to go beyond the minimum.

Gold identifies measures for those organizations that do achieve silver, so that some can stand out as exemplary, cutting edge, and role models. There are a number of ideas that will be developed further once more of the conformance structure is solidified.

6.1 Conformance levels

Exploratory

WCAG 3.0 includes a new conformance model in order to address a wider range of user needs, test a wider range of technologies, and support new approaches to testing. There are several key goals for this new conformance model:

  1. Develop a scoring model that encourages websites to continue to do better and better (vs. stopping at the previous AA level);
  2. Better reflect the lived experience of people with disabilities, who successfully use sites that have some content that does not meet WCAG 2.0 AA, or who encounter barriers with sites that meet WCAG 2.0 AA; and
  3. Allow for bugs and oversight by content authors, provided that their impact on users with disabilities is limited.

To do this, the conformance model prioritizes content needed to complete tasks while still testing the entire view for accessibility errors. This priority is reflected in the scoring system, which does not allow for errors along the paths needed to complete processes but allows for some accessibility errors outside process completion. This means that sites may conform at the lowest level (Bronze) while still containing a small amount of content that does not meet one or more guidelines, so long as that content doesn’t prevent people with disabilities from successfully using the site.

We seek feedback on whether this flexibility will be beneficial in encouraging content providers to meet conformance because it is more achievable, or whether content providers are less likely to improve accessibility if they aren't required to. We also seek feedback on the conformance approach as a whole.

WCAG 3.0 defines three levels of conformance: bronze , silver , and gold .

6.1.1 Bronze

Exploratory

Bronze is the minimum conformance level. Content that does not meet the requirements of the bronze level does not conform to WCAG 3.0. The bronze level can be verified using atomic tests . While there is a lot of overlap between WCAG 2.X and WCAG 3.0, WCAG 3.0 includes additional tests and different scoring mechanics. As a result, WCAG 3.0 is not backwards compatible with WCAG 2.X.

For content that conforms to the bronze level:

  • The total score and the score within each of the functional categories MUST be at least 3.5; and
  • Views and processes MUST NOT have critical errors .

Conformance to this specification at the bronze level does not mean every requirement in every guideline is fully met. Bronze level means that the content in scope does not have any critical errors and meets the minimum percentage of @@

6.1.2 Silver

Silver is a higher conformance level that addresses additional outcomes. Some holistic testing is necessary to verify conformance to this level.

For content that conforms to the silver level:

  • All views MUST satisfy the bronze criteria; and
  • Use of holistic tests to meet this level will be further explored in future drafts.

6.1.3 Gold

Exploratory

Gold is the highest conformance level; it addresses the remaining outcomes described in the guidelines. Additional holistic testing is necessary to verify conformance to this level.

For content that conforms to the gold level:

  • All views MUST satisfy the silver criteria; and
  • Use of holistic tests to meet this level will be further explored in future drafts.

6.2 Conforming alternative version

Editor's note

For this first draft, the Accessibility Guidelines Working Group has focused on the basic conformance model. For a next draft, we will explore how conforming alternative versions fit into the new conformance model.

6.3 Only accessibility-supported ways of using technologies

Placeholder
Editor's note

For this first draft, the Accessibility Guidelines Working Group has focused on the basic conformance model. For a next draft, we will explore how the WCAG 2 concept of accessibility-supported fits into the new conformance model.

6.4 Defining conformance scope

Exploratory

When evaluating the accessibility of content, WCAG 3.0 requires that the outcomes apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more sub-sets of the whole.

WCAG 3.0 therefore defines two inter-related ways to scope content: views and processes . Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.

Conformance is defined only for processes and views . However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.
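As a non-normative illustration of the requirement that every unique step of a process be represented in the claimed set of views, the following sketch checks a hypothetical claim structure; the field names are assumptions, not defined by WCAG 3.0.

    // Illustrative sketch only; the shapes below are hypothetical.
    interface Process {
      name: string;
      steps: string[]; // identifier of the view used at each step
    }

    interface ClaimScope {
      views: string[]; // views included in the conformance claim
      processes: Process[];
    }

    // Every unique step in every process MUST be represented in the set of
    // views; views outside the processes MAY also be included.
    function scopeCoversProcesses(scope: ClaimScope): boolean {
      const views = new Set(scope.views);
      return scope.processes.every((p) => p.steps.every((step) => views.has(step)));
    }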

Editor's note

The AG WG and Silver Task Force recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.

6.5 Conformance requirements

Exploratory

In order for technology to conform to WCAG 3.0, the following conformance requirements apply:

  1. Conformance level - Content MUST meet the requirements of the selected conformance level .
  2. Processes and views - Conformance (and conformance level) MUST apply to complete processes and views , and MUST NOT exclude any part of a process or view.

6.6 Conformance claims

Exploratory

Conformance claims are not required. Authors can conform to WCAG 3.0 without making a claim. The material below describes how to make a conformance claim if that option is chosen.

6.6.1 Required components of a conformance claim

A conformance claim MUST include the following information:

  1. Date of the claim;
  2. Guidelines title, version, and URI: W3C Accessibility Guidelines 3.0 at https://www.w3.org/TR/wcag-3.0/ ???
  3. Conformance level satisfied: (bronze, silver, or gold);
  4. A concise description of the views and processes , such as a list of URIs for which the claim is made, including any state changes which lead to a new view; and
  5. The technology including the hardware, software, and assistive technology used to test the claim.
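For illustration only, the required components above could be captured in a machine-readable record along the following lines. WCAG 3.0 does not define such a format, so every field name here is an assumption; the values echo the example claim in the next section.

    // Hypothetical, non-normative record of a conformance claim.
    interface ConformanceClaim {
      date: string;                                                 // 1. date of the claim
      guidelines: { title: string; version: string; uri: string };  // 2. guidelines identification
      level: "bronze" | "silver" | "gold";                          // 3. conformance level satisfied
      views: string[];                                              // 4. views (e.g. URIs), including state changes
      processes: string[];                                          // 4. processes covered by the claim
      testedWith: string[];                                         // 5. hardware, software, and assistive technology
    }

    const exampleClaim: ConformanceClaim = {
      date: "2020-08-12",
      guidelines: {
        title: "W3C Accessibility Guidelines 3.0",
        version: "3.0",
        uri: "https://www.w3.org/TR/wcag-3.0/",
      },
      level: "bronze",
      views: ["https://example.org/", "https://example.org/search"],
      processes: ["find and read an article"],
      testedWith: ["Firefox", "Chrome", "Windows", "JAWS", "Dragon"],
    };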

6.6.2 Example conformance claim

On 12 August 2020, the following 10 views and 2 processes conform to WCAG 3.0 at a bronze level. Processes were selected because they are the most common activities on the web site and include 4 unique views. The other 6 views are the most commonly used.

These were tested using Firefox and Chrome on a Windows platform. The assistive technology used included JAWS and Dragon.

6.7 User-generated content

Exploratory

This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative .

Plain language summary of User-generated content

User-generated content is content written by the public and customers. WCAG 3.0 may use different advice or steps to improve accessibility for user-generated content than for content created by the publisher. WCAG 3.0 proposes that organizations identify user-generated content and identify the steps taken to encourage accessibility.

End of summary for User-generated content

Editor's note

It remains to be determined how to address user-generated content that has accessibility issues, and to define what minimum thresholds might be acceptable. We expect WCAG 3 to provide this guidance within individual guidelines and outcomes and to support testing for conformance. The working group is looking at alternative requirements to apply to user-generated content guideline by guideline, and is seeking feedback on what would serve as reasonable requirements on how to best support accessibility in user-generated content with known (or anticipated) accessibility issues. One example would be “alternative text”. The Authoring Tool Accessibility Guidelines (ATAG) 2.0 Guideline B.2.3, “Assist authors with managing alternative content for non-text content”, could be adapted to provide specific, guideline-related guidance for user-generated alternative text. The working group intends to more thoroughly address the contents and the location of an accessibility statement in a future draft.

Web content publishers may include content provided by the users of their digital products. We refer to such content as “ user-generated content ”. Examples of user-generated content include social media postings and comments, uploaded photographs, and uploaded videos or other multimedia.

User-generated content is provided for publication by visitors where the content platform specifically welcomes and encourages it. It is content that is submitted through a user interface designed specifically for members of the public and customers. Use of the same user interface as an authoring tool for publication of content by agents of the publisher (such as employees, contractors, or authorized volunteers) acting on behalf of the publisher does not make that content user-generated content.

The purpose of the user-generated content conformance section is to allow WCAG 3 outcomes and methods to require additional or different steps to improve the accessibility of user-generated content. An important part of WCAG conformance is the specific guidance that is associated with individual WCAG 3 guidelines and outcomes. Not all WCAG 3 guidelines will have unique outcomes and testing for user-generated content. Unless user-generated content requirements are specified in a particular guideline, that guideline applies as written whether or not the content is user generated.
The web content publisher should identify all locations of user-generated content (such as commentary on hosted content, product descriptions for consumer-to-consumer sale listings, and restaurant reviews) and perform standard accessibility evaluation analysis for each. If there are no accessibility issues, the user-generated content is fully conforming.

6.7.1 Steps to conform

If accessibility issues are identified, or if the web site author wants to proactively address potential accessibility issues that might arise from user-generated content, then all of the following must be indicated alongside the user-generated content, or in an accessibility statement published on the web site or product that is linked from the view or page in a consistent location:

  • Clearly identify where user-generated content can be found on the publisher’s digital product (perhaps by id href); and
  • Clearly identify the steps taken to encourage accessibility in user-generated content, such as prompting the user for alternative text for their uploaded images before they are accepted, and prohibiting text attributes except as they are part of semantic markup such as strong, headings, etc.

7. Glossary

Exploratory
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative .
Note

Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves. End of note

Adequacy

Adequacy is a subtle metric, but important to WCAG 3 proposals. Adequacy describes whether the formulas being used to process and score the accessibility testing results use such a small interval that small changes in accessibility do not cause large changes in scoring. Benchmarking Web Accessibility Metrics , Vigo, Lopes, O Connor, Brajnik, Yesilada 2011.

Adjectival rating

A system to report evaluation results as a set of human-understandable adjectives which represent groupings of scores.

Assertion

A formal claim of fact, attributed to a person or organization: an attributable and documented statement of fact regarding procedures practiced in the development and maintenance of the content or product to improve accessibility.

Automated evaluation

Evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests.

Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.

Complexity

Complexity refers to the resources required to accomplish the conformance testing. These could be crawler time, or time for human judgment testing. This would be a useful metric to have to answer the question of how much time WCAG 3 takes to test as compared to WCAG 2. Benchmarking Web Accessibility Metrics , Vigo, Lopes, O Connor, Brajnik, Yesilada 2011.

Conformance

Satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim.

See Conformance .

Critical error

An accessibility problem that will stop a user from being able to complete a process.

Critical errors include:

  • Items that will stop a user from being able to complete the task if it exists anywhere on the view (examples: flashing, keyboard trap, audio with no pause);
  • Errors that when located within a process means the process cannot be completed (example: submit button not in tab order);
  • Errors that when aggregated within a view or across a process cause failure (example: a large amount of confusing, ambiguous language).
Deprecate

To declare something outdated and in the process of being phased out, usually in favor of a specified replacement.

Deprecated documents are no longer recommended for use and may cease to exist in the future.

Equity

Equity is the outcome of processes and actions that ensure the spectrum of human reality obtains what is needed to participate, not solely access. As equity relates to WCAG, it is about the impact the standards/guidelines have on people with disabilities, along with actually including people with disabilities in the work.

Evaluation
The process of examining content for conformance to these guidelines.
Different approaches to evaluation include automated evaluation , semi-automated evaluation , human evaluation , and user testing .
Functional category

A conceptual grouping of functional needs that represent generalized sets of user groups.

See Functional Categories .

Functional need

A statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context.

Guideline

High-level, plain-language content used to organize outcomes .

See Guidelines in the Explainer.

How-to

Explanatory material for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. How-tos provide an easy-to-understand way of organizing and presenting the outcomes so that non-experts can learn about and understand the concepts, and give a plain-language version of each guideline that applies across technologies.

This plain language resource includes information on specific topics, such as getting started, who the guideline helps and how, as well as information for designers and developers.

See How-tos in the Explainer.

Human evaluation

Evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated .

Human evaluation is contrasted with automated evaluation which is done entirely by machine, though it includes semi-automated evaluation which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, by contrast with user testing which directly tests the experience of users with content.

Informative

Content provided for information purposes and not required for conformance .

Note

Content required for conformance is referred to as normative .

Method

Detailed information, either technology-specific or technology-agnostic, on ways to meet the outcome as well as tests and scoring information.

See Methods in the Explainer.

Normative

Content whose instructions are required for conformance .

Note

Content identified as informative or non-normative is never required for conformance.

Object

An item in the perceptual user experience.

Objects include user interface widgets and identifiable blocks of content.

Outcome

Result of practices that reduce or eliminate barriers that people with disabilities experience.

See Outcomes .

Process

A sequence of steps that need to be completed in order to accomplish an activity / task from end-to-end.

Reliability

The reproducibility and consistency of scores, i.e. the extent to which they are the same when evaluations of the same resources are carried out in different contexts (different tools, different people, different goals, different time). This would be particularly useful to ensure that similar results are achieved by different testers. It would also be useful to see if different testers would select the same path or off-path decisions. Representative sampling tests also fit in this category. Benchmarking Web Accessibility Metrics , Vigo, Lopes, O Connor, Brajnik, Yesilada 2011.

Rubric

An approach to evaluation that defines a set of criteria for conformance and describes the result qualitatively.

Scale

A way of reporting results of evaluation using quantitative values.

Semi-Automated Evaluation

Evaluation conducted using machines to guide humans to areas that need inspection.

Semi-automated evaluation involves components of automated evaluation and human evaluation .

Sensitivity

Sensitivity of a metric is related to the extent that changes in the output of the metric are quantitatively related to changes in the accessibility of the web site being analyzed. This metric is useful for determining whether the conformance proposal captures the impact of the severity of accessibility barriers on the final score and whether different disabilities are treated equally by the proposal. Benchmarking Web Accessibility Metrics , Vigo, Lopes, O Connor, Brajnik, Yesilada 2011.

Success criterion

Testable statements that compose the normative aspects of WCAG 2. The closest counterpart to success criteria in WCAG 3 are outcomes .

Set of tests

A group of tests that supports a method .

Test

Mechanism to evaluate implementation of a method .

Tests can include true / false evaluation or various types of rating scales as appropriate for the guideline , outcome , or technology.

Technique

Technology-specific approach to follow a method .

Text alternative

Text that is programmatically associated with non-text content or referred to from text that is programmatically associated with non-text content. Programmatically associated text is text whose location can be programmatically determined from the non-text content.

For example, an image of a chart is described in text in the paragraph after the chart. The short text alternative for the chart indicates that a description follows.

User need

The end goal a user has when starting a process through digital means.

User testing

Evaluation of content by observation of how users with specific functional needs are able to complete a process and how the content meets the relevant outcomes .

Validity

The extent to which the measurements obtained by a metric reflect the accessibility of the web site to which it is applied. Does the rating that a web site or digital product achieves in any conformance proposal actually reflect the rating that it should get? Benchmarking Web Accessibility Metrics , Vigo, Lopes, O Connor, Brajnik, Yesilada 2011. Accessed on 29 July 2020.

View

All content visually and programmatically available without a substantive change. Views vary based on the technology being tested. While these guidelines provide guidance for scoping a view, the tester will determine what constitutes a view and describe it. Views will often vary by technology. Views typically include state permutations that are based on that view, such as dialogs and alerts, but some states may not deserve to be treated as separate views.

Visual Contrast

The combination of foreground and background colors along with font weight and size that make text readable.

A. Privacy Considerations

Editor's note

The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.

B. Security Considerations

Editor's note

The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.

C. Guidelines development methodology

Plain language summary of Guidelines development methodology

WCAG 3 includes some of the information from WCAG 2, guidelines for tools used to create web content (ATAG), and guidelines for browsers, media players, and similar software (UAAG). The WCAG 3 design is based on research. You can read more about the Requirements for WCAG 3.0.

End of summary for Guidelines development methodology

C.1 Relationship to other W3C guidelines

The Web Content Accessibility Guidelines (WCAG) 2.0 [ WCAG20 ] were designed to be technology neutral, and have stayed relevant for over 10 years. The Authoring Tool Accessibility Guidelines (ATAG) 2.0 [ ATAG20 ] provides guidance for various types of software that assist people in writing accessible content. User Agent Accessibility Guidelines (UAAG) 2.0 [ UAAG20 ] offers useful guidance to user agent developers and has been implemented on an individual success criterion basis. These guidelines have normative guidance for content and helpful implementation advice for authoring tools, user agents, and assistive technologies. WCAG 3 is intended to be all encompassing, incorporating the combination of WCAG 2, ATAG, and UAAG. WCAG 3 is not backward compatible with WCAG 2, ATAG 2.0, or UAAG 2.0. For more details about differences from previous guidelines, see Appendix: Differences From WCAG 2 .

C.2 Goals and requirements

The goal of WCAG 3 and supporting documents is to make digital products including web, ePub, PDF, applications, mobile apps, and other emerging technologies more accessible and usable to people with disabilities. It is the intention for WCAG 3 to meet this goal by supporting a wider set of user needs, using new approaches to testing, and allowing more frequent maintenance of guidelines to keep pace with accelerating technology change. The hope is that WCAG 3 will make it significantly easier for both beginners and experts to create accessible digital products that support the needs of people with disabilities. Research and design work performed by the Silver Task Force identified key requirements needed to improve upon the existing WCAG 2 structure. These requirements, presented in the Requirements for WCAG 3 document, shaped the guidelines that follow and should be taken into account when evaluating and updating the guidelines.

D. Differences from WCAG 2

D.1 Outcomes

Outcomes are different from WCAG 2.X success criteria and are written differently from them.

The design of outcomes allows them to address more varied needs of people with disabilities than could have been included in WCAG 2.X.

Methods map approximately to WCAG 2.X Techniques documents.

D.2 Approximate mapping of WCAG 2 and WCAG 3 documentation

WCAG 2 WCAG 3
Success Criteria Outcomes
Techniques Methods
Understanding How-to

E. Change log

F. Acknowledgements

F.1 Participants who made notable contributions to the creation of this document

Accessibility Guidelines Working Group participant list

Silver Task Force participant list

F.2 Prior Contributors

F.2.1 Participants of the Silver Task Force and Silver Community Group who contributed to the 2021 and 2020 versions of this document

F.2.2 Participants of the Accessibility Guidelines Working Group who reviewed the 2021 and 2020 versions of this document

Jake Abma, Shadi Abou-Zahra, Chuck Adams, Amani Ali, Jim Allan, Paul Adam, Jon Avila, Bruce Bailey, Garenne Bigby, Rachael Bradley Montgomery, Judy Brewer, Shari Butler, Alastair Campbell, Laura Carlson, Pietro Cirrincione, Michael Cooper, Jennifer Delisi, Wayne Dick, Kim Dirks, Shwetank Dixit, Nicaise Dogbo, E.A. Draffan, Michael Elledge, David Fazio, Wilco Fiers, Detlev Fischer, John Foliot, Betsy Furler, Matt Garrish, Alistair Garrison, Michael Gower, Charles Hall, Katie Haritos-Shea, Andy Heath, Shawn Henry, Sarah Horton, Abi James, Marc Johlic, Andrew Kirkpatrick, John Kirkwood, Peter Korn, JaEun Ku, Patrick Lauke, Shawn Lauriat, Steve Lee, Chris Loiselle, Greg Lowney, David MacDonald, Chris McMeeking, Jan McSorley, Melina Möhnle, Mary Jo Mueller, Gundula Niemann, Brooks Newton, Caryn Pagel, Justine Pascalides, Kim Patch, Melanie Philipp, Ruoxi Ran, Stephen Repsher, John Rochford, Cybele Sack, Janina Sajka, Lisa Seeman-Kestenbaum, Glenda Sims, Avneesh Singh, Andrew Somers, Jaeil Song, Jeanne Spellman, Makoto Ueki, Kathleen Wahlbin, Léonie Watson

F.2.3 Research Partners

These researchers selected a Silver research question, did the research, and graciously allowed us to use the results.

  • David Sloan and Sarah Horton, The Paciello Group, WCAG Success Criteria Usability Study
  • Scott Hollier et al, Curtin University, Internet of Things (IoT) Education: Implications for Students with Disabilities
  • Peter McNally, Bentley University, WCAG Use by UX Professionals
  • Dr. Michael Crabb, University of Dundee, Student research papers on Silver topics
  • Eleanor Loiacono, Worcester Polytechnic Institute, Web Accessibility Perceptions (Student project from Worcester Polytechnic Institute)

F.3 Enabling funders

This publication has been funded in part with U.S. Federal funds from the Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), initially under contract number ED-OSE-10-C-0067 and now under contract number HHSP23301500054C. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Health and Human Services or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.

G. References

G.1 Informative references

[ATAG20]
Authoring Tool Accessibility Guidelines (ATAG) 2.0 . Jan Richards; Jeanne F Spellman; Jutta Treviranus. W3C. 24 September 2015. W3C Recommendation. URL: https://www.w3.org/TR/ATAG20/
[coga-usable]
Making Content Usable for People with Cognitive and Learning Disabilities . Lisa Seeman-Horwitz; Rachael Bradley Montgomery; Steve Lee; Ruoxi Ran. W3C. 29 April 2021. W3C Working Group Note. URL: https://www.w3.org/TR/coga-usable/
[RFC2119]
Key words for use in RFCs to Indicate Requirement Levels . S. Bradner. IETF. March 1997. Best Current Practice. URL: https://www.rfc-editor.org/rfc/rfc2119
[RFC8174]
Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words . B. Leiba. IETF. May 2017. Best Current Practice. URL: https://www.rfc-editor.org/rfc/rfc8174
[UAAG20]
User Agent Accessibility Guidelines (UAAG) 2.0 . James Allan; Greg Lowney; Kimberly Patch; Jeanne F Spellman. W3C. 15 December 2015. W3C Working Group Note. URL: https://www.w3.org/TR/UAAG20/
[WCAG20]
Web Content Accessibility Guidelines (WCAG) 2.0 . Ben Caldwell; Michael Cooper; Loretta Guarino Reid; Gregg Vanderheiden et al. W3C. 11 December 2008. W3C Recommendation. URL: https://www.w3.org/TR/WCAG20/
[WCAG22]
Web Content Accessibility Guidelines (WCAG) 2.2 . Michael Cooper; Andrew Kirkpatrick; Alastair Campbell; Rachael Bradley Montgomery; Charles Adams. W3C. 5 October 2023. W3C Recommendation. URL: https://www.w3.org/TR/WCAG22/