
ADA Compliance Audit

A systematic evaluation of a website, application, or physical space to determine whether it meets ADA requirements and WCAG standards for accessibility.

In simple terms: An ADA compliance audit is like a check-up for a website to see if everyone can use it. Testers look at the website with special tools and by trying to use it the way people with disabilities would — like navigating with just the keyboard or using a program that reads the screen out loud. Then they write a report that says what needs to be fixed.

What Is an ADA Compliance Audit?

An ADA compliance audit is a structured evaluation process that assesses whether a digital property (website, mobile application, software) or physical space meets the accessibility requirements of the Americans with Disabilities Act and related standards, most commonly the Web Content Accessibility Guidelines (WCAG). For digital properties, the audit tests the interface against WCAG success criteria at a specified conformance level (typically AA) and documents any failures along with recommended remediation steps.

The audit process combines automated scanning tools that detect programmatically identifiable issues with manual testing by trained accessibility specialists who evaluate aspects that require human judgment. A thorough audit also includes assistive technology testing — navigating the site with screen readers, keyboard-only interaction, and other assistive tools — to assess the real-world user experience.

The output of an audit is typically a detailed report that catalogs each accessibility issue found, maps it to the relevant WCAG success criterion, rates its severity, and provides specific remediation guidance. This report serves as a roadmap for making the digital property accessible and, in some cases, as evidence of good-faith compliance efforts.

ADA compliance audits have become increasingly important as digital accessibility lawsuits have surged. Over 4,000 web accessibility lawsuits were filed in 2023 alone. Organizations conduct audits proactively to identify and fix issues before they become legal liabilities, or reactively in response to demand letters, lawsuits, or complaints.
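The split between automated detection and human judgment can be made concrete with a toy example. The sketch below (Python, standard library only, not the API of any real audit tool) flags `<img>` tags that lack an `alt` attribute, one of the simplest checks every automated scanner performs; whether the alt text is *accurate* still requires a human reviewer.

```python
# Toy illustration of a programmatic accessibility check: flag <img> tags
# with no alt attribute (WCAG 1.1.1). Real scanners like axe or WAVE run
# hundreds of rules against the rendered DOM; this only parses static HTML.
from html.parser import HTMLParser

class MissingAltScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # alt="" is valid for decorative images, so we only flag a
        # completely missing attribute.
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<unknown>"))

page = """
<img src="logo.png" alt="Acme Corp home">
<img src="hero.jpg">
"""
scanner = MissingAltScanner()
scanner.feed(page)
print(scanner.violations)  # → ['hero.jpg']
```

A static parse like this also misses dynamically injected content, which is one reason production scanners run inside a real browser.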

Why It Matters

Without an audit, organizations are essentially guessing at their accessibility posture. Most websites have accessibility issues — the WebAIM Million study consistently finds that over 95% of homepages have detectable WCAG failures — and many of these issues are invisible to sighted, mouse-using developers and designers.

An audit provides three essential things. First, it establishes a baseline: an objective measurement of the current state of accessibility, quantified against WCAG criteria. Second, it creates a prioritized remediation plan, identifying which issues are most severe and most impactful to fix. Third, it demonstrates due diligence, showing that the organization is taking accessibility seriously and working toward compliance — a factor that can be relevant in legal proceedings.

The legal landscape makes audits particularly important. The Department of Justice has issued rules requiring state and local government websites (Title II) to meet WCAG 2.1 AA. Title III lawsuits against private businesses reference WCAG as the de facto standard. Organizations that can demonstrate a comprehensive audit, a remediation plan, and ongoing monitoring are in a much stronger legal position than those with no accessibility documentation.

Beyond legal risk, audits have business value. They identify usability issues that affect all users, not just those with disabilities. They inform design system improvements that prevent new accessibility issues from being introduced. They satisfy procurement requirements from government and enterprise clients who mandate accessibility. And they provide metrics for tracking accessibility improvement over time.

How It Works

A comprehensive accessibility audit follows a structured methodology that combines multiple testing approaches.

### Scope Definition

Before testing begins, the audit scope must be defined. For large websites, auditing every page is impractical. Instead, auditors select a representative sample that includes the homepage, primary navigation paths, all unique page templates (product pages, listing pages, article pages, search results), key user journeys (registration, checkout, account management), forms and interactive components, and any pages specifically called out in a complaint or demand letter if applicable. The conformance target is also established — almost always WCAG 2.1 Level AA, which is the standard referenced by most legal frameworks and regulations.

### Automated Testing

The first phase typically uses automated scanning tools to identify issues that can be detected programmatically. Tools like axe by Deque, WAVE by WebAIM, Lighthouse by Google, and enterprise platforms like Siteimprove or Level Access scan the DOM and identify issues such as missing alt text on images, form inputs without labels, insufficient color contrast, missing document language, empty headings or links, duplicate IDs, and missing landmark roles. Automated testing is fast and consistent, but it has significant limitations. Research estimates that automated tools can detect only 25 to 35 percent of WCAG issues. They cannot assess whether alt text is accurate, whether focus order is logical, whether content is understandable, or whether interactive components are usable with assistive technology.

### Manual Testing

Manual testing by trained accessibility specialists covers what automation cannot. This includes keyboard-only navigation through all interactive elements, verifying that focus order is logical and that no keyboard traps exist.
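The focus-order rule that keyboard testing verifies can be illustrated with a toy model (the element names below are hypothetical): browsers visit elements with a positive `tabindex` first, in ascending order, then elements with `tabindex="0"` or natural focusability in document order, and skip `tabindex="-1"` entirely. This is one reason positive `tabindex` values are discouraged: they silently reorder focus away from the visual layout.

```python
# Toy model of browser sequential focus (tab) order: positive tabindex
# values are visited first in ascending order, then tabindex=0 elements
# in document order; tabindex=-1 is removed from the tab sequence.
def tab_order(elements):
    positive = [e for e in elements if e["tabindex"] > 0]
    natural = [e for e in elements if e["tabindex"] == 0]
    # sorted() is stable, so equal tabindex values keep document order
    return [e["id"] for e in sorted(positive, key=lambda e: e["tabindex"]) + natural]

dom = [
    {"id": "search", "tabindex": 2},
    {"id": "logo-link", "tabindex": 0},
    {"id": "skip-link", "tabindex": 1},
    {"id": "decorative", "tabindex": -1},
]
print(tab_order(dom))  # → ['skip-link', 'search', 'logo-link']
```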
Screen reader testing with JAWS, NVDA, or VoiceOver verifies that content is announced correctly, labels are meaningful, and dynamic updates are communicated. Cognitive walkthrough assesses whether content is understandable, navigation is intuitive, and error handling is clear. Visual inspection checks focus indicators, color usage, text spacing, and zoom behavior.

### Assistive Technology Testing

Auditors test with the assistive technologies most commonly used by the target audience. At minimum, this includes testing with a screen reader on desktop (JAWS or NVDA on Windows, VoiceOver on macOS) and mobile (VoiceOver on iOS, TalkBack on Android). It may also include testing with screen magnification, voice control, and switch access, depending on the scope.

### Reporting

The audit report documents each issue with a description of the problem, the affected page or component, the WCAG success criterion violated, the severity level (critical, major, minor), a code example showing the current state, the recommended fix, and a code example showing the corrected state. Issues are typically prioritized by severity and frequency, creating a remediation roadmap.

### Remediation and Verification

Following the audit, the development team remediates the identified issues. A verification round (re-test) confirms that fixes are implemented correctly and have not introduced new issues. Many organizations establish ongoing monitoring — regular automated scans plus periodic manual audits — to maintain accessibility over time.
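Turning findings into a remediation roadmap is mostly a sorting problem. The sketch below assumes a three-level severity scale and orders by severity first, then by how many pages each issue affects; the field names and sample data are illustrative, not a standard report format.

```python
# Sketch of roadmap prioritization: severity rank first (critical before
# major before minor), then affected-page count descending within a rank.
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2}

findings = [
    {"issue": "Low-contrast focus indicator", "severity": "major", "pages": 120},
    {"issue": "Form fields missing labels", "severity": "critical", "pages": 15},
    {"issue": "Empty heading", "severity": "minor", "pages": 3},
    {"issue": "Modal does not trap focus", "severity": "critical", "pages": 40},
]

roadmap = sorted(findings, key=lambda f: (SEVERITY_RANK[f["severity"]], -f["pages"]))
print([f["issue"] for f in roadmap])
# → ['Modal does not trap focus', 'Form fields missing labels',
#    'Low-contrast focus indicator', 'Empty heading']
```

Real roadmaps usually weigh additional factors, such as whether an issue blocks a key user journey like checkout, but the severity-then-frequency ordering is a common starting point.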

Examples

**Automated scan finding:** An axe scan of a checkout page identifies 15 form fields without associated labels. Severity: critical. WCAG criteria: 1.3.1, 4.1.2.

**Manual testing finding:** Keyboard testing reveals that a modal dialog does not trap focus — pressing Tab moves focus behind the modal to invisible page elements. Severity: critical. WCAG criteria: 2.1.2, 2.4.3.

**Screen reader finding:** VoiceOver testing reveals that a custom dropdown announces "clickable" rather than its role and state. It does not announce the selected option or that it is expandable. Severity: major. WCAG criterion: 4.1.2.

**Cognitive review finding:** A multi-step form provides no progress indicator and no way to save progress. Users who leave mid-process lose all entered data. Severity: major. WCAG criterion: 3.3.4.

**Visual inspection finding:** The custom focus indicator has a contrast ratio of 1.5:1 against the background, making it nearly invisible. Severity: major. WCAG criterion: 2.4.7.
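Contrast findings like the 1.5:1 focus indicator above are computed, not eyeballed. This sketch implements the relative-luminance and contrast-ratio formulas from WCAG 2.x (success criterion 1.4.3); the color values are illustrative. AA requires at least 4.5:1 for normal text and 3:1 for large text and non-text elements such as focus indicators.

```python
# WCAG 2.x contrast math: linearize each 8-bit sRGB channel, combine into
# relative luminance, then take (lighter + 0.05) / (darker + 0.05).
def _channel(c8):
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
# #767676 on white is roughly the dimmest gray that still passes AA text contrast
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # → True
```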

Frequently Asked Questions

How often should an ADA compliance audit be performed?
Accessibility audits should be conducted at least annually for established sites, plus whenever significant changes are made (redesigns, new features, platform migrations). Many organizations integrate ongoing automated scanning with periodic comprehensive manual audits. Continuous monitoring is becoming the industry standard.
What is the difference between automated and manual accessibility testing?
Automated tools scan code for detectable issues (missing alt text, missing labels, contrast failures) and can catch roughly 25-35% of WCAG issues. Manual testing — including keyboard navigation, screen reader testing, and cognitive walkthroughs — is required to catch the remaining 65-75% of issues that require human judgment, such as whether alt text is meaningful or whether focus order is logical.
Who should perform an accessibility audit?
Ideally, audits are performed by trained accessibility specialists who understand WCAG, assistive technologies, and disability perspectives. Many organizations use a combination of internal accessibility teams and external auditing firms. Including people with disabilities in the testing process provides invaluable real-world insights. You can find verified accessibility auditing firms at adacompliantwebdesign.com.

Need help making your website ADA compliant?

Our team specializes in ADA-compliant web design and remediation. Get a free accessibility audit today.

Last updated: 2026-03-15