Color accessibility for AI-generated UIs
AI can build a landing page in thirty seconds. Beautiful layout. Clean typography. A color palette that looks great on the designer's 5K display.
Then someone with deuteranopia opens it and can't tell the success state from the error state. Someone with low vision squints at light gray text on a white background. Someone using a phone in direct sunlight can't read a single word.
The AI didn't check. It never does.
The problem with AI and color accessibility
AI coding tools optimize for what looks good, not what works for everyone. They've been trained on millions of websites, and most of those websites have accessibility problems. The training data is the problem.
When you prompt Claude or GPT to "build a pricing page," it produces something visually polished. But it doesn't compute contrast ratios. It doesn't simulate color blindness. It doesn't check whether your accent color is distinguishable from your error state for someone with protanopia.
It's not that AI can't understand accessibility. It's that nobody told it to prioritize it. And in the gap between "looks good" and "works for everyone," real people get excluded.
The numbers that matter
Roughly 8% of men and 0.5% of women have some form of color vision deficiency. That's not a rounding error. On a product with 100,000 users, approximately 4,000 people are seeing your interface differently than you intended.
WCAG Level AA conformance is a legal requirement in the EU, the UK, the US (for federal agencies, under Section 508), Canada, and Australia. It's not optional. It's not aspirational. It's the law in many jurisdictions, and the expected standard everywhere else.
The minimum contrast ratio for normal text is 4.5:1. For large text (at least 24px regular, or roughly 18.5px bold), it's 3:1. These aren't arbitrary numbers — they're based on research into visual perception across different conditions and abilities.
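The ratio itself comes from the relative-luminance formulas defined in WCAG 2.1. Here's a minimal sketch in TypeScript — the function names are illustrative, not from any particular library:

```typescript
// sRGB channel (0–255) to linear value, per the WCAG relative-luminance formula.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function luminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white hits the maximum, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Light gray (#aaaaaa) on white is around 2.3:1 — well below the 4.5:1 AA floor.
console.log(contrastRatio([170, 170, 170], [255, 255, 255]) >= 4.5); // false
```

This is exactly the check AI tools skip: two lines of arithmetic that decide whether a color pair is usable for body text.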
What AI gets wrong
Low contrast text
The most common failure. AI loves light gray text on white backgrounds. It looks "clean" and "modern." It also has a contrast ratio of 2:1, which fails WCAG AA by a mile. Body text needs 4.5:1. That light gray isn't cutting it.
Color as the only indicator
AI will generate form validation where errors are indicated by red text and success by green text. Nothing else — no icons, no text labels, no pattern changes. For someone with red-green color blindness, the error state and the success state look identical.
WCAG 1.4.1 says color cannot be the only means of conveying information. The AI ignores this every time.
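The fix is mechanical: every color-coded state gets a redundant, non-color cue. A hedged sketch — the state names, icons, and hex values below are illustrative placeholders, not from a real design system:

```typescript
type FieldState = "error" | "success" | "warning";

interface StateIndicator {
  color: string; // never the only signal
  icon: string;  // redundant cue for color-blind users
  label: string; // redundant cue for everyone, and for screen readers
}

// Placeholder values for illustration.
const INDICATORS: Record<FieldState, StateIndicator> = {
  error:   { color: "#b3261e", icon: "✕", label: "Error" },
  success: { color: "#1b5e20", icon: "✓", label: "Success" },
  warning: { color: "#7a5900", icon: "!", label: "Warning" },
};

// Color, icon, and text together satisfy WCAG 1.4.1: strip the color
// and the state is still unambiguous.
function renderFieldMessage(state: FieldState, message: string): string {
  const { color, icon, label } = INDICATORS[state];
  return `<p style="color:${color}"><span aria-hidden="true">${icon}</span> ${label}: ${message}</p>`;
}

console.log(renderFieldMessage("error", "Email is required"));
```

Encoding the pairing in one place means no individual form can ship a color-only state by accident.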
Accent colors used as body text
A vibrant orange accent might be perfect for a CTA button with dark text on it. But the AI will also use that orange as body text on a light background, where the contrast ratio drops to 2.8:1. Same color, different context, accessibility failure.
No dark mode consideration
AI generates for light mode by default. If you have a dark mode, the same colors might not work. Dark blue text that reads fine on white becomes invisible on a dark background. The AI doesn't check both contexts.
The fix: validate before you build
The solution isn't to stop using AI tools. It's to give them accessible palettes from the start. If every color in the system has been pre-validated for contrast and distinguishability, the AI can't pick a bad combination — there aren't any.
Pre-validated palettes
Every Paletter palette ships with contrast ratios computed for every meaningful color pair. Background/Ink, Background/Accent, Accent/Ink. You see the WCAG AA and AAA status before you export anything. If a combination fails, you know immediately — not after the code is written.
Color blindness simulation
Paletter simulates protanopia, deuteranopia, and tritanopia for your entire palette. You see what your users see. If two colors become indistinguishable under simulation, you know to add secondary indicators — icons, patterns, text labels — before a single line of code is written.
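To make the idea concrete, here is a very rough deuteranopia simulation using a widely circulated 3×3 RGB approximation matrix. Production tools (Paletter included, presumably) use more accurate cone-space models, so treat this strictly as a sketch:

```typescript
type RGB = [number, number, number];

// Commonly circulated approximation matrix for deuteranopia.
// Each row sums to 1, so neutral grays are preserved.
const DEUTERANOPIA: number[][] = [
  [0.625, 0.375, 0.0],
  [0.7,   0.3,   0.0],
  [0.0,   0.3,   0.7],
];

function simulate([r, g, b]: RGB, m: number[][]): RGB {
  return [
    Math.round(m[0][0] * r + m[0][1] * g + m[0][2] * b),
    Math.round(m[1][0] * r + m[1][1] * g + m[1][2] * b),
    Math.round(m[2][0] * r + m[2][1] * g + m[2][2] * b),
  ];
}

function distance(a: RGB, b: RGB): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Pure red and pure green move much closer together under simulation —
// a signal that the pair needs a secondary indicator.
const red = simulate([255, 0, 0], DEUTERANOPIA);
const green = simulate([0, 255, 0], DEUTERANOPIA);
console.log(distance(red, green) < distance([255, 0, 0], [0, 255, 0])); // true
```

Running every palette pair through a check like this, before any code exists, is what turns "add an icon here" from an audit finding into a design rule.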
Role-based constraints
When your palette has roles — Background, Ink, Accent, Support, Neutral — the usage rules are built in. "Accent is for CTAs only, never body text" is a constraint that prevents the most common accessibility failure. The AI reads these rules and follows them. Try the contrast checker.
How COLORS.md prevents AI accessibility failures
A COLORS.md file in your project root gives AI tools the full context: hex values, roles, contrast ratios, and explicit rules. The rules are the critical part.
- "Never use Accent as body text on Background — contrast ratio is 2.8:1, fails AA"
- "Always use Ink for text on Background surfaces — contrast ratio is 9.2:1, passes AAA"
- "Use Ink-colored text on Accent fills — contrast ratio is 5.1:1, passes AA"
- "Do not use color as the only indicator of state — always pair with icons or labels"
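Put together, a minimal COLORS.md might look like this. The hex values are placeholders for illustration; the ratios are the ones cited in the rules above:

```markdown
# COLORS.md

## Palette
- Background: #FAFAF7 (placeholder)
- Ink: #1A1A1A (placeholder)
- Accent: #E8642C (placeholder)

## Rules
- Ink on Background: 9.2:1 — passes AAA. Default for all body text.
- Accent on Background: 2.8:1 — fails AA. Never use Accent as text.
- Ink on Accent: 5.1:1 — passes AA. Use for text on Accent fills.
- Never use color as the only indicator of state — always pair with icons or labels.
```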
The AI reads these rules at the start of every conversation. It doesn't use Accent as body text because the file explicitly says not to, with the reason why. It doesn't create color-only indicators because the rule forbids it.
You encode accessibility decisions once. The AI follows them forever.
Accessibility is a design decision, not a code fix
Most teams treat accessibility as a post-build audit. Ship the feature, run an accessibility scan, fix the failures. This is backwards. Accessibility should be a palette-level decision, not a component-level patch.
If your palette is accessible — if every text/background pair meets WCAG AA, if every color is distinguishable under color blindness simulation, if roles enforce correct usage — then everything built with that palette is accessible by default.
You're not fixing accessibility bugs. You're making them impossible to create.
Generate an accessible palette
Every palette ships with contrast ratios, WCAG badges, and color blindness simulation. Accessibility built in, not bolted on.