AI recruitment tools are rapidly becoming the first filter in high‑volume hiring, but their unintended consequences risk excluding millions of disabled people from work. Imagine losing your dream job because your stammer pushed you 15 seconds over the video interview limit and the algorithm automatically rejected you.
The submission highlights how standardised AI processes ignore the reasonable adjustments disabled candidates need, turning bias into systemic discrimination. Regulators are beginning to respond: the U.S. Equal Employment Opportunity Commission has issued guidance on AI‑facilitated disability discrimination, and the European Disability Forum is pushing for accountability standards.
Yet the industry still exhibits a “market failure”: neither tech creators nor HR buyers understand disability discrimination. The call is clear: regulators must require AI tools to be explicitly designed to protect the rights of the world’s 1.3 billion disabled people, or risk embedding exclusion into the future of work.
