Disability Ethical? AI
‘Responsible Tech’ influencers, including data scientists, AI (artificial intelligence) developers, and HR leaders, are increasingly seeking to address the data biases built into AI recruitment tools. These champions care about avoiding discrimination based on gender, race, and age, yet the tools they champion still discriminate against those same diverse candidates who also happen to have a disability.
What would it take for the fair treatment of the world’s 1.3 billion disabled people, and of those of us who will inevitably become disabled over time, to ‘matter’ to those defining, developing, and purchasing AI-powered HR technology? Artificial intelligence that doesn’t understand reasonable accommodations is not intelligent. Will the cost savings for the employer outweigh the potential harm?
Susan Scott-Parker OBE
Look at the facts
The debate on race and gender bias in AI-powered recruitment has started, yet disability is so absent from it that no one has even noticed it is missing, even though:
- 1 in 5 women will have a disability
- At least 1 in 3 people aged 50 to 64 will be disabled, regardless of ethnicity, nationality, socioeconomic status, etc.
- At least 1 in 3 data scientists will be disabled themselves or be close to someone with direct lived experience of disability
- 15-20% of the world’s population has a disability
- Human Resources (HR) practitioners buying and deploying this technology should, in theory, understand disability discrimination
- 181 countries have ratified the United Nations Convention on the Rights of Persons with Disabilities (CRPD)
- The CRPD, and the legal frameworks of numerous jurisdictions, position standardised, inflexible recruitment systems as inherently discriminatory
Find out more about why Disability Ethical AI matters here.
AI-powered recruitment technology threatens the life chances of hundreds of millions of people with disabilities worldwide, as well as those of us who will become disabled in time.
- Tools designed to detect AI bias do not and cannot detect disability bias because disabled people ‘aren’t in the database’
- Developers of AI HR technology are not required to prove their products are safe for disabled job seekers or employees
- Neither buyers nor AI developers understand the difference between the inevitable disability bias in the data and the unfair, often discriminatory treatment triggered when an AI tool that cannot make adjustments is dropped into a standardised, rigid process which by definition disadvantages ‘non-standard’ candidates
- The burden remains on individual disabled job seekers to prove they have been treated badly by an algorithm, an algorithm they didn’t even know was there
- Regulators are only beginning to address the potential impact, positive and negative, of Artificial Intelligence on persons with disabilities
- Those leading the worldwide ‘Ethical and Responsible AI’ debate remain, for whatever reason, ‘disability oblivious’
How you can help
- Start by asking the “What about” questions: interrupt every discussion of race and gender bias in AI with the words “And what about disability? What about disability equality?”
- Join our Alliance of forward-thinking organisations committed to raising awareness of Disability Ethical AI as a commercial, ethical and human rights imperative
- Contribute your knowledge and expertise to our resources library and help us determine who needs to do what differently
- Share your stories of disability discrimination in automated and AI-powered recruitment
To get involved or find out more, please connect with us.
About the DEAI Alliance
The DEAI Alliance was founded by Susan Scott-Parker OBE of Scott-Parker International, together with IBM and the Oxford Brookes University Institute for Ethical Artificial Intelligence. We came together because of the need to persuade the AI industry that disability, intrinsic as it is to the human condition, ‘matters’.