AI recruitment tools are increasingly regarded as the ‘first line of defence’ against high-volume online hiring. Recruiters need to discard as many applicants as possible, as quickly and as cheaply as possible, to narrow down to the talent deemed worthy of consideration by human recruiters. Often the goal is to recruit people who match as ‘microanalytically’ as possible the company’s ‘Ideal Hire’: that is, who match someone who already works for them.
You want a job at a company which uses a video interview screening tool that assesses how closely you resemble an employee they hold in high regard. The AI recruitment software notes thousands of barely perceptible changes in your posture, facial expression, non-verbal communication, vocal tone and word choice, and then compares these thousands of data points with those of the employer’s ‘Ideal Hire’. Let us call this ideal employee ‘Harry’.
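In principle, the comparison such a tool performs is a simple similarity score: the candidate’s measurements are reduced to a vector of numbers and matched against the benchmark employee’s vector. The sketch below is purely illustrative, not any vendor’s actual method; the feature names, the vectors and the pass threshold are all hypothetical, but they show how a candidate whose measurable behaviour differs from ‘Harry’ fails the screen regardless of their ability to do the job.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical features, e.g. [eye contact, speech pace, gesture frequency].
ideal_hire = [0.9, 0.8, 0.7]   # 'Harry', the benchmark employee
candidate  = [0.2, 0.5, 0.9]   # a candidate whose non-verbal style differs from Harry's

score = cosine_similarity(ideal_hire, candidate)

# An arbitrary cut-off: anyone insufficiently Harry-like is screened out,
# with no reference to whether they can actually do the job.
passes = score >= 0.95
```

Note that the threshold encodes nothing about competence; it only measures resemblance to one existing employee, which is precisely why ‘non-standard’ faces, voices and gestures are penalised.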
But what happens as the AI camera starts to assess you on screen if:
- Your face was scarred by acid and works very differently from Harry’s
- You have Down’s Syndrome and your ‘non-standard’ features are not recognised by the AI recruiter
- You are a maths graduate with Cerebral Palsy and a ‘non-standard’ speech impairment – unlike Harry
- Which means a fellow graduate, Sally, who stammers, probably can’t get through either. Will the AI know that her stammer doesn’t get in the way when she is working? Will it give her a bit longer to answer each question?
- Your voice is nothing like Harry’s – due to your late onset hearing loss
- You are 55 and can do the job, but your word choice seems out of date when compared to the business jargon Harry tends to use
- You don’t look the camera or robot in the eye – because you can’t
- You have limited use of your hands – or prosthetic hands – and struggle to hold the game controls
- You are hearing impaired and naturally drop your eyes looking for captions or subtitles, which aren’t there, because Harry doesn’t need them
- You are autistic and need the questions re-structured if you are to do your best
Candidates with a wide range of disabilities stand very little chance of getting through – and will struggle to prove they were discriminated against by an AI-powered process.
Yet neither AI developers nor their employer customers nor those influencing the ethical AI debate have even begun to address the potential impact of this fast-moving technology on the world’s more than 1.3 billion persons with disabilities.
In practical terms we aim to:
- Help AI creators, developers and buyers to understand how the disability discrimination they are facilitating impacts on an organisation’s talent pool, workforce and customers
- Demonstrate that current AI regulation is inadequate, increasing pressure on regulators worldwide to bring these AI tools into the realm of consumer protection
- Highlight a need for an HR accreditation that builds specialist knowledge on the impact of discriminatory AI recruitment practices
- Develop a Disability Ethical? AI resource library to help organisations worldwide to demonstrate that building technology which works for disabled people is ultimately better for everyone.