‘Responsible Tech’ influencers, including data scientists, Artificial Intelligence (AI) developers, robot engineers and business leaders who deploy AI-powered HR technology, are increasingly seeking to address data bias and the need for inclusion. These champions are determined to avoid potential discrimination based on gender and/or race – while still discriminating against those same diverse candidates who also happen to have a disability.
So, if data matters – why doesn’t the fact that we have at least 1.3 billion people with disabilities worldwide register with those influencing the responsible and ethical AI debate, bearing in mind that:
- 1 in 5 women will have a disability
- At least 1 in 3 people aged 50–64 will be disabled, regardless of ethnicity, nationality, socioeconomic status, etc.
- At least 1 in 3 data scientists will be disabled themselves or close to someone who has direct lived experience
- 15–20% of the world’s population has a disability, while 80% of disabilities are not immediately visible
- If you live to be 70, the odds are you will have at least 10 years’ personal experience of disability
The growing use of AI-powered HR technology threatens the life chances of hundreds of millions of people who have disabilities now, and those of us who will inevitably acquire a disability with time. Yet no national artificial intelligence strategy has been identified that places particular emphasis on the human rights implications for disabled people. And no developer is required to prove their products will not harm job seekers or employees with disabilities before putting them on the market.
Indeed, people with disabilities are so missing from the global Responsible, Ethical and Inclusive AI debate that no one has even noticed they aren’t there.
Instead, we see:
- Resume screening tools which, because they rely on profiles of previous hires, automatically consolidate barriers to disabled people, who are typically at least two to three times less likely to be employed
- AI screening tools that disadvantage candidates who use assistive devices or are unable to access the assessments in the first place
- AI HR Tech that ignores the need, and often the legal obligation, to enable the reasonable accommodations or adjustments which make possible both an accurate assessment and equal opportunities
- AI screening tools embedded in standardised recruitment processes which, precisely because they are ‘standardised’, discriminate against applicants with disabilities who need flexibility to demonstrate their ability to do the job
- Facial recognition systems that do not work properly for millions of people with a wide range of disabilities, particularly those with facial difference due to, for example, eye anatomy, albinism, birth marks, Down’s Syndrome, acid burns, inability to smile, disfigurements
- An inherent bias in some facial recognition systems against disabled people, who are judged to be untrustworthy because their faces, their voices or their way of speaking are ‘non-standard’
- Virtual reality and task simulation recruitment assessments that assume every candidate can walk, speak, hear, see, use their hands, and learn in a ‘standard’ way
- Emotion recognition systems used to make evaluative judgements about people that raise significant risks – noting that none of these systems have been validated on, for or with people with a wide range of disabilities
It isn’t only people with disabilities who are at risk
We have further evidence from an impressive experiment by Bayerischer Rundfunk (German Public Broadcasting), which found that one AI-powered assessment reacted to visual ‘information’ from actors pretending to be job seekers, such as a painting displayed in the background or a head scarf, by significantly improving – or worsening! – their personality scores. If putting on glasses changes your personality dramatically, just imagine what a glimpse of your wheelchair, your unusual smile or your hands moving in a ‘non-standard’ way might do to your scores.
The Disability Ethical AI Alliance (DEAI) has launched a new online hub to help put disability and human reality into the Ethical & Responsible AI debate through:
- data revealing the global human, societal and economic impact of disability
- evidence revealing the risks for people with disabilities and other marginalised groups, triggered by the use of these still unproven technologies, and the latest thinking on how to mitigate those risks
- access to a curated selection of the research, white papers, thought pieces, ethical frameworks, regulatory reforms addressing the need for AI tools that are neither disability biased nor used in ways which enable unfair treatment and disability discrimination
- the chance to connect with thought leaders worldwide addressing the ‘why’ we need to act and the ‘how’ to enable a truly inclusive future for all of us
And a call to any and all job seekers or employees with a disability willing to share their experiences with AI-powered processes, to help us all understand the real-life impact of this technology as we set out to generate creative solutions.
DEAI Alliance founder Susan Scott-Parker OBE asks: “What will it take for the world’s 1.3 billion disabled people, and those of us who inevitably become disabled over time, to ‘matter’ to those defining, developing, and purchasing AI-powered HR technology? Artificial Intelligence that doesn’t understand reasonable accommodations is not intelligent. Should the employer or the AI developer be liable? Can the cost savings to employers justify the scale of potential harm? And, most importantly, who needs to do what now?”
Addressing the disability-oblivious ethical AI debate
The DEAI hub will bring together a like-minded community committed to sharing the resources which will enable us all to help AI developers and their customers tackle the discrimination so often built into the fabric of AI-driven recruitment processes – and to share stories which illustrate the real-life impact on job seekers and employees with disabilities.
And we are launching our very easy-to-join “What about Disability?” campaign, as we all begin to interrupt every conversation about race, gender or age bias to ask: And what about disability? What about job seekers and employees with disabilities?
- ‘What about’ the discrimination triggered when disability-biased AI tools are dropped into already hard-to-navigate automated recruitment processes?
- ‘What about’ the fact that you can’t predict this disabled candidate’s future performance if the AI tool ignores the role of the reasonable adjustments that make her high performance – and fair treatment – possible?
- ‘What about’ the guidance developers will require if their AI-powered recruitment tools are to treat persons with disabilities fairly, including the 1 in 5 women with disabilities?
- ‘What about procurement?’ Can procurement teams mitigate the growing legal and reputational risks triggered for the employer who buys this technology?
- ‘What about new regulations?’ Will regulators require developers to prove their products are safe for job seekers and employees with disabilities before they can put them on the market?
The hub and resources are freely available to anyone interested in addressing this challenge: visit the hub at www.disabilityethicalai.org.
Notes to Editors:
The Disability Ethical AI Alliance (DEAI), founded by Scott-Parker International, IBM, Simmons+Simmons and the Oxford Brookes University Institute for Ethical Artificial Intelligence, came together as an informal thought leadership initiative because we see the need to persuade the AI industry and associated influencers worldwide that disability – intrinsic as it is to the human condition – ‘matters’.
The allies group includes: The ILO’s Global Business Disability Network, Face Equality International, The World Bank, The Australian Employers Network on Disability, The Essl Foundation, The Inclusive Design Research Centre (Canada), The European Disability Forum, G3ICT, Lisbon University, the Centre for Democracy and Technology, 50 Million Voices, MyAbility, New York University and the Special Rapporteur on the Rights of Persons with Disabilities.
For media enquiries
- Susan Scott-Parker OBE, Scott-Parker International, email: email@example.com
For hub enquiries
- Vanessa Hardy, Scott-Parker International, email: firstname.lastname@example.org