We want to educate and inform so that you have the confidence, knowledge and resources to ask the question ‘But what about disabled people…’ whenever discussions about AI and age, gender or race discrimination appear on your business agenda.
We are building this resource library to share a wide range of information on disability discrimination in AI, all free to download and share. If you are involved in leading change on disability and ethical AI and would like to contribute to our resource library, please do get in touch.
How AI-powered HR tech & automation threaten the life chances of persons with disabilities
AI Powered Unfair Recruitment by Susan Scott-Parker OBE HonD.
Recruiters are increasingly using AI screening to recruit people who match, as ‘microanalytically’ as possible, the company’s ‘Ideal Hire’: that is, who match someone who already works for them. Disabled people are at least twice as likely as anyone else to be excluded from the labour market, and so are highly unlikely to be any employer’s ‘ideal’ colleague. Yet neither AI developers, nor their employer customers, nor those influencing the ethical AI debate have even begun to address the potential impact of this fast-moving technology on the world’s more than 1.3 billion persons with disabilities. Download AI Powered Unfair Recruitment here (PDF)
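To make that mechanism concrete, here is a minimal, hypothetical sketch (our illustration, not any vendor’s actual algorithm), assuming the screener reduces each person to a feature vector of behavioural signals and ranks candidates by similarity to incumbent ‘ideal hires’. Anyone whose profile deviates from the incumbent pattern scores low, whatever their actual ability to do the job.

```python
# Hypothetical sketch only: not any vendor's real algorithm. Assumes
# candidates and incumbent employees are reduced to feature vectors
# (speech pace, eye contact, work-history gaps, etc.) and ranked by
# similarity to the existing workforce.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two feature vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

ideal_hire = [0.9, 0.8, 0.1]            # profile averaged from current staff
typical_candidate = [0.85, 0.75, 0.2]   # resembles the incumbents
atypical_candidate = [0.3, 0.9, 0.8]    # e.g. different speech or history

print(cosine_similarity(ideal_hire, typical_candidate))   # ~0.99: shortlisted
print(cosine_similarity(ideal_hire, atypical_candidate))  # ~0.71: ranked down
```

The point of the sketch is structural: a similarity ranking rewards resemblance to people already hired, so any group historically excluded from the workforce is, by construction, pushed down the list.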
‘Objective or biased – The questionable use of Artificial Intelligence in job applications’ Research experiment and report conducted by BR (Bavarian Broadcasting) in partnership with Report München.
While it ignores disability, the findings of this fascinating experiment from German public broadcaster BR reveal weaknesses in the alleged science behind the AI that will inevitably also pose serious threats to the life chances of persons with disabilities worldwide. Imagine if the AI that downgrades someone’s psychometric score just for wearing glasses glimpsed a wheelchair or crutches in the video interview. Imagine the impact on the scoring if the applicant were Deaf, Blind or facially disfigured, whispered, was trying to lip-read, stammered, or had eyes that move unusually due to a visual impairment – when merely sitting in poor light causes a candidate’s scores to drop. Watch the BR AI recruitment video experiment here
‘AI Powered Disability Discrimination: How do you lip-read a robot recruiter’ lecture by Susan Scott-Parker OBE HonD, for New York University’s Tandon School of Engineering, ‘AI Ethics – Global Perspectives’ series
AI-based technologies can solve some of the world’s biggest challenges, but for some individuals and groups, AI ethics are just unethical. Susan Scott-Parker OBE delivers the lecture ‘AI Powered Disability Discrimination: How do you lip-read a robot recruiter’ as part of a lecture series on the ethical implications of data and Artificial Intelligence from different perspectives, led by New York University’s Tandon School of Engineering. Watch the lecture and access course material here
How Automated Test Invigilating Software Discriminates Against Disabled Students by Lydia X. Z. Brown
Virtual invigilating software algorithmically profiles students for suspicious behaviour, creating anxiety and fears about surveillance in the exam room. For disabled students it can be much worse, as being disabled can affect movement, appearance, communication, information processing and the ability to cope with anxiety. Virtual invigilating or proctoring software therefore means a higher risk of being flagged as suspicious – and of being outed as disabled. Read the full article on the Center for Democracy and Technology website
Students Are Rebelling Against Eye-Tracking Exam Surveillance Tools by Todd Feathers and Janus Rose
Invasive test-taking software has become mandatory in many places, and some companies are retaliating against those who speak out. One major point of contention between proctoring companies and university communities has been the algorithmic techniques the software uses to detect potential cheating. Proctoring software determines whether a test-taker’s “suspicion level” at any given moment is low, moderate, or high by detecting “abnormality” in their behaviour. If a student looks away from the screen more than their peers taking the same exam, they are flagged for an abnormality. If they look away less often, they are flagged for an abnormality. The same goes for how many keystrokes a student makes while answering a question. Variation outside the standard deviation results in a flag. That methodology is likely to lead to unequal scrutiny of people with physical and cognitive disabilities or conditions like anxiety or ADHD. Read the full article on VICE website
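As an illustration (ours, not any proctoring vendor’s actual code), the flagging logic described above amounts to something like the following sketch, assuming each behavioural metric is compared against the mean and standard deviation of the cohort sitting the same exam.

```python
# Illustrative sketch of the standard-deviation "abnormality" flagging
# described in the article; metric names and data are hypothetical.
from statistics import mean, stdev

def flag_abnormal(cohort_values: list[float], student_value: float) -> bool:
    """Flag any value outside one standard deviation of the cohort."""
    mu = mean(cohort_values)
    sigma = stdev(cohort_values)
    return abs(student_value - mu) > sigma

# Peers' gaze-away counts on the same exam (hypothetical data):
gaze_away_counts = [12.0, 15.0, 11.0, 14.0, 13.0, 12.0]

print(flag_abnormal(gaze_away_counts, 40.0))  # True: looks away "too much"
print(flag_abnormal(gaze_away_counts, 0.0))   # True: looks away "too little"
```

Because both tails of the distribution trigger a flag, a student whose disability makes them look away more, or less, than their peers is penalised in either direction.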
Thought leaders from the fields of business, science and academia set the scene on ‘unethical’ disability AI for AI developers and HR recruitment tech buyers
‘Recruitment AI has a Disability Problem’ White Paper by The Institute for Ethical AI
This white paper details the impacts on, and concerns of, disabled employment seekers using AI systems for recruitment, and provides recommendations on the steps employers can take to ensure innovation in recruitment is also fair to all users. In doing so, it furthers the point that making systems fairer for disabled employment seekers makes them fairer for all. Download the ‘Recruitment AI has a Disability Problem’ White Paper here (PDF)
‘AI for disability inclusion’ by Accenture
This research, based on Accenture’s ongoing study of persons with disabilities in the workplace and advances in “human + machine” work, shows that Artificial Intelligence (AI), when developed and used responsibly and ethically, has the potential to facilitate the entire employment journey for persons with disabilities. It can help organizations identify candidates (and vice versa). It can enable engagement at work. And it can drive a culture of confidence in this underutilized segment of the workforce while supporting advancement within organizations. The study encourages those committed to developing ethical AI to use the R(AI)S guiding principles (Responsible, Accessible, Inclusive, Secure) to inform decision-making about using AI to improve inclusion. This study was created in collaboration with Disability:IN and the American Association of People with Disabilities (AAPD). Download AI for disability inclusion here (PDF)
How do you Lip Read a Robot: Is AI Powered Unfairness Avoidable? – International Association of Accessibility Professionals Live Broadcast April 8, 2021
This session, hosted by G3ICT/IAAP, explores the serious yet unacknowledged risks to the world’s more than 1.3 billion persons with disabilities triggered by the fast-growing use of Artificial Intelligence-powered recruitment technology. Speakers include Susan Scott-Parker, Founder, BDI; Julia Stoyanovich, Center for Responsible AI at New York University; Inmaculada Placencia Porrero, Deputy Head of Unit, Unit D3, Rights of Persons with Disabilities, European Commission; and Nigel Guelome, Director of Research, Goldsmiths, University of London. They consider questions including: Why do leaders in AI ethics disregard the more than 1.3 billion people living with disabilities and the hundreds of millions who will become disabled in time? How can data scientists, accessibility experts & AI developers minimize risks to employers & job seekers? Should developers be required to prove their AI products are ‘safe’ before putting them on the market? Watch the panel discussion on the ethical, legal, reputation and operational risks confronting organizations turning to artificial intelligence for help when recruiting and managing human beings here
We Count! video lecture featuring Jutta Treviranus, Director and Founder of the Inclusive Design Research Centre
In her ‘We Count’ lecture for the Walrus Talks Inclusion series, Jutta Treviranus, Director and Founder of the Inclusive Design Research Centre, asks “What happens to small minorities, unique individuals or outliers in decisions based on numbers?” The lecture argues that artificial intelligence makes manifest a bias that has always been there: the human inability to deal with diversity and complexity. Making room for differences and variability creates deeper commonality. Exposure to diversity guards against extremism and creates greater social equilibrium. Disability, difference and unbounded variability are a human reality; nothing living should be reduced to a digit. Watch the ‘We Count’ lecture here.
If data matters in the ethical AI debate – what about disability data?
Understanding your disability demographic – data for disabled customers in India
Using UK data to illustrate in general terms the impact of disability on consumers in India and their access to goods & services. Out of 1.4 billion customers in India:
- 140m customers may have mobility impairments (10%)
- 462m customers are likely to have a disability or be close to someone who does (1 in 3)
- 42m customers may have a visual impairment (3%)
- 1 in 3 of your customers aged 50–64 will have a disability
- 224m customers are likely to have experienced a mental health condition (1 in 6)
- 1 in 5 women customers are likely to have a disability
- 140m customers may be dyslexic (10%)
- 197m customers are likely to be Deaf or hard of hearing (1 in 7)
- 140m Customers may have mobility impairments (10%)
Download the data for disabled customers in India here (PDF)
Understanding your disability demographic – sample data for a global corporate workforce
Using UK data to illustrate in general terms the impact of disability on a global corporate workforce of 500,000 employees, of which:
- 125,000 could experience a mental health condition in one year (1 in 4)
- 50,000 could have dyslexia (10%)
- 10,000 colleagues could become disabled each year (2%)
- 310,000 computer users could be more productive using existing accessibility features (62%)
- 40,000 may have caring responsibilities (8%)
- 165,000 could be disabled or close to someone who is disabled (1 in 3)
- 62,500 could have a disability (12.5%)
- 48,750 are likely to have become disabled after age 16 (78% of disabled people)
Download the data for disabled employees from a global corporate workforce here (PDF)
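For readers who want to reproduce these figures, the arithmetic behind both demographic lists above is simply a prevalence rate applied to a headcount. A minimal sketch using the workforce rates quoted in the list (the India figures work the same way; some rates in the source are rounded):

```python
# Minimal sketch of the arithmetic behind the lists above: each figure is
# a UK prevalence rate (quoted in the text) applied to a headcount.
WORKFORCE = 500_000

rates = {
    "mental health condition in one year (1 in 4)": 0.25,
    "dyslexia (10%)": 0.10,
    "become disabled each year (2%)": 0.02,
    "more productive with accessibility features (62%)": 0.62,
    "caring responsibilities (8%)": 0.08,
    "have a disability (12.5%)": 0.125,
}

for label, rate in rates.items():
    print(f"{label}: {round(WORKFORCE * rate):,}")

# 78% of disabled people became disabled after age 16:
print(f"disabled after age 16: {round(WORKFORCE * 0.125 * 0.78):,}")  # 48,750
```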
Addressing the interaction between disability data bias and discriminatory treatment
The challenges of navigating AI recruitment systems as a candidate with Tourette’s
Job seeker Serena shares her insights into the challenges of AI recruitment screening and interviewing as a person with Tourette’s. Serena explained, “For the first video interview I did, the company made a big deal about how exciting it was that AI would be analysing our interviews – body language and all. I’m sure that to many applicants this seemed like an impressive use of tech, but all I could think about was whether or not the AI would be able to suitably recognise my tics as tics – and not ‘nervous body language’ or ‘signs of disinterest’ or ‘disengagement with the topic’.” Read the full article on tourettes.hero.com
‘How Do You Lip Read a Robot? – Recruitment AI has a Disability Problem’ – Webinar from ILO Global Business and Disability Network
This Zero Project conference session explored the unacknowledged risks to the more than 1.3 billion persons with disabilities triggered by the fast-growing use of Artificial Intelligence (AI)-powered recruitment tools. Why do leaders of the global ethical AI debate disregard the potential harm to more than 1.3 billion people living today with disabilities and to the hundreds of millions who will become disabled in time? Will the HR cost savings generated by AI technology outweigh the potential damage to the life chances of so many? The participants were Yves Veulliet (IBM), Susan Scott-Parker (business disability international), and Stefan Trömel (ILO Global Business and Disability Network). Watch the captioned video playback here.
Disability, Bias, and AI – AI Now Institute, New York University
On March 28, 2019, the AI Now Institute at New York University (NYU), the NYU Center for Disability Studies, and Microsoft convened disability scholars, AI developers, and computer science and human-computer interaction researchers to discuss the intersection of disability, bias, and AI, and to identify areas where more research and intervention are needed. This report captures and expands on some of the themes that emerged during discussion and debate. In particular, it identifies key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization. Download the report Disability, Bias, and AI (PDF)
What can we expect from regulators and advocates beginning to bring disability into the Ethical & Responsible AI debate?
Designing AI Applications to Treat People with Disabilities Fairly by Shari Trewin and Yves Veulliet, IBM
AI solutions must account for everyone. As artificial intelligence becomes pervasive, high-profile cases of racial or gender bias have emerged. Discrimination against people with disabilities is a longstanding problem in society, and it could be reduced by technology or exacerbated by it. IBM believes in working to ensure its technologies reflect the organisation’s values and shape lives and society for the better. Often, challenges in fairness for people with disabilities stem from a human failure to fully consider diversity when designing, testing and deploying systems; when diversity is not taken into account, there is a risk of systematically excluding people with disabilities. This article outlines IBM’s ‘Six Steps to Fairness’ in the presence of rapidly advancing AI-based technologies. Read ‘Designing AI Applications to Treat People with Disabilities Fairly’ on the IBM website.
Report of the Special Rapporteur on the rights of persons with disabilities on Artificial Intelligence – published December 2021
The Special Rapporteur’s report on Artificial intelligence and the rights of persons with disabilities addresses the rapid growth in the use of artificial intelligence, automated decision-making and machine-learning technologies from a disability rights perspective. These new technologies can be of enormous benefit to persons with disabilities and drive the search for inclusive equality across a broad range of fields such as employment, education and independent living. However, there are many well-known discriminatory impacts.
In this thematic study, the Special Rapporteur describes the risks that these technologies pose to the enjoyment of the human rights of persons with disabilities, as provided by the Convention on the Rights of Persons with Disabilities. He maintains that the human rights of persons with disabilities should be placed at the centre of the debate about these technologies; only once these risks are addressed might the practical benefits of artificial intelligence be realized. To that end, he also proposes some practical recommendations as to how this could be achieved in the final section of the report. Download the report, including an ‘Easy Read’ version, from the UN Human Rights website.
European Disability Forum (EDF) Position Paper on EU proposal for regulating Artificial Intelligence
EDF welcomes the European Commission’s proposal for regulating Artificial Intelligence (AI) in the EU. The proposed Regulation for AI will help ensure protection of fundamental rights of persons with disabilities in the context of new technologies. The Regulation can also help promote AI that will improve accessibility for persons with disabilities and support their participation in society. To ensure this, however, the Commission proposal needs significant improvements with strong safeguards against potential discrimination by AI systems and practices, and proactive measures to promote AI that will benefit accessibility and equality of persons with disabilities.
In view of this, the EU AI Regulation must ensure: Accessibility, Non-discrimination and equality, Privacy and data protection, Strong enforcement mechanisms, and Trustworthy European AI beyond the EU. You can read the full EDF position paper in response to the European Commission’s Proposal for the EU Artificial Intelligence (AI) Regulation here (Word and PDF).
New York City Passes Bill to Address Bias in AI-Based Hiring Tools
The New York City Council has passed a bill meant to address bias in AI-based hiring tools. If the bill is enacted, it will go into effect on Jan. 1, 2023, giving vendors, employers, and employment agencies over a year to make sure their tools meet the bill’s standards. The bill would require that a bias audit be conducted on an automated employment decision tool prior to the tool’s use, and that candidates or employees residing in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion. The Associated Press reports that this bill is specifically meant to address racial or gender bias; it doesn’t include protections against bias related to disability or age. It does, however, allow job applicants to request an alternative review process, including the option of having another human being review their application. Read more about the NYC bias in AI bill on the PCMag website.
Addressing AI-powered recruitment tech in the context of the Americans with Disabilities Act
The Americans with Disabilities Act (ADA) and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees
This technical assistance document, issued upon approval of the Chair of the U.S. Equal Employment Opportunity Commission, discusses how existing ADA requirements may apply to the use of artificial intelligence (AI) in employment-related decision-making, and offers promising practices to help employers comply with the ADA when using AI decision-making tools. Employers now have a wide variety of computer-based tools available to assist them in hiring workers, monitoring worker performance, determining pay or promotions, and establishing the terms and conditions of employment. Employers may utilize these tools in an attempt to save time and effort, increase objectivity, or decrease bias. However, the use of these tools may disadvantage job applicants and employees with disabilities. When this occurs, employers may risk violating federal Equal Employment Opportunity (“EEO”) laws that protect individuals with disabilities.
The Questions and Answers in this document explain how employers’ use of software that relies on algorithmic decision-making may violate existing requirements under Title I of the Americans with Disabilities Act (“ADA”). This technical assistance also provides practical tips to employers on how to comply with the ADA, and to job applicants and employees who think that their rights may have been violated.
Visit the Equal Employment Opportunity Commission website to view the full document
Algorithm-driven Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination? The Center for Democracy & Technology
Algorithm-driven hiring tools have grown increasingly prevalent in recent years. Thousands of job-seekers across the United States are now asked to record videos that employers mine for facial and vocal cues. Employers using these tools seek a fast and efficient way to process job applications in large numbers. They may also believe that algorithm-driven software will identify characteristics of successful employees that human recruiters would not identify on their own. But as these algorithms have spread in adoption, so, too, has the risk of discrimination written invisibly into their code. For people with disabilities, those risks can be profound. This paper seeks to highlight how hiring tools may affect people with disabilities, the legal liability employers may face for using such tools, and concrete steps for employers and vendors to mitigate some of the most significant areas of concern. We hope it will serve as a resource for advocates, for regulators, and – above all – for those deciding whether to develop or use these tools, encouraging them to consider the risks of discrimination and ultimately to ask whether the tools are appropriate for use at all.
Use of Artificial Intelligence to facilitate employment opportunities for people with disabilities – A policy brief from the Employer Assistance and Resource Network on Disability Inclusion (EARN)
The use of artificial intelligence (AI) in the workplace is becoming commonplace, including the use of AI to screen applicants, streamline the application process, provide on-the-job training, disseminate information to employees, and enable workers to become more productive. Companies are recognizing that their diversity and inclusion policies, programs, and activities should include individuals with disabilities. The confluence of the AI and diversity and inclusion movements is causing employers to focus heightened attention and scrutiny on whether AI is facilitating workforce diversity and inclusion. Employers are also starting to recognize that if they are not vigilant, it is possible that the use of AI may actually impede rather than facilitate efforts to recruit, hire, retain, and advance people with disabilities. This policy brief provides a roadmap for businesses to design, procure and use AI to benefit and not discriminate against qualified individuals with disabilities. Read the EARN Policy brief here (PDF)
EARN/PEAT ‘Checklist for Employers: Facilitating the Hiring of People with Disabilities Through the Use of eRecruiting Screening Systems, Including AI’
More and more companies recognize that a workforce representative of the population at large results in a more effective and innovative organization. Reflecting this, many are taking proactive steps to increase the recruitment, hiring, advancement, and retention of individuals with disabilities. At the same time, eRecruiting systems, including artificial intelligence (AI), are becoming more commonly used in the workplace to screen candidates, streamline the application process, provide training, disseminate information to employees, and increase productivity. This requires employers to consider whether the use of eRecruiting screening systems facilitates or impedes the hiring of qualified individuals with disabilities. This checklist highlights questions and issues that leadership, human resources personnel, equal employment opportunity managers, and procurement officers should consider when entering into contracts with vendors regarding the content of eRecruiting (including AI) screening tools. Download the EARN/PEAT ‘Checklist for Employers’ here (PDF)
General guidance on the responsible use of AI-based HR tools
Human-Centred Artificial Intelligence for Human Resources: A Toolkit for Human Resources Professionals – The World Economic Forum
Organizations are increasingly looking to harness the power of artificial intelligence to manage talent in ways that are more effective, fair, and efficient. However, the use of AI in Human Resources raises concerns, given AI’s potential for problems in areas such as data privacy and bias. The use of AI in HR also poses operational, reputational, and legal risks to organizations. To help organizations overcome these challenges, the World Economic Forum collaborated with over 50 experts in HR, data science, employment law, and ethics to create a practical toolkit for the responsible use of AI in this field. The toolkit includes a guide covering key topics and steps in the responsible use of AI-based HR tools, and two checklists – one focused on strategic planning and the other on the adoption of a specific tool. This White Paper highlights the lessons learned from the project and piloting experiences and discusses new issues that are on the horizon for AI in HR. Download The HR – AI Toolkit here (PDF)
How AI surveillance technologies in education and employment disproportionately harm disabled people
Ableism And Disability Discrimination In New Surveillance Technologies by Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer and Andrew Crawford – The Center for Democracy & Technology
Algorithmic technologies are everywhere, pervading every aspect of modern life, and the algorithms are improving. But while algorithmic technologies may become better at predicting which restaurants someone might like or which music a person might enjoy listening to, not all of their possible applications are benign, helpful, or just. This report examines four areas where algorithmic and/or surveillance technologies are used to surveil, control, discipline, and punish people, with particularly harmful impacts on disabled people: education, the criminal legal system, health care, and the workplace. Find out more about the report here
Other organisations championing equality and ethics in Artificial Intelligence
Race and AI Toolkit from We and AI
How does Artificial Intelligence encode and amplify the racial biases in our society? And how could it be used to reduce them? The Race and Artificial Intelligence Toolkit is designed to help people with any level of knowledge of AI raise awareness of the issues and explore a range of actions to make a difference. Visit the Race and AI Toolkit here