Researchers and legislators are focused on AI bias with respect to gender and race. Few are examining AI bias with respect to disability.

Four cartoon images of business people of different genders and races, each holding a white face mask partially covering their faces
  • One of the reasons frequently cited for the higher rates of false positives in facial recognition for women, and especially women of color, is a lack of robust data sets. Building robust data sets for disability is even harder, for two reasons:
  1. There are fewer people with disabilities than there are women and people of color.
  2. The range of characteristics of disability is very, very broad.
Facial recognition will be biased against people with craniofacial differences

One of my daughters was born with a facial difference called Bilateral Microtia and Atresia. It means that neither of her outer ears is the size or shape of what most people would consider “normal”, and her ear canals were incredibly narrow, almost invisible. An AI that uses ear shape or the presence of an ear canal in part to determine whether or not an image includes a human face might not work correctly for people with this condition. Children with cleft lip, or with more severe craniofacial syndromes that involve significant facial bone differences (such as PRS, Treacher Collins syndrome, Goldenhar syndrome, and hemifacial microsomia, to name a few), are likely to experience the same AI bias, if not worse.

Pre- and post-jaw reconstruction surgery photos of a middle-aged woman

Facial recognition will be biased against people who have had significant facial surgery

My daughter and I both had micrognathia. Our jaws were very small and set back at an extremely sharp angle. Both of us experienced such severe sleep apnea that it was life-threatening. Also, the quality of our lives was crap, since we rarely achieved significant REM sleep. Drastic surgery was required to correct the micrognathia, since CPAPs work very poorly for people with this condition. Surgery that changes the shape of the jaw and face this dramatically can also change whether facial recognition software trained on pre-surgery images still recognizes the person afterwards.

People with mobility problems may be falsely identified by self-driving cars as objects

I have heard anecdotally about one self-driving car company whose software identified a person using a wheelchair as a shopping cart. If software expects that all people can walk with a normal, even gait, then people with mobility differences (walkers, canes, crutches, wheelchairs, and limps) will not fall anywhere near what is considered “normal”. People with mobility disabilities are drastically under-represented in the tech community. It is possible that a programming team won’t have enough real-life experience with people with mobility disabilities, and will dismiss this type of bias rather than treat it as the real and significant issue that it is.

Trolley that can head in two directions, depending which way the switch is thrown: five people in one direction, one person in the other. Image by McGeddon, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=52237245

The Trolley Problem

The trolley problem is an ethical thought experiment with the following problem statement: a runaway trolley is heading toward five people on the main track, and you are standing next to a lever that can divert it onto a side track. You have two options:

  • Do nothing, and the trolley kills the five people on the main track.
  • Pull the lever, diverting the trolley onto the side track where it will kill one person who you know well.

Self-driving cars are programmed by people. Programmers’ bias can be transferred into the software

Self-driving cars have moved the “trolley problem” thought experiment from the abstract to the tangible.

Self-driving cars decide what is the better thing to hit (and potentially kill) on a frequent and regular basis.

Whether your car is programmed by a group of Colombians or a group of Finns can influence whether socio-economic status is taken into account when determining which person to hit and which to save in an inevitable collision with one of two people. The French prefer killing men over women. These cultural biases are well described in an MIT study published in Nature. Human beings making moral decisions can easily be influenced by context. And if the context is being determined by software, the programmers’ bias, whether conscious or unconscious, is of paramount concern.

MIT Moral Machine results page: the most saved character is a young child, and the most killed character is an old gentleman walking with a cane
More Moral Machine results regarding weight and socio-economic value
  • Are obese people less valuable than those who are “fit”?
  • What makes a poet or a homeless person less worthy of living than a doctor or an engineer?

