Certainly Uncertain - June 1, 2018

Establishing New Procedures to Address the Cultural Implications of Algorithmic Bias

By Shadrick Addy

Historically, African Americans have been subjected to racial discrimination and prejudice, an injustice still woven into American society. With the rise of technology, machine learning promised to improve decision making by removing human bias (Baer & Kamalnath, 2017). Yet, even with the wide use of automated programs across American industries, people of color are still not given an equal chance at attaining the American dream. The historical context of racial discrimination and prejudice is missing from the training data of machine learning algorithms. The absence of these historical data points, the transfer of human biases during training, and automated programs’ inability to account for future events are key contributors to algorithmic bias. Building an inclusive future driven by artificial intelligence demands that we understand the cultural implications of algorithmic bias and establish new procedures for training machine learning algorithms.

Defining Bias

Mitchell, a computer scientist and professor at Carnegie Mellon University, defines bias as “any basis for choosing one generalization over another, other than strict consistency with the observed training instances” (Mitchell, 1980). Mitchell argues that “learning involves the ability to generalize from past experience in order to deal with new situations that are related to this experience” (Mitchell, 1980). He suggests that “the inductive leap needed to deal with new situations seems to be possible only under certain biases for choosing one generalization of the situation over another” (Mitchell, 1980). Mitchell’s analysis shows that bias is necessary for learning generalizations and that eliminating it entirely is a futile goal. I agree with Mitchell and acknowledge that algorithmic bias in automated programs is inevitable. Moreover, removing bias from machine learning algorithms is difficult because of the human bias that informs how developers, stakeholders, and users make decisions about machine learning applications.

For people of color, who have historically faced inequality, the presence of human bias in algorithms further enforces discriminatory barriers.

Tobias Baer, a partner and researcher at McKinsey & Company, and Vishnu Kamalnath, a specialist in its North American Knowledge Center, assert that algorithmic bias is one of machine learning’s biggest risks because it compromises the technology’s very purpose. Their research states that “artificial intelligence is as prone to bias as humankind” (Baer & Kamalnath, 2017). For people of color, who have historically faced inequality, the presence of human bias in algorithms further reinforces discriminatory barriers. Because algorithms make predictions based on past correlations, they amplify the effects of historical prejudice against marginalized populations by reinforcing the human bias found within the data sets used for training. As designers, we must recognize that computer programs are encoded with human prejudice, misunderstanding, and bias (O’Neil, 2016) in order to establish new procedures to address algorithmic bias.

Algorithmic Bias in Employment

Automation has become an essential part of the hiring process for American businesses (O’Neil, 2016). For 60 to 70 percent of prospective U.S. workers, the chance of getting a job is contingent on personality test results (O’Neil, 2016). Furthermore, computer programs can now parse resumes and rank applicants based on how well they match the criteria for a position (Abdel-Halim, 2012). Automation in hiring is a burgeoning industry, grossing $500 million annually and growing by 10 to 15 percent a year (O’Neil, 2016). For employers, the benefits of automation often outweigh its negative social implications. Yet hiring processes that use these automated programs reinforce biases against African Americans by wrongly associating name, race, and other social identifiers with an applicant’s inability to fulfill the responsibilities of a position.

False correlations between data points have traditionally led employers to assume the race of applicants simply by looking at the names on their resumes. Can an applicant’s name, for instance, serve as a clear indicator of their race or gender? Could it also serve as an indicator of their ability to fulfill the responsibilities of a potential job? The answer, of course, is no, for numerous reasons. Anyone can legally change their birth name to one commonly associated with another race or gender, so using names as indicators can result in misidentification. Names are not clear indicators of an applicant’s race, gender, or merit because no empirical evidence establishes a correlation between these data points.

The same human bias present in conventional applicant screening is unconsciously programmed into automated applicant tracking software.

A field experiment by researchers from the University of Chicago and MIT revealed that applicants with white-sounding names are 50 percent more likely to receive callbacks than those with black-sounding names (Bertrand & Mullainathan, 2004). The study also found that a higher quality resumé increased callbacks by 30 percent for white-sounding names but far less for African American ones. The same human bias present in conventional applicant screening is unconsciously programmed into automated applicant tracking software. Resumé-analyzing programs, however, are only one of many categories of automated programs prone to algorithmic bias during hiring.
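To make the mechanism concrete, here is a toy sketch of how a ranking model trained on historically biased callback data reproduces the gap the study measured. Every record, feature name, and weight below is invented for illustration, and real applicant-tracking systems are far more complex, but the failure mode is the same: the name-derived group becomes just another scoring feature.

```python
# (name_group, years_experience, got_callback) -- invented historical records
history = [
    ("white-sounding", 5, True), ("white-sounding", 2, True),
    ("white-sounding", 2, False), ("white-sounding", 5, True),
    ("black-sounding", 5, False), ("black-sounding", 2, False),
    ("black-sounding", 5, True), ("black-sounding", 2, False),
]

def callback_rate(group):
    """Fraction of past applicants in `group` who received callbacks."""
    outcomes = [got_cb for g, _, got_cb in history if g == group]
    return sum(outcomes) / len(outcomes)

def naive_score(group, years):
    """A 'learned' ranking score: the group's historical callback rate plus a
    small credit for experience. Nothing questions the name-derived feature."""
    return round(callback_rate(group) + 0.05 * years, 2)

# Two equally experienced applicants get different scores purely because of
# the name-derived proxy feature baked into the training data.
print(naive_score("white-sounding", 5))  # 1.0
print(naive_score("black-sounding", 5))  # 0.5
```

The point of the sketch is that no line of this code mentions race; the disparity enters entirely through the historical outcomes the model is fit to.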

Personality tests are still used to determine who gets hired, despite research showing they are poor predictors of job performance (O’Neil, 2016). In 1971, the Supreme Court ruled that intelligence tests unrelated to a job’s responsibilities were discriminatory and illegal (O’Neil, 2016). Yet these tests live on in other forms, used to eliminate as many job applicants as possible (O’Neil, 2016). Personality tests and many other job screening programs have become ubiquitous hiring standards. An applicant rejected for bad scores at one business is likely to face a similar fate at another. Cathy O’Neil, the author of Weapons of Math Destruction, explains that while employers exercised bias in traditional hiring practices, those biases varied from business to business (O’Neil, 2016). In contrast, automated programs are more likely to repeat biased results because systems often share the same training data (and prejudices) across businesses. Most job applicants are unaware of the algorithmic correlations that influence the results of these automated programs. The obscurity of those correlations prevents many people of color with financial difficulties from challenging biased algorithmic predictions that can have severe implications for the livelihoods of American workers.

Algorithmic Bias in Law Enforcement

The legal system widely implements these predictive algorithms. Research by Garvie, Bedoya, and Frankle (2016) shows that face recognition in law enforcement affects over 117 million American adults. Further investigation revealed that 16 states let the FBI use face recognition technology to compare the faces of suspected criminals to driver’s license and ID photos (Garvie et al., 2016). For people of color, who have long faced racial discrimination and inequality, predictive policing and other uses of machine learning in the criminal justice system can lead to devastating consequences.

Extreme care must be taken to address wrongful predictions that could result from algorithmic bias in automated applications used in law enforcement.

Algorithmic bias in facial recognition can lead to wrongful accusations, with devastating consequences for individuals and their families. “Someone could be wrongfully accused of a crime based on erroneous but confident misidentification of the perpetrator from security video footage analysis” (Buolamwini & Gebru, 2018). Research by Buolamwini, a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League, shows that a demographic group underrepresented in benchmark datasets can nonetheless be subjected to frequent targeting. A 2013 analysis by the New York Civil Liberties Union revealed that young African American and Latino males made up only 4.7 percent of New York City’s population, yet accounted for 40.6 percent of police stop-and-frisk checks (O’Neil, 2016). The presence of human bias in law enforcement means that extreme care must be taken to address wrongful predictions that could result from algorithmic bias in automated applications used by police and courts.
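A minimal sketch of the kind of audit this research motivates: report a system’s misidentification rate per demographic group instead of a single aggregate accuracy, so disparities like the ones Buolamwini and Gebru documented become visible. The evaluation records and group labels below are invented.

```python
from collections import defaultdict

# (group, correctly_identified) -- invented face-matching evaluation outcomes
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def error_rate_by_group(records):
    """Return each group's misidentification rate from labeled outcomes."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Aggregate accuracy (50% here) hides that one group fails three times as often.
print(error_rate_by_group(results))  # {'group_a': 0.25, 'group_b': 0.75}
```

A single headline accuracy number would average these two groups together; disaggregating is what makes the disparity, and therefore the harm, measurable.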

Addressing Algorithmic Bias

A diverse development team creates an environment that reduces human biases.

A balanced representation of African Americans during program development is the first step toward new procedures that remedy algorithmic bias in machine learning. A diverse development team creates an environment that reduces human biases. African American programmers working on development teams can help create algorithms that are more equitable. Another approach to reducing algorithmic bias is establishing procedures that encourage developers to solicit feedback from a diverse population when programming algorithms. User testing with a diverse group can help catch problematic algorithmic models before they are deployed.
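One concrete check such a team might adopt during user testing is a disparate-impact screen based on the “four-fifths rule” of thumb from U.S. employment guidelines: flag any group whose selection rate falls below 80 percent of the highest group’s rate. This is a hedged sketch; the group names and screening outcomes below are invented.

```python
def selection_rate(outcomes):
    """Fraction of applicants selected (True) among a list of outcomes."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(rates):
    """Mark each group True if its selection rate is at least 80% of the
    highest group's rate, False if it falls below that threshold."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Invented screening outcomes per applicant group
rates = {
    "group_a": selection_rate([True, True, True, False]),   # 0.75
    "group_b": selection_rate([True, False, False, False]), # 0.25
}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```

A failed check does not prove the model is biased, but it tells a review team exactly where to look before the system reaches real applicants.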

Designers can play an essential role in establishing new procedures to address algorithmic bias. For example, they can work with developers to create interfaces that give agency to users (Borenstein, 2016) and provide transparency on how correlations are made by predictive algorithms. Transparency in algorithmic predictions can help a person determine if a model is working against their interest (O’Neil, 2016). A strong partnership between designers and developers will lead to the development of inclusive procedures to address algorithmic bias. To strengthen their collaborative relationships with developers, designers must develop a deeper understanding of artificial intelligence systems, their affordances, and how humans might use, misuse, and abuse these affordances (Borenstein, 2016).
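What such transparency might look like in code, assuming (hypothetically) a simple linear scoring model: break the score into per-feature contributions so a person can see, and contest, which correlations drove a prediction. The feature names and weights below are invented; real systems use more complex models, for which analogous attribution techniques exist.

```python
# Invented model weights; negative weights lower an applicant's score.
weights = {"years_experience": 0.6, "employment_gap": -0.8, "zip_code_flag": -1.2}

def explain(applicant):
    """Split a linear score into per-feature contributions a user can inspect."""
    contributions = {f: weights[f] * value for f, value in applicant.items()}
    return round(sum(contributions.values()), 2), contributions

score, parts = explain({"years_experience": 5, "employment_gap": 1, "zip_code_flag": 1})
print(score)  # 1.0
# Surfacing each feature's pull lets an applicant spot and challenge a proxy
# like zip_code_flag that may stand in for race or income.
for feature, contribution in sorted(parts.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {round(contribution, 2)}")
```

An interface built on this kind of readout gives users the agency the essay argues for: a visible reason for a decision is one that can be questioned.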

As a dog is loyal to its trainer, algorithmic bias often serves the interest of its developers.

As we move toward an inclusive future powered by artificial intelligence, machine learning will have both positive and negative implications for multicultural American society. The severity of algorithmic bias’s negative impact, however, requires that new procedures be established to address the cultural implications of machine learning. A necessary first step is the diversification of software development teams. As a dog is loyal to its trainer, algorithmic bias often serves the interest of its developers. A diverse team of developers and designers ensures equal representation in the decisions made when training algorithms, decisions that could otherwise reinforce human biases in machine learning models. If the cultural implications of machine learning are not recognized and addressed, the mantra “garbage in, garbage out” could soon become synonymous with “bias in, bias out.”

Shadrick Addy is a Master of Graphic Design candidate at North Carolina State University. A proud son of mama Africa, Addy focuses his research on addressing the cultural implications of design and technology.

References

Abdel-Halim, M. (2012). 12 ways to optimize your resume for applicant tracking systems. Mashable. https://mashable.com/2012/05/27/resume-tracking-systems/. Accessed April 9, 2018.

Baer, T., & Kamalnath, V. (2017). Controlling machine-learning algorithms and their biases. McKinsey & Company. https://www.mckinsey.com/business-functions/risk/our-insights/controlling-machine-learning-algorithms-and-their-biases. Accessed April 7, 2018.

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4), 991–1013.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77–91.

Garvie, C., Bedoya, A., & Frankle, J. (2016). The Perpetual Line-Up: Unregulated Police Face Recognition in America. Georgetown Law, Center on Privacy & Technology.

Mitchell, T. (1980). The need for biases in learning generalizations. Technical Report CBM-TR-117, Rutgers University.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.

Borenstein, G. (2016). Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems. Medium. https://medium.com/@atduskgreg/power-to-the-people-how-one-unknown-group-of-researchers-holds-the-key-to-using-ai-to-solve-real-cc9e75b1f334. Accessed May 25, 2018.
