Last February, at the World Government Summit 2019 in Dubai, Christine Lagarde, the first woman to serve as managing director of the International Monetary Fund (IMF), sat in front of her audience and gave them some news: “AI and the fourth industrial revolution will have a more severe impact on women than men,” she told the crowd, explaining that this is “not because women are stupid, but simply many of the tasks done by women are more routine tasks, are more easily automated.” She added that 11% of women’s jobs will be affected by technology in the future, compared to 9% of men’s.
The way in which AI could reduce the need for specific segments of the human workforce is indeed a massive concern, but it falls within a larger issue: how artificial intelligence interacts with various human groups and categories. In a word, bias. As INSEAD professor Theodoros Evgeniou explains, one of the most urgent AI issues to address today is fairness: algorithms can reproduce the biases of their developers, or even develop their own biases when trained on flawed datasets.
For example, a study conducted by the University of Virginia in 2016 showed that two large image collections used in machine learning, one of which is backed by Microsoft and Facebook, displayed severe gender biases: images of shopping and cooking were linked to women, while visuals of coaching and shooting were associated with men.
Biased dataset, biased AI
The problem came to the fore last April, when a group of AI researchers from Google, Facebook, Microsoft and various universities penned an open letter calling on Amazon, a frontrunner in the field, to stop selling its facial recognition technology to law enforcement. A study by the MIT Media Lab in January had found that Amazon’s Rekognition had a significantly higher error rate when identifying the gender of individuals who were female or darker-skinned. “Flawed facial analysis technologies are reinforcing human biases,” Morgan Klaus Scheuerman, a PhD student at the University of Colorado Boulder and one of the letter’s 26 signatories, told The Verge.
Another signatory, Caltech professor and former principal scientist at Amazon’s AWS subsidiary Anima Anandkumar, told Wired that the risk that AI systems will harm certain groups is higher when research teams are homogeneous. Her work shows that when a group of similar people inputs data, their biases enter the algorithms as well, resulting in biased outputs. With only 12% of leading machine learning researchers being women, according to research by Wired and the Canadian startup Element AI, it is no wonder that the gender imbalance in the development field translates into skewed AI results.
Even worse, machine learning doesn’t just mirror biases; it amplifies them, as Harvard PhD and data scientist Cathy O’Neil explains in her book Weapons of Math Destruction. O’Neil looked at how biases can distort mathematical models and ultimately reinforce discrimination. “Models are opinions embedded in mathematics,” she writes, warning that algorithms can affect our lives in aspects we wouldn’t even imagine, from finance to health, education, justice and recruitment.
Countering amplified discrimination
If AI can be used anywhere, including in HR departments that screen resumes, its embedded gender biases are more worrisome than they first seem. Take Amazon’s aborted AI hiring and recruitment system: the algorithm screened and ranked candidates but displayed the very biases Amazon had originally hoped the technology would avoid, rating male applicants higher than female ones because its training data reflected the company’s historical preference for men.
Writing for the World Economic Forum last January, Ann Cairns, vice chairman of Mastercard, explained that “the major problem with AI is what’s known as ‘garbage in, garbage out’. We feed algorithms data that introduces existing biases, which then become self-fulfilling. In the case of recruitment, a firm that has historically hired male candidates will find that their AI rejects female candidates, as they don’t fit the mould of past successful applicants.” According to Cairns, building inclusion, empowerment and equality into both teams and training data can help yield better results.
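To see how “garbage in, garbage out” plays out, consider a minimal sketch in Python. It is purely illustrative, not Amazon’s or anyone’s actual system: a simple model is trained on synthetic hiring records in which men were historically favored, and it learns to prefer an equally skilled male candidate over a female one.

```python
# Purely illustrative sketch, not any real company's system: a model
# trained on historically biased hiring data reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: a "skill" score and a gender flag (1 = male).
skill = rng.normal(size=n)
male = rng.integers(0, 2, size=n)

# Historical decisions favored men regardless of skill.
hired = (skill + 1.5 * male + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, male])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only by gender:
print("P(hire | female):", model.predict_proba([[1.0, 0]])[0, 1])
print("P(hire | male):  ", model.predict_proba([[1.0, 1]])[0, 1])
# The male candidate scores far higher: garbage in, garbage out.
```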
Similarly, Megan Bigelow, founder and president of PDX Women in Tech, an American nonprofit that strives to empower women and underrepresented groups in tech, tells Communicate that women and other minorities are indeed affected by AI because the biases of scientists seep into the technology. “Behind every new tech company or product are the humans who built it – each with their own assumptions, beliefs, values, biases, hopes and dreams,” she wrote in a 2018 piece for Oregonbusiness.com. To fix the problem, Bigelow argues, “First, we need to recognize that this is a problem for all of us, not just for women […] Diversity and inclusion are the right things to do for the healthy existence of humanity. Second, the systemic issue is a lack of financial access.” In other words, people should get equal access to opportunities and capital regardless of their background, gender or any other criterion.
Moreover, developing standards for testing AI so that biases are identified early on is yet another strategy scientists are exploring, even though, as Abeer El Tantawy, an educational specialist working for a chemoinformatics company, asks, “the question is: what will be included in the algorithm to adjust for such biases?”
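What such a test might look for is easier to show than to describe. One common check is demographic parity: whether a model selects people from different groups at similar rates. Below is a short, hypothetical Python sketch of such a check; the data, group encoding and review threshold are invented for illustration.

```python
# Hypothetical bias test: compare a model's selection rates across
# two groups (demographic parity). Data and threshold are invented.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

preds = [1, 0, 1, 1, 0, 1, 0, 0]   # model's hire/no-hire decisions
gender = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = male, 1 = female (hypothetical)

gap = demographic_parity_gap(preds, gender)
print(f"Selection-rate gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
if gap > 0.2:  # threshold chosen for illustration only
    print("Model flagged for human review.")
```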
Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League (a collective aiming to fight bias in algorithms), uncovered in her thesis major racial and gender bias in AI services from multinationals such as Microsoft, IBM and Amazon. In her view, who codes, how we code and why we code all matter. By answering these three questions, organizations can identify biases and curate training sets inclusively, taking into consideration the social impact of technology on people.
And another team at the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) is working on a solution: an algorithm that can “de-bias” data automatically. The algorithm is designed to look for hidden biases within the data, but it has yet to be tested.
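Automatic de-biasing can take many forms. One of the simplest, shown below as a Python sketch (and not necessarily the method CSAIL is building), is “reweighing”: each combination of group and outcome in the training data is weighted so that the protected attribute becomes statistically independent of the label, and a learner can then consume those weights during training.

```python
# A simple de-biasing idea ("reweighing", not necessarily CSAIL's
# method): weight each (group, label) combination so the protected
# attribute becomes statistically independent of the outcome.
import numpy as np

def reweigh(group, label):
    """Per-sample weights: P(group) * P(label) / P(group, label)."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            observed = mask.mean()
            if observed > 0:
                expected = (group == g).mean() * (label == y).mean()
                weights[mask] = expected / observed
    return weights

# Example: "hired women" are rare here, so they get weights above 1,
# which a learner can use via its sample_weight argument in training.
gender = [0, 0, 0, 0, 1, 1, 1, 1]
hired = [1, 1, 1, 0, 1, 0, 0, 0]
print(reweigh(gender, hired))
```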
The search for solutions continues as AI, and the challenges it brings, increasingly become part of everyday reality.
This article is part of Communicate’s June print edition.