5 ethical risks AI presents for Microsoft and other tech giants

Different things come to mind for different people when they hear the words "artificial intelligence (AI)." Some envision the Hollywood renditions depicted in movies like The Terminator, The Matrix or I, Robot. Others conceive a more conservative image, such as AI players in video games or digital assistants like Cortana. Still others envision the complex algorithms powering the intelligent cloud that provides helpful insights for decision-making in business. All of these are AI, or "intelligence exhibited by machines or software and the study of how to create computers and computer software that are capable of intelligent behavior."

Though Hollywood renditions of AI are extreme exaggerations of technology far beyond what we are capable of today, they offer a warning rooted in the ethical challenges AI currently presents. AI is fundamentally "made in our image." It is built on machine learning, where humans provide systems with data from which to "create" their intelligence. A minimal sketch of that loop follows below.
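To make that concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, a common machine-learning library; the data is invented purely for illustration) of the loop described above: a model's "intelligence" is derived entirely from the examples humans hand it.

```python
# A minimal sketch of supervised machine learning: the model's "intelligence"
# comes entirely from the labeled examples humans feed it. Any skew in that
# data is learned right along with the signal. (Illustrative data only.)
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: rows are feature vectors, labels are human-assigned.
X_train = np.array([[0.2, 1.0], [0.4, 0.9], [0.8, 0.1], [0.9, 0.3]])
y_train = np.array([0, 0, 1, 1])  # labels reflect whatever judgments humans made

model = LogisticRegression().fit(X_train, y_train)

# The model can only generalize from what it has seen; examples that are
# missing or underrepresented in X_train are effectively invisible to it.
print(model.predict([[0.3, 0.95], [0.85, 0.2]]))
```

Nothing in that loop questions the data it is given; if the examples are incomplete or biased, the model faithfully reproduces the gap.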

When human biases or limited perspectives forge the basis for how these artificial systems perceive the world, they invariably reflect the negative and often stigmatizing perceptions that plague human cultures. Thus, as AI becomes part of judicial, health, employment, government and other systems, it behooves us to temper its evolution with proactive guidance that preempts a dystopian manifestation of human prejudices.

1. AI, African Americans, gender and algorithmic bias

Many people strive to separate their consumption of tech news from sobering realities such as the sordid history of racism and the biases African Americans and others still face. The interweaving of technologies that mimic human perception, such as facial recognition, into our social structures makes that an impossible task, however.

We must acknowledge that computer systems are only as reliable as the fallible humans who make them. And when it comes to AI's ability to perceive and distinguish between individuals of various skin colors and genders, human biases can make their way into AI systems. A study revealed that Microsoft's and IBM's facial-analysis services frequently failed to accurately distinguish the features of dark-skinned people, especially black women. The systems' accuracy for white males was significantly better. "Training" on data with low representation of dark-skinned people contributed to the disparity. That "oversight" is likely a derivative of the deeper problem of low representation of blacks in tech. A more diverse workforce probably would have caught the data-pool deficit.
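The underlying audit method, checking accuracy per demographic subgroup instead of only in aggregate, is easy to sketch. The following is an illustrative example with invented records, not the study's actual data or code; it shows how a respectable overall number can mask a large per-group gap.

```python
# Sketch of a disaggregated accuracy audit: the overall number looks fine,
# but breaking results out per subgroup exposes the disparity the study found.
# (Hypothetical labels and predictions; not the study's data or code.)
from collections import defaultdict

# (group, true_label, predicted_label) -- illustrative records only
records = [
    ("lighter_male", 1, 1), ("lighter_male", 0, 0), ("lighter_male", 1, 1),
    ("darker_female", 1, 0), ("darker_female", 0, 1), ("darker_female", 1, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    hits[group] += int(truth == pred)

overall = sum(hits.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.0%}")                 # one aggregate number...
for group in totals:
    print(f"{group}: {hits[group] / totals[group]:.0%}")  # ...hides per-group gaps
```

Of course, an audit like this only surfaces the gap if the evaluation set itself contains enough examples from every group, which is exactly the data-pool deficit described above.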

A more disconcerting (and admittedly uncomfortable to discuss) AI-and-race incident occurred when Google's photo-organizing service tagged black people as "monkeys," "chimps" or "gorillas." As of 2018, Google's workaround was to remove gorillas and other primates from the system's vocabulary. Given the ugly history of blacks being compared to primates, the ethical impact of AI algorithms echoing those prejudices underscores the need for a broad and diverse pool of data and people to prevent these problems.

2. Criminal justice

From government and shopping to education, transportation, business, defense, health care and more, Microsoft and others are pushing AI into every aspect of our lives and culture. In one case, AI was used in the judicial system to determine whether an offender should be released. The AI "decided" to release the man, who later killed someone. It was later discovered that relevant criminal-history data was not part of the data set the AI used to make its decision.

3. Intelligent cameras

In 2016, Microsoft introduced AI-driven camera tech that recognizes people, activities and objects, can access data about individuals, and can act autonomously. The potential for governments, employers or individuals to misuse this tech to track people, their habits, interactions and routines is profound. Additionally, last year Google came under fire, even from its own employees, for a Pentagon partnership that uses camera tech to analyze drone footage.

4. Health care

In health care, multiple studies have revealed that physicians and medical residents believed blacks feel less pain than whites. Consequently, they prescribed painkillers less often for blacks than for whites with similar conditions. Consider the potential ethical and quality-of-care disparities if AI in health care is fed data from professionals who hold these and other biases.

5. Almost human

Last year Google demonstrated Google Duplex, an intelligent bot that could navigate phone calls and make appointments while sounding indistinguishable from a human. Ethical concerns abound when users are unaware they're talking to an AI rather than a person.

Confronting the issues of bias and AI

Microsoft, Google and others have begun addressing the ethical challenges AI presents. Internal ethics boards have been formed, and acknowledgments of AI's risks have been included in companies' U.S. Securities and Exchange Commission (SEC) filings. Still, without external guidance, there are no consistent, universally applied standards, leaving an avenue for continued bias in AI.

Even with external boards, bias can remain an issue. Last year, Axon, the manufacturer of the Taser, formed a board to review AI in police body cameras. In response, 40 civil rights, academic and community groups accused the company of excluding representatives from the communities most likely to be negatively affected by the tech.

AI is increasingly part of our culture, and it, along with those creating it and governing its development and implementation, should "look" like all of us. Groups like Black in AI and Women in Machine Learning are trying to ensure just that. Still, companies are pushing AI into products like smart speakers and facial-recognition checkpoints faster than adequate systems of accountability can be formed. It will take a collective effort from all of us, diligent oversight and an honest reflection on who we are to ensure the worst parts of us aren't part of the AI upon which we increasingly rely.
