
5 ethical risks AI presents for Microsoft and other tech giants

Different things come to mind for different people when they hear the words "artificial intelligence" (AI). Some envision Hollywood representations of AI as depicted in movies like The Terminator, The Matrix or I, Robot. Others conceive a more conservative image, such as AI players in video games or digital assistants like Cortana. Still others envision the complex algorithms powering the intelligent cloud that provides helpful insights for decision-making in business. All of these are AI, or "intelligence exhibited by machines or software and the study of how to create computers and computer software that are capable of intelligent behavior."

Though Hollywood renditions of AI are extreme exaggerations of technology far beyond what we are capable of today, they offer a warning rooted in the ethical challenges AI currently presents. AI is fundamentally "made in our image": it is built on machine learning, in which humans supply systems with the data that "creates" their intelligence.
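
To make that concrete, here is a minimal sketch, assuming scikit-learn and invented toy data rather than any production system, of how a model's "intelligence" is nothing more than the patterns in the examples humans choose to feed it:

```python
# Minimal sketch (hypothetical toy data): a classifier "learns" only
# from the examples humans choose to give it. If the training set
# skews toward one group, so does everything the model "knows."
from sklearn.linear_model import LogisticRegression

# Each row is a made-up feature vector; the labels come from human annotators.
training_features = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
training_labels = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(training_features, training_labels)

# The model can only generalize from what it was shown.
print(model.predict([[0.85, 0.15]]))  # -> ['cat']
```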

When human biases or limited perspectives forge the basis for how these artificial systems perceive the world, they invariably reflect the negative and often stigmatizing perceptions that plague human cultures. Thus, as AI becomes part of judicial, health, employment, government and other systems, it behooves us to temper its evolution with proactive guidance that preempts a dystopian manifestation of human prejudices.

1. AI, African Americans, gender and algorithmic bias

Many people strive to separate their consumption of tech news from sobering realities such as the sordid history of racism and the biases African Americans and others still face. The interweaving of technologies that mimic human perception, such as facial recognition, into our social structures makes that an impossible task, however.

We must acknowledge that computer systems are only as reliable as the fallible humans who make them. And when it comes to AI's ability to perceive and distinguish between individuals of various skin colors and genders, human biases can make it into AI systems. A study revealed that Microsoft's and IBM's facial-analysis services were frequently unable to accurately distinguish the features of dark-skinned people, especially black women, while their accuracy rates for white males were significantly better. Training on data with low representation of dark-skinned people contributed to the disparity. That "oversight" is likely a derivative of the deeper problem of low representation of blacks in tech; a more diverse workforce probably would have caught the data-pool deficit.
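
The kind of audit that surfaced that disparity can be sketched in a few lines. The group names and results below are hypothetical stand-ins; the study itself evaluated commercial services against a benchmark of labeled photos:

```python
# Hedged sketch of a disaggregated accuracy audit: measure a
# classifier's accuracy separately for each demographic group.
# Groups, predictions and numbers here are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassified
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(results))
# -> {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

A single aggregate accuracy number would hide exactly the gap this per-group breakdown exposes.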

A more disconcerting (and admittedly uncomfortable to discuss) AI-and-race incident occurred when Google's photo-organizing service tagged black people as "monkeys," "chimps" or "gorillas." As of 2018, Google's workaround was simply removing "gorilla" and other primate labels from the system's vocabulary. Given the ugly history of blacks being compared to primates, AI algorithms that echo those prejudices underscore the need for a broad, diverse pool of data and people to preclude such problems.
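
Mechanically, that kind of workaround amounts to suppressing labels after the fact rather than retraining the model. A sketch, with the structure invented for illustration (this is not Google's actual code):

```python
# Sketch of a label-suppression workaround: instead of fixing the
# underlying model, filter the offending labels out of its output.
SUPPRESSED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def visible_labels(predictions):
    """predictions: list of (label, confidence) pairs from a classifier."""
    return [(label, score) for label, score in predictions
            if label.lower() not in SUPPRESSED_LABELS]

print(visible_labels([("dog", 0.91), ("gorilla", 0.40)]))
# -> [('dog', 0.91)]  # the suppressed label never reaches the user
```

The mislabeling risk is not removed; it is merely hidden, which is why it reads as a workaround rather than a fix.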

2. Criminal justice

In government, shopping, education, transportation, business, defense, health care and more, Microsoft and others are pushing AI into every aspect of our lives and culture. In one case, AI was used in the judicial system to determine whether a criminal should be released. The AI "decided" to release the man, who later killed someone. It was later discovered that relevant criminal-history data was not part of the data set the AI used to make its decision.
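
To illustrate how that kind of omission can flip an automated decision, here is a purely hypothetical sketch; the fields, weights and threshold are invented for illustration and bear no relation to any actual system:

```python
# Purely hypothetical sketch: a toy release-risk scorer whose decision
# flips when a relevant record is missing from its input.
def release_recommendation(record):
    risk = 0.0
    risk += 0.4 if record.get("pending_charges") else 0.0
    risk += 0.5 if record.get("prior_violent_offenses", 0) > 0 else 0.0
    return "detain" if risk >= 0.5 else "release"

complete = {"pending_charges": True, "prior_violent_offenses": 2}
incomplete = {"pending_charges": True}  # criminal history never loaded

print(release_recommendation(complete))    # -> detain
print(release_recommendation(incomplete))  # -> release
```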

3. Intelligent cameras

In 2016, Microsoft introduced AI-driven camera tech that recognizes people, activities and objects, can access data about individuals, and can act autonomously. The potential for governments, employers or individuals to misuse this tech to track people's habits, interactions and routines is profound. Additionally, last year Google came under fire, even from its own employees, for a Pentagon partnership that uses the company's tech to analyze drone footage.


4. Health care

In health care, multiple studies have revealed that physicians and medical residents believed blacks feel less pain than whites. Consequently, they prescribed painkillers less often for black patients than for white patients with similar conditions. Consider the potential ethical and continued quality-of-care disparities if AI in health care is fed data from professionals who hold these and other biases.

5. Almost human

Last year Google demonstrated Google Duplex, an intelligent bot that could navigate phone calls and make appointments while sounding indistinguishable from a human. Ethical concerns abound when users are unaware they're talking to an AI rather than a person.

Confronting the issues of bias and AI

Microsoft, Google and others have begun addressing the ethical challenges AI presents. Internal boards have been formed, and acknowledgments of AI's dangers have been included in companies' U.S. Securities and Exchange Commission (SEC) filings. Still, without external guidance there are no consistent, universally applied standards, leaving an avenue for continued bias in AI.

Even with external boards, bias can remain an issue. Last year Axon, the manufacturer of the Taser, formed a board to review AI in the body cameras used by police. In response, 40 civil rights, academic and community groups accused the company of excluding representatives from the communities most likely to be negatively affected by the tech.

AI is increasingly part of our culture, and both the technology and those creating it, governing its development and implementing it should "look" like all of us. Groups like Black in AI and Women in Machine Learning are trying to ensure just that. Still, companies are pushing AI into products like smart speakers and facial-recognition checkpoints faster than adequate systems of accountability can be formed. It will take a collective effort from all of us, diligent oversight and an honest reflection on who we are to ensure the worst parts of us aren't part of the AI on which we increasingly rely.

Jason L Ward is a columnist at Windows Central. He provides unique big picture analysis of the complex world of Microsoft. Jason takes the small clues and gives you an insightful big picture perspective through storytelling that you won't find *anywhere* else. Seriously, this dude thinks outside the box. Follow him on Twitter at @JLTechWord. He's doing the "write" thing!

27 Comments
  • Sometimes I wonder if Bleached is an AI-driven chatbot experiment by Microsoft to make the future of chat (and artificial trolling) seem more natural 🤔🤔🤔 If he is, then MS is the best in that field of research.
  • I was hoping this one would be thoughtful and interesting, but leave it to Jason Ward to fearmonger instead of inform. This is right up there with "5G will give us cancer." I won't bother giving the benefit of the doubt again. BTW, Jason — a picture of you, thinking? Seriously?
  • How is it fearmongering to give actual examples of documented, real occurrences of ethical concerns with AI, occurrences that companies have acknowledged, formed boards to address and covered in their SEC filings as an ongoing concern? Perhaps you were aware of each of the five points addressed, but others were informed about the reality of AI's inability to distinguish between dark-skinned people, AI identifying blacks as primates, health care where professionals believe blacks feel less pain than whites, and the incident where AI determined that a criminal who ultimately murdered someone should be released. Some individuals do find informative the industry-acknowledged reality (do a search) that AI is often trained without the benefit of the diversity represented in the broader culture, and that this can have, and has had, the effects listed in the article. So, though you feel this isn't thoughtful, I put a lot of thought into it, and hopefully there are more people like me who are concerned about the implications already made evident. I hope this type of article provokes empathy rather than this type of response.
  • He is thinking from the point of view of a layman, failing to acknowledge the information we feed the AI.
  • It's not fearmongering; you must be an idiot to even think that.
  • I don't think these are examples of social biases showing up in the data humans feed AI; they're really just examples that show the intricate nuances of life that are easy and obvious to us as humans but hard to translate into data. After all, a computer only knows what we tell it, and clearly, for AI to be truly intelligent, it's going to need a heck of a lot more data. For example, if you tell the computer that a face with two eyes, a nose and a mouth with dark skin colour is a monkey, but don't teach it the subtle differences between monkeys and humans, then you're going to get the obvious result. It doesn't mean somebody programmed the AI to insult black people. It just means the AI needs more information.
  • Please reread the article, Vincent. The claim is not that the AI is intentionally programmed to insult people. The point is expressly made that the machine-learning process that "creates" the artificial intelligence is not provided sufficient data reflective of the diversity of the human race, which, as I expressly point out, is likely a derivative effect of the low representation of blacks in tech: the limited supply of information coming from the demographic most represented in tech, white males, accounts for the high rate of AI accuracy identifying white males and the low accuracy identifying blacks. Again, please reread the article, because you seem to be debating a point that is neither made nor implied in the text. FROM THE TEXT: We must acknowledge that computer systems are only as reliable as the fallible humans who make them. And when it comes to AI's ability to perceive and distinguish between individuals of various skin colors and genders, human biases can make it into AI systems. A study revealed that Microsoft's and IBM's facial-analysis services were frequently unable to accurately distinguish the features of dark-skinned people, especially black women, while their accuracy rates for white males were significantly better. Training on data with low representation of dark-skinned people contributed to the disparity. That "oversight" is likely a derivative of the deeper problem of low representation of blacks in tech; a more diverse workforce probably would have caught the data-pool deficit.
  • Yes - and it is still a case of bias. No one is suggesting that bias and racism are intentional and malicious in all cases, but they creep in, and one of the biggest reasons, like you mentioned, is lack of data. Most AI training (in the US at least) happens on white faces, so the systems can't easily work with non-white faces. They need more data, and it is the *job* of those who lead AI development to make sure their AI has enough data to avoid such issues. It just goes against the commonly held but naive idea that AI is always unbiased and neutral. Theoretically, maybe. In practice, no.
  • Jason, kind of a side conversation, but still related to the article: according to your research and personal experience, why do you believe there is such a disparity in racial diversity in machine learning, or even the tech sector in general? I realize that this is overly broad and could have thousands of reasons, but do any points stick out to you? I am sure we can all agree that tech should have a workforce with diverse backgrounds (not just race or sex) to reduce potential issues of bias, as you state in the article. So what can we do as tech enthusiasts to encourage those outside the stereotypical white male/Asian male category to see that tech can be a great career or even hobby? If you have already written a piece about this, I apologize in advance.
  • Actually Sudokoode, thanks for the question. I did in fact write an in-depth article on this about a year ago, and I'm very glad to reshare it here: A candid discussion on African-Americans, race and the tech industry https://m.windowscentral.com/microsoft-tech-industry-african-americans-and-inequality
  • Posting another response here, as I am interested in thoughts on this topic: it seems to me that the article leans toward a "push" approach to keeping machine learning and AI less biased, meaning these companies should be hiring individuals from all sorts of racial backgrounds to help fix the issue. Why do you believe a "pull" approach, whereby the customers of the products demand better-trained algorithms, will not work? Certainly people of color were turned off and insulted by Google's monkey algorithm. Although this may sound like a naive or overly simplistic example, it is well-intentioned: say I am an English speaker from the southern United States with a very thick accent, and most of the coders of Alexa are in Seattle, with a very different accent. If Alexa does a terrible job interpreting my voice, will I not just opt to buy a Google Home if it does a better job?
  • Good question. A push approach, as you put it, is more comprehensive in that it's proactive. It positions the company to feed from the perspectives of the diverse culture it serves, so it is better equipped to build systems that reflect that diversity and are less likely to express biases, because those systems will have been designed on an inclusive foundation from the start. The push approach also addresses the broader issue of hiring practices that may not be as inclusive as they should be. The "pull" approach is reactive and creates something absent the breadth of diversity reflective of the culture served. This approach leaves the company prone, as we have seen occur on more than one occasion, to creating systems that don't reflect the diversity of the culture and are more likely to manifest the cultural biases that are still present. It also leaves companies with PR issues, potential bad press, lost goodwill, and additional financial and time costs from having to go back and redo what could have been done right the first time. These are just some thoughts. I'm sure there are others.
  • That's an approach that works in theory. Often, there is no one offering a product that works. If you have nowhere to "jump ship" to, hardly anyone has an incentive to cater to you. And if someone does, to get an edge over a competitor, they are really moving toward a "push" approach.
    Secondly, people shouldn't have to deal with this. It's not a requirement but an ethical responsibility, which many feel lies on the creators of AI (I believe so too), to ensure that things work on day one. Why not have a proactive approach? Why not push first and, if something doesn't quite work, "pull" as well?
  • You know that was funny, Jason😏
  • The issue I would point out is the risk of imposing bias on the AI when it's observing an objective correlation. There are conclusions AI might make that we don't like that are still correct. For example, an AI might pick mostly men for a job-application goal because men are probabilistically more likely to have the desired characteristics for the job in question. That is, in the context of the AI's goal, a demographic piece of data might actually be informative, which is not actually bias. In an age where politics are dominated by feelings, I suspect the presence of such groups as 'women in AI' is as likely to introduce bias to AI as anything else.
    For example, here, the article claims bias in identifying black faces. However, black folk are a small percentage of the population, roughly equivalent to around twice the number of East Asians. They are, for all intents and purposes, a statistical minority, and there's no real reason to suspect some kind of bias at play if AI has received less data to train on for their faces. Moreover, there's no special reason why they would be singled out; there is a great diversity of ethnicities in Western countries. It's not merely, as is often represented, simply black and white people. And none of the statistical reality necessarily has anything to do with programmers or 'badly selected training data'; it could merely be a product of existing demographic populations. The ASSUMPTION that any disparity is arrived at because of systemic or personal bias is a bias in itself. You might validly ask, "To be inclusive, can we train on the faces of ALL ethnic groups that are minorities in our locale?", but to leap from that to "there are not enough black programmers" or "the programmers are biased" is not logical at all. And actually, to me, it suggests that we might well end up introducing bias to AI under the banner of preventing it, because the AI coldly comes to conclusions we just don't like. Much in the same manner, we have banned many scientific studies that are perfectly robust in the modern era because we didn't like their conclusions.
    To put it simply, I don't trust the current progressive activist mindset to know what bias actually is. There tends to be an emphasis on non-objective theories like social constructivism and intersectional theory. These theories have already majorly butted heads with actual evidence in the academic and scientific realm; for example, the Nobel Prize-winning co-discoverer of DNA's structure was ousted from the scientific community for saying that intelligence is mostly genetic, which genetic studies suggest it is. Compassion, tolerance and inclusion are important ethical considerations, but the moment you make such activists arbiters of truth, you lose sight of truth. The chief goal of any AI is intelligence, and there is plenty of real risk in infecting AI with our lack of it if we give ideologues the final say. Thankfully, no one is saying that AI's accuracy should be arbitrated by fundamentalist religion. But unfortunately, for all the good intentions, this is not much different, as such people are also opposed to facts when they disagree with their internal narrative and belief structure.
  • Hi, I think you make a few leaps here that miss the impact of AI's shortcomings. The examples in the piece where AI failed to distinguish differences between dark-skinned people but succeeded in correctly identifying white men pose real-life dangers, biases, etc., when you consider that the technology doesn't exist only for demonstrative purposes. Facial recognition is implemented at TSA checkpoints, is intended for use in law enforcement, is desired for use in tracking individuals through airports, is meant to be used in tracking and supporting patients and doctors in hospitals (i.e., MS AI-supported cameras), and to track employees in a work area, students, etc. When the technology is meant to be integrated into our human social structure and culture and support our decision-making regarding everything from safety and health care to criminal activity and more, then those systems must be equally capable of identifying and distinguishing between people of all skin colors. When it's evident at this early stage that it has failed in its ability to do this, then ethical questions are rightfully brought into greater focus, because if the tech is allowed to move forward faster than systems of accountability are established to ensure the tech will work for all people equally, then the tech will be biased in its function. That's why a bipartisan group in Washington state introduced a bill banning the use of facial-recognition tech in local and state government until it is proven it can accurately distinguish between race, skin color, gender and more. (By the way, Microsoft is fighting this bill.)
    So asking why the tech is less capable of distinguishing between people of darker complexions is a fundamental question to begin addressing the problem. One of the immediate answers is that it was trained with fewer dark-skinned people. The next question to ask, then, is why was it trained with fewer dark-skinned people? The answer has both direct and more nuanced responses. There are fewer dark-skinned people in the culture in which the AI is being developed, resulting in an ethnocentric, rather than universal, training model: the AI is trained using input that reflects primarily the skin color most seen in the culture, rather than proactively thinking of the applications of AI and training it with a model that ensures it can distinguish between all peoples within a culture. Therein lies a bias. The next point is more nuanced: most of the team developing the AI is likely white, and thus they likely shared, in broad terms, similar cultural experiences and social and cultural perspectives. The lack of diversity on the team, where different cultural experiences and perspectives should have been part of the design process but were omitted, led to a lack of consideration of the broad social impact of an AI that looked like, or reflected, the homogeneity of its designers and was incapable of seeing the diversity of the real world. Therein lies a bias.
    The fact that blacks are underrepresented in tech is real and acknowledged, even considering a smaller representation in the general culture, by multiple entities, such as tech companies, education systems, STEM leaders and groups, community activists, political leaders, teachers and more, as a problem that reflects inherent and systemic biases rooted in the present impact of centuries of racism. The derivative effects are still part of various systems, including wealth distribution and segregation, which ranged from education disparities to social engineering that strategically divided municipalities and resources to marginalize African Americans in certain communities with less access to resources, health care, etc., for decades after slavery and Jim Crow laws ended. And it is now only about 60 years after the Civil Rights movement, where there were battles and political fights to eat at the same restaurants, drink from the same fountains, be treated equally under the law, and so much more. I mean no insult to anyone here when I say I think it naive to believe that in a mere 60 years the deep-rooted, systemic impact, resource division and more would be resolved. For centuries, land, businesses, business relationships, houses, money, relationships in industry and education, and open doors to greater opportunities have been handed down, generally speaking (I know there are many exceptions), to whites in America to a far greater extent than blacks due to slavery and institutionalized racism.
    Just considering that reality, minus feelings, personal perspectives, personal experiences or values, how would someone completely removed from and outside that reality logically analyze the impact of that history on members of society: what factors might persist over time, how long would change toward true equality really take, and what consistent actions would be required to support that change? An objective observation would likely conclude that it would take more than the mere 60 years, less than a lifetime, since the Civil Rights movement. And there is continued acknowledgement of persistent biases expressed in the culture, from the intentional (like the editor who recently allowed and supported a newspaper editorial in which the writer encouraged the Ku Klux Klan to clean up Washington and lynching was supported, or where thousands of white supremacists march in the streets demanding an all-white US, or where white supremacist leaders boldly share how they are actively putting people in positions of power in law enforcement, politics, etc., to perpetuate their cause, or how cops consistently kill blacks with no repercussions) to the unintentional, like the biases that make it into AI. There are biases and systemic issues still deeply rooted in the general culture that impact blacks' greater representation in tech.
    Finally, since AI, particularly in its facial-recognition applications, is meant to be part of our world, used in various capacities from security in local, state, national and international settings and much more, where people of many ethnicities would be part of the pool of people affected, the programmers at companies like Microsoft, IBM, Google and Facebook, who are predominantly white men in Western culture, must realize that most of the world is not white. From a global perspective, whites are a minority, so training a system that is ultimately targeted for application in a brown world that is most accurate at identifying white male faces is not only failing to think proactively, it's failing to think realistically.
  • Thanks, Jason. It's important people recognize unintentional biases. The inability to pick out dark-skinned faces is itself the bias that must be confronted. It does not mean someone programmed it to be that way. Adding more relevant data to the AI reduces that bias.
  • The study of things like implicit bias is largely a farcical area of academia that's been repeatedly debunked. It's very hard to tell whether something is normal demographics, happenstance, the result of inaccurate bias, or genuine correlation, because bias is motivational: you can't read people's thoughts. I think the honest answer to the "why?" question is that we don't know and rarely ever will. Maybe with some kind of well-designed, double-blinded study we might, but we certainly won't get there by guessing.
    You are broadening what you found to areas where the facial recognition is not being applied. The product might have been developed for a target market. Perhaps users of that service are mostly white, so they trained that first. Clearly, developing a feature first for your primary demographic is a commercial practicality. I don't know the products, nor the developers, so I don't know. But I can speculate reasons that have nothing to do with cultural or personal bias.
    I think far-left activist groups are more likely to inject worrying biases into AI, in left-leaning Silicon Valley, than white supremacists are, which is why I think that giving too much control to activist groups is a terrible idea. Who has more influence in general in Silicon Valley, racists or left-leaning ideologues? We know, for example, that such people believe in a perceived oppression that is hard to empirically quantify or, depending on one's persuasion, impossible. It doesn't necessarily bear out in the data. An AI wouldn't see it, if it cannot be seen in data, unless it was intentionally injected by humans who believed in it.
    The ethical questions surrounding AI, especially agent AI, go far beyond the idea of it being programmed, because the whole idea of AI is that it is increasingly independent and increasingly sophisticated. In fact, I'd be far more worried about man teaching machines violence than about an unconscious racism that is hard to prove is occurring at all. I guess in that respect, I don't share your utopian view: I think corporate toying with AI will lead to nothing but disaster in the long run. My hope isn't that we have well-meaning but generally ineffectual ethics committees that can't actually regulate markets, but that we get the wake-up call, when it comes, and turn around and make proper regulations confining AI development the way we have for human genetic engineering.
    "When the technology is meant to be integrated into our human social structure and culture and support our decision-making regarding everything from safety and health care to criminal activity and more, then those systems must be equally capable of identifying and distinguishing between people of all skin colors." Well, A) if you can provide an example of a flawed system that is actually intended to be used that way, I'm all ears. B) I think almost ALL of those things are terrible ideas. People are flawed, and programming is flawed, and everything we teach machines will be human, and thus imperfect, no matter how hard we try. Hell, AI could easily make some cold and terrible deduction without any human influence (see the movies you mentioned). It might even be the logically correct one, no bias involved, but one that harms people. Outsourcing life-impacting societal choices to machines will result in human suffering. That's guaranteed, IMO. If my life depended on my Windows machine never blue-screening, or autocorrect being accurate, I'd never have made it to puberty. I'd probably never have made it out of the womb. Likewise if I depended on some human being always being right.
    The whole system of society we have is designed around that. Removing checks and balances, peer review and democratic processes and ceding it all to machines is just a bad idea. And you might say, well, think of all the good it can do? My response would be: perhaps some, but what are the major funding motivations for such technology? Military, commerce, government. Not helping people, or charity. I'm not a Christian, but I'm inclined to paraphrase the Bible: what is seeded is the fruit that grows. The motivation is selfish; the product will be selfish. I mean, your argument about those applications is certainly fine, that those particular examples would require very accurate AI (as opposed to the applications those services are likely actually used for right now). Personally, I think the ethical problem would be relying on technology in that way at all in the first place. Some kind of 1984 surveillance state, where human control is given over to machines and people become literal data points, doesn't suddenly become ethical in my mind because it can correctly identify black people. In fact, if I lived in such a state and it couldn't identify me accurately, I might consider that an advantage.
    Just as confusing: when is AI actually aware? We won't know. We often don't treat higher animals as aware, or afford them rights. The entire point of us developing AI isn't to birth some creature, but to replace the human slaves that we left behind in the industrial era with a new unpaid, rightless underclass. We are rapidly approaching the point, in imitating human cognitive processes, where that becomes a major ethical issue, IMO.
  • The answer to "why" the bias is manifest in AI systems is far simpler than the the reference you give to why biases are manifest in people. People make the AI systems. The system puts out what's put in. Simply put. This is not a claim that it is intentional bias, but it is evident. Mostly white faces into the machine learning, better accuracy of white faces out. It's that simple. The problem is that because the tech is meant for broad use, either short-sightedness, ignorance, ethnocentricity, or less likely but just listing possibilities, outright racism was part of their tech contributed to thier limited diversity in data to the machine leaning training system. I think more diverse teams, as I said above, would have been less likely to be as limited in the data they provide as the teams in various companies who are currently building AI, with similar biased results. And you reference who the product is for. Answer: everyone. Its the basis for the tech that is designed to be used in local and state governments, throughout airports and everywhere facial recognition in human society, like Microsoft's AI driven cameras in hospitals and on worksites, is being and is planned to be applied. I hope you really are all ears🙂 because, no, the high accuracy AI had for white men was not because the tech was put in a consumer product targeted for a market that is comprised predominantly of white men. As I said above, it is tech, designed to be broadly applied throughout various aspects of the world that again, is mostly non-white. So an inability to distinguish between dark-skinned faces when designed to support health care (watch Microsoft's AI driven camera presentation), recognize a lost or kidnapped child/person, and so many other applications real-world, is a problem. This isn't about compartmentalized left or right perspectives. The world is far bigger than those tiny little boxes. I don't categorize myself as either. The issue is an issue on its own merits and is not fundamentally political in nature, it speaks to what is right or wrong in that will some people be hurt due to the real biases in the system because white men used mostly white faces to train a system for a technology meant to be applied in a predominantly colorful world? Here is some additional reading, one of which is linked at the end of the piece: This deals with a bill to stop AI use in Washington's state local/state government until its proven it can distigush race, gender, skin color https://www.wired.com/story/microsoft-wants-rules-facial-recognition-just-not-these/ This is Microsoft's acknowledgement of the ethical challenges of AI https://www.wired.com/story/microsoft-wants-stop-ai-facial-recognition-bottom/ This references object research revealing racial and gender biases in AI https://amp.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals?__twitter_impression=true
  • If an AI picks out people qualified for a particular job who happen to be mostly men, it's probably not bias. If it keeps doing so for different kinds of jobs despite evidence that shows otherwise, that's an example of bias; it's more implicit. However, if the AI uses the demographic "men" as part of its decision-making process to choose qualified candidates, that's an explicit bias. That should be avoided at all costs, because people shouldn't get points for being men, but rather for their qualifications. Please name some genetic studies that show that intelligence is genetic, and what the relevance of that is to this discussion - are you suggesting racial differences in intelligence? What you're not seeing is that the theories you mention are *also* making leaps. They are simply saying "we see a difference; it must be nature," whereas social theories are trying to say it must be nurture. Neither is entirely objective, because it is incredibly hard to make conclusive judgments from one study or two. Your argument about studies being systematically censored sounds like conspiracy to me, as someone in the natural-science field. Social sciences and natural sciences are very different, and the latter don't have nearly as much of a progressive "ideology" as the former.
  • There's a wealth of studies on intelligence and genetics, from twin studies to studies of particular genes. The heritability estimate is pretty much consensus, somewhere between forty and sixty percent. It's a guess, but it's getting narrower. I'm not posting studies, though; Google is there if you are curious. It's too large a topic for a comments section. Twin studies are generally pretty indicative of biological versus environmental factors, despite their shortcomings. I didn't claim racial differences in IQ; I claimed a strong genetic basis for IQ. There aren't even really such things as races; there are haplogroups. Scientifically, races don't exist, at least not in any sort of hardline alt-right way.
    A data point is only invalid, or biased, if it doesn't actually connect to the desired outcome. If a study, for example, used 'men' as a metric to estimate physical strength or spatial logic, it could be accurate. Likewise if it used 'women' for language ability. If the AI was choosing candidates for jobs, and it selected married men and single women for their expected hours at work and tenure, it could be generally accurate. My point there was that referencing a culturally sensitive data point like demographics doesn't in itself make the AI wrong. It's quite conceivable such a data point could be predictive, and if an AI's job is to detect patterns and trends, then blocking that is not avoiding bias; it's implanting it. Pattern recognition, or betting on some outcome based on generalities, is the exact thing AI is designed to do a lot of the time. And it happens to be something humans are culturally forbidden from doing, because humans, rather than AI, often do it wrong. This is an important conflict to note: if you give an AI impartial, unbiased data, it might make decisions that are ACCURATE but that make us uncomfortable culturally, because humans do it INACCURATELY sometimes when they refer to patterns and generalities.
    This is already a problem, as I pointed out, in science. There are statements, studies and so on that are very robust that have been socially rejected because people don't like the implications. Likewise, science has been promoted and embraced, such as parts of social science, that has been shown to be indistinguishable from intentionally manufactured word salad, because we find it 'well meaning': stuff that has no basis in empiricism or data at all. If we apply the same standard of truth to AI, of 'I like it and I don't want to hear bad things,' AI will suffer the same fate as academia, and it will churn out pleasant-sounding, and potentially harmful, nonsense.
    Don't get me wrong: if I were dictator of the world, I would declare tomorrow that using reward and punishment systems to develop true agent AI is illegal, and that all AI decisions must be reviewed by real people and be transparently available to the public for criticism. I would create a sweeping law that would massively restrict AI. I'm not a fan. But if the point of AI is to notice things we don't, or be more accurate than us, perhaps we should think harder about who is wrong or right, and try not to get feelings too involved. Or just not ask those types of questions in the first place!
  • Great journalism. Good research to back up your claims and fairly easy to read as well as being very informative.
  • Thanks for the support Rahsna👍
  • I think Windows Central needs to put unbiased ML into their comment section, because I often see the same people bringing the same negative outlook with them to every Jason article, skewing everything meaningful in the direction of doubt and trolling. By that logic, all comments that take away from the meaning the article was written to convey should be at the bottom, not the top.
  • Hmmmm. Very interesting. 😉
  • Could I respectfully suggest, Jason, that MS has a greater ethical question to address than AI? Is it ethically responsible to enter a $480 million contract to develop HoloLens to train the military to be more efficient killers?
  • Hi Long Xuyen, I got you covered :-): How the U.S. military plans to use HoloLens 2 to gain an edge in warfare https://www.windowscentral.com/how-us-military-plans-use-hololens-modernize-warfare