Microsoft's moral stance on facial recognition is good for everyone (especially Microsoft)

In recent years, AI-driven facial recognition systems have been in the news for failing to recognize dark-skinned people and women as consistently as they recognize white males. IBM, Microsoft, Google and other companies driving this technology within their products, selling it to other businesses and hoping to implement it throughout governments, municipalities and the private sector have all felt the impact of the technology's shortcomings.

Google's system identified African Americans as primates, forcing the company to remove certain content from its system to preclude the association going forward. A recent study showed that IBM's and Microsoft's facial recognition tech was far less accurate at recognizing women and minorities than white men.

To prevent the bias and discrimination these systems could impose on society if left unchecked, Microsoft is leading a movement for their regulation by the government. This commitment isn't only about doing the right thing, however. Microsoft stands to make a lot of money if governments, the tech industry, the private sector and consumers perceive it as a trusted leader and provider of AI-driven facial recognition camera technology.

Immature tech is bad for business

During Microsoft's 2017 BUILD Developers Conference, the company introduced its AI-driven camera technology as part of its edge computing strategy. The technology can recognize people, activity, objects and more, and can act proactively based on what it sees. Microsoft demonstrated how it notified a worker on a worksite that another worker needed a tool located near him. Microsoft also showed how, in a hospital setting, the system, connected to a patient's data, alerted staff to his needs as it "watched" him walking, distressed, down a hallway.

The strength of this system is that it is software-based and can be deployed across camera systems already in use by businesses, schools, governments, municipalities and more.

Microsoft seeks to dominate the industry as a platform company by providing industry-standard software and tools like Office, Azure and more to businesses and governments to help them "achieve more." Microsoft's AI-driven camera tech is just another platform the company hopes to sell to businesses and governments so that, as those entities "achieve more," Microsoft will gain market dominance and make more money. Because the technology is still relatively young, however, bad press is bad for business.

Bias in, bias out

Image credit: facetofaceafrica


The biases currently reflected in AI-driven facial recognition systems are likely a result of the (perhaps unintentional) biases inherent in the machine learning processes used to train them. White males make up the majority of people working in IT, so the perspectives of the teams creating these systems are relatively homogenous. The breadth of input, the array of considerations, the assortment of models and the forward-looking assessment of the technology's impact on certain groups that a more diverse team would have contributed to building and training these systems were lost.
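The "bias in, bias out" dynamic is easy to see concretely. Below is a minimal, purely illustrative simulation (invented numbers, random toy "embedding" vectors and a toy nearest-centroid matcher; real systems use deep neural embeddings, not this): when each identity in a group is represented by many training photos, the matcher's learned templates are accurate, but when a group gets only a single photo per person, misidentification rises.

```python
import random
from statistics import mean

random.seed(42)

DIM = 16     # dimension of the toy "face embedding"
NOISE = 1.0  # per-photo capture noise

def make_identity():
    # each person has a true underlying embedding vector
    return [random.gauss(0, 1) for _ in range(DIM)]

def observe(face):
    # a photo = the true embedding plus random noise
    return [x + random.gauss(0, NOISE) for x in face]

def centroid(samples):
    # learned template: average of the training photos
    return [mean(col) for col in zip(*samples)]

def nearest(probe, gallery):
    # identify the probe as whichever template is closest
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda name: dist(probe, gallery[name]))

def accuracy(people, train_per_id, trials=200):
    gallery = {name: centroid([observe(face) for _ in range(train_per_id)])
               for name, face in people.items()}
    hits = 0
    for _ in range(trials):
        name, face = random.choice(list(people.items()))
        if nearest(observe(face), gallery) == name:
            hits += 1
    return hits / trials

people = {f"id{i}": make_identity() for i in range(20)}
well_sampled = accuracy(people, train_per_id=25)  # well-represented group
under_sampled = accuracy(people, train_per_id=1)  # under-represented group
print(f"25 training photos per person: {well_sampled:.0%}")
print(f" 1 training photo per person:  {under_sampled:.0%}")
```

The point is not the specific numbers but the mechanism: the quality of each learned template, and therefore recognition accuracy, tracks how much training data each group contributes, so a dataset skewed toward one demographic yields a system skewed toward that demographic.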

Microsoft is cognizant of the immediate social implications, and ultimately the long-term financial impact, of deploying its facial recognition tech with its current limitations. Thus, it has refused two potentially lucrative (in the short term) sales. The company declined to sell to a California law enforcement agency for use in cars and body cams because the system, after running face scans, would likely cause minorities and women to be held for questioning more often than whites. Microsoft's president, Brad Smith, acknowledged that because the AI was trained primarily on white men, it has a higher rate of mistaken identity with women and minorities. Microsoft also refused to deploy its facial recognition tech in the camera systems of a country that the nonprofit Freedom House deemed not free.

Microsoft's refusal to make these sales is consistent with the stance Smith articulated in a blog post:

We don't believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success.

I'm confident altruistic considerations are contributing to Microsoft's "high road" approach to facial recognition tech. I also believe the company realizes that if other industry players take a less socially conscious position, the negative fallout could impact the tech's acceptance and, ultimately, Microsoft's goal of providing a platform with a wide and lucrative range of potential applications.

One bad Apple ruins facial recognition for the bunch

Microsoft wants high standards of quality and accuracy for facial recognition across the entire industry so that potential customers, including governments, businesses and the public, will have confidence in the tech. Microsoft realizes that despite the progress it has made (it ranks among the top developers of facial recognition tech in an evaluation by the National Institute of Standards and Technology (NIST)), the tech will only be accepted if there is a uniformity of standards that precludes the biases observed in early systems. This is why Microsoft is taking the lead in petitioning the government to regulate the tech.

Microsoft has proactively identified three areas it hopes regulation will address:

  • Bias and discrimination
  • Intrusion into individual privacy
  • Government mass-surveillance encroachments on democratic freedoms

In 2017, I discussed many of these same concerns. Intelligent cameras deployed in the private and public sectors have the potential to enable tracking of individuals and logging of their behavior and actions, and to give unprecedented power to governments, employers or malicious actors with access to this data.

Reading between the PR lines

Interestingly, despite Microsoft's gung-ho advocacy for user privacy, Smith is somewhat lenient regarding consumer consent and facial recognition tech. He said:

[In Europe], consent to use facial recognition services could be subject to background privacy principles, such as limitations on the use of the data beyond the initially defined purposes and the rights of individuals to access and correct their personal data. But from our [Microsoft's] perspective, this is also an area, perhaps especially in the United States, where this new regulation might take one quick step and then we all can learn from experience before deciding whether additional steps should follow.

Reading between the lines, Smith seems to be saying Microsoft's privacy position may be less aggressive than Europe's regarding use of user data beyond initially defined purposes and limits on users' access to their own data. Microsoft stands to gain a lot if its continually evolving products and services can share what they currently know, and continuously learn, about users from their activity with its products and from what intelligent cameras observe.

Image is everything

Image credit: CNBC


Microsoft's CEO Satya Nadella has been driving the company forward with a mission of empathy, social and environmental responsibility and inclusion for people with disabilities. I believe that those efforts along with a desire for universal standards for facial recognition tech are genuine. Still, Microsoft is a business. Parallel to these noble motives is a desire to dominate the industry and shine on Wall Street.

Microsoft is pushing for government regulation of facial recognition tech with laws that require transparency, enable third-party testing and comparisons, ensure meaningful human review, avoid use for unlawful discrimination and protect people's privacy. As noble as this sounds, I believe Microsoft's motives are also rooted in a goal of ensuring other companies don't hurt the public perception of the tech before Microsoft can establish itself as the market leader.

Jason Ward

Jason L Ward is a columnist at Windows Central. He provides unique big picture analysis of the complex world of Microsoft. Jason takes the small clues and gives you an insightful big picture perspective through storytelling that you won't find *anywhere* else. Seriously, this dude thinks outside the box. Follow him on Twitter at @JLTechWord. He's doing the "write" thing!

  • “Microsoft stands to make a lot of money if governments, the tech industry, the private sector and consumers perceive it as a trusted leader and provider of AI-driven facial recognition camera technology.” Bingo! Microsoft is built on a house of cards. If they break trust, they’re toast.
  • This actually cracked me up. I first thought "that's because all white people think blacks look alike." Yes, most programmers are white, but come on, facial recog people. This is freaking ludicrous.
  • Hi Ross, you do understand that it's not merely that most programmers are white, but that the majority-white programmers are using mostly white faces to train the machine learning models that power the facial recognition systems. The system only knows what is put in. If mostly white and male faces go in, the system will, logically, be less accurate at recognizing dark-skinned faces and women. It's logical. And more than that, proven. Microsoft's president Brad Smith acknowledged that since the AI was primarily trained with white men, it has a higher rate of mistaken identity with women and minorities. Also watch the embedded videos. They really break it down very well🙂👍
  • Please don't take this the wrong way, but if Microsoft were to purposefully train its AI with large amounts of data from minority men, I would be worried that it could be racially profiling minority men as the type of people it would be looking to target. Think of how it might look, in regards to law enforcement or immigration, if white men started requesting thousands of images of minority people to train their already controversial software.
  • Well the point is that it should be trained in a more balanced way, not more or less of certain groups of people. I don't think Jason is saying it should exclusively be trained with people of color.
  • Balance is actually part of the problem. Somewhere in the region of 70% of the American population is white, and many of the people who fall into that other 30% either have lighter skin or a semi-European facial structure because they are mixed race or have their ethnic roots in regions such as Eurasia and share a closer ancestor to Europeans. Being "balanced" means training the AI using "mostly white people." In order to have the system be equally effective across all races you would actually need to train it using a data set that is completely unbalanced, and this is where we see a significant ethical dilemma. Imagine how it would look to somebody from BLM if they found that you were feeding your AI an equal number of African and European faces. It would look like you are training your AI to pick out African faces.
  • I feel like there's a leap being made about how it would "look to BLM," or how feeding an equal number of black and white faces somehow makes the system more prejudicial against black people. You're confusing the demographics of the US population with the statistics necessary to get accurate predictions. To take the example to an extreme, if you wanted to train an AI to recognize only one person out of the entire US population, it's absurd to think that one image out of 300 million images would do the trick. You need a statistically significant number of each thing you're trying to identify, so obviously you wouldn't have less of one and more of another if you cared about the accuracy equally.
  • A very interesting perspective
  • "minorities" should be simply replaced with "people", or "dark skinned people", Jason. That's kinda irrelevant here, don't you think? Sorry, don't mean to nitpick😊
  • Everyone has their preferences but I prefer the term "people of color." The term minorities is a bit outdated to me. To address the other point, in this context, it is important to distinguish between white and other people of color.
  • The majority of human beings are not white. "Colored" is a term from a white point of view. If anything, white people are more colorful than non-whites. Most people in the Far East, for example, have black hair, brown eyes and similar skin color. A European in any country may have brunette, blond or red hair. Eyes can be brown, grey, green, blue, etc. Skin can be pale, burnt, natural or spray tanned. If you ask a child to draw a white person, he will need more color crayons than to draw, say, a Chinese person. It all depends on your local environment who is considered a minority or "colored."
  • Which also explains the old 'all black/arabic/asian/etc. people look the same' phrase.
    Which again points to a probable issue in regards to developing a non-biased facial recognition system; it really is easier to recognize individual white people simply because the variety in features and colours is greater...
    Definitely still a work in progress!
  • Believe it or not, native Asians and Africans often have difficulty telling white people apart. It's not racism; it's that people look for different cues, which vary between ethnicities. So it's more a question of familiarity than anything else. To me all whales look the same, but to somebody who is around whales all day they are very distinctive.
  • Hi Jamitofrog and Manus: I have a different opinion. I think there is a wide range of color and other variation among "people of color." African Americans (even in my family) can range from very light skinned (almost white), light brown, caramel colored and dark brown to very dark. My wife is light-skinned (her father even lighter). I'm a darker brown; my oldest daughter has a complexion between me and my wife. My baby daughter is closer to my complexion, darker than my wife and oldest girl, perhaps still lighter than me. My oldest daughter has brown hair. As many African Americans do, I have black hair. Some black people have reddish-brown hair. Some black people have wavy hair, some curly, some straighter than others, etc. Some light eyes, some darker eyes. Viola Davis is very dark, Steve Harvey is lighter but still brown skinned, etc. I say that to say that the range of variety among people of color is not as homogenous as you claim. Which brings me to your statement, Manus: though perhaps you mean no harm, such a statement has its roots in ethnocentric perceptions and racism. You say: "Which also explains the old 'all black/arabic/asian/etc. people look the same' phrase." My take regarding that "old" saying is that it likely originated with bigoted whites of the time who saw little value in regarding non-whites as human, much less acknowledging their individual value and worth. Who, particularly during American slavery, saw blacks as mere property treated with as much regard as a horse, cattle or any other thing considered inventory. Their physicality when displayed nude on auction blocks, designated as bucks or wenches, not people, not individuals, not humans, is the foundation for how they were perceived and treated for centuries. These ideas, perceptions and ideologies regarding the value of non-whites permeated the culture, and sayings that referenced whites as the standard or status quo and non-whites as "lesser beings" became part of the cultural dialogue.
An additional angle is that for someone exposed primarily to people of their own culture, or to those who look like them, and limited in their contact with people who don't look like them, aptitude in distinguishing the features of individuals of any particular group, white, Asian, Latinx or black, might be challenging. Thus a person growing up in China may think all white American men look alike, or all white American women look alike. An African may think everyone in Hawaii looks alike. So I don't think a limited range of variation among blacks/arabics/asians/etc. is the root of the statement that all people in these groups look alike. One, I think there is more variation within these groups than you guys realize. Two, I think the statement is rooted in racism and ethnocentric ideology. Three, I also think a perception that everyone in a particular group looks alike is attributable to an outside observer's ignorance (perhaps due to limited exposure or willful disregard) of the variety of differences within different groups, differences that highlight the individual uniqueness of each person and ensure, by God's design, that no one in any group of the larger humanity to which we all belong looks exactly alike.
  • I can't speak for Microsoft's system as I am not familiar with it, but I do know that other AI systems have experienced similar problems with facial recognition with regard to minorities. Those systems functioned by simplifying an image of a person's face and creating a map of "key features." The systems found that when a person had more rounded features it was difficult to create an accurate map, as they would often be unable to distinguish where one part of the face ended and another began. For example, if a person has rounded cheeks and a broad nose, a rounded jaw line, or more rounded features around their eyes. The systems also struggled when people had even skin tones; they couldn't pick out the different parts of the face as accurately. Unfortunately (and I don't mean this in a bad way), many minorities tend to have rounded features and even skin tones. For example, a woman of west African heritage will have a much more rounded face and a much more even skin tone than 99% of white men. And if she uses makeup to even out her skin tone further, those systems would struggle even more. So it's not really a case of white men programming systems using pictures of white men, and more that white men have less rounded facial features and less even skin tones, making them easier to map. Yes, more effort needs to be made to get these systems to be more effective with minorities, but it would be wrong to blame the programmers. I believe African and Chinese AI companies are having similar difficulties with their software.
  • Hi Aargh. You say: "So, it's not really a case of white men programing systems using pictures of white men, and more that white men have less rounded facial features and less even skin tones." Actually, based on the acknowledgments of those developing and driving this technology, like Microsoft President Brad Smith's own admission when addressing why the company refused to sell its facial recognition tech, it is indeed a problem with how the programmers (who are mostly white) provided the models with mostly white faces. Smith acknowledged that since the AI was primarily trained with white men, it has a higher rate of mistaken identity with women and minorities. That admission is supported by other evidence presented by Joy Buolamwini, whose Gender Shades thesis highlights this issue (see the embedded videos). She articulates the valid point that "failure to separate accuracy results across gender and skin types also makes it harder to detect differences." This is what is happening. Accuracy is being based primarily on a dataset of predominantly white faces, so the system has more difficulty detecting differences. So it's not wrong to blame the programmers, nor is it malicious blaming. 🤔It's simply an acknowledgement that this is what is happening, even by the admission of the parties in the industry who are seeking to improve in these areas. Ignoring this reality would mean ignoring a real barrier to removing the bias in these systems. You say: "Yes, more effort needs to be made to get these systems to be more effective with minorities." Some of that effort must be, as Brad Smith, Joy Buolamwini and others are saying, put toward ensuring programmers train the machine learning models with more data from people with dark skin and from women. That, along with other efforts, would help make the systems more effective with women and dark-skinned people.
  • I think it would have been helpful if you had chosen to address the technical limitations of AI in your original article. And I would feel more comfortable if you had addressed that aspect of my comment, too. As a veteran technology journalist, you will no doubt be aware that this isn't a recent problem; companies have been struggling to overcome it for quite some time. The issue first entered the public consciousness in 2009, when a gentleman of African heritage posted a video online demonstrating how his HP webcam was struggling to track his face. HP posted quite a good response explaining why this was. Higher-resolution cameras and more advanced AI have since improved recognition, but the essential problems of skin tone and facial shape remain. There are also some political aspects to this. In today's America, it is simply not acceptable for people at a company such as Microsoft to request sets of training data with significant racial imbalances with regard to minorities, particularly if the programmers involved are not from a minority background themselves. If a research team under my purview were to request a data set where minorities outstripped people of European heritage, or even equaled them, I would be reaching for my ethics manual. I might even turn down their request purely because it could cause public outcry if the media were to discover that we were training our AI to recognize minorities. Even if it were for the eventual benefit of minorities, it's simply too problematic. For similar reasons I would also issue a public apology acknowledging that we had a problem with racial bias, and promising to correct our past missteps. It's simply not acceptable for a company not to issue an apology, and it is even less acceptable for a company to deny that there is a problem or to attempt to justify its actions. Doing so would provoke public outcry.
  • Ah, interesting point. I view the term "minorities" in the same way as you describe: a term used from the point of view of white people who believe they are the "majority" or "norm." However, I view the term "people of color" differently. I prefer that term as a neutral point of distinction, because when we have discussions about race we have to somehow use labels to describe groups of people (but in a non-demeaning way, of course). I just personally believe that "people of color" is the more neutral way of labeling, if that makes sense.
  • Black isn't actually a color though.
  • Neither is white.
  • I'm not really comfortable with any phrase that includes the word "color" or "colored." In many parts of the country those words were used in place of The Bad Word that I can't use here. You'd see signs up saying "colored entrance," and you know what word they really meant. "Colored" also excludes Arabs, Persians and eastern Europeans, who may or may not have dark skin depending on their exact origins. "Minority" is more inclusionary. But this is just my opinion, and other people will have their own views.
  • Copying and pasting part of what I said in reply to another comment above: I view the term "minorities" as a term used from the point of view of white people who believe they are the "majority" or "norm." I do think you're right that it depends on the location; in my original post I was referring to the US (though I didn't explicitly say that). I view it as exclusionary, in the context of the US, because the way it was/is used in the US was/is not correct in the first place. I think your point makes sense if the term is used accurately in the first place. I view the term "people of color" differently. I prefer that term as a neutral point of distinction, because when we have discussions about race we have to somehow use labels to describe groups of people (but in a non-demeaning way, of course). I just personally believe that "people of color" is the more neutral way of labeling, if that makes sense. However, to your point about the history of the term "coloreds" and how it has been used before, that's something I didn't consider, and I can totally validate how that has a more negative association for someone who may have lived in the times when those kinds of signs were up everywhere, or when the way the term was used was offensive. I clearly didn't have that kind of lived experience, so it has less of a negative association for me. Good point though!
  • real0395
    You read my comments wrong. I did not have two points. I asked Jason if he thought the term "minority" (as in political, statistical, or social, stance) was relevant to this article. That's why I suggested using more literal terms to differentiate between skin colors.
  • Oh gotcha, I see what you mean now!
  • Sampling size and makeup will certainly have implications for the accuracy of any given group. I also wonder if there may be additional issues accurately detecting darker-skinned persons and women for other reasons. I would imagine darker skin may, depending on lighting conditions, result in less overall difference in gradient/shading and contrast discernible in the images evaluated by the software. I also wonder what the challenges are for women who might make significant changes to their makeup color/style from one day to another. I've dealt with my own issues in a similar vein at my company. I've been through no less than three different high-dollar fingerprint-scanning time clocks, and all seem to have far more issues with correct reads of our darker-skinned employees. It's frustrating for all involved, and I've spent considerable time, money and energy trying to eliminate or mitigate the issue. At this point I have reverted to a PC-based PIN entry system until I can find something that works more uniformly and reliably. I know IR or near-IR features go a long way toward dealing with these issues, but if the software is working with legacy cameras that may be lower resolution and only visible-light capable, I can understand the technical hurdles. Personally, I'm not really a fan of face or fingerprint scanning tech. I refuse to use it on my devices these days, as I find it just a bit too Orwellian, and it in no way provides any benefit to me that overcomes the potential downsides. I'm white (Scottish background, so about as white as it gets; me out in the sun is like putting a fork in the microwave) with blue eyes, and my Lumia 950, when it was new, had a lot of problems using its iris scanning to unlock for me. Unreliable and frustrating, so I just turned it off.
  • This convo is interesting because there's also a real case to be made for companies NOT getting better at recognizing Black faces in particular, as my friend Nabil talked about on our blog. Just another perspective to consider. "The reality for the foreseeable future is that the people who control and deploy facial recognition technology at any consequential scale will predominantly be our oppressors. Why should we desire our faces to be legible for efficient automated processing by systems of their design? We could demand instead that police be forbidden to use such unreliable surveillance technologies."
  • Oppressors? I take offense at this frankly racist post, which in no way whatsoever does anything good to the very real problems of this world...!
  • To paraphrase ... One man's oppressor is another man's ...
  • I'll give you a couple of reasons why THIS person would like extremely fast, accurate and efficient facial recognition tech out there. First, better interaction with all my devices, systems and applications. I've said this before, I WANT the Tony Stark world. But with that would come the need for systems to properly and immediately recognize me and my authority to access or control any given device, system or data I own. Second, commerce. I happen to love the examples seen in some current movies where you are walking down the street and a store front system recognizes you and presents shopping options or sales you might be interested in. If I'm not shopping online (which is what I do most of the time) my shopping experience is typically walking around VERY expansive outdoor shopping malls. Now, if I'm very familiar with the mall and its stores, I know which ones I'm going to go into looking for (typically) clothes. But if I'm new to the area (e.g., business trip) I might have no idea what stores might have things of interest. Having potential items projected to me as I pass would enhance my shopping experience. This would require that their system recognized me quickly and could interface with my shopping history (which we all know is out there). So, yes, I could see useful reasons.
  • Great article and perspective.
  • Thank you Don Hackman!
  • Morality and Microsoft are words that should never be used in the same sentence. Even Google is more moral, and that's an enormous statement.
  • I’d like to know more about this perception. Can you give a few examples of how or why Microsoft is less moral than Google?
  • Oh, wow. You need to back that statement up, because there's no universe where a version of me agrees with that at all.
  • So you're saying if I search "Is Google more ethical than Microsoft" it should give unbiased results? Let's try!
    Interesting, all the top results are articles reporting a 2011 Ethisphere Institute report of ethical companies where Microsoft made it onto the list while Google got removed from it.
    So while it seems the evidence proves your statement wrong, at least Google doesn't censor the reports, so it still has room for worsening. See
  • I really have no issues with Microsoft's stance and motivation on this. They've already done plenty to ruin their brand, so they need to make sure they get this right. And, for the most part, I think the same regulations/restrictions that temper how law enforcement/government can go after a person are pretty adaptable to facial recognition systems. They already use various forms of this now as they search picture databases for matches, even going back to the old-school sketch artists. As far as I'm concerned, if a system can be devised that works 100% of the time (completely irrespective of your age, size, race or ethnicity), then I say great. The application of the tech certainly isn't limited to government or law enforcement. My wife and I rely on the Kinect on our Xbox to identify each of us when we are using the Xbox, and it works pretty consistently, though it does respond slowly sometimes. If we had a larger family, though, having a system that recognized us more accurately and immediately would always be preferable.
  • Computer-based facial recognition is a long way off from being really successful. Human brains do things that can't always be expressed in math equations or logic algorithms. I remember the first computer chess programs and electronic chess boards of the late 1970s and early 1980s. Yes, they could "play chess," but a strong human player could always beat them; a chess grandmaster was not yet required. These programs had no real understanding of the overall goal of the game of chess, which humans have. They worked on the basis of just achieving an "overall better chess position" on the board. A human could do things like sacrifice a rook, or even the queen, if it led to winning the game six moves later. Human perception of faces is similar. We don't just evaluate distance between eyes, shape of nose, shape of face, eye color, etc. We just look and in an instant say, "Oh, that's Jason Ward. I know him." There is an intuitive sense that is extremely difficult, perhaps even impossible, to translate into computer code. Today's computer chess software is vastly superior to that of 40 years ago. We are perhaps that far away from computers recognizing human faces with the accuracy of human brains. Problems like twins, or a person wearing a hat, glasses or a wig, are going to trip up computers for a while yet.
  • "White males make up the majority of the people working in IT. Thus, the perspectives of the teams creating these systems are relatively homogenous." That assumes that melanin determines perspective. How perverse.
  • Not really. There are, of course, diverse perspectives among a certain group but things like familiarity-bias are bound to show up - melanin and race don't determine perspective but can determine lack of certain perspectives.
    If you think that's perverse, I wish it weren't the case either. It's the reality and one we should do something about.
  • Makes you wonder how this could have become an actual problem when the solution seems as simple as training with groups other than white men. I can somehow imagine black people being an issue due to camera sensors, especially the tiny ones used for facial recognition, having difficulties with low-light/dark content, which might also be an issue with dark faces (not necessarily an excuse). But white women? Wat?