Microsoft improves facial recognition error rates across skin tones, gender

Microsoft logo at Ignite (Image credit: Windows Central)

Microsoft today announced that it has improved its facial recognition API to better recognize gender and darker skin tones, addressing two weak points in currently available recognition technologies. According to Microsoft, the improvements have reduced error rates for men and women with darker skin by as much as 20 times, and overall error rates for women by nine times.

Currently, facial recognition tech tends to perform best on men with lighter skin and worst on women with darker skin. Microsoft pins this built-in bias on the data used to train the AI behind facial recognition systems. In order to improve its accuracy, the company needed to factor in a diversity of people, as well as other factors like hairstyle, jewelry, and eyewear.
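The performance gap described above is typically surfaced by a disaggregated evaluation, where error rates are computed separately for each demographic group rather than averaged over the whole test set. The sketch below illustrates the idea; the group names, labels, and data are hypothetical and not from Microsoft's actual benchmark.

```python
# Hypothetical sketch of a disaggregated evaluation for a face-analysis
# classifier. Real audits use large labeled benchmark datasets; the toy
# records here exist only to show the per-group breakdown.

def error_rate(truth, predictions):
    """Fraction of predictions that disagree with ground truth."""
    mismatches = sum(t != p for t, p in zip(truth, predictions))
    return mismatches / len(truth)

def disaggregated_error_rates(records):
    """Compute the error rate separately for each demographic group.

    `records` is a list of (group, truth_label, predicted_label) tuples.
    """
    by_group = {}
    for group, truth, pred in records:
        truths, preds = by_group.setdefault(group, ([], []))
        truths.append(truth)
        preds.append(pred)
    return {g: error_rate(t, p) for g, (t, p) in by_group.items()}

# Toy data: the classifier is perfect on one group, poor on another,
# which is exactly the kind of gap an aggregate accuracy number hides.
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
]
print(disaggregated_error_rates(records))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 0.5}
```

A single aggregate accuracy over these four records would be 75%, masking the fact that one group sees half its samples misclassified, which is why per-group reporting matters for bias audits.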

In addressing the problem, Microsoft's team of researchers focused on three key areas of improvement for its datasets: age, gender, and skin tone. Hanna Wallach, a researcher in Microsoft's New York lab, said:

We had conversations about different ways to detect bias and operationalize fairness. We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.

Going forward, Microsoft is looking to bring what it has learned about the elimination of bias in its AI tools to its other services. The effort will span "from idea creation and data collection to model training, deployment and monitoring," Microsoft says. Wallach explains:

If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases. This is an opportunity to really think about what values we are reflecting in our systems and whether they are the values we want to be reflecting in our systems.

Dan Thorp-Lancaster is the Editor in Chief for Windows Central. He began working with Windows Central as a news writer in 2014 and is obsessed with tech of all sorts. You can follow Dan on Twitter @DthorpL and Instagram @heyitsdtl. Got a hot tip? Send it to daniel.thorp-lancaster@futurenet.com.
