Microsoft AI chief warns of coming AI challenges and ethics risks

Speaking at MIT Technology Review's EmTech Digital event, Microsoft's Executive Vice President of AI & Research, Harry Shum, drew attention to the risks associated with AI as it becomes more creative. In particular, Shum called on tech companies to "engineer responsibility into the very fabric of the technology."

As MIT Technology Review points out, we've already seen some of the fallout from the tech industry failing to anticipate flaws in AI. One such flaw is AI's difficulty so far in identifying faces with dark skin tones, something Microsoft has been working to improve. AI is also being used by China for surveillance in alarming ways, and in early 2018 an Uber self-driving car struck and killed a pedestrian.
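Flaws like the skin-tone problem are typically uncovered by a subgroup audit. Below is a minimal sketch (in Python, with made-up data; this is not Microsoft's actual method) of measuring a face-recognition model's accuracy separately for each skin-tone group:

```python
# Minimal subgroup-accuracy audit. All data here is hypothetical:
# 1 = the model declared a match, 0 = no match, compared against ground truth.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0]   # model output
labels      = [1, 0, 0, 1, 1, 1, 0, 0]   # ground truth
groups      = ["light", "light", "dark", "dark", "dark", "light", "dark", "light"]

correct = defaultdict(int)
total = defaultdict(int)
for pred, label, group in zip(predictions, labels, groups):
    total[group] += 1
    correct[group] += int(pred == label)

# A large accuracy gap between groups is exactly the kind of flaw
# the article describes; an audit like this would flag it before release.
for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy on {total[group]} samples")
```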

According to Shum, AI's challenges will only ramp up as it becomes more complex, adding the ability to produce art, maintain near-human-like conversations, and accurately read human emotions. These abilities will pave the way for AI to more easily create propaganda or misinformation to be spread online, including fake audio and video.

Microsoft is working to take these challenges into account. The company has created an AI ethics committee and is working with others in the industry to address the problems AI poses. Shum also told MIT Technology Review that "one day very soon" Microsoft plans to add an ethics review to its pre-release audit checklist, where it will join existing steps such as privacy, security, and accessibility.
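To make the idea concrete, here is a hypothetical sketch (ours, not Microsoft's) of such an audit gate, where a product can ship only once every review step, including the new ethics review, has signed off:

```python
# Hypothetical pre-release audit gate. The step names mirror those
# mentioned in the article; everything else is illustrative.
REQUIRED_REVIEWS = ["privacy", "security", "accessibility", "ethics"]

def ready_to_ship(review_results):
    """Return True only if every required review step has signed off."""
    missing = [step for step in REQUIRED_REVIEWS if not review_results.get(step)]
    if missing:
        print("Blocked: outstanding reviews ->", ", ".join(missing))
        return False
    return True

# The ethics review hasn't signed off, so this release is blocked.
ready_to_ship({"privacy": True, "security": True, "accessibility": True, "ethics": False})
```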

"We are working hard to get ahead of the challenges posed by AI creation," Shum told MIT Technology Review. "But these are hard problems that can't be solved with technology alone, so we really need the cooperation across academia and industry. We also need to educate consumers about where the content comes from that they are seeing and using."

8 Comments
  • Whatever they call an AI is not an AI. We are very far away from AI in the sense of actually intelligent software. All that is called AI today is just marketing. It's not dangerous, and you don't need ethics for it, because these systems are only rebranded database lookups. They are as dumb as it gets (albeit very fast, which is why they are good at what they are designed to do). It's as if somebody replaced Skynet with SQLnet.
  • Even if AI, as presented, is nothing like what science fiction predicts and certainly isn't "conscious" (which was never claimed), it can still pose a danger. Not "let me turn on humans and eat them" danger but rather "applying a mechanistic data based model to make real decisions impacting humans who are more than mere numbers" danger.
  • That is not AI, because AI is more akin to achieving something similar to human consciousness. What we are doing now with AI/machine learning is closer to prediction.
  • Self-driving cars are not predictions; they are attempts to replicate human reactions given the available facts, plus the ability to abide by traffic laws for safety. As the article notes, that has already failed at least once, which is one of the reasons offered for concern.
  • Autonomous cars are just as far away. What they try to sell you are just assistants that can do certain tasks. They have no awareness of traffic; they already reach their limits at recognizing traffic signs. They work great under optimal conditions, but they are completely useless when anything differs from that.
  • You miss the point. AI isn't just databases. There are cases where AI can be used to create false images and videos of people, for instance. How AI is used, and how its use evolves, is the concern. People can be malicious, and without restrictions on what we allow AI to do or be used for, this will be an issue for us in the future. Granted, there will always be individuals who use technology for malicious intent, but having guidance on what AI can be used for and what we program it to do will make all the difference, because there will be better understanding across the board. These restrictions will of course change, because you can't necessarily anticipate every use case, whether the question is one of legality or morality.
  • But those are things that humans do as well. And in the end it will be humans abusing deep learning tools. We don't need ethics for the tools; we already have ethics for those who use them. And those are what matter, because those "AIs" are not thinking.
  • Best (fictional?) show about AI is 'Person of Interest'. Anyone who's never seen it, I highly recommend it.