Dystopian abuse of Microsoft's powerful AI camera tech is all but inevitable

At its Build developer conference, Microsoft presented artificial intelligence-driven surveillance technology reminiscent of George Orwell's omnipresent "Big Brother." The company showcased how this technology can recognize people, places and objects, and can even act according to what it sees.

Microsoft explained that, using the millions of cameras already in place throughout our communities, hospitals and workplaces, this technology will allow us to "search the real world" just as we search the web.

Microsoft boasted that this represents the coming together of the physical and digital worlds, and it presented the technology in the context of how it will help keep us safe. I believe it represents the early stages of a dystopian implementation of hyper-surveillance.

Safety first

Microsoft showed how, in the workplace, this AI-driven system could autonomously recognize a dangerous chemical spill and proactively notify the appropriate people to address it. It can also search a work site, find a tool an employee needs and "tell" the nearest authorized individual that the tool is required elsewhere.

In a hospital, it could alert medical staff that a patient it's "observing" has surpassed prescribed levels of physical exertion. The system recognized the patient, had access to his records, "understood" his actual physical activity in relation to the digital record of his prescribed activity limits and "knew" which staff to alert.
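To make the mechanics concrete, here's a minimal sketch of the kind of rule such a system might run, comparing a camera-derived exertion score against the limit in a patient's digital record. Every name here (the `PatientRecord` fields, `estimate_exertion`, `notify_staff`) is a hypothetical stand-in for illustration, not Microsoft's actual API.

```python
# Hypothetical sketch only: compare a patient's camera-derived exertion
# score with the limit in his digital record and alert his care team.
# None of these names are real Microsoft APIs.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    max_exertion: float   # prescribed activity limit (arbitrary 0-10 scale)
    care_team: list       # staff members to alert

def estimate_exertion(frames) -> float:
    """Stand-in for the vision side (pose/activity recognition on video).
    Returns a fake score here so the sketch runs end to end."""
    return 7.5

def notify_staff(staff, message: str) -> None:
    """Stand-in for the hospital's paging or messaging system."""
    for person in staff:
        print(f"ALERT for {person}: {message}")

def monitor(record: PatientRecord, frames) -> None:
    observed = estimate_exertion(frames)
    if observed > record.max_exertion:   # the entire "decision" is one rule
        notify_staff(
            record.care_team,
            f"Patient {record.patient_id} at exertion {observed:.1f}, "
            f"above prescribed limit {record.max_exertion:.1f}.",
        )

monitor(PatientRecord("patient-042", max_exertion=5.0,
                      care_team=["Nurse Lee"]), frames=[])
```

The point of the sketch is how thin the decision layer is: all of the intelligence, and all of the risk, lives in the camera analysis feeding it.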

Microsoft's carefully chosen scenarios reveal how this technology can help keep us safe. But how else might millions of artificial intelligence-enhanced cameras homed in on our comings and goings, activities, habits and routines be used?

This is just step two

Two years ago, Microsoft introduced "How-Old," which uses Microsoft's facial recognition Cognitive Services to guess a person's age.

Beyond its fun aspects, I expressed concerns that How-Old could be a first step toward more dystopian applications:

Microsoft is likely subtly using [it] to hone its facial recognition technology for future practical and, I imagine, ambitious implementations.

Microsoft is a cloud-first and mobile-first company with ambitions to embed Windows 10 in as many IoT devices … as possible … And now they've designed and launched intelligent software that can recognize you.

Imagine cameras, which already have a virtually ubiquitous presence in our communities, possessing intelligent software that will allow them to potentially recognize you everywhere you go. ATMs, stores, parks, traffic lights, police officer body and vehicle cams … snooping cameras on other people's mobile devices!

From "Microsoft How-Old: Could facial recognition tech turn ugly?"

I also referenced Cortana integration, access to the (then) 1.4 billion tagged faces on Facebook, creative third-party uses of How-Old's APIs and the machine learning-powered image recognition tech of Microsoft's Project Adam.

Two years ago, my prediction of ubiquitous AI-driven cameras that could recognize us virtually anywhere probably seemed like science-fiction musings or paranoid ramblings. Yet, as I predicted, Microsoft has taken the second step and made it a reality.

To ensure its widespread acceptance, Microsoft has begun marketing it as the "edge of its cloud" for workplaces and hospitals. Framed in the disarming context of making those environments safer, Microsoft hopes to preempt the inevitable privacy and abuse concerns. Make no mistake: Microsoft's plans are far broader than work sites and hospitals.

Step three: making an unsafe world "safe"

After the horrific events of 9/11, Congress passed the Patriot Act to provide the US government with greater powers of surveillance. We've proven willing to forgo certain levels of privacy in exchange for professed guarantees of security.

Within this context, Microsoft introduced its AI-driven hyper-surveillance system as a means to increase hospital and workplace safety. Expanding that "safety" message to the broader scope of an "unsafe world" is the next step.

The path to widespread deployment of this system will likely meet relatively little resistance, particularly in a technological climate of selfies and self-promotion on social media and video platforms, where information-gathering terms of service are heedlessly and trustingly clicked through. Since privacy is an increasingly surrendered commodity, privacy-eroding measures that promise greater safety may be readily accepted by most.

Companies, governments and school systems that employ this technology will likely point to the news headlines. School shootings, kidnappings, terrorist attacks, police violence, random public attacks, workplace misconduct and more will likely be used to "justify" its implementation to help keep us safe.

For the common good?

Fear mongering, combined with the willing surrender of privacy (note the levels of personal disclosure on platforms like Facebook), will likely lead to a general embrace of an ever-watching, AI-enhanced and cloud-connected "eye" sponsored by governments and private institutions.

This will differ substantially from current "dumb" surveillance. Even in its present iteration, what Microsoft introduced can recognize people, the context they're in, what they're doing and what objects they're interacting with.

At Build 2016, Microsoft gave us a prelude to this tech. Using smart glasses and Microsoft's Cognitive Services, a blind Microsoft employee could "see" facial expressions, his environment and actions.

This technology, therefore, doesn't merely view what's in a camera's line of sight, as traditional surveillance does. With Microsoft's "edge of the cloud," surveillance can interpret and act upon what it sees.

Practical applications

In a store, facial recognition and other Cognitive Services may determine that a shopper's "demeanor" indicates he's likely to shoplift or attack. The system could proactively alert store staff to this threat. Could such a system be prone to profiling?

Moreover, because of violence, many schools employ security measures like metal detectors and checkpoints. Inappropriate teacher-student relationships are also a problem. Microsoft's AI-driven surveillance could monitor staff and students who may be likely to engage in dangerous or inappropriate behavior, both in and out of school (via school and public cameras).

Cognitive Services could potentially recognize emotional cues that are antecedents to dangerous or inappropriate student behavior. Simple if/then programming could then cue the system to focus on individuals exhibiting suspicious behavior.
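That if/then layer needn't be sophisticated. Below is a minimal sketch, assuming an upstream emotion-detection service that returns per-face scores (Cognitive Services' emotion detection returned scores for emotions such as anger and fear); the threshold, the cue list and the `flag_for_review` function are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical if/then rule over per-face emotion scores of the kind an
# emotion-detection API returns, e.g. {"anger": 0.82, "fear": 0.05, ...}.
# Threshold, cue emotions and function names are illustrative only.

SUSPICION_THRESHOLD = 0.7
CUE_EMOTIONS = ("anger", "fear", "contempt")   # assumed "antecedent" cues

def flag_for_review(face_id: str, scores: dict) -> bool:
    """Return True, telling the system to 'focus', if any cue emotion is strong."""
    for emotion in CUE_EMOTIONS:
        if scores.get(emotion, 0.0) > SUSPICION_THRESHOLD:
            print(f"Focusing on {face_id}: {emotion} score "
                  f"{scores[emotion]:.2f} exceeds {SUSPICION_THRESHOLD}")
            return True
    return False

# One detected face with fabricated scores:
flag_for_review("face-17", {"anger": 0.82, "fear": 0.05, "neutral": 0.10})
```

What's unsettling is precisely how little code this takes; the hard problems, and the profiling risks raised above, live entirely in the scores being fed in.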

This could be coupled with the system's object-recognition capabilities. Based on what Microsoft demoed, the system's ability to recognize a student handling a gun or dangerous materials via public cameras outside of school isn't far-fetched. The system, as it does in hospitals, could then proactively alert authorities. Potential terrorists could be similarly trailed.

Big Brother?

Government implementation of this technology is troubling. Via the millions of cameras installed throughout the world, AI could proactively "follow" persons of interest wherever they are and watch whatever they're doing.

Under the presumption of acting preemptively, governments and law enforcement agencies may use Microsoft's surveillance technology and Cognitive Services to interpret patterns of behavior and activity that may indicate a person could be a threat.

What happens if politics begins governing this technology's use and particular groups are targeted? What if religion-oppressing governments use it to root out those who aren't compliant? Will dissenters in oppressive regimes be more easily found and "dealt with?" Could democracies like the United States become "less free" if such powers are placed in their hands? And what about hackers?

We're simply not responsible

As in the movie Minority Report, governments may use these tools in an attempt to stop crimes before they occur, but at what cost? These increased powers of surveillance could be abused, resulting in an erosion of freedoms, not just privacy. Innocent people may become the subjects of hyper-surveillance.

Microsoft CEO Satya Nadella asserted we must use technology responsibly, but history suggests that's a standard we're not likely to attain. Though we've done and will do much good with technology, the good is almost always accompanied by the bad.

The splitting of the atom led to nuclear power and atomic bombs. The study of chemistry yielded both medicines and weapons. Gunpowder gave us fireworks and guns. The Patriot Act has enabled successful anti-terror operations while also paving the way for profound levels of surveillance of US citizens.

My pessimist's view of how things may turn out could be wrong. But given human nature, history and the direction we're headed, I sincerely doubt it.

What are your thoughts on the moral and ethical implications of this technology?

