
Dystopian abuse of Microsoft's powerful AI camera tech is all but inevitable

Microsoft recently presented artificial intelligence-driven surveillance technology reminiscent of George Orwell's omnipresent "Big Brother." The company showcased how this technology is capable of recognizing people, places and objects, and could even act according to what it sees.

Using millions of cameras already in existence throughout our communities, hospitals and workplaces, Microsoft explained this technology will allow us to "search the real world" just as we search the web.

Microsoft boasted this represents the coming together of the physical and digital worlds. I believe it represents the early stages of a dystopian implementation of hyper-surveillance, though Microsoft presented it within the context of how it will help keep us safe.

Safety first

Microsoft showed how, in the workplace, this AI-driven system could autonomously recognize a dangerous chemical spill and proactively notify the appropriate people to address it. It can also search a work site, find a tool an employee needs and "tell" the nearest authorized individual that the tool is required elsewhere.

In a hospital, it could alert medical staff that a patient it's "observing" has surpassed prescribed levels of physical exertion. The system recognized the patient, had access to his records, "understood" his actual physical activity in relation to the digital record of his prescribed activity limits and "knew" what staff to alert.
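At its core, the hospital scenario is a simple rule pipeline: identify the person, look up a digital record, compare observed activity against a prescribed limit and route an alert. A minimal sketch of that logic follows; the names, units and thresholds here are hypothetical illustrations, not Microsoft's actual API:

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    name: str
    max_exertion: float       # prescribed activity ceiling (hypothetical units)
    care_team: list[str]      # staff to notify if the ceiling is exceeded

def check_exertion(record: PatientRecord, observed: float) -> list[str]:
    """Return the staff to alert, or an empty list if activity is within limits."""
    if observed > record.max_exertion:
        return record.care_team
    return []

# Example: a patient observed above his prescribed limit triggers an alert.
record = PatientRecord("J. Doe", max_exertion=5.0, care_team=["Nurse Station 3"])
print(check_exertion(record, observed=7.2))   # ['Nurse Station 3']
```

The hard part Microsoft demonstrated is everything upstream of this comparison: recognizing the patient on camera and estimating his exertion from video. The alerting step itself is straightforward once those signals exist.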

Microsoft's carefully chosen scenarios reveal how this technology can help keep us safe. But how else might millions of artificial intelligence-enhanced cameras that are homed in on our comings and goings, activities, habits and routines be used?

This is just step two

Two years ago, Microsoft introduced "How-Old," which uses Microsoft's facial recognition Cognitive Service to guess a person's age.

Beyond its fun aspects, I expressed concerns that How-Old could be a first step toward more dystopian applications:

Microsoft is likely subtly using [it] to hone its facial recognition technology for future practical and, I imagine, ambitious implementations. Microsoft is a cloud-first and mobile-first company with ambitions to embed Windows 10 in as many IoT devices … as possible … And now they've designed and launched intelligent software that can recognize you. Imagine cameras, which already have a virtually ubiquitous presence in our communities, possessing intelligent software that will allow them to potentially recognize you everywhere you go. ATMs, stores, parks, traffic lights, police officer body and vehicle cams…snooping cameras on other people's mobile devices!

Microsoft How-Old, could facial recognition tech turn ugly?

I also referenced Cortana integration, access to the then 1.4 billion tagged faces on Facebook, creative third-party use of How-Old APIs and the machine learning-powered image recognition tech of Microsoft's Project Adam.
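For context on how How-Old-style age estimation is exposed to developers: the Face service in Cognitive Services is an HTTP endpoint that accepts an image and returns requested face attributes. Here is a hedged sketch of assembling such a request; the endpoint, key and image URL are placeholders, and the exact request shape should be checked against Microsoft's current documentation:

```python
def build_face_detect_request(endpoint: str, api_key: str, image_url: str) -> dict:
    """Assemble the pieces of a face-detection call that asks for an age estimate.

    Returns a dict describing the request rather than sending it, so the
    shape can be inspected without a live subscription.
    """
    return {
        "url": f"{endpoint}/face/v1.0/detect",
        "params": {"returnFaceAttributes": "age"},
        "headers": {
            "Ocp-Apim-Subscription-Key": api_key,   # per-subscription secret
            "Content-Type": "application/json",
        },
        "body": {"url": image_url},
    }

req = build_face_detect_request(
    "https://example.cognitiveservices.azure.com",   # placeholder endpoint
    "YOUR_KEY",                                      # placeholder key
    "https://example.com/photo.jpg",                 # placeholder image
)
print(req["params"])   # {'returnFaceAttributes': 'age'}
```

The point is how low the barrier is: a single authenticated HTTP call turns any camera frame into structured data about the people in it.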

Two years ago, my prediction of ubiquitous AI-driven cameras that could recognize us virtually anywhere probably seemed like science-fiction musings or paranoid ramblings. Yet, as I predicted, Microsoft has taken the second step and made it a reality.

To ensure its widespread acceptance, Microsoft has begun marketing it as the "edge of its cloud" for workplaces and hospitals. Framed in the disarming context of making those environments safer, Microsoft hopes to preempt the inevitable privacy and abuse concerns. Make no mistake; Microsoft's plans are far broader than work sites and hospitals.

Step three, making an unsafe world "safe"

After the horrific events of 9/11, the Patriot Act was passed by Congress to provide the US government with greater powers of surveillance. We've proven willing to forgo certain levels of privacy in exchange for professed guarantees of security.

Within this context Microsoft introduced its AI-driven hyper-surveillance system as a means to increase hospital and workplace safety. Expanding that "safety" message to the broader scope of an "unsafe world" is the next step.

The path to widespread deployment of this system will likely meet with relatively little resistance, particularly in a technological climate of selfies, of self-promotion on social media and video platforms, and of information-gathering "Terms of Service" that are heedlessly and trustingly "clicked" through. Since privacy is an increasingly surrendered commodity, privacy-eroding measures to increase safety may be readily accepted by most.

Companies, governments and school systems that employ this technology will likely point toward our news headlines. School shootings, kidnappings, terrorist attacks, police violence, random public attacks, workplace misconduct and more will likely be used to "justify" its implementation to help keep us safe.  

For the common good?

Fear mongering combined with the willing surrendering of privacy (note the levels of personal disclosure on platforms like Facebook) will likely lead to a general embrace of an ever-watching, AI-enhanced and cloud-connected "eye" sponsored by governments and private institutions.

This will substantially differ from current dumb surveillance. Even in its present iteration, what Microsoft introduced can recognize people, the context they're in, what they're doing and what objects they're interacting with.

At Build 2016, Microsoft gave us a prelude to this tech. Using smart glasses and Microsoft's Cognitive Services, a blind Microsoft employee could "see" facial expressions, his environment and actions:

This technology, therefore, doesn't merely allow for viewing what's in a camera's line of sight, as traditional surveillance does. With Microsoft's "edge of the cloud," surveillance can interpret and act upon what it sees.

Practical applications

In a store, facial recognition and other Cognitive Services may determine a shopper's "demeanor" indicates he's likely to shoplift or attack. The system could proactively alert store staff of this threat. Could such a system be prone to profiling?

Moreover, due to violence, many schools employ security measures like metal detectors and checkpoints. Inappropriate teacher-student relationships are also problematic. Microsoft's AI-driven surveillance could monitor, via school and public cameras, staff and students who may be likely to engage in dangerous or inappropriate behavior, both in and out of school.

Cognitive Services could potentially recognize emotional cues that are antecedents to dangerous or inappropriate student behavior. If/then programming could cue the system to focus on individuals exhibiting suspicious behavior.

This could be coupled with the system's object recognition capabilities. The system's ability to recognize a student's handling of a gun or dangerous materials via public cameras outside of school isn't far-fetched based on what Microsoft demoed. The system, as it does in hospitals, could then proactively alert authorities. Potential terrorists can be similarly trailed.
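The "if/then" cueing described above is, at bottom, threshold logic over per-emotion confidence scores of the kind emotion-recognition services return. A toy illustration, with emotion names and thresholds that are purely hypothetical rather than any real ruleset:

```python
def flag_for_review(emotion_scores: dict[str, float],
                    watch: tuple[str, ...] = ("anger", "fear"),
                    threshold: float = 0.6) -> bool:
    """Return True if any watched emotion's confidence exceeds the threshold."""
    return any(emotion_scores.get(emotion, 0.0) > threshold for emotion in watch)

print(flag_for_review({"anger": 0.8, "happiness": 0.1}))  # True
print(flag_for_review({"happiness": 0.9}))                # False
```

The simplicity is exactly what makes the profiling question above so pointed: whoever picks the watched emotions and the threshold decides who gets flagged, and a few lines of configuration separate "safety monitoring" from targeted scrutiny.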

Big brother?

Government implementation of this technology is troubling. Via the millions of cameras installed throughout the world, AI could proactively "follow" persons of interest wherever they are and watch whatever they're doing.

Under the presumption of acting preemptively, governments and law enforcement agencies may use Microsoft's surveillance technology and Cognitive Services to interpret patterns of behavior and activity that may indicate a person could be a threat.

What happens if politics begin governing this technology's use and particular groups are targeted? What if religion-oppressing governments use it to root out those who aren't compliant? Will dissenters of oppressive regimes be more easily found and "dealt with?" Can democracies like the United States become "less free" if such powers are placed in its hands? What about hackers?

We're simply not responsible

Like the movie Minority Report, governments may use these tools in an attempt to stop crimes before they occur - but at what cost? These increased powers of surveillance could be abused resulting in a subsequent erosion of freedoms, not just privacy. Innocent people may become the subjects of hyper-surveillance.

Nadella asserted we must use technology responsibly, but history dictates that's a standard we're not likely to attain. Though we've done and will do much good with technology, the good is almost always accompanied by the bad.

The splitting of the atom led to nuclear power and atomic bombs. The study of chemistry yielded both medicines and weapons. Gunpowder resulted in fireworks and guns. The Patriot Act has led to successful anti-terror operations while also paving the way for profound levels of surveillance of US citizens.

My pessimist's view of how things may turn out could be wrong. But given human nature, history and the direction we're headed I sincerely doubt it.

What are your thoughts on the moral and ethical implications of this technology?


Jason L Ward is a columnist at Windows Central. He provides unique big picture analysis of the complex world of Microsoft. Jason takes the small clues and gives you an insightful big picture perspective through storytelling that you won't find *anywhere* else. Seriously, this dude thinks outside the box. Follow him on Twitter at @JLTechWord. He's doing the "write" thing!

76 Comments
  • Thanks for reading, everyone. I think there's a lot of good technology can do and has done. Sadly, the opposite, when it's in the hands of those with ill intent, is also true. I know Microsoft deliberately and carefully presented the use of its AI-driven surveillance within a refined context and defined purpose. However, once it's in the hands of independent entities, governments or institutions, discretion or range of use may greatly vary. In a world of unjust governments, crooked businesses, rival nations, criminally ambitious individuals, self-righteous nations, organized crime, state-sponsored hacking, eroding privacy and a world inundated in fear, I believe the misuse of this technology, regardless of what Microsoft says about responsible use, is inevitable. The genie is out of the bottle and there's no putting it back. As I stated in the article, two years ago after Microsoft launched How-Old, I predicted it was a prelude to what we see with this AI-driven tech today. Unfortunately, I'm confident that my prediction of the misuse of this technology is equally accurate. There must be ethical and moral effort put in place to address the use of this tech. Well, let's talk.
  • I am mostly bothered about abuse by governments, because that's where all the bad economical energy comes together (e. g. lobbies), and where all the bad laws are passed through (e. g. mass surveillance). If we people can't get our right to democratically control our governments, then my kid (we're pregnant, yay!) won't know how freedom is spelled.
  • I am quite sure governments around the world have been doing this for decades.
  • How long before masks become a common fashion accessory? A different face for each day of the week? Or perhaps we'll all need to start wearing the same mask. And how long before people join their smart glasses and phones together to 'watch the watchmen'? Masked security forces would become the norm? Scary.
  • I do think that it's a scary step, but it was an inevitable one. The article shows New York, but I honestly expect London to be the place most interested in this technology. London is covered with cameras at a much scarier scale than anywhere in the US. Add to it far fewer protections of their citizenry than the US (and non-citizenry) and I expect this to be first justified to confront terrorism -- then who knows what's next. There will absolutely be misuse of this in countries that try to avoid it, but I'm actually most concerned about the evolution of this technology in countries that don't really try to avoid it. Imagine China with this kind of technology or practically any Middle Eastern country. The ability to control one's citizenry when there is an automated process that alerts you to them stepping out of line is downright horrifying. Not to bring in religion, but imagine one of the more conservative countries automatically deploying their religious police based on infractions (e.g., a female showing too much skin).
  • Person of interest!
  • The tech isn't necessarily the problem, the people using it are.. :P
  • Right that's why I focused on how it will inevitably be abused by people.
  • Isn't tech meant for people? Or is it for only "Good" people?
  • Tech is for everyone, so are safe practices
  • BTW, change AI cameras to guns and watch this very civil debate do a 180 on opinion. :)
  • I don't think I want guns pointed at me when I enter a business.
  • Yep that's what i thought of too
  • Exactly. But we need to make sure that the outcome is 'The Machine' and not Samaritan.
  • Personally, I am not too bothered. When you're out in public, you are out in public. The only concern for me about "being seen" would be at home, where you would think it is private. The simple fix for that is no unprotected cameras, whether that be in the form of an IP camera or a camera-equipped "personal assistant."
  • So if you are out with your family and friends and someone holds their phone at you all, pulling all your info as they please, that would be okay with you? Now you look at people and they are strangers. But that would not be true if this AI software went public or leaked. Someone could potentially walk around with a camera to see how much people are worth and how far they are from home, giving them a chance to have their partners burglarize your home while you are watched.
  • They can already do that. When. You. Are. In. Public. Everything. You. Do. Is. Public.
  • Totally agree. And I'd go farther that it's not just government that is a concern here. The Circle (the book, at least, haven't seen the movie) played this out with a corporation that had achieved near-government status. I recognize that corporations currently lack some of the enforcement powers of the state in some parts of the world, but that line gets blurred. SkyNet is coming!
  • This has to seriously be killed off.  Dead.  Kill it with fire.  This is absolutely frightening on so many levels.  They are talking about physically mounted security cameras.  Has anyone ever pondered webcams built into laptops and PCs?  How about cameras in TVs and in Amazon's and inevitably Microsoft's "smart assistants"? ATMs?  HELL NO.  Screw this.
  • ...put a bag on yer 'ead when you go t pub lad!
  • That's a good idea!  Just reuse the bag that we're using for our beer.
  • I'll pay for a beer if you're both going, and we can wear these CCTV-blocking glasses lol https://technabob.com/blog/wp-content/uploads/2013/01/privacy-visor-cctv...
  • I think your vision is accurate.
  • Based on history, this is exactly where we are headed. Let us hope it is a world where all of us will be able to thrive eventually. Time... will tell.
  • Well, if you don't think that this is already occurring, I think you're misguided. Anti-privacy laws aren't being initiated to start the process; it's to allow action based upon details already acquired. The good news is that it's now in the public domain to be used for health and safety, etc.
    Cars, phones and mobile devices all record data and voice and collate images taken by devices. It's now a matter of where the data is held and stored, and of government/public access to this. Google sells data about users to anyone willing to pay. I'm not sure about Apple, and I'm fairly confident about MS's privacy considerations.
  • Collating meta data, gps coordinates on a phone, recording a voice, etc... That can be circumvented.  This cannot.  This goes from covert intelligence to overt control.  There is a huge difference.
  • Google doesn't sell data. How could they sell ads if they already sold the data? They use the data to target ads if you allow them. Microsoft does the same and I am quite sure Apple does too. They are all trustworthy though. They have too much to lose by mishandling your data.
  • Google doesn't sell data? Ok they only give away your email address. My company dumped MS Exchange for google suite crap 3 years ago. Almost immediately started getting spam mail. Never ever got any spam while we were on Exchange. So how did the spammers get my email?
  • Didn't he just say, "they sell ads?" "Spam" is ads in your inbox, right?
  • Obviously, in change rooms, rest rooms or the comforts of your home, there is an assumed expectation of privacy. But I'd have to agree 100% that while out in public there really isn't any privacy. Honestly, everyone can watch or record crimes, so I'm not sure criminals should get away with crime if there isn't enough police. But do understand every government around the globe will figure out a way to abuse this.
    On a personal note, we were forced to place security monitoring around our home and it's solved a number of serious crimes, to the point our neighborhood doesn't have problems anymore, and a friend's business was able to stop hundreds of thousands in theft.
  • Now imagine someone tapping into those cameras to watch your movements.
  • If cameras were inside homes and not out on public property, that would be a different story. But people can't walk around and say stop looking at or listening to them while in public spaces, either. That's what walls & private property are for.
  • We should just get back to living our lives and destroy all these unneeded technologies.
  • "Two years ago my prediction of ubiquitous AI-driven cameras that could recognize us virtually anywhere probably seemed like sci-fiction musings or paranoid ramblings. Yet, as I predicted, Microsoft has taken the second step and made it a reality." Two years ago Google was releasing their Photos app that had recognition technology. While Microsoft is announcing this technology, Google has had products using it for years. There is a reason Google Photos has 500 million active users. Windows Central is like Apple fans! Claiming Microsoft invented everything.
  • And 5 years ago Microsoft released Windows Live Essentials which did the same stuff... So, i think 5 is a bigger number than 2, which means... Yep, Microsoft did it first.
  • That is just facial recognition. That was also in Android in 2012. I am not saying Google invented this stuff, just that this site ignores other uses of technology.
  • Remember Bing Vision?
  • <trolling>But, but … Remember when Picasa was recognizing faces in 2008?</trolling> There are diminishing returns to all of this bickering
  • Was going to say that.. :) But the pioneers of automated face recognition include Woody Bledsoe, Helen Chan Wolf, and Charles Bisson. During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using the computer to recognize human faces.
  • Hi Bleached, the context of the excerpt you chose followed my warnings about ubiquitous AI-driven cameras throughout society, warnings I based on the two-year-old How-Old facial recognition tech: "Beyond its fun aspects I expressed concerns How-Old could be a first step toward more dystopian applications:
    Microsoft is likely subtly using [it] to hone its facial recognition technology for future practical and, I imagine, ambitious implementations.
    Microsoft is a cloud-first and mobile-first company with ambitions to embed Windows 10 in as many IoT devices … as possible … And now they've designed and launched intelligent software that can recognize you.
    Imagine cameras, which already have a virtually ubiquitous presence in our communities, possessing intelligent software that will allow them to potentially recognize you everywhere you go. ATMs, stores, parks, traffic lights, police officer body and vehicle cams…snooping cameras on other people's mobile devices!" It is the fulfillment of that warning, in AI-driven video surveillance that can search the real world as we search the web and recognize people, objects and activities, that is referenced in the excerpt you chose. Something that, two years ago, Google of course was not doing. We're not talking about photos here; we're talking about real-time video recognition that can proactively act on what it sees. Also, just like How-Old, Microsoft's Project Adam was recognizing still images, as Google was, two years ago.
  • A friend of mine (who is now the CEO of a small company) has the patent for searching within videos and bookmarking "areas of interest". He initially worked for Google. So I'm sure there is something similar happening on Google end as well.
  • Sounds a bit like Google Lens, which was announced yesterday. You can use the camera on your phone to recognize things around you. Microsoft keeps talking about all these technologies, but they never actually put them to good use. Microsoft has become the lead provider of vaporware.
  • With every innovation there comes a risk of subterfuge. That doesn't make the innovation bad or mean we shouldn't innovate. GPS was originally developed for the military but is now integrated into almost everything we do. Nuclear technology has given us tremendous advances in energy and medicine. In the end, it will be the people that decide what new technology is used for. At least in the "free world" the people are the government, and if we accept a certain level of surveillance to be in the best interest of the collective State, then so be it. If the State oversteps the boundaries prescribed by the people, then historically there have always been repercussions. So while there will inherently probably be abuses as entities test the limits, in the end I think we will all be just fine.
  • Except in areas of the world with governments that are unjustly oppressive even now, with the limited powers of surveillance they currently have, those citizens are not "just fine." And if these oppressive govts get this tech, those citizens will be in an even worse state. Even in "democracies," not all citizens are treated fairly. Sadly, as we walk by each other on the same streets, experiences of fairness and justice can be very disparate. This tech, even in societies that look "fine," can be used to further oppress the oppressed. Snowden's disclosures showed us that when some thought things were fine, they were not.🙁 I'm not inclined to think they are now. Sadly this tech won't help that.
  • Abuse of technology in oppressive countries has most everything to do with the State and very little to do with the technology available. Oppression would occur regardless.  Changing the politics of a particular country is, and should be, up to the citizens of that country. By whatever means.  I strongly agree that even in "free societies" everyone is not always treated equally. That is a huge discussion in and of itself, with or without technology. That said, societal norms change over time, sometimes over a long or too-long period of time, and generally for the better. But they do change. Time and again, even in our currently very polarized USA, we experience shifts in the direction of fairness. A lot of two steps forward, one step back occurring, but we do move. Sometimes it takes a village, sometimes a revolution, but we do move. Anyway, I find it hard to justify not pursuing any technology solely because there is a chance it will be misused against all or even some.  We know it will.  And when it occurs it is usually addressed. The Snowden example you give actually meshes well with what I am saying. There was an abuse, it was exposed and it was dealt with. Is there more abuse going on?  Maybe.  But if there is an itch, it will be scratched. I'm hoping you see that I realize all abuses may not and will not be adequately addressed in a certain period. But in the long term, I believe society tends to take the right path, even if that path has many detours.
  • I understand the source of oppression is the human agents, but a person set on murder can do a lot more damage with an M16 than a pocket knife. Technology exponentially increases the damage those human agents who wish to do wrong can do. The same is true with this technology.
  • It's not a matter of IF, it's a matter of WHEN. There's no stopping the technology - all we can do is try to keep public officials as transparent and accountable as possible.
  • This can only be stopped by laws. Maybe a data-trespassing law or something.
  • And as we see, laws can be changed with executive orders... I agree with digital trespassing laws. The CC of Canada covers a wide range of digital laws: intercepting any function; destroying, altering or rendering data meaningless; interrupting or interfering with services; etc. But here, most evidence is allowed no matter how it was obtained.
  • Hahaha. What's the worry? Nutella will wait until this technology is close to perfect and kill it? What's the clamor about?
  • Hahaha. That's funny.
  • Dude, get help. You have a legit mental illness
  • creepy little jerk also has problems with Dona S.
  • It's ok Alice. Suck it up
  • The ascendance of the AI = The Demise of the Human Race. Isaac Asimov... The Three Laws of Robotics.
    Maybe then, we'll have a chance.
  • People will have built-in microchips that control motor functions, all devices will have built-in connected AI, vehicles will have connected AI, and it will all be run by central govt servers.  For our protection.
  • The Matrix is an amazing trilogy if you haven't seen it. 
  • I don't know... I see The Matrix like Pirates of the Caribbean: great first movie, but the second sucked, and the third couldn't fix the damage...
  • I liked the first movie as well. The purpose of the comment wasn't to critique the movie, it was my poor attempt of adding fuel to a fully developed fire.
  • Another good article Jason. I enjoy reading people who enjoy tech, but at the same time keep their eyes open. I think your analysis is very accurate, and that we will eventually head down this path regardless of what happens in the short term. It goes once again to show that things like this can be a wonderful tool in the hands of careful and responsible people, but a dangerous weapon in the hands of the unscrupulous. The question then remains, 'what, if anything, can or should we do about it?'
  • Why develop the tech in the first place?  It has no moral or ethical constraints.  That's by design and you know better already.  This isn't for our safety.
  • 'the machine' lives. I need Amy Acker, stat! seriously, this is really gonna suck.
  • "People born into a world where the ease of communication comes at the price of the loss of autonomy never experience privacy. They are unaware that a foundation of liberty has been lost. In our era of controlled print and TV media, the digital revolution serves for now as a check on the ruling elite’s ability to control explanations. However, the same technology that currently permits alternative explanations can be used to prevent them. Indeed, efforts to discredit and to limit non-approved explanations are already underway. The enemies of truth have a powerful weapon in the digital revolution and can use it to herd humanity into a tyrannical dystopia. The digital revolution even has its own Memory Hole. Files stored electronically by older technology can no longer be accessed as they exist in an outdated electronic format that cannot be opened by current systems in use. Humans are proving to be the most stupid of the life forms. They create weapons that cannot be used without destroying themselves. They create robots and free trade myths that take away their jobs. They create information technology that destroys their liberty. Dystopias tend to be permanent. The generations born into them never know any different, and the control mechanisms are total.  And the digital screen serves as Soma."
  • Nice bullshit
  • Feel free to rebut with a clear thought.
  • The most beautiful flower in the world, is actually not a flower. It's a car.
  • Microsoft has nothing to worry about, everyone knows the Google version will be used instead.
  • Windows 10 is already being abused by its creator that literally made it a spyware: http://youtube.com/watch?v=wPFbAqICUJo These AI camera systems would only complement that abuse given that it's coming from the same organization who gives 0 f*cks about user privacy on their PC.