Microsoft Research showed off Project Adam, a deep-learning effort that could deliver more accurate visual searches and give Cortana and Bing huge potential in the future. Using machine learning and artificial intelligence, Project Adam mimics the brain's neural connections to visually recognize objects with far greater precision than anything currently available.
According to Microsoft Research, Project Adam relies on 30 times fewer machines yet delivers twice the accuracy, 50 times faster, than comparable systems today. Essentially, Project Adam not only recognizes a dog when you point your smartphone camera at one, but can also distinguish between different breeds of dog.
So how did Microsoft train Project Adam? The team pulled in more than 14 million images from the web, spanning 22,000 categories derived from user-generated tags.
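To make the training idea concrete, here is a minimal sketch of fine-grained classification in Python. This is not Project Adam's actual architecture (Microsoft has not published those details here): the synthetic feature vectors, class counts, and the plain softmax classifier below are all illustrative stand-ins for a deep network trained on tagged web images.

```python
import numpy as np

# Illustrative sketch only: random vectors stand in for the features a deep
# network would extract from images, and each "class" (think: a dog breed)
# clusters around its own center in feature space.
rng = np.random.default_rng(0)

NUM_CLASSES = 4        # e.g. four hypothetical dog breeds
FEATURE_DIM = 16       # stand-in for learned image features
SAMPLES_PER_CLASS = 50

centers = rng.normal(size=(NUM_CLASSES, FEATURE_DIM))
X = np.vstack([
    centers[c] + 0.3 * rng.normal(size=(SAMPLES_PER_CLASS, FEATURE_DIM))
    for c in range(NUM_CLASSES)
])
y = np.repeat(np.arange(NUM_CLASSES), SAMPLES_PER_CLASS)

# Train a softmax classifier by gradient descent on cross-entropy loss.
W = np.zeros((FEATURE_DIM, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0            # gradient of cross-entropy
    W -= 0.1 * X.T @ probs / len(y)
    b -= 0.1 * probs.mean(axis=0)

pred = np.argmax(X @ W + b, axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same recipe — features in, per-category probabilities out — scales up in the real system to millions of images and tens of thousands of tag-derived categories, which is where the distributed training that Project Adam emphasizes comes in.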
Various technology companies are beginning to capitalize on visual search: rival Google could leverage Google Goggles and its other wearables, while Amazon's Firefly technology is debuting on that company's Fire Phone. Project Adam's highly accurate visual search could give Microsoft's rivals a run for their money.
At this time, it's unclear if and when Microsoft will make this research commercially available, but the implications for Bing and Cortana are huge.
Are you excited about the possibility of snapping a photo of what you want to search for, rather than typing or speaking a query?