Inroads in perception
Google's recent acquisition of Emu Messenger is just one of many items in recent news about improvements in perception and artificial intelligence (AI).
Emu, not to be confused with the Australian ostrich-like bird or the European Monetary Union, is a small Palo Alto start-up composed of serious software talent with experience in machine learning, natural language processing, and mashing up disparate data, databases and systems at Siri, Apple, AOL and Google. Perception in this case is the feeling that your words and intentions are understood and acted upon. No financial details were disclosed about the acquisition; however, the Emu app will be shut down next week.
Another form of perception, computer eyesight (also known as machine vision), took a giant step forward when the winners were announced for this year's Large Scale Visual Recognition Challenge. The NY Times reported the winners: the National University of Singapore, the University of Oxford, Adobe Systems, the Center for Intelligent Perception and Computing at the Chinese Academy of Sciences, as well as Google in two separate categories.
Machine vision has long been a challenge in automation and robotics, but in recent years it has become integral to countless applications, including computer gaming, medical diagnosis and factory robotics. Carmakers have also added the ability to recognize pedestrians and bicyclists and trigger safety actions. Factory robots need improved perception systems to monitor the progress of their own tasks and the tasks of those around them. [A quick Google Images search produces a collage of various uses, from milking to adaptive cruise control.]
Enhanced algorithms, larger data libraries, and faster, cheaper computing are all contributing to the increased accuracy and speed with which these systems recognize objects and identify them by type and in 3D space. Nevertheless, at their best they are still no match for human vision and perception.