Tango differed from other contemporary 3D-sensing computer vision products in that it was designed to run on a standalone mobile phone or tablet, and it was chiefly concerned with determining the device's position and orientation within the environment.
The software worked by integrating three types of functionality:
Motion-tracking: using visual features of the environment, in combination with accelerometer and gyroscope data, to closely track the device's movements in space
Area learning: storing environment data in a map that can be re-used later, shared with other Tango devices, and enhanced with metadata such as notes, instructions, or points of interest
Depth perception: detecting distances, sizes, and surfaces in the environment
Together, these generate data about the device in "six degrees of freedom" (3 axes of orientation plus 3 axes of position) and detailed three-dimensional information about the environment.
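A six-degrees-of-freedom pose of this kind is commonly represented as a 3D position vector plus an orientation (a unit quaternion encodes the 3 rotational degrees of freedom). The sketch below is purely illustrative, not Tango's actual API: the `Pose` class and its method names are hypothetical, showing how such a pose can map a point from the device's frame into the world frame.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

class Pose:
    """Hypothetical 6-DoF pose: 3 axes of position plus 3 axes of
    orientation (stored as a unit quaternion)."""
    def __init__(self, position, orientation):
        self.position = np.asarray(position, dtype=float)        # (x, y, z)
        self.orientation = np.asarray(orientation, dtype=float)  # (w, x, y, z)

    def transform_point(self, p):
        """Map a point from the device frame into the world frame."""
        return quat_to_rot(self.orientation) @ np.asarray(p, dtype=float) + self.position

# Example: device at (1, 2, 3), rotated 90 degrees about the vertical (z) axis.
pose = Pose([1, 2, 3], [np.sqrt(0.5), 0, 0, np.sqrt(0.5)])
world_point = pose.transform_point([1, 0, 0])  # -> approximately (1, 3, 3)
```

Motion tracking continuously updates such a pose as the device moves, while area learning lets previously stored poses and maps be recognized and reused.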
Project Tango was also the first project to graduate from Google X, in 2012.[7]