Time-of-flight cameras are sensors that deliver two types of information for each pixel: the intensity (gray value) and the distance from the camera (depth). A time-of-flight camera consists of three basic components: an active illumination unit, a lens, and an imaging sensor.
The figure above illustrates the basic working principle: the active illumination unit emits intensity-modulated light in the near-infrared range. Light hitting an object or surface is reflected back to the camera. The reflected light is projected onto the imaging sensor by the lens. By correlating the emitted and received signals, it is possible to compute the distance of the illuminated object/scene to the sensor for each pixel. From the acquired depth information, detailed 3-D surface data can be reconstructed in real time.
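A common way to realize this correlation is the so-called four-bucket (or four-phase) scheme, in which the incoming light is correlated with four copies of the reference signal shifted by 0°, 90°, 180° and 270°. The following Python sketch simulates this for a single pixel under assumed signal parameters (phase, amplitude, and offset are made up for the demo); real sensors perform the correlation in analog circuitry on the chip, and sign conventions differ between devices:

```python
import numpy as np

f_mod = 20e6                     # modulation frequency [Hz] (example value)
t = np.linspace(0, 10 / f_mod, 10_000, endpoint=False)  # 10 full periods

true_phase = 1.2                 # ground-truth phase shift [rad] (assumed)
# Received signal: attenuated, phase-delayed copy of the emitted cosine,
# plus a constant background offset.
received = 0.5 * np.cos(2 * np.pi * f_mod * t - true_phase) + 0.3

# Correlate the received signal with four phase-shifted copies of the
# emitted reference signal ("four-bucket" sampling at 0/90/180/270 deg).
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
A = [np.mean(received * np.cos(2 * np.pi * f_mod * t + s)) for s in shifts]

# Standard four-bucket phase reconstruction; sign conventions vary.
phase = np.arctan2(A[3] - A[1], A[0] - A[2]) % (2 * np.pi)
print(f"estimated phase: {phase:.3f} rad (true: {true_phase:.3f} rad)")
```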
Conventional imaging sensors, such as those in a digital still camera, consist of multiple photodiodes arranged in a matrix. Normally, these diodes provide a gray-scale or color image of the scene. In contrast to normal cameras, a Photonic Mixer Device (PMD) sensor additionally acquires a distance value for each pixel, simultaneously with the intensity (gray) value.
In summary, a time-of-flight (ToF) sensor is a matrix of distance sensors. Despite this functional improvement over conventional imaging sensors, a ToF sensor itself is still manufactured in standard CMOS technology. Therefore, imaging and 3-D measurement capabilities can be placed next to system-relevant electronics such as analog-to-digital converters. All ‘intelligence’ of the sensor is incorporated in the chip, including the distance computation for each pixel; ToF pixels are therefore sometimes called ‘smart pixels’.
Recently, applications such as gesture recognition and automotive passenger classification have adopted ToF sensors, and ToF is about to become a component of consumer electronics. As ToF sensors provide data at rates of more than 30 frames per second, they are suitable for real-time 3-D imaging. We are convinced that ToF technology can enhance applications within various business fields.
ToF cameras provide a real-time 2.5-D representation of an object (only the camera-facing part of the surface can be observed by the ToF camera). The object is actively illuminated by an incoherent light signal, which is intensity-modulated with a cosine signal of frequency $f_{\text{mod}}$. Usually the emitted light is in the non-visible near-infrared range of the spectrum.
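Written out, such a modulation signal can be modeled as

$$s(t) = a + b \cdot \cos(2\pi \cdot f_{\text{mod}} \cdot t),$$

where $a$ denotes a constant offset and $b$ the modulation amplitude (the symbols $a$ and $b$ are our own notation for illustration; they are not defined in the text above).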
The light signal travels at constant speed through the surrounding medium and is reflected by the surface of the object. By estimating the phase shift $\varphi_d$ (in rad) between the emitted and the reflected light signal, the distance $d$ can be computed as:

$$d = \frac{c \cdot \varphi_d}{4\pi \cdot f_{\text{mod}}},$$

where $c$ [m/s] denotes the speed of light, $d$ [m] the distance between the camera and the object (the light travels this distance twice, hence the factor $4\pi$), $f_{\text{mod}}$ [Hz] the modulation frequency, and $\varphi_d$ [rad] the phase shift. In addition to depth values, ToF cameras also provide intensity values, representing the amount of light sent back from a specific point.
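As a worked example, a phase shift of $\varphi_d = \pi$ at $f_{\text{mod}} = 20$ MHz yields $d = c \cdot \pi / (4\pi \cdot 2 \cdot 10^7\,\text{Hz}) \approx 3.75$ m. The following minimal Python sketch implements this conversion; the function name and the example values are chosen for illustration only:

```python
import math

C = 299_792_458.0  # speed of light in vacuum [m/s]

def distance_from_phase(phi_d: float, f_mod: float) -> float:
    """Convert a measured phase shift [rad] into a distance [m].

    The light travels to the object and back (2*d), which is why the
    denominator contains 4*pi rather than 2*pi.
    """
    return C * phi_d / (4 * math.pi * f_mod)

# Phase shift of pi at 20 MHz: half the non-ambiguous range.
print(distance_from_phase(math.pi, 20e6))  # ~3.747 m
```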
Due to the periodicity of the cosine-shaped modulation signal, ToF cameras have a non-ambiguous range of

$$d_{\text{max}} = \frac{c}{2 \cdot f_{\text{mod}}}.$$
Within this range, distances can be computed uniquely. The range depends on the modulation frequency of the camera, which defines the wavelength of the emitted signal. To compute distances, the camera evaluates the phase shift between a reference (emitted) signal and the received signal. The phase shift $\varphi_d$ is proportional to the distance $d$.
The figure below shows the relation between both.
Figure: Correlation of phase and distance, shown for a fixed modulation frequency of 20 MHz.

If a ToF camera operates at a modulation frequency of 20 MHz, a full phase cycle of $2\pi$ corresponds to a wavelength of $c / f_{\text{mod}} \approx 15$ m. Since the light travels the distance between camera and object twice, the unique range of such a ToF camera is approx. 7.5 m. The range can be altered by adapting the modulation frequency of the active illumination.
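To see how the unique range scales with the modulation frequency, a few lines of Python suffice (the frequencies are example values):

```python
C = 299_792_458.0  # speed of light [m/s]

def unambiguous_range(f_mod: float) -> float:
    """Non-ambiguous range d_max = c / (2 * f_mod) in metres."""
    return C / (2 * f_mod)

for f_mhz in (10, 20, 30):  # example modulation frequencies [MHz]
    print(f"{f_mhz:>3d} MHz -> {unambiguous_range(f_mhz * 1e6):5.2f} m")
# 10 MHz -> 14.99 m, 20 MHz -> 7.49 m, 30 MHz -> 5.00 m
```

Lowering the modulation frequency extends the range at the cost of depth precision, which is why some ToF cameras combine measurements at several modulation frequencies.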