Depth sensors and point cloud visualizations extend the research and scientific progress in photography and film pioneered by Eadweard Muybridge, Harold Edgerton, and Étienne-Jules Marey, whose work made the invisible visible in the study of locomotion and the understanding of human movement. The introduction of depth sensor cameras in the last decade gave birth to a new image-capturing paradigm and aesthetic. The Xbox Kinect depth sensor camera uses a built-in infrared projector to generate a “depth image.” In an RGB camera, each pixel records the color of the light that reached the camera from that part of the scene; in a depth sensor camera, each pixel of the depth image records the distance between the camera and the object in that part of the scene, so each grayscale pixel value corresponds to a depth in front of the camera. The conversion of these two-dimensional grayscale pixels into three-dimensional points is referred to as a point cloud:
When we look at depth images, they will look like strangely distorted black and white pictures. They look strange because the color of each part of the image indicates not how bright that object is, but how far away it is. (Borenstein, 2012, p. xi)
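The conversion described above can be sketched as a back-projection through a pinhole camera model: each pixel coordinate, paired with its depth value, becomes a 3D point. This is a minimal illustration of the principle, not the Kinect's actual processing pipeline; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) below are illustrative placeholders, not real calibration values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (distance per pixel) into an N x 3 point cloud.

    A pixel at column u, row v with depth z maps to the 3D point
        x = (u - cx) * z / fx,   y = (v - cy) * z / fy,   z = z,
    where (cx, cy) is the principal point and (fx, fy) the focal lengths
    in pixels. Pixels with zero depth (no sensor reading) are discarded.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only valid (nonzero) depths

# A flat 4x4 depth image, every pixel 2.0 units from the camera,
# with the principal point at the image center:
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Every grayscale pixel thus becomes one point in space, which is what gives rendered point clouds their characteristic hollow, single-viewpoint look.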