Sensor-based visuals

Depth sensors and point cloud visualizations expand on the research and scientific progress in photography and film pioneered by Eadweard Muybridge, Harold Edgerton, and Étienne-Jules Marey, whose work made the invisible visible in the study of locomotion and the understanding of human movement. The introduction of depth sensor cameras in the last decade gave birth to a new image-capturing paradigm and aesthetic. The Xbox Kinect depth sensor camera has a built-in infrared projector that it uses to generate a “depth image.” In an RGB camera, each pixel records the color of light that reached the camera from that part of the scene. In a depth sensor camera, each pixel of the depth image records the distance from the camera to the object in that part of the scene, so each pixel’s grayscale value corresponds to depth rather than brightness. The set of three-dimensional points produced by converting these two-dimensional grayscale pixels is referred to as a point cloud:

When we look at depth images, they will look like strangely distorted black and white pictures. They look strange because the color of each part of the image indicates not how bright that object is, but how far away it is. (Borenstein, 2012, p. xi)
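The conversion from depth image to point cloud follows the standard pinhole camera back-projection model. The sketch below, in Python with NumPy, is a minimal illustration of the idea; the intrinsic parameters (FX, FY, CX, CY) are approximate values for the original Kinect’s depth camera and are assumptions here, since any real project should use calibrated intrinsics.

```python
import numpy as np

# Approximate depth-camera intrinsics for the original Kinect
# (assumed, illustrative values; use calibrated intrinsics in practice).
FX, FY = 594.2, 591.0   # focal lengths, in pixels
CX, CY = 339.5, 242.7   # principal point, in pixels

def depth_to_point_cloud(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth image (in meters) into an N x 3 point cloud.

    A pixel (u, v) with depth z maps to the 3D point
    x = (u - CX) * z / FX,  y = (v - CY) * z / FY,  z = z.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    points = np.dstack((x, y, depth_m)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels with no depth reading

# Example: a synthetic 640x480 frame of a flat wall two meters away.
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```

Discarding zero-depth pixels matters in practice, since the Kinect reports a depth of zero wherever the infrared pattern is occluded or out of range; each remaining grayscale pixel becomes a point in space, which is the representation the visualizations described below draw on.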


Screenshot of motion graphics created for the Voice and Exit Festival in Austin, Texas, in 2015, using a depth image combined with RGB, processed in real time.

Artists such as James George and Aaron Koblin have explored the qualities of the depth image for interactive visualizations and storytelling. The interactive, depth-based visualization qualities of this new 3D cinematic medium have enabled groundbreaking work in interactive performance and installation art.

The following screenshots are part of my research and creative exploration of this technique, which I describe as painting with sensors.