I began experimenting with volumetric video capture back in 2016 using a multi-Kinect setup and Brekel. This allowed me to calibrate the Kinect sensors to a common world coordinate system, and then record point clouds that could later be merged and stitched together. Recording point cloud data on 5 machines produces a huge amount of data for even a 1-minute video, and requires a lot of processing. There are more modern tools, such as DepthKit and EF EVE, that do a better job at meshing and color matching, and the newer Azure Kinect sensor provides a substantial increase in resolution.
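The core of the merge step is simple: each sensor's calibration yields a rigid sensor-to-world transform, and applying those transforms puts every cloud in the shared frame so they can be stacked. Here is a minimal sketch of that idea with NumPy; the clouds and transforms are hypothetical placeholders, not actual Brekel calibration output.

```python
import numpy as np

def to_world(points, extrinsic):
    """Apply a 4x4 sensor-to-world transform to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # homogeneous (N, 4)
    return (homo @ extrinsic.T)[:, :3]

def merge_clouds(clouds, extrinsics):
    """Bring each sensor's cloud into the shared world frame and stack them."""
    return np.vstack([to_world(c, T) for c, T in zip(clouds, extrinsics)])

# Toy example: two sensors, the second offset 1 m along X (made-up values).
cloud_a = np.array([[0.0, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 1.0  # translation of sensor B relative to the world origin
merged = merge_clouds([cloud_a, cloud_b], [T_a, T_b])
```

In a real rig the extrinsics come from the calibration step, and the merged cloud would then be meshed and color-matched downstream.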

Here is a piece I made back in 2015 using a Kinect camera and an optical theremin.