I started filming in 360 back in 2007 with parabolic mirrors and DSLR cameras. Back then, it was mostly a hobby because I liked the warped images it produced. As technology progressed, I began experimenting with GoPro cameras and created a few custom 3D-printed 360 rigs. I was able to design a stereo 3D 360 camera in 2015, before these were commercially available. There was no off-the-shelf software that could render stereo 3D 360, so I had to develop my own pipeline for rendering the videos, which used PTGui templates and ffmpeg to combine the stitched images into a video file.

I created several prototype 360 rigs, and each iteration improved considerably on the one before. My first rigs were literally a ball of cameras stuck together that I carried around on a stick. I quickly moved to a 3D-printed prototype that was much more stable and able to take much better shots. The size of the camera matters because ideally all images are captured from a single point; when the cameras are displaced from each other, the resulting parallax causes stitching errors. Some of these errors can be minimized using homography transformations, but this becomes difficult when the displacement is large. Another factor to consider is coverage. You want cameras with a large overlapping region in order to create multiple control points for stitching. The minimum overlap for 360 is around 5%, and larger is recommended, but this has to be balanced against the fact that too much overlap reduces effective resolution. For stereo 360, the total FOV across all cameras needs to be greater than 720 degrees, because you need full 360-degree coverage to create an equirectangular image for each eye.
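The coverage math above is easy to sketch with a back-of-the-envelope calculation. The camera count and per-lens FOV below are illustrative numbers, not the specs of any particular rig:

```python
def ring_coverage(num_cameras: int, hfov_deg: float):
    """For cameras spaced evenly in a horizontal ring, return the
    combined horizontal FOV, the overlap at each seam (degrees), and
    the overlap as a fraction of each lens's FOV."""
    spacing = 360.0 / num_cameras        # angle between optical axes
    overlap = hfov_deg - spacing         # region shared by neighbors
    return num_cameras * hfov_deg, overlap, overlap / hfov_deg

# Six lenses with a 120-degree horizontal FOV each:
total, overlap, frac = ring_coverage(6, 120.0)
print(total)    # 720.0 degrees combined -- right at the stereo threshold
print(overlap)  # 60.0 degrees of overlap at every seam
print(frac)     # 0.5 -- far more than the ~5% stitching minimum
```

This also shows the trade-off mentioned above: adding lenses (or widening them) buys overlap and control points, but every degree of overlap is a degree of the sensor not contributing unique resolution.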

I also developed stereo 180 cameras using a pair of GH4 cameras. I chose these cameras because, at the time, they were the cheapest I could find that could be genlocked. Genlocking drives each camera's internal clock from a shared master clock so that frames are exposed in exact synchronization. At 30fps the time between frames is about 33ms (about 17ms at 60fps), and without genlocking the video feeds can be off by up to half a frame, roughly 17ms or 8ms. This is critical for high speed action video where you want to capture accurate 3D information.
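The timing argument works out like this; a minimal sketch, where the skydiver speed is just an illustrative number:

```python
def desync_error(fps: float, subject_speed_mps: float):
    """Frame interval, worst-case offset between two free-running
    cameras (half a frame), and how far a subject moves in that time."""
    interval_ms = 1000.0 / fps
    desync_ms = interval_ms / 2.0          # worst case without genlock
    drift_m = subject_speed_mps * desync_ms / 1000.0
    return interval_ms, desync_ms, drift_m

# A skydiver near terminal velocity (~55 m/s) filmed at 30 fps:
interval, desync, drift = desync_error(30.0, 55.0)
print(round(interval, 1))  # 33.3 ms between frames
print(round(desync, 1))    # 16.7 ms worst-case offset
print(round(drift, 2))     # 0.92 m of subject motion between the two eyes
```

Nearly a meter of apparent displacement between the left and right eye views is far larger than any real stereo disparity, which is why fast action is unwatchable without genlock.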

Besides creating custom 360 cameras, I also experimented with live streaming stereo VR. In 2016, I created my own encoder/decoder for live VR streamed to a website and viewable with Google Cardboard. This tech was perhaps a little early for the marketplace, and I was never able to find clients who could see its potential. There were just too few headsets, and it was not practical for most applications.

There are interesting differences between mono 360 and stereo 360, like the fact that it is impossible to get correct stereo convergence across the entire FOV, a consequence of a mathematical result called the “hairy ball theorem”: there is no continuous, everywhere-nonzero tangent vector field on a sphere, so the direction of the stereo baseline must vanish somewhere (typically at the zenith and nadir). While mono 360 is good enough for most cases, it lacks the perception of depth that we are used to. Stereo 360 improves on this by adding depth, but the depth is only correct at a single convergence distance, so it does not quite look right either. To achieve true depth, we would need something closer to a light field; at least currently, this is not feasible due to the amount of data required and other technological limitations.
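To make the hairy-ball connection concrete, here is a small sketch of the usual omni-directional-stereo construction, where the eye offset for a view direction d is taken along d × ẑ. Its length works out to the cosine of the latitude, so the stereo baseline necessarily collapses when looking straight up or down. The function name is my own, not from any particular renderer:

```python
import math

def baseline_strength(lat_deg: float, lon_deg: float) -> float:
    """Length of the (unnormalized) eye-offset direction d x z_hat for
    a view direction d at the given latitude/longitude. The hairy ball
    theorem guarantees that any continuous choice of baseline vanishes
    somewhere; this standard one vanishes at the poles."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    d = (math.cos(lat) * math.cos(lon),   # view direction on the sphere
         math.cos(lat) * math.sin(lon),
         math.sin(lat))
    bx, by = d[1], -d[0]                  # d x z_hat (z component is 0)
    return math.hypot(bx, by)             # equals cos(latitude)

print(round(baseline_strength(0.0, 45.0), 6))   # 1.0 at the horizon: full stereo
print(round(baseline_strength(90.0, 0.0), 6))   # 0.0 at the zenith: no stereo
```

This is why stereo 360 footage typically fades to mono at the top and bottom of the frame rather than showing wrong depth there.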

Over the years, I’ve been able to do some incredible shoots. I filmed for The Weather Channel in Hawaii, I filmed a choreographed skydiving jump, and I’ve filmed weddings, rap videos, horror films, and lots of shots from my travels. These are a few shots from videos I’ve published.