How the Differences Between Building 360° 2D and 3D Cameras Will Shape VR Adoption
The definition of Virtual Reality has been evolving over the last couple of years. While we are all clear about what a VR headset is and what it needs to do, people are often confused about what VR cameras should be able to do. Is it 360°, or 3D, or both? There is clearly hype around 360° 2D consumer cameras right now, but the wave of 3D cameras is not far behind. To bring more clarity around why 3D is still behind, this article explains some of the differences, and especially the challenges, that make building a 3D camera much more difficult than a 360° 2D camera.
2D vs. 3D 360°: More than Just a Couple of Lenses
First of all, the number of camera modules and lenses needed for 2D 360° is at a minimum two and can go up to four, but 3D needs far more than that. Capturing 3D in one direction requires at least two lenses, because we are reproducing what the human eyes do in stereo. That is why the most you can achieve in 3D with two wide-angle lenses is 180°, and full 360° takes six. It is possible to create 360° 3D with four lenses, two facing one direction and two facing the opposite, but then you sacrifice the sides, which will only be in 2D. Many 3D cameras use eight or more lenses to create 360° 3D, because the more lenses you use, the better resolution you can achieve through stitching.
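The lens arithmetic above can be sketched in a few lines. This is a simplified model of my own, not any manufacturer's spec: it assumes each stereo pair of wide-angle lenses covers a fixed sector (`pair_fov_deg`), and simply counts how many pairs are needed to tile the target coverage.

```python
import math

def lenses_for_stereo(coverage_deg, pair_fov_deg):
    """Minimum lens count for stereo coverage, assuming each
    stereo pair of lenses covers a sector of pair_fov_deg."""
    pairs = math.ceil(coverage_deg / pair_fov_deg)
    return 2 * pairs  # two lenses (left eye + right eye) per pair

# One very wide pair covering 180° -> 2 lenses, 180° of stereo
print(lenses_for_stereo(180, 180))  # 2
# Three pairs at 120° each tile the full circle -> 6 lenses
print(lenses_for_stereo(360, 120))  # 6
# Narrower 90° pairs push the count to 8 lenses, as in many rigs
print(lenses_for_stereo(360, 90))   # 8
```

In practice the sectors overlap so the stitcher has material to blend, which is one reason real rigs often carry more lenses than this minimum suggests.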
Straightforward vs. Additional Alignment: Manufacturing 2D and 3D
Second, 360° 2D cameras have a more straightforward manufacturing process than any 3D camera for Virtual Reality. In assembly, two lenses are mounted on sensors back to back, and the key step is the mechanical calibration that precisely aligns them. 3D VR cameras go through the same step in manufacturing, but then add a software calibration process on top of it to guarantee a correct 3D experience. Every single 3D camera has to go through that process in order to generate the 3D data necessary for playback.
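To make the idea of per-unit software calibration concrete, here is a deliberately simplified sketch (the data structures and angles are hypothetical, not any vendor's format): the factory measures the residual misalignment between the two lenses of a stereo pair, stores it, and playback software applies the inverse offset so both views share one reference frame.

```python
from dataclasses import dataclass

@dataclass
class LensPose:
    """Orientation of one lens, in degrees (illustrative only)."""
    yaw: float
    pitch: float
    roll: float

def calibration_offset(left: LensPose, right: LensPose) -> LensPose:
    """Residual misalignment of the right lens relative to the left,
    as measured once per unit on the factory line."""
    return LensPose(right.yaw - left.yaw,
                    right.pitch - left.pitch,
                    right.roll - left.roll)

def apply_calibration(pose: LensPose, offset: LensPose) -> LensPose:
    """Subtract the stored offset so both eyes line up at playback."""
    return LensPose(pose.yaw - offset.yaw,
                    pose.pitch - offset.pitch,
                    pose.roll - offset.roll)

left = LensPose(0.0, 0.0, 0.0)
right = LensPose(0.5, -0.2, 0.1)       # slightly misaligned from assembly
offset = calibration_offset(left, right)
corrected = apply_calibration(right, offset)
print(corrected)                       # right lens back in the shared frame
```

Real pipelines work with full rotation matrices and lens distortion models rather than three angles, but the principle is the same: the correction is measured per device and baked into its stored calibration data.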
Stabilizing Multiple Points: Image Processing in 2D vs. 3D
Finally, the image processing needs of 3D cameras are much higher than those of 360° 2D cameras because of stabilization, high dynamic range, and calibration. For 360° 2D, you need stitching software to combine the feeds from the two lenses, and that happens mostly in real time. For 3D cameras, you need more than just stitching, since you always have two reference points to consider instead of the single one in 2D. That means when you stabilize 3D, you are not stabilizing one point, but multiple points across images that shake differently. This creates a huge challenge: for a correct 3D effect and depth, you need to keep the reference points at the same distance from each other. Another impact is on high dynamic range, because the two lenses capture with a slight offset.
Even though the two lenses record the same direction, the light they see can differ because of that offset. Besides those two, you also have to deal with calibration, which we briefly discussed above, and not just in manufacturing. What happens when the camera falls on the floor and the lenses get knocked out of alignment?
Those are just a few reasons why building 3D cameras is so difficult and takes time to get right. The smallest error can have a huge impact on the viewer, leaving behind an experience that keeps people away from ever adopting Virtual Reality. That is why ramping up 3D cameras has taken an incredibly long time compared to all the 2D 360° cameras on the market. All we can hope is that computer vision technology for 3D keeps getting better over the next couple of years and finally takes true VR content to the next level.