I was fortunate to work on this project as part of a hackathon at work. I’ve always had an interest in FPV flying, and I have a pair of FatShark goggles that I’ve used to fly with a mini CMOS NTSC camera. The quality is poor, and it can be difficult to gauge your surroundings since there are no depth cues. It works, but is far from ideal or immersive.
I needed this hack to be cheap, and since stereo cameras are expensive I had to find a way to make my own using two inexpensive Chinese IP cameras I had lying around. The idea is to run the cameras over the drone’s datalink to the ground, use ffmpeg to serve each eye’s video over a websocket to the browser, and apply barrel distortion using WebGL.
Or at least that’s how it’s supposed to go. Setting aside timing issues between the left and right camera frames, with no way to reliably sync them, it actually showed real promise and produced some good initial results testing indoors.
I followed the 1/30th rule of stereography, which states that the interaxial separation (the distance between eyes, or in this case camera lenses) should be no more than 1/30th of the distance to the closest subject. With a separation of about 2.5 inches, the nearest subject should be about 75 inches away, with the background at infinity. Convergence of the two final images can then be adjusted while wearing the Rift, along with the barrel distortion required by the Rift’s optics (which is what gives the immersive feeling).
RTSP In The Browser
Browsers can’t play RTSP natively, but no worries: FFmpeg, the Swiss Army knife of video streaming and transcoding, comes to the rescue with the ability to transcode the RTSP feed into an MPEG1 stream that we can use directly with jsmpeg. In fact, someone had already created the node module node-rtsp-stream, which wraps the ffmpeg process creation and sends the data over a websocket!
In my case, I had two cameras, so I simply ran two servers and two node/ffmpeg processes, each on its own websocket:
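Something along these lines (a sketch, not my exact config; the RTSP URLs and websocket ports are placeholders):

```javascript
// server.js: one node-rtsp-stream instance per camera, each wrapping
// its own ffmpeg process and websocket server.
const Stream = require('node-rtsp-stream');

// Left eye
new Stream({
  name: 'left',
  streamUrl: 'rtsp://192.168.1.10:554/live',  // placeholder camera URL
  wsPort: 9998,
});

// Right eye, on its own websocket port
new Stream({
  name: 'right',
  streamUrl: 'rtsp://192.168.1.11:554/live',  // placeholder camera URL
  wsPort: 9999,
});
```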
Decoding MPEG1 video in the browser actually works quite well thanks to jsmpeg, which takes data in via websockets and renders each frame to a canvas element; using three.js, we can later turn the contents of that canvas into a texture. First, the decoding:
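A sketch of the client side, assuming the left-eye server above (newer jsmpeg builds expose a JSMpeg.Player constructor, which is what I’ll use here; the original took a WebSocket object directly):

```javascript
// Each eye gets its own canvas; jsmpeg decodes the MPEG1 stream from the
// websocket and draws every frame into it.
const leftCanvas = document.createElement('canvas');
document.body.appendChild(leftCanvas);

const leftPlayer = new JSMpeg.Player('ws://localhost:9998', {
  canvas: leftCanvas,
});
```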
Ready for VR: Lens Distortion
You may have noticed that when using a headset like the Rift there is a lot of distortion when viewing content that isn’t specifically designed for it, and vice versa. You’re probably familiar with the Oculus screenshots showing two rounded views of images. That’s because the lenses apply a pincushion distortion, an optical trick that makes a flat image seem immersive. To cancel the effect, a barrel distortion must be applied to the image first.
This is most easily done with a fragment shader. A shader is best described as a small program that operates on either all vertices of the 3D scene or all rendered pixel values. These programs run in parallel on the GPU for every frame (and every pixel/vertex), so they are quite handy and well suited for the task. In my case, I was interested in the fragment shader.
The fragment shader operates on pixel values, so perspective transformations, and in my case barrel distortion, can run in real time, as fast as the video stream. Here’s a fragment shader that does a simple barrel distortion, along with the accompanying vertex shader:
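Here’s a minimal version of the pair, written as three.js shader strings; the strength uniform is a made-up tuning knob, not an official Rift constant:

```javascript
// Vertex shader: pass UVs through; three.js supplies position, uv, and the
// projection/modelView matrices to a ShaderMaterial automatically.
const vertexShader = `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Fragment shader: simple radial barrel distortion around the image center.
const fragmentShader = `
  uniform sampler2D tDiffuse;
  uniform float strength;
  varying vec2 vUv;
  void main() {
    vec2 centered = vUv - 0.5;            // origin at image center
    float r2 = dot(centered, centered);   // squared radius
    vec2 warped = centered * (1.0 + strength * r2);
    vec2 uv = warped + 0.5;               // back to texture space
    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
      gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // black outside the warp
    } else {
      gl_FragColor = texture2D(tDiffuse, uv);
    }
  }
`;
```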
The video in the first step is being rendered to the canvas element by the jsmpeg client, but we need a way to get it into a WebGL texture so the fragment shader can do its work. To do this, simply use the canvas’s raw RGB image data to create a texture in three.js:
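Conveniently, three.js will accept the canvas itself as a texture’s image source and handle the pixel reads and GPU upload for us, so a sketch can be as simple as:

```javascript
// Back the texture with the jsmpeg output canvas; the pixels are
// re-uploaded whenever we flag the texture dirty (see the render loop).
const videoTexture = new THREE.Texture(leftCanvas);
videoTexture.minFilter = THREE.LinearFilter;  // avoid mipmap issues with
videoTexture.magFilter = THREE.LinearFilter;  // non-power-of-two video sizes
```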
Video data is now being sent to the GPU in the form of a texture. Now, create a new three.js material with the barrel distortion shaders:
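Roughly like so (recent three.js lets uniforms be plain { value } objects; older releases also wanted a type field):

```javascript
const material = new THREE.ShaderMaterial({
  uniforms: {
    tDiffuse: { value: videoTexture },  // the canvas-backed video texture
    strength: { value: 0.25 },          // distortion amount, tuned by eye
  },
  vertexShader: vertexShader,
  fragmentShader: fragmentShader,
});
```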
Now, simply create a flat plane geometry of arbitrary size (we’re using shaders to compose our image, so it doesn’t really matter) and build a mesh from the geometry and material:
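For example, a 2×2 plane, which happens to exactly fill the default orthographic clip space:

```javascript
// The shaders compose the image, so the plane just needs to cover the view.
const geometry = new THREE.PlaneGeometry(2, 2);
const mesh = new THREE.Mesh(geometry, material);
```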
In standard three.js parlance, let’s create a camera, scene, and renderer, and insert the three.js WebGL canvas into the document:
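A sketch of that boilerplate, using an orthographic camera since we’re just rendering a flat, full-screen quad:

```javascript
const scene = new THREE.Scene();
scene.add(mesh);

// Clip-space-sized orthographic camera: the 2x2 plane fills it exactly.
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
```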
Pulling it all together, the last thing to do is to set up the render loop. On each pass, we want to update the texture and render the WebGL frame:
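Something like:

```javascript
function render() {
  requestAnimationFrame(render);
  // Flag the texture dirty so three.js re-uploads the canvas contents,
  // picking up the newest decoded video frame.
  videoTexture.needsUpdate = true;
  renderer.render(scene, camera);
}
render();
```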
How well does it work? If you have a modern browser, you can check it out right here! Here’s an example streaming a static video file instead of a live camera (aspect ratio and chromatic aberration are not corrected in this example):
Left & Right
Now that we’ve demonstrated the ability to apply a simple shader to our video, we can move to a more complete example that handles both cameras and corrects for things like the chromatic aberration caused by the lenses. This is accomplished by drawing both video feeds to an enlarged canvas, side by side:
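A hypothetical sketch of that compositing step (rightCanvas being the second eye’s jsmpeg canvas, set up just like the left one):

```javascript
// One double-width canvas holds both eyes side by side; it becomes the
// texture source in place of a single eye's canvas.
const composite = document.createElement('canvas');
composite.width = leftCanvas.width * 2;
composite.height = leftCanvas.height;
const ctx = composite.getContext('2d');

function composeEyes() {
  ctx.drawImage(leftCanvas, 0, 0);                  // left half
  ctx.drawImage(rightCanvas, leftCanvas.width, 0);  // right half
}
```

composeEyes() then runs at the top of each render pass, before the texture is flagged dirty, with the THREE.Texture created from composite instead of a single eye’s canvas.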
I’ve also used a more complete Rift shader that handles chromatic aberration and the center spacing of the Rift’s optics and LCD screen, but the same basic concepts hold true.
I mounted my camera rig on a gimbal and was able to do a few test flights before going for a fly at Treasure Island. The link drops out in some places, but overall it worked well. The effect was certainly immersive and has given me the motivation to keep working on the project. Here’s a rather compressed video of the flight:
I have plans to use a board like Nvidia’s Jetson to run the necessary code directly on the GPU on the drone itself, sending back a composite stream suitable for viewing directly in a headset. That could mean USB 3.0 high-speed cameras, or the built-in MIPI camera interface, which allows direct frame access from the GPU.
I posted the code for my hack to GitHub.