
Oculus Rift Prototypes

This is my updated AR Oculus rig. It uses two 1.8 mm lenses with a 170-degree diagonal FOV (close to the Rift's native 111.477-degree FOV). This rig is MOBILE: it attaches to a laptop and requires no wall plug. I've also rigged it with a Leap Motion gesture device.

This prototyping took place in late 2013.

Here you can see the rig attached to a laptop, running an Oculus/Leap basketball demo I made. In this image the rig is recognizing the image target and displaying its AR contents on screen. (No wall plug!)

This was my first Oculus build. It let me prove out the pipeline from Unity to publishing on the device. I didn't realize that the background image needed to be stereoscopic too, so that was the big lesson here. Vuforia's webcam feed for Unity's play mode doesn't natively allow more than one camera stream, so I had to get creative to get binocular vision.
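The workaround, roughly, was to skip the AR plugin for the background and drive each eye's backdrop from its own WebCamTexture. Here's a minimal sketch of the idea; the quad setup, resolution, and device ordering are placeholders rather than my original code:

```csharp
using UnityEngine;

// Sketch: feed each eye its own webcam stream, bypassing the single
// camera stream Vuforia provides in play mode.
public class StereoWebcamBackground : MonoBehaviour
{
    public Renderer leftQuad;   // background quad seen only by the left eye camera
    public Renderer rightQuad;  // background quad seen only by the right eye camera

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length < 2)
        {
            Debug.LogWarning("Need two cameras for a stereoscopic background.");
            return;
        }

        // One live texture per physical camera; which device is "left" is
        // an assumption to check on the actual rig.
        var leftFeed  = new WebCamTexture(devices[0].name, 640, 480, 30);
        var rightFeed = new WebCamTexture(devices[1].name, 640, 480, 30);

        leftQuad.material.mainTexture  = leftFeed;
        rightQuad.material.mainTexture = rightFeed;

        leftFeed.Play();
        rightFeed.Play();
    }
}
```

Each quad sits on its own layer, and each eye camera's culling mask includes only its own quad, so the left eye never sees the right feed and vice versa.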

This build fixed the background issue: it's now stereoscopic, but the offsets aren't quite right, so there's no background convergence yet. The 3D objects, though, are crystal clear and properly stereoscopic. In this demo I ported in part of my paint program and an augmented menu system from my "look UI" system. You can see the holdcounter debug text here from my touch-and-hold testing.
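The touch-and-hold core of the look UI is essentially a dwell timer on a forward raycast from the head camera. Something along these lines; the dwell time and the message name are placeholders, not the original implementation:

```csharp
using UnityEngine;

// Sketch of gaze-and-hold: activate whatever you stare at long enough.
public class GazeHold : MonoBehaviour
{
    public float holdTime = 1.5f;   // seconds of steady gaze to trigger (a guess)
    float holdCounter;              // the debug value visible in the screenshot
    Transform lastHit;

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, 10f))
        {
            if (hit.transform == lastHit)
                holdCounter += Time.deltaTime;   // still looking: keep counting
            else
                holdCounter = 0f;                // gaze moved: restart the count
            lastHit = hit.transform;

            if (holdCounter >= holdTime)
            {
                // "OnGazeActivate" is a hypothetical handler name.
                hit.transform.SendMessage("OnGazeActivate",
                                          SendMessageOptions.DontRequireReceiver);
                holdCounter = 0f;
            }
        }
        else
        {
            lastHit = null;
            holdCounter = 0f;
        }
    }
}
```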

Now it's getting interesting: I have my fingerpaint touch interface present, my "look UI" working, AND Leap Motion hooked up so that I can move UI elements around using my hand to point and pick them up. This uses Leap Motion's "Leap Selection Controller", which works pretty well, but I think I'd like to make my own for manipulating objects in 3D space.
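The basic grab interaction reduces to reading the hand each frame and carrying the nearest object while the fist is closed. A rough sketch against the 2013-era Leap C# API; the coordinate scale, the fist heuristic, and the pickup radius are all guesses to tune by hand:

```csharp
using UnityEngine;
using Leap;

// Sketch of a minimal grab-and-carry interaction for UI elements.
public class LeapGrab : MonoBehaviour
{
    Controller controller = new Controller();
    Transform held;                 // object currently being carried
    const float scale = 0.001f;     // Leap reports millimeters; Unity uses meters

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.IsEmpty) { held = null; return; }

        Hand hand = frame.Hands[0];
        Vector palm = hand.PalmPosition;
        // Map from Leap space (attached below this hand-anchor transform).
        Vector3 palmPos = transform.TransformPoint(
            new Vector3(palm.x, palm.y, -palm.z) * scale);

        // V1 only reports extended fingers, so "fewer than two visible
        // fingers" is a crude closed-fist test.
        bool grabbing = hand.Fingers.Count < 2;

        if (grabbing && held == null)
        {
            Collider[] hits = Physics.OverlapSphere(palmPos, 0.05f);
            if (hits.Length > 0) held = hits[0].transform;
        }
        else if (!grabbing)
        {
            held = null;   // open hand drops the object in place
        }

        if (held != null) held.position = palmPos;
    }
}
```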

This build is the "Tuscany" demo from Oculus, but I've added Leap Motion support (you can pick up and drop items within the world) and AR targets that spawn my "look UI", which can be operated by looking at it and can be moved around and stuck to parts of the world using Leap Motion gestures (touch and move).
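Spawning the look UI off a target is mostly plumbing around the AR plugin's tracking callback. Here's the shape of it using Vuforia's ITrackableEventHandler (class and namespace details varied between Vuforia versions, and the prefab is a made-up name):

```csharp
using UnityEngine;

// Sketch: attach to an image target to spawn the look-UI when tracking starts.
public class LookUISpawner : MonoBehaviour, ITrackableEventHandler
{
    public GameObject lookUIPrefab;   // assumed prefab holding the menu
    GameObject spawned;

    void Start()
    {
        var trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED;

        if (found)
        {
            if (spawned == null)
                spawned = (GameObject)Instantiate(
                    lookUIPrefab, transform.position, transform.rotation);
            spawned.SetActive(true);
        }
        else if (spawned != null)
        {
            spawned.SetActive(false);   // hide until the target reappears
        }
    }
}
```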

I was making great progress on the Oculus builds, but hit a wall with the AR software I was using. Apparently Qualcomm's Vuforia doesn't publish to Windows, so none of my demos could be shared or demoed without the Unity editor running in play mode. So I made the leap to Metaio! This toolkit has some amazing features, though from my initial research its tracking isn't as good out of the box as Vuforia's. I've complained on the Vuforia forums about the missing Windows support, so hopefully they'll step up and add a Windows publishing arm soon; I really liked that AR toolkit.

I figured out how to get multiple targets working with Metaio. It's kind of clunky (you need to edit XML files to set up your targets), but unlike Vuforia you don't need to register your targets with a server or generate trackable datasets; it builds them automatically from the image, which is awesome. The tiger here is an animated character. I solved the background convergence problem with this build, but discovered two side effects of my solution: I need to slice off part of the right image to get convergence, and the background is zoomed in to fill the whole Oculus viewport. So it's like walking around with binoculars on.
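The slice itself is just a UV crop on the right eye's background material. A minimal sketch, with the crop fraction as a by-eye tuning value rather than a derived constant:

```csharp
using UnityEngine;

// Sketch of the convergence fix: discard a strip of the right eye's feed
// by shifting and shrinking its UVs.
public class ConvergenceCrop : MonoBehaviour
{
    public Renderer rightQuad;
    [Range(0f, 0.3f)] public float crop = 0.1f;  // fraction of width to discard

    void Update()
    {
        // Dropping the left edge of the right image shifts its content
        // toward the left image, which is what brings the feeds into
        // convergence.
        rightQuad.material.mainTextureOffset = new Vector2(crop, 0f);
        rightQuad.material.mainTextureScale  = new Vector2(1f - crop, 1f);
        // Side effect: the remaining image stretches to fill the whole
        // quad, which is exactly where the "binoculars" zoom comes from.
    }
}
```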

This test fixes the zoomed-in background feed, but to achieve a 1:1 ratio with my real-world vision scale I needed to size down the projections, so the Oculus viewport is no longer completely filled. What I noticed, though, was that I could function very well with this AR Oculus rig on: I could pick up and use objects and type on the keyboard with no problem. I just need better cameras for my rig to fill the viewport and get the full Oculus experience.
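The sizing-down was a live calibration rather than a computed number: I'd shrink both background quads while wearing the rig until objects matched their real-world size. A sketch of that kind of tuning helper, with arbitrary keys and speed:

```csharp
using UnityEngine;

// Sketch: resize both eye quads at runtime until the feed reads as 1:1
// with the real world.
public class BackgroundScaleTuner : MonoBehaviour
{
    public Transform leftQuad;
    public Transform rightQuad;
    public float speed = 0.2f;   // scale units per second while a key is held

    void Update()
    {
        float delta = 0f;
        if (Input.GetKey(KeyCode.Equals)) delta =  speed * Time.deltaTime;
        if (Input.GetKey(KeyCode.Minus))  delta = -speed * Time.deltaTime;
        if (delta == 0f) return;

        leftQuad.localScale  += Vector3.one * delta;
        rightQuad.localScale += Vector3.one * delta;
        // Shrinking below viewport-filling size trades immersion for
        // correct scale; wider-FOV cameras would allow both at once.
    }
}
```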
