This week I got my hands on an Oculus Rift. Our department had ordered a couple of these, but due to the shortage of Dev Kit 1s we got bumped to wait for Dev Kit 2, which has not shipped yet. Luckily, a group in another college asked me to work on a PhD project of theirs: they needed a Unity and AR expert to build prototypes, and they had a Rift. It didn't take much convincing.
I started with the Tuscany demo that ships with the Rift. I merged in the Vuforia QCAR functionality for targets and cameras, and made some adjustments to the camera setup so that augmented targets show up in both cameras and both cameras apply the Oculus distortion to 3D objects and backgrounds. 3D convergence is really good for the 3D objects, but something wasn't right with the background: I wasn't getting stereoscopy from it. Vuforia's "webcam" option projects the entire camera feed across both lenses rather than splitting and duplicating it with per-eye offsets.
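The fix I was after, roughly: instead of stretching one camera frame across both lenses, give each eye its own copy of the frame and shift the copies horizontally in opposite directions to create disparity. This is a minimal sketch of that idea in plain Python (not Unity/Vuforia code; the function name and frame representation are just for illustration):

```python
def per_eye_views(frame, offset_px):
    """Duplicate a camera frame for each eye, shifting each copy
    horizontally by offset_px in opposite directions to fake
    stereo disparity. `frame` is a list of rows of pixel values."""
    width = len(frame[0])

    def shifted(sign):
        out = []
        for row in frame:
            if sign > 0:
                # shift right: pad the left edge with the edge pixel
                out.append([row[0]] * offset_px + row[:width - offset_px])
            else:
                # shift left: pad the right edge with the edge pixel
                out.append(row[offset_px:] + [row[-1]] * offset_px)
        return out

    # one eye gets the frame shifted left, the other shifted right
    return shifted(-1), shifted(+1)
```

In Unity terms this would correspond to rendering the background texture once per eye with a small horizontal UV offset, rather than Vuforia's single full-width background quad.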
This pic is of the Tuscany demo with Vuforia AR working, and a sample target (the crate) being recognized properly.
On the hardware side, I am just running a standard webcam (in this case an Adesso wireless webcam), strapped to the top of my Rift with rubber bands and some cardboard for positioning.