Owlchemy Labs, developer of the popular VR title Job Simulator, has introduced a new method of mixed reality (MR) called Depth-based Realtime In-App Mixed Reality Compositing (DRIMRC). Using a ZED stereo depth camera, green-screen footage of users can be composited directly inside the VR experience's engine via a custom shader and plugin.
Until now, MR has typically been accomplished in video editing software by overlaying camera footage onto engine footage. This approach, however, has a significant performance cost and relies on third-party software.
Owlchemy hopes to bring MR out of its infancy and use it to properly demonstrate what it feels like to be in VR. DRIMRC can sort green-screen users into complex environments with per-pixel depth, as opposed to the flat, single-layer compositing of the video editing approach. This means that, for example, a user could reach a hand over a cabinet while the rest of their body is correctly rendered behind it. What's more, DRIMRC requires no extra software to stream and no second machine to composite the MR footage.
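The per-pixel occlusion described above can be sketched as a depth test: for each pixel, whichever source (camera or engine) is closer to the viewpoint wins. The following is a minimal illustrative sketch in Python rather than shader code; the function names, data layout, and green-screen mask are assumptions for clarity, not Owlchemy's actual implementation.

```python
# Hypothetical per-pixel depth compositing: the nearer surface wins.
# In a real system this would run in a GPU shader; everything here is
# an illustrative sketch, not DRIMRC's actual code.

def composite_pixel(camera_rgb, camera_depth, scene_rgb, scene_depth, is_green):
    """Pick the camera or engine colour based on which surface is closer.

    camera_depth / scene_depth: distance from the viewpoint (e.g. metres).
    is_green: True if the green-screen key removed this camera pixel.
    """
    if is_green:
        return scene_rgb            # keyed-out background: always show the scene
    if camera_depth < scene_depth:
        return camera_rgb           # the person is in front of the virtual surface
    return scene_rgb                # a virtual object occludes the person


def composite_frame(camera_rgb, camera_depth, scene_rgb, scene_depth, green_mask):
    """Apply the depth test to every pixel of same-sized 2D frames."""
    height, width = len(camera_rgb), len(camera_rgb[0])
    return [
        [
            composite_pixel(camera_rgb[y][x], camera_depth[y][x],
                            scene_rgb[y][x], scene_depth[y][x],
                            green_mask[y][x])
            for x in range(width)
        ]
        for y in range(height)
    ]
```

This is exactly the cabinet example: where the hand's camera depth is smaller than the cabinet's engine depth, the hand is drawn; elsewhere the cabinet occludes the body.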
Perhaps the most interesting feature for businesses, content creators and streamers: Owlchemy's MR system lets users stand in the VR scene without wearing an HMD. A presenter could address an audience from inside a VR environment without their vision being obstructed by a headset. It also means that multiple people can appear in MR at once.
The company will hold a private beta of DRIMRC with a select group of developers and content creators, for which you can sign up here.