Two weeks ago we spoke with Brian Peiris, who created a compelling reason to believe in the future of content creation in VR. He built a program called RiftSketch, which allows users to code in virtual reality. A few days after speaking with Brian, I saw a post by Amir Khella about his newest project: GrayBox.

GrayBox is a prototyping tool that allows users to build scenes, landscapes, and environments right in VR. There is no need to keep taking your VR headset off and putting it back on. Using an Xbox controller and an Oculus Rift or Samsung Gear VR, you can build anything you want in virtual reality, regardless of technical knowledge or skill.

Like RiftSketch, GrayBox takes a relatively simple concept and makes the experience amazing and intuitive. Unlike RiftSketch, GrayBox can be used by completely non-technical people. VR has historically been an extremely technically oriented industry. As Amir points out, it is very important to veer towards consumer-ready products in order to grow the VR community.

This week I was very fortunate to be able to speak with Amir and find out some of the things he learned while building GrayBox and where the product – and VR in general – will go in the months and years to come.

Q: What inspired you to build GrayBox?

For every emerging platform, there is always this trend where it starts by catering to highly technical early adopters. Eventually, slightly less technical people start using the technology, and finally mainstream non-technical users can access it. In my opinion, the inflection point of mass adoption hits when 80% of people are able to build content for the platform. Just look at the web: you used to need a computer science background to build anything, but today anyone, including a child, can make anything on the web.

Furthermore, Unreal just announced VR authoring tools, as did Unity. My concern is with the people who don’t know anything about 3D modelling but still want to be content creators in VR. It’s like the web – people want to make websites and post content, but they don’t want to touch the complex technology layer. That’s what’s happening with VR. My hope is that GrayBox evolves into the tool for the 80% of people who don’t want to dive into that technical layer.

GrayBox User Interface

GrayBox is targeting the people that want to sit down for 3 hours and prototype an idea. I want them to be able to build a prototype and then send that off to a developer. Right now, the learning curve is just too high. You need to fully learn a game engine such as Unreal or Unity just to build any content.

That’s where GrayBox finds its use. It doesn’t compete with Unreal or Unity – it complements the workflow. It’s for someone without the technical skillset who wants to build and explore VR content.

Q: How did you build GrayBox?

GrayBox was built in 3 weeks using Unity and C#. I had a 3D graphics developer help me build it. I have a background in C#, animation, and graphics, but building a tool was a different beast than building a normal scene or experience. There was a lot more to it than just modelling or animation. This was my first year using Unity, so I had a lot to learn.

Over those 3 weeks I created a lot of non-interactive prototypes in Unity. I placed fake objects on the platform and in the scene, then put my headset on and tried them out.
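To make that workflow concrete, here is a minimal Unity C# sketch – not GrayBox source, with illustrative field names – of the kind of non-interactive prototype Amir describes: scatter placeholder primitives around a scene so a rough layout can be reviewed in the headset before any real assets exist.

```csharp
using UnityEngine;

// Illustrative sketch only, not GrayBox code. Scatters stand-in cubes so a
// rough layout can be checked in the headset.
public class PlaceholderLayout : MonoBehaviour
{
    public int objectCount = 10;   // hypothetical: number of stand-in objects
    public float radius = 5f;      // hypothetical: spread around the origin

    void Start()
    {
        for (int i = 0; i < objectCount; i++)
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            Vector2 p = Random.insideUnitCircle * radius;
            cube.transform.position = new Vector3(p.x, 0.5f, p.y);
        }
    }
}
```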

Ironically, one of the biggest challenges of building GrayBox was the very problem we were trying to solve in the first place: a lot of time was wasted tweaking things and taking the headset on and off.

Q: What were some interesting and challenging interface problems you found while building GrayBox?

High-level libraries for creating interactivity and capturing user behavior in VR either do not exist or are fragmented. In an ideal world you would be able to drag a gamepad library into your project and hook it up to events in your application. I was looking for prewritten functions for specific inputs from my gamepad, and those simply did not exist in a good form.
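As a rough illustration of the missing layer Amir describes – not an existing library and not GrayBox’s code – the sketch below polls Unity’s built-in input axes each frame and re-exposes them as named C# events that application code could subscribe to. The axis and button names are Unity’s Input Manager defaults and would likely need remapping per device.

```csharp
using System;
using UnityEngine;

// Hypothetical sketch of an event-style gamepad wrapper.
public class GamepadEvents : MonoBehaviour
{
    public event Action<Vector2> OnMove;   // left-stick movement each frame
    public event Action OnConfirm;         // "Submit" button pressed this frame

    void Update()
    {
        Vector2 stick = new Vector2(Input.GetAxis("Horizontal"),
                                    Input.GetAxis("Vertical"));
        if (stick.sqrMagnitude > 0.01f && OnMove != null)
            OnMove(stick);

        if (Input.GetButtonDown("Submit") && OnConfirm != null)
            OnConfirm();
    }
}
```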

Another challenge was cross-platform issues. We developed GrayBox for the Samsung Gear VR and the Oculus Rift at the same time. While the tool itself didn’t differ much between the two platforms, supporting two different controllers was complicated. Between the Xbox controller and the Gear’s touchpad, there were many differences, such as inverted axes and button presses.
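One hedged way to paper over that kind of difference – an assumption about the approach, not GrayBox’s actual implementation – is a thin read layer that flips the affected axis per platform:

```csharp
using UnityEngine;

// Hypothetical cross-platform read layer; the sign flip is an assumed fix for
// the inverted-axis issue mentioned above.
public static class CrossPlatformStick
{
#if UNITY_ANDROID
    const float VerticalSign = -1f;  // assumed: Gear VR build reports the axis inverted
#else
    const float VerticalSign = 1f;   // Xbox controller on the Rift PC build
#endif

    public static Vector2 Read()
    {
        return new Vector2(Input.GetAxis("Horizontal"),
                           VerticalSign * Input.GetAxis("Vertical"));
    }
}
```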

Samsung Gear VR’s touchpad

Even if you’re tech savvy, figuring out input is difficult! What is an axis? I understand my gamepad has a joystick, but it’s not 1-to-1 in Unity. It isn’t drag and drop – there aren’t any libraries to plug in. The input took us a long time to figure out – much longer than we thought it would.

That’s another thing – input. Reticle selection in menus is not standardized. I want to be able to drag and drop that into my project. Unity released a bunch of VR examples that were excellent, but figuring out how to use those examples in my own project was not easy. They weren’t modules; they were customized for those specific examples. I had to reverse engineer those projects and figure out how they worked. There aren’t any best practices I can just drag and drop.
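For readers wondering what reticle selection involves, here is a minimal gaze-selection sketch along the lines of Unity’s VR samples – an illustration, not the code Amir reverse engineered: cast a ray from the head camera each frame and see what it lands on.

```csharp
using UnityEngine;

// Minimal gaze/reticle sketch; a real menu would highlight the hit object and
// confirm on a button press or a dwell timer.
public class GazeReticle : MonoBehaviour
{
    public Camera headCamera;       // assign the VR camera in the Inspector
    public float maxDistance = 10f;

    void Update()
    {
        Ray gaze = new Ray(headCamera.transform.position, headCamera.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, maxDistance))
            Debug.Log("Reticle over: " + hit.collider.name);
    }
}
```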

Overall it went pretty smoothly compared to other projects I have been involved with. We ended up writing a lot of code for GrayBox, but it was mostly core interaction.

Q: Why did you choose to use a gamepad as a controller rather than something like a Leap Motion camera or touch controllers?

We wanted to design something for people who have a VR headset right now. 80% of people with headsets don’t have motion tracked controllers. I got my Razer Hydras the week before we finished the first beta of the project though. Once I integrated them into Unity, it started to make a lot of sense why Unreal was using Valve’s controllers to do certain things. Tracked controllers are awesome but they won’t be out this year.

 

Razer Hydra Controller

GrayBox needed to be something for the 80% of us without those controllers. As for keyboards – am I going to interrupt your VR experience just to input text? No, and you can’t look at your keyboard either. That’s why we chose gamepads.

Q: What’s version 2 of GrayBox?

We have a backlog of 6 or 8 features that we want to implement. Most importantly, we want to let you import any asset package you have into GrayBox, including downloading packages right from the Unity Asset Store. For example, if you wanted to build an apartment, you could download an apartment asset package and start laying out an apartment in VR without ever taking off your headset. Right away you have this really exciting use case of worldbuilding or apartment building within VR.
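As a sketch of what runtime import could look like – assuming the package has been exported as a Unity AssetBundle, which is my assumption rather than GrayBox’s announced approach – loading and placing an asset might look like this:

```csharp
using UnityEngine;

// Hypothetical import sketch; bundlePath and prefabName are placeholders.
public class PackageImporter : MonoBehaviour
{
    public string bundlePath;    // e.g. a downloaded apartment asset bundle
    public string prefabName;    // e.g. "Sofa" inside that bundle

    void Start()
    {
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        if (bundle == null) return;

        GameObject prefab = bundle.LoadAsset<GameObject>(prefabName);
        if (prefab != null)
            Instantiate(prefab, Vector3.zero, Quaternion.identity);
    }
}
```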

There are a few areas that interest me for future updates: placing text and images, and text input. Typing or speaking text is an important problem. Importing images and text onto walls or objects will be extremely important. Animation – interacting with and animating objects – will be interesting as well.

Q: How do you think input will evolve in the future?

It will take a lot of training on the brain side. People will evolve (2 generations down) to use their brains directly with computers. There will be a parallel reality that people can switch between. If you think about your keyboard, it stands between your intent and your machine. If I can directly push that thought into a machine, why do I need that keyboard? The only purpose of my hands is to achieve the intent of my brain. I don’t need that intermediate action if we optimize inputs.

Leap Motion hand tracking

If we start thinking about hand controllers, we’re just mapping real life 1-to-1 into VR. Think of The Matrix, where people are totally immersed in VR. Think about your brain – you want to move something in life. Your brain wants to move an apple, but your hands are accomplishing the intent. Can you make an intent solution rather than an action solution? These are some of the things I’ve been thinking about with input. Everyone is trying to mirror real life in VR, but I think people will eventually use their brains to move things in the virtual world and just skip the "input" process.

Q: How do you think the VR industry will evolve?

Everyone is working very hard on figuring out the potential for VR. It’s super interesting and important. But I think 360 videos are not really VR. A 360 video is basically a 180 video, because 95% of the time you’re looking in front of you. A lot of people are wasting their time rendering the half of each scene that nobody will ever see. We have to figure out how we’re going to watch VR movies – standing, sitting, on swivel chairs? If VR can track you in a room, why would you stand up but not walk? It introduces some interesting problems.

HTC Vive’s motion tracking setup instructions

Light-field is amazing and much better than "flat circle videos", but we need to figure out how we can put users into an immersive video experience without having to render a full 360 video.

Another problem I see is that we’re focusing too much on games. That’s not a mistake, it’s just an early adopter problem. The biggest challenge to VR is the word “gaming”.

We have already had several emerging technical platforms – the internet, smartphones – and we know what that tipping point looks like when it hits mainstream. The desirability of the device is important; just look at the iPhone. It became a desirable product because of design and ergonomic factors.

For VR, it’s still too early to start optimizing for all the mainstream factors. VR feels like Compaq did 20 years ago. Once VR looks like a cool pair of Ray-Bans instead of a lame screen strapped to your head, it becomes a mainstream product. Making the form factor look good and giving it that desirability hasn’t seen much work yet.

It’s tricky – do you put up with looking like a freak because you’re having so much fun? It’s not easy!

Q: Any last words on VR in general?

VR excites and scares me in many ways. People in their mid-20s are so lucky because they are witnessing the birth and maturation of 3 different platforms: web, mobile, and now VR. It’s very rare for someone in their lifetime to see the birth and maturation of 3 huge platforms.

We are living in very exciting times right now. I enjoy everything about it. We’re all early stage and everyone is sharing. There are a lot of people standing on the sidelines waiting to see if it’s mainstream yet. Everyone who is creating right now is adding to the knowledge base of this huge upcoming industry. It’s exciting.



Amir Khella is currently building GrayBoxVR, a virtual reality content creation platform. He is also the creator of Keynotopia, a prototyping tool for apps in PowerPoint and Keynote.

You can find Amir on Twitter, his website, and on the GrayBox website.
