Archive for the ‘VR’ Category

VR soft body physics with Nvidia Flex

October 22nd, 2016

I gave up on trying out Nvidia Flex in VR using the Unreal Engine a few months ago – entirely due to just not wanting to work with C++ and Unreal (posted on that here). I still had a cooking game idea I wanted to try with soft body physics, so once I saw someone had created an Nvidia Flex plugin for Unity called uFlex, I decided to put a weekend into trying it out.

Really all I had to do was fix some rendering issues to get it drawing correctly in both eyes, then add hand interaction so the player can grab particles. Grabbing just checks which particles are within a certain radius of the controller when the trigger is pressed. While held, each particle's velocity is set to the delta between its current position and where it should be relative to the controller (setting the particle positions directly would really mess up the physics).
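The grab logic can be sketched roughly like this – a language-agnostic Python sketch of the idea rather than the actual uFlex/Unity C# code, with hypothetical names (`try_grab`, `drive_held`, the particle dict fields):

```python
import math

def try_grab(particles, controller_pos, radius):
    """On trigger press: find particles within `radius` of the controller
    and record each one's offset from the controller at grab time."""
    held = {}
    for i, p in enumerate(particles):
        if math.dist(p["pos"], controller_pos) <= radius:
            held[i] = [pp - cp for pp, cp in zip(p["pos"], controller_pos)]
    return held  # particle index -> offset from controller

def drive_held(particles, held, controller_pos, dt):
    """Each frame while the trigger is held: set velocity so the particle
    chases the spot it occupied relative to the controller. Writing
    positions directly would fight the solver; velocities let the
    physics resolve collisions and constraints normally."""
    for i, offset in held.items():
        target = [cp + o for cp, o in zip(controller_pos, offset)]
        p = particles[i]
        p["vel"] = [(t, x) and (t - x) / dt for t, x in zip(target, p["pos"])]
```

The key point is the last line of `drive_held`: velocity is derived from the position error over the frame time, so a fast hand motion produces a large velocity and the solver still gets a say in where the particle ends up.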

Soft body physics in VR is incredibly satisfying, but I can't see a viable game using this yet. Performance isn't there for any large number of particles, and VR games already need to be pretty minimalist for the best performance. I think I recorded these videos running at 45 FPS – which is nausea-inducing for me (despite spending a lot of time in VR I still get motion sick easily from bad experiences – a sensitivity I want to keep so I don't make bad experiences myself).

Even ignoring the performance issues, every small game idea I started on eventually hit a wall. I wasn't intending to make a product out of this, but I at least wanted something I could distribute free online as a test demo. At first my plan was a quick cooking game – throw bacon, butter, pancake batter – a bunch of different materials with varying hardness and viscosity – onto a pan, then slosh it all around as it cooks. I couldn't even get that to work since there is no continuous collision detection on the particles – pick up the pan and move it quickly (which happens a lot) and the particles all fall out of the pan.

What I really want to see is mesh deformation like in this demo video – but when that can actually be used in a VR game, and run well, I have no idea.

Categories: VR

Voice Commands in VR – Modbox

August 29th, 2016

As a distraction from a large amount of Modbox bugs and adding online support, I spent a weekend adding a voice command system to Modbox.

Commands are:
– Open *Tool name*
– Go To *Menu Option*
– Spawn *Entity Name*

Then for a variety of actions it's: Modbox *Action* – such as toggling play mode on/off, opening the load creation menu, selecting mods, etc.

First thing I had to do to develop this was pick a good speech recognition library. Based on reading this Unity Labs blog post I tried out Windows dictation recognition, Google Cloud Speech, and IBM Watson.

Google Cloud Speech appeared to work the best – but by far the easiest to integrate was the Windows Speech library, since it's already included in Unity (just use the UnityEngine.Windows.Speech namespace), and there is a lot of great documentation for it (since it's used for HoloLens Unity apps). The biggest restriction is that it requires the user to have Windows 10 – so it not only restricts Modbox to Windows, but to Windows 10 specifically. If I eventually get Modbox on another platform I can switch then, but for now high end VR is entirely Windows dominated, so I can't see that being needed for years.

The first thing I found was that speech recognition is a LOT more reliable when it's checking against a list of specific commands (say 30 of them) rather than doing open-ended speech-to-text. I plan to eventually use direct speech-to-text when the user is entering text (like naming their creations in Modbox) – but for now it just generates a list of possible commands based on context. In edit mode it goes through all the entities the user can spawn and generates a 'Spawn *Name*' command for each. In a menu (one of the large floating menu systems) it generates a voice command for each button (based on the button's label). Rather than manually creating hundreds of possible voice commands, it was easy to just generate them from context.
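The context-driven generation can be sketched like this – a Python sketch of the logic rather than Modbox's actual Unity C# (where the resulting phrase list would be handed to a KeywordRecognizer); the `context` fields and command tuples here are hypothetical:

```python
def build_commands(context):
    """Generate the active voice-command phrase list from editor context
    instead of hand-authoring hundreds of phrases.
    Returns: phrase -> (action kind, action argument)."""
    commands = {}
    if context["mode"] == "edit":
        # one 'Spawn X' command per spawnable entity
        for entity in context["spawnable_entities"]:
            commands[f"spawn {entity.lower()}"] = ("spawn", entity)
        # one 'Open X' command per tool
        for tool in context["tools"]:
            commands[f"open {tool.lower()}"] = ("open_tool", tool)
    # one 'Go To X' command per button label in the open menu, if any
    if context.get("open_menu"):
        for button in context["open_menu"]["buttons"]:
            commands[f"go to {button.lower()}"] = ("menu_select", button)
    # 'Modbox X' commands for global actions (play, load, etc.)
    for action in context["global_actions"]:
        commands[f"modbox {action.lower()}"] = ("action", action)
    return commands
```

Because the phrase list is regenerated whenever the context changes, the recognizer only ever has to distinguish between the commands that are actually valid right now – which is what makes the keyword approach so much more reliable than free dictation.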

I was surprised to find voice commands actually useful! I expected this to be a novelty addition for some users to try out – but now I think it could be an important part of the editor workflow. In many cases it's more intuitive and quicker than going into the menu system.

For some commands, like switching to play mode, it's definitely just as easy to push the menu button and select 'Play' – roughly equal time and effort as saying 'Modbox Play'. But for more complex actions, like spawning a specific entity, voice commands were massively faster. Rather than digging through a menu system to find the 'Dynamite' entity among 100 others (if you have a lot of mods active), you can just say 'Spawn Dynamite'. For this use case – selecting from hundreds of options when you already know what you're looking for – I think voice commands beat any other option.

The problem with using a voice command system in a game is reliability. If your game depends on the player being able to issue voice commands, and they only work 95% of the time, that can be incredibly frustrating. Failing 5% of the time means they can't be depended on for important gameplay – there is nothing more frustrating than unreliable controls in a challenging game. For a creation tool, however, it's a very useful alternative to a menu system – especially in VR, where navigating menus can be complex.

Voice commands should be live in the next Modbox update.

Categories: VR

VR Experiments

April 27th, 2016

While giving demos of MaximumVR, a few people mentioned that to really feel like a Godzilla monster they wanted to stomp around the city.
So to add feet, Kate took some old Vive DK1 controllers and attached them to rubber boots:

We had to use the old Vive DK1 controllers rather than the pre/consumer versions, since those have a limit of two tracked controllers (they connect through the headset, while the old ones each have a separate connection).
We originally tried strapping them directly onto the feet, but then we couldn't know exactly where the ground was (the tracked position varied depending on where on the foot they were strapped, and the strap angle would have to be perfect). Giant rubber boots also worked well because players just naturally felt ridiculous wearing them.

No way players will actually be able to try this yet – but hopefully full body tracking will eventually be a thing!

One of the main Vive experiments I'd wanted to try since getting the first dev kit was interacting with fluids and soft bodies. I wasn't sure of the general game idea, but I knew that just interacting with non-rigid bodies would be incredibly satisfying.

To do this I had to grab the Flex branch of Unreal and compile it, then just add standard motion controller support. My plan was to make a pancake simulator game (since the most satisfying liquid was a highly viscous one, and pouring it and then moving it around a pan was fun). I knew the Flex liquid simulation was pretty limiting game-wise (no collision event data, no way to serialize, and properties can only be changed for the entire group), but just messing with the liquid in any way would be enough of a game for me.

But I got tired of dealing with the Unreal engine. I'm glad I took a weekend to learn Blueprints and get a C++ refresher, but the time it would take to go further with this project wouldn't be worth it.

Categories: Modbox, VR

Our VR game: Modbox

December 11th, 2015

I am pretty excited about this project.

I did an interview with Killscreen here.

Categories: VR

Vive VR Experiments

November 25th, 2015

I was lucky enough to be one of the developers to receive an HTC Vive kit from Valve (after some begging aimed at my only Valve contact).

I tried it for the first time at PAX 2015. The previous version I tried at Steam Dev Days, which I wrote about here, had a completely different setup and no input system. I couldn't see it working well for games with no input, but it definitely made me realize the potential of room-scale VR at the time.
For this new demo I tried 5 games. Some were disappointing in how little they used the Vive's amazing input system (I imagine because they were started / designed before this input was even possible) – Tilt Brush was the application that really made me see the Vive's full potential. I felt more in control of the painting in Tilt Brush than I ever have with any input device – better precision than even a mouse.

As soon as I left the demo I knew -exactly- the game I wanted to make, and spent the rest of my trip thinking about and planning it. I've always enjoyed physics building games, so this idea is just an extension of that – but I think with the Vive's input it can really be taken to a new level. I now see it as a general holodeck builder of sorts – you build an environment/game and then share it with others.

I previously had no interest in doing VR work, for a variety of reasons – mostly the lack of a good input system. I felt if all I could do was cinematic work, then the best games for it would be ones made by large teams of artists – basically high-budget, film-like experiences. Without a good input system I couldn't imagine a systems-based design – so making a good product with limited resources didn't seem possible.
I also had no interest in having another shitty dev kit experience like I've had with all the Oculus hardware. I barely even tried to develop anything with Oculus – mostly just running demos – and it was a horrible setup experience. Having to drag a window to another monitor while looking into the Oculus headset is something I never want to do again. Their old Unity integration also made quickly iterating on game ideas incredibly painful – usually having to build the project just to test it. Sudden frame rate drops and crashes (both common during development) also meant constant nausea.
So not only did the Vive solve my problems with input, its integration with Unity is fantastic. I can test things in the editor itself, adjust the environment in scene view, and even hot-swap code while the Vive is still running. And when the frame rate dropped or the application crashed, it wouldn't dump me back to the desktop (which is painful on the eyes), but into a blank VR room.

Before starting on my dream physics VR Vive game, though, I wanted to quickly try a project to get a feel for the development kit. So within a few days I made this Godzilla-sorta simulator using the game assets of another in-development Alientrap game:

It took a bit to get the right scale – at first I imagined the player as a much bigger monster, towering above everything, but I realized it was a lot more fun to be about the same height as the buildings you were destroying.
Instead of monster hands or a hammer to destroy the city, I decided on a morning star type weapon. This was because there is no way to give tactile feedback to the player – if they have a hammer in their hand and smash a building with it, it goes right through with no physical feedback. It was important that the environment feel tangible for destruction to really mean something. With a morning star, the player has to wind up to build a large force, and gets the feedback of the buildings stopping the ball – so the world still feels like it takes effort to destroy.

Hopefully I’ll be able to show off our Vive game soon – kind of just waiting to see the best way to announce it. I should probably just give up on the idea of a big announcement and be entirely open with development – but we’ll see.

Categories: VR