VR soft body physics with Nvidia Flex

October 22nd, 2016

I gave up on trying out Nvidia Flex in VR using the Unreal Engine a few months ago – entirely due to just not wanting to work with C++ and Unreal (posted on that here). I still had a cooking game idea I wanted to try with soft body physics, so once I saw someone had created an Nvidia Flex plugin for Unity called uFlex, I decided to put a weekend into trying it out.

Really all I had to do was fix some rendering issues to get it drawing correctly in both eyes, then add hand interaction so the player can grab particles. Grabbing just checks which particles are within a certain radius of the controller when the trigger is pressed. While particles are held, their velocity is set to the delta between their current position and where they should be relative to the controller (setting the particle positions directly would really mess up the physics).
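
Something like this is the shape of it – a minimal Unity-style sketch, where the particle arrays are stand-ins for whatever the uFlex solver actually exposes (the names here are placeholders, not the plugin's real API):

```csharp
// A minimal sketch, not uFlex's actual API - assumes the solver exposes
// particle position/velocity arrays that can be read and written each step.
using System.Collections.Generic;
using UnityEngine;

public class ParticleGrabber : MonoBehaviour
{
    public float grabRadius = 0.1f;      // grab anything within 10cm of the controller
    public Vector3[] particlePositions;  // assumed: provided by the Flex solver
    public Vector3[] particleVelocities; // assumed: provided by the Flex solver

    // For each held particle: its index and its grab-time offset in controller space
    private readonly Dictionary<int, Vector3> held = new Dictionary<int, Vector3>();

    public void OnTriggerPressed()
    {
        for (int i = 0; i < particlePositions.Length; i++)
            if (Vector3.Distance(particlePositions[i], transform.position) < grabRadius)
                held[i] = transform.InverseTransformPoint(particlePositions[i]);
    }

    public void OnTriggerReleased()
    {
        held.Clear();
    }

    private void FixedUpdate()
    {
        // Drive held particles with velocity instead of teleporting them,
        // so the solver's constraints don't blow up
        foreach (KeyValuePair<int, Vector3> pair in held)
        {
            Vector3 target = transform.TransformPoint(pair.Value);
            particleVelocities[pair.Key] =
                (target - particlePositions[pair.Key]) / Time.fixedDeltaTime;
        }
    }
}
```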

Soft body physics in VR is incredibly satisfying, but I can't see a viable game using this yet. Performance isn't there for any large number of particles, and VR games already need to be pretty minimalist for the best performance. I think I recorded these videos running at 45 FPS – which is nausea inducing for me (despite spending a lot of time in VR I am still easily motion sick from bad experiences, a sensitivity I want to keep so I don't make bad experiences myself).

Even ignoring the performance issues, every small game test idea I started to work on eventually hit a wall. I wasn't intending to make a product out of this, but I at least wanted something I could distribute free online as a test demo. At first my plan was to make a quick cooking game – throw some bacon, butter, and pancake batter (basically a bunch of materials with varying hardness and viscosity) on a pan, then slosh it all around as it cooks. I couldn't even get this to work since there is no continuous collision detection on the particles – if you picked up the pan and moved it quickly (which happens a lot), the particles would tunnel right through it.

What I really want to see is mesh deformation like in this demo video – but I have no idea when that could actually be used in a VR game and run well.

Categories: VR

Voice Commands in VR – Modbox

August 29th, 2016

As a distraction from a large number of Modbox bugs and from adding online support, I spent a weekend adding a voice command system to Modbox.

Commands are:
– Open *Tool name*
– Go To *Menu Option*
– Spawn *Entity Name*

Then for a variety of actions it's 'Modbox *Action*' – such as toggling play mode on/off, opening the load creation menu, selecting mods, etc.

First thing I had to do to develop this was pick a good speech recognition library. Based on reading this Unity Labs blog post I tried out Windows dictation recognition, Google Cloud Speech, and IBM Watson.

Google Cloud Speech appeared to work the best – but by far the easiest to integrate was the Windows Speech library, since it's already built into Unity (just include the UnityEngine.Windows.Speech namespace) and there is a lot of great documentation for it (since it's used for HoloLens Unity apps). The biggest restriction is that it requires the user to be on Windows 10 – so it not only restricts Modbox to Windows, but to Windows 10 specifically. If I eventually get Modbox on another platform I can switch then, but for now high end VR is entirely Windows dominated, so I can't see that being needed for years.

The first thing I found was that speech recognition is a LOT more reliable when it's checking against a list of specific commands (say, 30 of them) rather than doing open-ended speech to text. I plan to eventually use direct speech to text for the user entering text (like naming their creations in Modbox) – but for now it just generates a list of possible commands based on the context. In edit mode it goes through all the Entities the user can spawn and generates a 'Spawn *Name*' command for each. In a menu (one of the large floating menu systems) it generates a voice command for each button, based on the button's text. Rather than manually creating hundreds of possible voice commands, it was easy to just generate them from context.
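
For reference, the Windows side of this boils down to a KeywordRecognizer fed whatever phrase list the current context generates – a minimal sketch, with the entity list here as a stand-in for Modbox's actual data:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech; // built into Unity, Windows 10 only

public class ModboxVoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    private void Start()
    {
        // Placeholder data - in Modbox these would come from the active mods
        // and whatever menu is currently open
        string[] spawnableEntities = { "Dynamite", "Crate", "Ball" };

        // Generate a fixed phrase list instead of doing open dictation -
        // matching against known commands is far more reliable
        var phrases = new List<string> { "Modbox Play" };
        foreach (string entity in spawnableEntities)
            phrases.Add("Spawn " + entity);

        recognizer = new KeywordRecognizer(phrases.ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        Debug.Log("Heard: " + args.text); // dispatch to the matching action here
    }

    private void OnDestroy()
    {
        if (recognizer != null)
            recognizer.Dispose();
    }
}
```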

I was surprised to find voice commands actually useful! I expected this to just be a novelty addition for some users to try out – but now I think it could be an important part of the editor workflow. In many cases it's more intuitive and quicker than going into the menu system.

For some commands, like switching to play mode, it's definitely just as easy to push the menu button and select 'Play' – about the same time and effort as saying 'Modbox Play'. But for more complex actions, like spawning a specific entity, voice commands were massively faster. Rather than digging through a menu system to find the 'Dynamite' entity among a hundred others (if you have a lot of mods active), you can just say 'Spawn Dynamite'. For this use case – selecting from hundreds of options when you already know what you're looking for – I think voice commands beat any other option.

The problem with using a voice command system in a game is reliability. If your game depends on the user being able to issue voice commands, and they only work 95% of the time, that is incredibly frustrating – failing 5% of the time means they can't be depended on for important gameplay, and there is nothing more frustrating than unreliable controls in a challenging game. For a creation tool, however, it's a very useful alternative to a menu system – especially in VR, where navigating menus can be complex.

Voice commands should be live in the next Modbox update.

Categories: VR

VR Experiments

April 27th, 2016

While giving demos of MaximumVR, a few people mentioned that to really feel like a Godzilla monster they wanted to stomp around the city.
So to add feet, Kate took some old Vive DK1 controllers and attached them to rubber boots:

We had to use the old Vive DK1 controllers rather than the Pre/consumer versions, since those are limited to two tracked controllers (they connect through the headset, while the old ones each have their own connection).
We originally tried strapping the controllers onto the feet directly, but then we couldn't know exactly where the ground was (the tracked position varied depending on where on the foot they were strapped, and the strap angle would have to be perfect). Giant rubber boots also ended up working well because players just naturally felt ridiculous wearing them.

No way players will actually be able to try this yet – but hopefully full body tracking will eventually be a thing!

One of the main Vive experiments I've wanted to try since getting the first dev kit was interacting with fluids and soft bodies. I wasn't sure of the general game idea, but I knew that just interacting with non-rigid bodies would be incredibly satisfying.

To do this I had to grab the Flex branch of Unreal and compile it, then add standard motion controller support. My plan was to make a pancake simulator game (since the most satisfying liquid was one with a lot of viscosity, and pouring it and then moving it around in a pan was fun). I knew the Flex liquid simulation was pretty limiting game-wise (no collision event data, no way to serialize, and properties can only be changed for the entire group), but just messing with the liquid in any way would be enough of a game for me.

But I got tired of dealing with the Unreal engine. I am glad I took a weekend to learn Blueprints and get a C++ refresher, but the time it would take to go further with this project wouldn't be worth it.

Categories: Modbox, VR

Our VR game: Modbox

December 11th, 2015

I am pretty excited about this project.

I did an interview with Killscreen here.

Categories: VR

Strange loop simulation – Why there is something rather than nothing

December 1st, 2015

Wait But Why has a great article on the question of 'why is there something instead of nothing'. I particularly liked this part:

Religious people have a quick answer to “Why is there something instead of nothing?” I’m not religious, but when I’ve thought hard enough about it, I’ve realized that it’s as plausible as anything else that life on Earth was created by some other intelligent life, or that we’re part of a simulation, or a bunch of other possibilities that would all entail us having a creator. But in each possible case, the existence of the creator still needs an explanation—why was there an original creator instead of nothing—and to me, any religious explanation inevitably hits the same wall.


So to give my own attempt at explaining why there is 'something rather than nothing', I want to give one that doesn't require a first cause – where the original creator of reality is reality itself, and it exists for this purpose. A self-defining system, a strange loop.

I not only mean this as the ‘how’, which is a pointless circular definition, but as the ‘why’.

The main thing that would need to be the case for this to make sense is the universe being entirely digital and computable. This idea has been adopted by a number of physicists, such as Max Tegmark in his book 'Our Mathematical Universe' (who argues in his Mathematical Universe Hypothesis that reality is not just computable but a static mathematical structure), or, in the case of digital physics, the writings of Stephen Wolfram. You can go over the history of digital physics to see its evolution over the decades.

Essentially the idea of digital physics is that for the universe to be computable it must be describable digitally. Treating spacetime as continuous still has its uses (like treating water as a continuous fluid rather than tons of individual particles) – it's convenient as an approximation – but at the most basic level spacetime is actually discrete: a grid rather than a continuous space, where grid positions are separated by the smallest possible length (which might be the Planck length).

If reality is computable, then all existence is actually just information. If you look at an apple, the properties you perceive don't seem like just 1s and 0s – it has taste, color, smell – but these are all higher level properties. At the lowest level it's entirely elementary particles – electrons, quarks, things that can be described entirely mathematically. The higher level properties are emergent from these base mathematical entities. A quark has no 'smell' or color; it simply has a number of mathematical values.

If reality is computable, it can be simulated. If it's all information, then that information can be processed by any Turing machine. Digital physics then says the universe can be replicated perfectly by a computer simulation.

If the exact formulas were known – the theory of everything – we could run a perfect simulation of reality.

If the simulation of reality is a perfect simulation, and reality is in fact just mathematical, then it is actually just as real. And not only is it real, it’s actually exactly EQUAL to reality. If reality and the universe are just information, then I would argue the simulation of the universe is actually the exact same thing as the universe itself.

If the universe were simulated inside itself, it would be creating its own existence. Making A=A. It would be the 'how' of there being something rather than nothing.

Max Tegmark in his Mathematical Universe Hypothesis argues that reality is part of a mathematical structure that is completely defined, and that us experiencing time is really an illusion, because we are just part of this already defined static structure. His explanation for why there is something rather than nothing is essentially 'there is everything!' – every mathematical structure exists, he argues. Not only is it depressing to believe that everything is predetermined and free will doesn't exist, or that my sense of time is completely an illusion, but it also seems like another infinite regress answer to the question of existence.

So rather than Max Tegmark's hypothesis where everything that can exist does exist, I would say: what exists is what can be defined. And what can be defined is what can define itself. Time is calculation, and this calculation is defining reality.

This is why the universe would need to be perfectly fine-tuned, to have the exact values it has – because those values were necessary for it to define itself.

So if the universe needs to define itself, how would it do it? And how does this explain the ‘why’?

It's possible we humans are a good candidate to create this simulation – after all, it's pretty clear our technological progress is heading more towards simulation technology than anything else.

So we would not only have to simulate the start of the universe and all of its properties in a computer program – we would need to ensure that the particular configuration where the universe simulates itself is found.

This computer program would need to search for the possible universe configuration to find the ‘answer’, the configuration where it would be able to simulate itself.

In computer science this type of search problem is 'NP-complete': it's easy to verify that a proposed answer is correct, but (as far as anyone knows) extremely hard to find a solution. So hard that you essentially have to try every single option – there is no known shortcut.

A common simple example of this type of problem is:

Suppose you were building two large towers by stacking rocks of various sizes, and you needed to make sure the towers have the exact same mass. Checking an answer is simple – you just add up the rocks in each tower and test whether the sums are equal. But finding the correct configuration isn't that simple – you may need to go through every single possible arrangement of the rocks. With 100 rocks that's 2 to the power of 100 possibilities – a 31-digit number – and it doubles with each additional rock.
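
To make the blow-up concrete, here is a brute-force sketch of the rock problem (the partition problem, in CS terms). It tries every subset, so each added rock doubles the work:

```csharp
using System;

public static class TowerPartition
{
    // Returns true if the rocks can be split into two towers of equal mass.
    // Brute force over all 2^n subsets: fine for a dozen rocks, hopeless for 100.
    public static bool CanSplitEvenly(int[] rocks)
    {
        int total = 0;
        foreach (int rock in rocks) total += rock;
        if (total % 2 != 0) return false;

        for (long subset = 0; subset < (1L << rocks.Length); subset++)
        {
            int mass = 0;
            for (int i = 0; i < rocks.Length; i++)
                if ((subset & (1L << i)) != 0) mass += rocks[i];
            if (mass * 2 == total) return true; // verifying a split is the easy part
        }
        return false;
    }

    public static void Main()
    {
        Console.WriteLine(CanSplitEvenly(new[] { 1, 5, 11, 5 })); // True: {11} vs {1,5,5}
    }
}
```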

So even knowing the ‘answer’ and what our simulation would need, we would still need to go through every possibility in a search.

I think this begins to explain the 'why'. Why reality is set up the way it is – because it's doing exactly this, searching through all possible logical configurations.

According to inflation theory, different possible physics laws are realized in the multiverse (over the string theory landscape, which could be seen as adding varying 'rules' to the initial search) – and according to the many worlds interpretation of quantum mechanics, all possible logical interactions happen (in parallel worlds, which could be described in computer science terms as a 'breadth first search').

When I am setting up a computer program to simulate something and search for emergence, this is exactly how I would do it: randomness in the program's initial conditions, fundamental randomness to ensure there isn't uniformity throughout, and breadth first branching so all possible interactions are realized.
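
As a rough sketch of that pattern – random initial conditions plus breadth-first branching – where the branching rule here is just a placeholder for whatever system is actually being simulated:

```csharp
using System;
using System.Collections.Generic;

public static class EmergenceSearch
{
    public static void Main()
    {
        // Randomness in the initial conditions
        var rng = new Random();
        int initial = rng.Next(1000);

        // Placeholder branching rule - stands in for 'every possible interaction'
        // of the simulated system
        Func<int, int[]> branch = state => new[] { state * 2, state * 2 + 1 };

        // Breadth-first branching: expand every state at each depth before
        // moving to the next, so no possible path is skipped
        var frontier = new Queue<int>();
        frontier.Enqueue(initial);
        for (int depth = 0; depth < 5; depth++)
        {
            int count = frontier.Count;
            for (int i = 0; i < count; i++)
                foreach (int next in branch(frontier.Dequeue()))
                    frontier.Enqueue(next);
        }
        Console.WriteLine($"States at depth 5: {frontier.Count}"); // 2^5 = 32
    }
}
```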

In the universe, Time is the computation, entropy the step function, string theory variance its different initial conditions, quantum mechanics its fundamental randomness, and 'many worlds' its branching. So the way we would 'search' for this configuration in a computer program is just like what reality is actually doing.

That's the reason why reality has the rules it has – so that it can define itself. Reality is as complex as it needs to be. That's why it's perfectly tuned to produce emergence – and why the rules it has are completely necessary.

I would argue this is a better outlook than Tegmark's Mathematical Universe Hypothesis: the future is then undefined. If we were able to put the current state of the universe into a simulation and predict the future, that would not be predetermining the future – it would be creating the future in the same way. As Stephen Wolfram puts it, the universe is computationally irreducible: the only way to simulate it is to create it and actually compute it, which would be defining it in the same way.

You could have the outlook that life exists entirely for this purpose. Humans are special – we have never found any evidence of anything like us. So maybe we have the chance of creating this simulation, of defining existence. The creators of the simulation would be as necessary as the fundamental laws of the universe itself.

My insane proposal for humanity is then to build this simulation – and recreate our existence – so that everything can exist.

Or, rather than creating the simulation ourselves, create a superintelligent AI more capable of it (which seems easier, and more inevitable). Create the self-referential strange loop. So basically everything I am saying is the plot of The Last Question by Isaac Asimov.



Some random notes:

– When it comes to the question of how we would physically do this: quantum computers give some hope for actually running the simulation. Quantum computers don't offer some infinite parallel computation – for most applications they aren't even believed to be faster than classical computers – but one thing they are theorized to do quicker is simulate quantum particles. It's also possible we don't need a revolution in computing at all – it would just take a lot of time to compute. It doesn't require speed: the universe could take billions of years to calculate 1 second of the simulation and it would still be possible. Maybe it would take a computer drifting in space as the big freeze takes place, calculating the simulation for near eternity before running out of energy…

– Tegmark has a brief part in his book 'Our Mathematical Universe' about why the universe isn't a simulation. Part of it is that time isn't global in reality (while most simulations treat time as global) – but there is no reason this needs to be the case. All modern 3D game engines use a global time, but it's easy to imagine a 3D simulation built with independent time and relativity in mind. Making a game engine where the physics time step is not global would actually be an interesting project.

Max Tegmark also argues that there is no need to 'run' a simulation – if a universe can be described by a consistent mathematical description then it 'exists' and is 'real'. The digital physics idea is that it needs to be computed to be defined, rather than just described. It seems impossible to resolve what makes something 'real', but one example I would give: if I emulated your brain onto a hard drive and let it sit there, would that be enough for your mind to exist? The digital physics view is that it needs to be 'run' and computed. I don't know what makes things real, but I would argue it's this information processing, and that this computation is needed. Our consciousness is what it feels like for our minds to be calculated.

– I don't think the universe needs to be discrete to be computable. You could certainly represent continuous physics in a simulation and only compute exact values when measured. It is definitely possible to represent an uncountably infinite space in a digital simulation – it just means you can keep calculating to further precision. This is not finding a true value but defining a value out of a continuous function. You could have rules that are continuous functions but are ultimately evaluated in a discrete, computable space.

– I am writing this with a lot of assumptions: that P!=NP (which is almost certainly the case, but not proven), that the many worlds interpretation of quantum mechanics is correct, and that inflation theory is correct.

– Where I say the 'universe' a few times here, I really mean the result of the 'post-inflationary bubble'.

Categories: Random