Unity C# hotswapping with saved games

October 14th, 2017

One of the main things I missed moving to Unity from XNA was Visual Studio's Edit and Continue, which allows quick code changes without having to recompile. Decreasing the time it takes to iterate and adjust is incredibly important for fast development.
Unity has built-in support for hotswapping – but it's basically unusable. There are a few guides on how to structure code to allow for it, but that adds a lot of complexity and easily breaks once you use any 3rd party code that isn't designed for it (which is basically all 3rd party Unity code). In my previous projects I just included an editor script to stop play mode whenever a recompile happened.

So instead of letting Unity try to serialize the scene (and then break horribly) I decided to just use Modbox’s save/load game system.
Once the 'ModboxHotSwap' editor script detects a recompile, it saves the game and destroys all game objects in the scene; when the recompile finishes, it reloads the saved state.

using UnityEngine;
using UnityEditor;
using UnityEngine.SceneManagement;

[InitializeOnLoad]
public class ModboxHotSwap
{
    // Static initialiser called by Unity Editor whenever scripts are loaded
    static ModboxHotSwap()
    {
        Unused(_instance);
        _instance = new ModboxHotSwap();
    }

    private ModboxHotSwap()
    {
        EditorApplication.update += OnEditorUpdate;
    }

    ~ModboxHotSwap()
    {
        EditorApplication.update -= OnEditorUpdate;
        _instance = null;
    }

    private static void OnEditorUpdate()
    {
        if (EditorApplication.isPlaying)
        {
            if (EditorApplication.isCompiling)
            {
                if (!EditorPrefs.GetBool("HotSwapNeeded"))
                {
                    EditorPrefs.SetBool("HotSwapNeeded", true);

                    Debug.Log("Saving state before script compilation");
                    string SavedGameJSON = SaveSystem.Instance.SaveGame();
                    EditorPrefs.SetString("SavedGame", SavedGameJSON);

                    foreach (ModMetaData Mod in ModSystem.ModsMetaData)
                    {
                        // unload all modbox asset bundles
                        if (Mod.assetBundle != null)
                            Mod.assetBundle.Unload(false);
                    }

                    // destroy all game objects
                    foreach (GameObject o in Object.FindObjectsOfType())
                    {
                        UnityEngine.Object.Destroy(o);
                    }
                }
            }
            else if (EditorPrefs.GetBool("HotSwapNeeded"))
            {
                EditorPrefs.SetBool("HotSwapNeeded", false);

                // reload current Unity scene
                Scene scene = SceneManager.GetActiveScene();
                SceneManager.LoadScene(scene.name);

                // load saved game
                SaveSystem.Instance.LoadJSON(EditorPrefs.GetString("SavedGame"));
            }
        }
    }

    private static void Unused<T>(T unusedVariable) { }
    private static ModboxHotSwap _instance = null;
}

Modbox was designed from the start to always be able to serialize any game state, mostly for networking synchronization, but this approach should also work well for any game project with a save game system.
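
For anyone wanting to adapt this, here's a rough sketch of the kind of save system interface the hotswap script expects. The SaveSystem, SaveGame, and LoadJSON names come from how the editor script above calls it, but everything inside is just a placeholder for illustration, not Modbox's actual implementation.

using UnityEngine;

public class SaveSystem : MonoBehaviour
{
    public static SaveSystem Instance { get; private set; }

    void Awake()
    {
        Instance = this;
    }

    // Serialize whatever state the game needs into a JSON string.
    public string SaveGame()
    {
        var state = new GameState();
        // ... fill 'state' from the live game objects here ...
        return JsonUtility.ToJson(state);
    }

    // Rebuild the game state from a previously saved JSON string.
    public void LoadJSON(string json)
    {
        GameState state = JsonUtility.FromJson<GameState>(json);
        // ... respawn objects and restore their state from 'state' here ...
    }

    [System.Serializable]
    class GameState
    {
        // fields describing the serializable game state
    }
}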

Categories: Modbox

VR Chatbots and interactable NPCs

February 27th, 2017

The latest Modbox update includes the start of the NPC system – allowing Modbox creators to add and edit human characters.

One idea I wanted to try was having the user speak to the NPCs and have them respond. I had already added voice commands to edit mode (and found speaking in VR to feel pretty natural), so it seemed easy enough to add a few voice commands you could give to NPCs and have them respond with preset responses. Then I decided that rather than specific commands I would let the user talk freely, using voice dictation APIs instead of voice commands, and rather than preset replies I would hook the NPC AI up to a chatbot service and use text to speech for the responses.

Surprisingly, the hardest part of this was the speech to text. Unity has a Dictation Recognizer which uses the Windows speech tools, but due to a Windows update it has been broken on 64-bit for half a year, and based on the Unity forums they apparently aren't going to fix it. So I had to have Modbox create a new 32-bit process, then use the clipboard to send the text data back to the 64-bit application (using the clipboard for this might be the hackiest thing I've done in Modbox, but after spending 2 hours trying to get IPC working with Unity I opted for the quick solution).
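
The Unity side of that workaround looked roughly like this – a minimal sketch, assuming a hypothetical 32-bit helper exe ('DictationHelper32.exe' is a made-up name) that runs Windows dictation and copies each recognized phrase to the clipboard, which the 64-bit app then polls through Unity's GUIUtility.systemCopyBuffer:

using System.Diagnostics;
using UnityEngine;
using Debug = UnityEngine.Debug;

public class DictationBridge : MonoBehaviour
{
    // Made-up name for the 32-bit helper that runs Windows dictation
    // and copies each recognized phrase to the system clipboard.
    const string HelperPath = "DictationHelper32.exe";

    Process helper;
    string lastClipboard;

    void Start()
    {
        helper = Process.Start(HelperPath);
        // Poll the clipboard a few times a second for new text.
        InvokeRepeating("PollClipboard", 0.25f, 0.25f);
    }

    void PollClipboard()
    {
        string text = GUIUtility.systemCopyBuffer;
        if (!string.IsNullOrEmpty(text) && text != lastClipboard)
        {
            lastClipboard = text;
            Debug.Log("Heard: " + text);
            // Hand the phrase off to the NPC / chatbot system here.
        }
    }

    void OnDestroy()
    {
        if (helper != null && !helper.HasExited)
            helper.Kill();
    }
}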
For the text to speech I was expecting to just use the old Windows tools – the horrible robot voice everyone played with 15 years ago. I ended up trying out Amazon Polly instead – and while getting the API to work with Unity was a giant timesink, the results were amazing. I'm really hopeful these voice APIs will expand with more options like emotion selection. Then, to make the lip sync work, I used Oculus' lip sync tools – I just needed to manually set which phonemes resulted in which mouth blendshapes on the Morph3D models.
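
That mapping step is essentially a hand-tuned lookup table driving blendshape weights on the character's SkinnedMeshRenderer. Here's what it might look like – the viseme names and blendshape indices below are placeholders, not the actual Oculus lip sync or Morph3D values:

using System.Collections.Generic;
using UnityEngine;

public class VisemeToBlendshape : MonoBehaviour
{
    public SkinnedMeshRenderer faceMesh;

    // Hand-tuned mapping from viseme name to blendshape index (placeholder values).
    static readonly Dictionary<string, int> VisemeBlendshapes = new Dictionary<string, int>
    {
        { "PP", 12 },  // lips pressed together (p, b, m)
        { "FF", 15 },  // lower lip to teeth (f, v)
        { "aa", 20 },  // open mouth
        { "oh", 23 },  // rounded lips
    };

    // Called each frame with the lip sync plugin's per-viseme weights (0..1).
    public void ApplyVisemes(Dictionary<string, float> visemeWeights)
    {
        foreach (var pair in visemeWeights)
        {
            int blendshapeIndex;
            if (VisemeBlendshapes.TryGetValue(pair.Key, out blendshapeIndex))
                faceMesh.SetBlendShapeWeight(blendshapeIndex, pair.Value * 100f);
        }
    }
}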

For multiple characters I then added a start-up command, where you say 'Hello *Name*' to the character to have them start listening. This is shown in the Modbox update video.

I have no idea how I'll end up using this for the future of Modbox, if at all, but it was a fun experiment to try. The problem with how it currently uses chatbots is that it has zero connection to gameplay – the current API just gives a random response based on what other people have said to it (I used Cleverbot, which can give pretty hilarious responses). I imagine there are chatbot APIs out now that I could program to understand the user's intent (mostly used by those horrible AI help systems on websites, and by messaging startups), but that would be a lot more effort than I was willing to put in for now – plus I didn't have gameplay in mind yet for what I would do with these guys (just giving them commands to pick up objects and interact with the world would be great, but that will have to come after I give the NPC AI a much better understanding of the environment).
Another, easier way to do this – one that I hope future VR games try – is to just show the user a few voice command options. So rather than the traditional dialog selection system from RPGs, have the user say the lines. Recognizing preset voice commands is a lot easier and less error-prone than doing full speech to text.

Categories: Modbox, VR

VR soft body physics with Nvidia Flex

October 22nd, 2016

I gave up on trying out Nvidia Flex in VR using the Unreal Engine a few months ago – entirely due to just not wanting to work with C++ and Unreal (posted on that here). I still had a cooking game idea I wanted to try with soft body physics, so once I saw someone had created a Nvidia Flex plugin for Unity called uFlex, I decided to put a weekend into trying it out.

Really all I had to do was fix some rendering issues to get it drawing correctly in both eyes, then add hand interaction to allow the player to grab particles. To grab particles it just checks which particles are within a certain radius of the controller when the trigger button is pressed. While they are held, it sets each particle's velocity so it moves toward where it should be relative to the controller, based on its offset when grabbed (setting the particle positions directly would really mess up the physics).
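
A sketch of that grab logic, assuming the plugin exposes particle positions and velocities as plain world-space arrays (the actual uFlex API differs, so treat the array access here as illustrative):

using System.Collections.Generic;
using UnityEngine;

public class ParticleGrabber : MonoBehaviour
{
    public float grabRadius = 0.1f;   // metres around the controller

    // Assumed plugin-side particle data: positions and velocities in world space.
    public Vector3[] particlePositions;
    public Vector3[] particleVelocities;

    // Grab offset (in controller local space) for each held particle index.
    readonly Dictionary<int, Vector3> held = new Dictionary<int, Vector3>();

    public void OnTriggerPressed()
    {
        held.Clear();
        for (int i = 0; i < particlePositions.Length; i++)
        {
            if (Vector3.Distance(particlePositions[i], transform.position) < grabRadius)
                held.Add(i, transform.InverseTransformPoint(particlePositions[i]));
        }
    }

    public void OnTriggerReleased()
    {
        held.Clear();
    }

    void FixedUpdate()
    {
        // Pull each held particle toward where it was grabbed relative to the controller
        // by setting its velocity, rather than teleporting its position.
        foreach (var pair in held)
        {
            Vector3 target = transform.TransformPoint(pair.Value);
            particleVelocities[pair.Key] = (target - particlePositions[pair.Key]) / Time.fixedDeltaTime;
        }
    }
}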

Soft body physics in VR is incredibly satisfying, but there is no way I can see a viable game using this. Performance is not there yet for any large number of particles, and VR games already need to be pretty minimalist to hit the best performance. I think I recorded these videos running at 45 FPS – which is nausea-inducing for me (despite spending a lot of time in VR I am still easily motion sick from bad experiences, something I want to keep so I don't make bad experiences myself).

Even ignoring the performance issues, every small game test idea I started to work on eventually hit a wall. I wasn’t intending to make a product out of this, but I at least wanted something I could distribute free online as a test demo. At first my plan was to make a quick cooking game – throw some bacon, butter, pancake batter – basically a bunch of different materials with varying hardness and viscosity on a pan, then slosh it all around as it cooks. I couldn’t even get this to work since there was no continuous collision detection on the particles – so if you picked up the pan and quickly moved it (which would happen a lot) the particles would all fall out of the pan.

What I really want to see is mesh deformation like in this demo video – but I have no idea when that could actually be used in a VR game and run well.

Categories: VR

Voice Commands in VR – Modbox

August 29th, 2016

As a distraction from a large number of Modbox bugs and from adding online support, I spent a weekend adding a voice command system to Modbox.

Commands are:
– Open *Tool name*
– Go To *Menu Option*
– Spawn *Entity Name*

Then for a variety of actions it's: Modbox *Action* – such as toggling play mode on/off, opening the load creation menu, selecting mods, etc.

First thing I had to do to develop this was pick a good speech recognition library. Based on reading this Unity Labs blog post I tried out Windows dictation recognition, Google Cloud Speech, and IBM Watson.

Google Cloud Speech appeared to work the best – but by far the easiest to integrate was the Windows Speech library, since it's already part of Unity (you just need to include UnityEngine.Windows.Speech) and there is a lot of great documentation for it (since it's used for HoloLens Unity apps). The biggest restriction is that it requires the user to have Windows 10 – so it not only restricts Modbox to Windows, but to Windows 10 specifically. If I eventually get Modbox onto another platform I can switch then, but for now high-end VR is entirely Windows dominated, so I can't see that being needed for years.

The first thing I found was that speech recognition is a LOT more reliable when it's just checking for specific commands (like a list of 30 of them) rather than going directly from speech to text. I plan to eventually use direct speech to text for the user entering text (like naming their creations in Modbox) – but for now it just generates a list of possible commands based on the context. When in edit mode it goes through all the Entities the user can spawn and generates a 'Spawn *Name*' command for each. If in a menu (one of the large floating menu systems) it generates a voice command for each button (based on the button's label text). Rather than manually creating hundreds of possible voice commands, it was easy to just generate them from context.
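
Here's a minimal sketch of that context-based generation using Unity's KeywordRecognizer – the entity list and the way the recognized phrase gets handed off are stand-ins for the real Modbox systems:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class SpawnVoiceCommands : MonoBehaviour
{
    KeywordRecognizer recognizer;
    readonly Dictionary<string, string> phraseToEntity = new Dictionary<string, string>();

    // Call whenever the spawnable entity list changes (e.g. mods are toggled).
    public void RebuildCommands(IEnumerable<string> spawnableEntityNames)
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
            recognizer = null;
        }

        phraseToEntity.Clear();
        foreach (string name in spawnableEntityNames)
            phraseToEntity["Spawn " + name] = name;

        var phrases = new List<string>(phraseToEntity.Keys);
        recognizer = new KeywordRecognizer(phrases.ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        string entity;
        if (phraseToEntity.TryGetValue(args.text, out entity))
            Debug.Log("Voice command: spawn " + entity);  // hand off to the real spawn system
    }
}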

I was surprised to find voice commands actually useful! I expected this to just be a novelty addition for some users to try out – but now I think it could be an important part of the editor workflow. In many cases it's more intuitive and quicker than going into the menu system.

For some commands, like switching to play mode, it's definitely just as easy to push the menu button and select 'Play' – about the same time and effort as saying 'Modbox Play'. But for more complex actions, like spawning a specific entity, voice commands are massively faster. Rather than digging through a menu system to find the 'Dynamite' entity among 100 entities (if you have a lot of mods active), you can just say 'Spawn Dynamite'. For this use case – selecting from hundreds of options when you already know what you're looking for – I think voice commands beat every other option.

The problem with using a voice command system in a game is reliability. If your game depends on the user being able to issue voice commands and it only works 95% of the time, that can be incredibly frustrating – not working 5% of the time means it can't be depended on for important gameplay, and there is nothing more frustrating than dealing with unreliable controls in a challenging game. For a creation tool, however, it's a very useful alternative to a menu system – especially in VR, where navigating menus can be complex.

Voice commands should be live in the next Modbox update.

Categories: VR

VR Experiments

April 27th, 2016

While giving demos of MaximumVR I had a few people mention that to really feel like a Godzilla monster they wanted to stomp around the city.
So to add feet, Kate took some old Vive DK1 controllers and attached them to rubber boots:

We had to use old Vive DK1 controllers rather than the pre/consumer versions, since those have a bandwidth limit of two controllers (they go through the headset, while the old ones have a separate connection).
We originally tried strapping them directly onto our feet, but then we didn't know exactly where the ground was (your feet would end up at a different position depending on where you strapped them on, and the angle would need to be perfect). Giant rubber boots also ended up working well because players just naturally felt ridiculous wearing them.

No way players will actually be able to try this yet – but hopefully full body tracking will eventually be a thing!

One of the main Vive experiments I've wanted to try since getting the first dev kit was interacting with fluids and soft bodies. I wasn't sure of the general game idea, but I knew that just interacting with non-rigid bodies would be incredibly satisfying.

To do this I had to grab the Flex branch of Unreal and compile it, then just add standard motion controllers. My plan was to make a pancake simulator game (since the most satisfying liquid was one with a lot of viscosity, and pouring it and then moving it around in a pan was fun). I knew the Flex liquid simulation was pretty limiting game-wise (no collision event data, no way to serialize, and properties can only be changed for the whole particle group), but just messing with the liquid in any way would be enough of a game for me.

But I got tired of dealing with the Unreal engine. I'm glad I took a weekend to learn Blueprints and get a C++ refresher, but the time it would take to go further with this project wouldn't be worth it.

Categories: Modbox, VR