Blog

Add locomotion to the Mozilla Unity-to-WebXR example: tutorial / copy-and-paste script

2020/05/29 19:55

What this is

This is a pretty much plug-and-play script and tutorial for adding locomotion to the WebXR example from Mozilla.

By the end of the implementation, you will:

  • be able to navigate a virtual space with one or both hand controllers
  • move forwards and backwards at the speed you choose (depending on how far the thumbstick is tilted), in the direction your headset is facing
  • snap-turn by the number of degrees set in the code

Background

My next VR experiment will most likely have to run online, due to the implications of Covid-19 for in-person research (especially when that research involves strapping a face mask to the faces of tens of participants).

Thankfully (and coincidentally), Mozilla have recently updated their Unity-WebXR exporter tool.

Running a decent VR experience directly in the browser seems like a natural fit for research experiments, as it saves users the trouble of downloading and installing VR builds on their systems, and it should make it easier to collect additional data about participants.

While the implications and challenges of conducting remote VR research are currently unclear, it doesn't mean we can't get up and running.

Mozilla Unity->WebXR Example

Mozilla provide a download-and-run Unity project to get started with their WebXR exporter, giving you the following features:

  • Roomscale headset tracking
  • Hand tracking
  • Trigger and grip button controls
  • Interactable items (which you can grab)

This is a great start, but it misses out locomotion - you're totally unable to move around the space. For me, this was limiting, so I wanted to create a plug-and-play script to add this functionality for myself and others who might need it. The result is one script, plus a little fiddling around the edges. Check out the tutorial below.

How to add locomotion + snapturns

1. Get started with the Mozilla example

First, head to https://github.com/MozillaReality/unity-webxr-export and download the example. It's a complete Unity project, so open it like you would a normal one - they recommend Unity 2019.3.

They have a walkthrough on the GitHub page covering how to set this part up, so I'm leaving it in Mozilla's capable hands.

2. Find the Hand gameobjects

Once you have the example open and running, it's time to add locomotion. First, find the handL gameobject - it's a child of WebXRCameraSet.

3. Open up its input map

After selecting the hand object, look at the Inspector on the right. In the script “Web XR Controller”, there is an Input Map called XRLeftController. Double click on that to open up the controller input map.

4. Add two new inputs

After opening the XRLeftControllerMap, it'll now take over the Inspector window. Make sure “Inputs” is expanded, and change the size from 2 to 4. This adds two new inputs for us to access.

5. Name the inputs

The two new inputs will appear at the bottom. We need to name them with terms that Unity's Input Manager will recognise. Therefore we must call them Horizontal and Vertical. Check the picture below and match the settings.
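
These names are the strings that get passed to Input.GetAxis in the locomotion script we'll write in step 7. If you want to sanity-check the wiring before going further, a throwaway test script like the one below (purely illustrative, not part of the final setup, and only useful where Unity's Input Manager can actually see your controller) logs the same axes the locomotion script will read:

using UnityEngine;

public class ThumbstickTest : MonoBehaviour
{
    void Update()
    {
        // These strings must match the Input Map entries and the Input Manager axes exactly
        float vertical = Input.GetAxis("Vertical");     // forward/back tilt, roughly -1 to 1
        float horizontal = Input.GetAxis("Horizontal"); // left/right tilt, roughly -1 to 1
        Debug.Log("Vertical: " + vertical + ", Horizontal: " + horizontal);
    }
}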

6. Create a Locomotion.cs script

We now need to create a script to turn those inputs into actions. Right click anywhere in the Project window (I like to put it inside the Scripts folder, because I'm RPing a lawful good developer) and go to Create → C# Script. Call it Locomotion.cs and open it up, ready to edit.

7. Copy my code

Copy the code below into the file. If you're curious how it works, it is commented in-line.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using WebXR;
 
[RequireComponent(typeof(WebXRController))]
public class Locomotion : MonoBehaviour
{
    private WebXRController controller;
    public float speed = 0.2f;
    private bool snapTurnDebounce = false;
    GameObject player, head;
 
    // Start is called before the first frame update
    void Start()
    {
        //Get the GameObject that is the player's avatar
        player = GameObject.Find("WebXRCameraSet");
 
        //Get the object that represents the real-world headset (e.g. the main camera)
        head = GameObject.Find("CameraMain");
 
    }
 
    // Update is called once per frame
    void Update()
    { 
 
        //Read float values (which range from -1 to 1) from the two thumbstick axes (forward/back, left/right)
        float forwardBack = Input.GetAxis("Vertical");
        float leftRight = Input.GetAxis("Horizontal");
 
        //SNAP TURNING
        //Snap turning on thumbstick being pushed left or right
        //With debounce to stop the snap turning from going forever
        float snapTurnThreshold = 0.5f; //How far thumbstick stick is pushed before snapping happens
        float snapTurnAmount = 30; //How far (in degrees) the player rotates each snapturn
 
        if (leftRight > snapTurnThreshold && snapTurnDebounce == false)
            {
                turnAndAdjust(snapTurnAmount);
            }
 
        else if (leftRight < -snapTurnThreshold && snapTurnDebounce == false)
            {
                turnAndAdjust(-snapTurnAmount);
            }
 
        else if (leftRight < 0.1 && leftRight > -0.1 && snapTurnDebounce == true)
            {
                snapTurnDebounce = false;
            }
 
        //FORWARD/BACKWARD LOCOMOTION
 
        float locomotionThreshold = 0.2f; //Only start moving after the thumbstick has been pushed this far
        float locomotionSpeed = 2f; //Adjusts the speed in response to how far thumbstick is pushed
 
        if ((forwardBack > locomotionThreshold || forwardBack < -locomotionThreshold))
            {
                float distance = forwardBack * locomotionSpeed; //Distance to move this frame, scaled by how far the thumbstick is pushed
 
                float tempY = player.transform.position.y; //Get current Y position so we can put it back in to avoid flying (locks our player to the ground plane)
                Vector3 movement = player.transform.position + head.transform.forward * distance * Time.deltaTime; //Take the player movement, apply transformations
                movement.y = tempY; //Re-introduce previous Y value
                player.transform.position = movement; //Apply movement to player object
 
            }
 
    }
 
 
    private void turnAndAdjust(float i)
    {
 
        //This is actually trickier than it seems. This is because the "player" actually represents the centre of the player's initial starting area,
        //according to the centre of their VR setup in their home.
        //And the "camera" and hands that you use in VR are relative to this initial starting area and its center.
        //E.g. if you and your headset are 1 meter forward and 1 meter to the right of the center of your playspace in the real world, 
        //the headset "camera" will be 1m forward and 1m right of the position of the "player" in Unity.
        //You also can't rotate the "camera" object directly, as it is 1-1 fixed to the headset's position and rotation in the real world space
        //If you rotate the "player", the "camera" actually moves position.
        //E.g. if your player is at X: 0, Z: 0, but your camera is at X: 1, Z:1 (because you and your headset are 1m forward and to the right in the real world)
        //When you rotate the player 90 degrees, the camera (as a child) moves to X: -1, Z: 1. 
        //SO we need to understand how the camera moves position, and where, and then move the player object to compensate for this movement, so it feels like
        //the camera has rotated but not moved. Phew.
 
        Vector3 originalPosition = head.transform.position; //Store the head/camera position
        player.transform.Rotate(0, i, 0); //Rotate the player area
        Vector3 newPosition = head.transform.position; //Get the new head/camera position
        Vector3 difference = newPosition - originalPosition; //Calculate how much the head/camera moved after player was rotated
        player.transform.position = player.transform.position - difference; //Move the player by the difference above, to offset the head/camera position change caused by the rotation
        snapTurnDebounce = true;
    }
}

8. Add the Locomotion.cs script to the handL object

As the subheader says, add the Locomotion.cs script to the handL object.

9. Change the Unity Input Manager settings

This example plugs into the Unity Input Manager to interpret controller inputs, which means that to get the behaviour we want from the thumbsticks, we need to adjust the Input Manager settings.

Go to Edit → Project Settings:

Then choose “Input Manager”:

Now find the second entries for Horizontal and Vertical. The first entries are for handling buttons, but ours are joystick axes (axe-ees?).

You can fiddle with these, but I like Gravity (1000), Dead (0.01) and Sensitivity (1.2). Gravity is how quickly your input resets to normal. The higher the gravity, the more responsive your input will be at registering when you have stopped pushing it. Dead is how far you need to push the stick before movement is registered. And sensitivity is the “units per second” that the axis will move toward the target value.
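
To help picture what those three settings do, here's a rough sketch (my own approximation of the behaviour described above, not Unity's actual implementation) of how a smoothed axis value changes each frame:

using UnityEngine;

public class AxisSmoothingSketch : MonoBehaviour
{
    // Approximate model of how Gravity, Dead and Sensitivity shape the value returned by Input.GetAxis
    float SimulateAxis(float current, float rawInput, float gravity, float dead, float sensitivity)
    {
        if (Mathf.Abs(rawInput) < dead)
            // Stick is inside the dead zone: the value falls back toward 0 at 'gravity' units per second
            return Mathf.MoveTowards(current, 0f, gravity * Time.deltaTime);

        // Stick is pushed: the value moves toward the raw input at 'sensitivity' units per second
        return Mathf.MoveTowards(current, rawInput, sensitivity * Time.deltaTime);
    }
}

With Gravity at 1000, the value snaps back to zero almost instantly when you let go of the stick, which is what lets the snap-turn debounce in the locomotion script release quickly.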

10. Repeat the above steps for the right hand

If you want both hands' thumbsticks to do the same thing (movement), then repeat the steps for the right hand gameobject (handR).

You can use the same locomotion script, but remember you'll need to add the Horizontal and Vertical inputs to the XRRightControllerMap (you previously did the XRLeftControllerMap).

And you're done. Please enjoy!

Eroding the Boundaries of Cognition: Implications of Embodiment

2020/02/03 10:24

Some notes from the Michael L. Anderson, Michael J. Richardson, Anthony Chemero paper:

  • There is, for example, ample evidence that verb retrieval tasks activate brain areas involved in motor control functions, and naming colors and animals (i.e., processing nouns) activates brain regions associated with visual processing (Damasio & Tranel, 1993; Damasio, Grabowski, Tranel, Hichwa, & Damasio, 1996; Martin, Haxby, Lalonde, Wiggs, & Ungerleider, 1995; Martin, Ungerleider, & Haxby, 2000; Martin, Wiggs, Ungerleider, & Haxby, 1996; Pulvermüller, 2005)
  • It appears that perceiving manipulable artifacts, or even just seeing their names, activates brain regions that are also activated by grasping (Chao & Martin, 2000).
  • And there are myriad demonstrations of interactions between language and motor control more generally, perhaps most striking the recent findings that manipulating objects can improve reading comprehension in school-age children (Glenberg, Brown, & Levin, 2007).
  • In other words, only systems with component-dominant dynamics can be modular; when dynamics are interaction dominant, it is difficult to localize the aspects of particular operations in particular parts of the system.
  • For example, Dotov, Nie, and Chemero (2010, in press) and Nie, Dotov, and Chemero (in press) describe experiments designed to induce and then temporarily disrupt an extended cognitive system. Participants in these experiments play a simple video game, controlling an object on a monitor using a mouse. At some point during the 1-minute trial, the connection between the mouse and the object it controls is disrupted temporarily before returning to normal. Dotov et al. found 1⁄f scaling at the hand-mouse interface while the mouse was operating normally, but not during the disruption. As discussed above, this indicates that, during normal operation, the computer mouse is part of the smoothly functioning interaction-dominant system engaged in the task; during the mouse perturbation, however, the 1⁄f scaling at the hand-mouse interface disappears temporarily, indicating that the mouse is no longer part of the extended interaction-dominant system.
  • These results all indicate that the boundary between a cognitive agent and his or her environment is malleable
  • This means that there is no specific brain area responsible for, say, object identification. Indeed, instances of object identification might be accomplished by a softly assembled coalition of components spanning brain, body, tools, and, even, other agents. Second, the traditional cognitive faculties, those that were traditionally assumed to be accomplished by anatomical modules, can no longer be distinguished from one another. Perception, action, judgment, language, and motor control use the same neural real estate assembled into distinct coalitions.

Brain Bits

2020/02/03 09:37

I never got on with the idea of parts of the brain, like “this part is for imagining, this part is for counting”. Though I also never really explored why I didn't like the idea. Perhaps because it seemed like a strangely simplistic way of explaining how something so weird and blobby could work.

I kinda imagined the brain as a big mess, where there might be *some* dominance in certain areas for certain activities, but I never saw a convincing case for the idea that if you lost part of a brain (lost? As if you've casually misplaced a lump of gooey grey matter), that the processing that was there disappeared with it.

My mental image was more of a messy, interdependent system that could compensate for damaged or missing parts (within reason), with parts of the brain able to compensate and change depending on the requirements and development of the brain-holder.

Anyway, I recently found a language for this, in the terms “softly assembled systems” and “interaction-dominant dynamics”. Here's an explainer from Anderson:

Certain systems, such as an automobile or a laptop computer, are composed of a series of parts, each of which has a particular role that it fulfills. Other systems, such as flocks of birds, are more fluidly put together. In the latter case, it doesn’t matter which particular birds are part of the flock—any old bird will do—and each bird is capable of taking up each position in the flock. Indeed, during flight each bird will take up multiple positions in the flock. The flock is softly assembled, in that it is composed of a temporary coalition of entities, engaged in collaborative task. Some softly assembled systems exhibit interaction-dominant dynamics, as opposed to component-dominant dynamics. In component-dominant dynamics, behavior is the product of a rigidly delineated architecture of modules, each with predetermined functions; in interaction-dominant dynamics, on the other hand, coordinated processes alter one another’s dynamics and it is difficult, and sometimes impossible, to assign particular roles to particular components. Sometimes softly assembled systems exhibiting interaction-dominant dynamics are called synergies.

I've discovered I've always felt like the brain is a softly-assembler system with interaction-dominant dynamics! And that it's a pretty solid idea:

Most recently, Anderson (2010) and Anderson and Pessoa (2011) conclude from a review of 1,469 fMRI experiments in 11 task domains (including vision, audition, attention, emotion, language, mathematics, memory, abstract reasoning, and action execution, inhibition, and observation) that a typical anatomical region (as delimited, for example, by Freesurfer) is involved in supporting multiple tasks across nine separate cognitive domains. Even relatively small pieces of neural real estate (equivalent to 1⁄1000th of the brain) typically support tasks across more than four of these domains.

So how does the brain do things? One idea is that it's the activation patterns between areas of the brain that matter, and that newly formed human behaviours (talking compared with, like, eating) use more widely spread internal brain activation patterns. E.g. we're (as a species, over thousands of years) finding ways to use the basic machinery of our simplistic mammalian brains for our more amazing purposes:

Anderson demonstrated that the differences between cognitive domains are marked less by differences in the neural circuitry devoted to each, and more by the different patterns of cooperation between mostly shared circuitry (Anderson, 2008). In addition, it appears that the functional complexes supporting tasks in newer—more recently evolved—cognitive domains utilize more and more widely scattered circuitry than do the complexes supporting older functionality like vision and motor control (Anderson, 2007, 2008)

Which, taken to a conclusion that I've made up and couldn't find a quote for from someone more informed, suggests that nurture is vitally important in helping the brain form those activation pathways.

Review: The role of the active learning approach in teaching English as a foreign language

2019/12/22 16:37

  • Active Learning is a type of engaging approach or technique for language learning.
  • Many language learning approaches can be contextualised as active learning techniques
  • Active learning may be the answer for issues concerning the lack of interest, engagement, participatory actions, voice, and sense of community, as well as ownership, responsibility and autonomy.
  • Grounded in constructivism