VR Host

Overview

Fusion VR Host demonstrates a quick and easy approach to starting multiplayer games or applications in VR.

The choice between the Shared and Host/Server topologies should be driven by the specifics of your game. In this sample, the Host mode is used.

The purpose of this sample is to clarify how to handle a VR rig, and to provide a basic teleport and grab example.

[Image: Fusion VR Host sample]

Before You Start

  • The project has been developed with Unity 2021.3 and Fusion 2.
  • To run the sample, first create a Fusion AppId in the PhotonEngine Dashboard and paste it into the App Id Fusion field in Real Time Settings (reachable from the Fusion menu). Then load the Launch scene and press Play.

Download

Version | Release Date | Download
2.0.0   | Jan 11, 2024 | Fusion VR Host 2.0.0 Build 387

Handling Input

Meta Quest

  • Teleport: press A, B, X, Y, or any stick to display a pointer. On release, you will teleport onto any accepted target
  • Grab: first put your hand over the object, then grab it using the controller's grab button

Mouse

A basic desktop rig is included in the project, so basic interactions are also possible with the mouse.

  • Move: left click with your mouse to display a pointer. On release, you will teleport onto any accepted target
  • Rotate: keep the right mouse button pressed and move the mouse to rotate the point of view
  • Grab: left click with your mouse on an object to grab it.

Connection Manager

The NetworkRunner is installed on the Connection Manager game object. The Connection Manager is in charge of configuring the game settings and starting the connection.

C#

private async void Start()
{
    // Launch the connection at start
    if (connectOnStart) await Connect();
}

public async Task Connect()
{
    // Create the scene manager if it does not exist
    if (sceneManager == null) sceneManager = gameObject.AddComponent<NetworkSceneManagerDefault>();
    if (onWillConnect != null) onWillConnect.Invoke();

    // Start or join (depends on gamemode) a session with a specific name
    var args = new StartGameArgs()
    {
        GameMode = gameMode, // serialized GameMode field, "Auto Host or Client" by default in this Host sample (see below)
        Scene = CurrentSceneInfo(),
        SceneManager = sceneManager
    };

    // Connection criteria (note: actual project code contains alternative options)
    args.SessionName = roomName;

    await runner.StartGame(args);
}

public virtual NetworkSceneInfo CurrentSceneInfo()
{
    var activeScene = SceneManager.GetActiveScene();
    SceneRef sceneRef = default;

    if (activeScene.buildIndex < 0 || activeScene.buildIndex >= SceneManager.sceneCountInBuildSettings)
    {
        Debug.LogError("Current scene is not part of the build settings");
    }
    else
    {
        sceneRef = SceneRef.FromIndex(activeScene.buildIndex);
    }

    var sceneInfo = new NetworkSceneInfo();
    if (sceneRef.IsValid)
    {
        sceneInfo.AddSceneRef(sceneRef, LoadSceneMode.Single);
    }
    return sceneInfo;
}

Implementing INetworkRunnerCallbacks allows the Fusion NetworkRunner to interact with the Connection Manager class. In this sample, the OnPlayerJoined callback is used to spawn the user prefab on the host when a player joins the session, and OnPlayerLeft to despawn it when that player leaves the session.

C#

    public void OnPlayerJoined(NetworkRunner runner, PlayerRef player)
    {
        // The user's prefab has to be spawned by the host
        if (runner.IsServer && userPrefab != null)
        {
            Debug.Log($"OnPlayerJoined. PlayerId: {player.PlayerId}");
            // We make sure to give the input authority to the connecting player for their user's object
            NetworkObject networkPlayerObject = runner.Spawn(userPrefab, position: transform.position, rotation: transform.rotation, inputAuthority: player, (runner, obj) => {
            });

            // Keep track of the player avatars so we can remove them when they disconnect
            _spawnedUsers.Add(player, networkPlayerObject);
        }
    }

    public void OnPlayerLeft(NetworkRunner runner, PlayerRef player)
    {
        // Find and remove the player's avatar (only the host would have stored the spawned game object)
        if (_spawnedUsers.TryGetValue(player, out NetworkObject networkObject))
        {
            runner.Despawn(networkObject);
            _spawnedUsers.Remove(player);
        }
    }

Please check that "Auto Host or Client" is selected on the Connection Manager game object.

[Image: Connection Manager - Auto Host Or Client setting]

Note that it is also possible to select "Host" or "Client" to ensure a specific role, for instance during testing.
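
The game mode used by the Connect() method above comes from this selection. As a hedged sketch (field names are illustrative, check the actual component for the exact names), the Connection Manager presumably exposes something like:

C#

    [Header("Room configuration")]
    public GameMode gameMode = GameMode.AutoHostOrClient;
    public string roomName = "SampleFusionVR";
    public bool connectOnStart = true;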

Rigs

Overview

In an immersive application, the rig describes all the mobile parts required to represent a user: usually both hands, the head, and the play area (the personal space that can be moved, when a user teleports for instance).

In a networked session, every user is represented by a networked rig, whose various parts' positions are synchronized over the network.

[Image: Fusion VR Host rigs logic]

Several architectures are possible, and valid, regarding how the rig parts are organized and synchronized. Here, a user is represented by a single NetworkObject, with several nested NetworkTransforms, one for each rig part.

Regarding the specific case of the networked rig representing the local user, this rig has to be driven by the hardware inputs. To simplify this process, a separate, non-networked rig has been created, called the "Hardware rig". It uses the Unity InputDevice API to collect the hardware inputs.
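
As an illustration only (this is not the sample's exact code), reading a rig part pose through the InputDevice API can look like this:

C#

    using UnityEngine;
    using UnityEngine.XR;

    // Illustrative sketch: drive a hardware rig part (here, the headset) with the XR InputDevice API
    public class HeadsetPoseReader : MonoBehaviour
    {
        void Update()
        {
            var device = InputDevices.GetDeviceAtXRNode(XRNode.Head);
            if (device.isValid
                && device.TryGetFeatureValue(CommonUsages.devicePosition, out Vector3 position)
                && device.TryGetFeatureValue(CommonUsages.deviceRotation, out Quaternion rotation))
            {
                transform.localPosition = position;
                transform.localRotation = rotation;
            }
        }
    }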

Details

Rig

All the parameters driving the rig (its position in space and the pose of the hands) are included in the RigInput structure. Information related to grabbed objects is also included in this structure.

C#

    public struct RigInput : INetworkInput
    {
        public Vector3 playAreaPosition;
        public Quaternion playAreaRotation;
        public Vector3 leftHandPosition;
        public Quaternion leftHandRotation;
        public Vector3 rightHandPosition;
        public Quaternion rightHandRotation;
        public Vector3 headsetPosition;
        public Quaternion headsetRotation;
        public HandCommand leftHandCommand;
        public HandCommand rightHandCommand;
        public GrabInfo leftGrabInfo;
        public GrabInfo rightGrabInfo;
    }

The HardwareRig class updates the structure when the Fusion NetworkRunner polls for user inputs. To do so, it collects input parameters from the various hardware rig parts.

C#

    public void OnInput(NetworkRunner runner, NetworkInput input)
    {
        RigInput rigInput = new RigInput();
        rigInput.playAreaPosition = transform.position;
        rigInput.playAreaRotation = transform.rotation;

        rigInput.leftHandPosition = leftHand.transform.position;
        rigInput.leftHandRotation = leftHand.transform.rotation;
        rigInput.rightHandPosition = rightHand.transform.position;
        rigInput.rightHandRotation = rightHand.transform.rotation;
        rigInput.headsetPosition = headset.transform.position;
        rigInput.headsetRotation = headset.transform.rotation;

        rigInput.leftHandCommand = leftHand.handCommand;
        rigInput.rightHandCommand = rightHand.handCommand;

        rigInput.leftGrabInfo = leftHand.grabber.GrabInfo;
        rigInput.rightGrabInfo = rightHand.grabber.GrabInfo;

        input.Set(rigInput);
    }

Then, the networked rig associated with the user who sent those inputs receives them: both the host (as the state authority) and the user who sent the inputs (as the input authority) receive them. Other users do not (they are proxies here).

This happens in the NetworkRig component, located on the user prefab, during FixedUpdateNetwork() (FUN), through GetInput (which only returns the input for the state and input authorities).

During the FUN, every networked rig part is configured to simply follow the input parameters coming from the matching hardware rig part.

In Host mode, once the inputs are handled by the host, they can then be forwarded to the proxies, so that they can replicate users' movements. This is either:

  • handled through [Networked] variables (for the hand pose and grabbing info): when the state authority (the host) changes a networked variable's value, this value is replicated to every user
  • or, regarding the positions and rotations, handled by the state authority's (the host's) NetworkTransform components, which handle the replication to other users.

C#

    // As we are in host topology, we use the input authority to track which player is the local user
    public bool IsLocalNetworkRig => Object.HasInputAuthority;
    public override void Spawned()
    {
        base.Spawned();
        if (IsLocalNetworkRig)
        {
            hardwareRig = FindObjectOfType<HardwareRig>();
            if (hardwareRig == null) Debug.LogError("Missing HardwareRig in the scene");
        }
    }

    public override void FixedUpdateNetwork()
    {
        base.FixedUpdateNetwork();
        // update the rig at each network tick
        if (GetInput<RigInput>(out var input))
        {
            transform.position = input.playAreaPosition;
            transform.rotation = input.playAreaRotation;
            leftHand.transform.position = input.leftHandPosition;
            leftHand.transform.rotation = input.leftHandRotation;
            rightHand.transform.position = input.rightHandPosition;
            rightHand.transform.rotation = input.rightHandRotation;
            headset.transform.position = input.headsetPosition;
            headset.transform.rotation = input.headsetRotation;
            // we update the hand pose info. It will trigger on network hands OnHandCommandChange on all clients, and update the hand representation accordingly
            leftHand.HandCommand = input.leftHandCommand;
            rightHand.HandCommand = input.rightHandCommand;
        
            leftGrabber.GrabInfo = input.leftGrabInfo;
            rightGrabber.GrabInfo = input.rightGrabInfo;
        }
    }

Aside from moving the networked rig parts' positions during FixedUpdateNetwork(), the NetworkRig component also handles the local extrapolation: during Render(), rig parts are moved using the most recent data from the local hardware rig (only for the local user, who has the input authority on them).

It ensures that the local user always has the most up-to-date possible positions for their own hands (to avoid potential unease), even if the screen refresh rate is higher than the network tick rate.

The [DefaultExecutionOrder(NetworkRig.EXECUTION_ORDER)] attribute before the class, with EXECUTION_ORDER = 100, ensures that the NetworkRig Render() will be called after the NetworkTransform methods, so that NetworkRig can override the original handling.

C#

    public override void Render()
    {
        base.Render();
        if (IsLocalNetworkRig)
        {
            // Extrapolate for local user:
            // we want to have the visual at the good position as soon as possible, so we force the visuals to follow the most fresh hardware positions
            // To update the visual object, and not the actual networked position, we move the interpolation targets
            transform.position = hardwareRig.transform.position;
            transform.rotation = hardwareRig.transform.rotation;
            leftHand.transform.position = hardwareRig.leftHand.transform.position;
            leftHand.transform.rotation = hardwareRig.leftHand.transform.rotation;
            rightHand.transform.position = hardwareRig.rightHand.transform.position;
            rightHand.transform.rotation = hardwareRig.rightHand.transform.rotation;
            headset.transform.position = hardwareRig.headset.transform.position;
            headset.transform.rotation = hardwareRig.headset.transform.rotation;
        }
    }
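
For reference, the attribute mentioned above presumably sits on the class declaration, along these lines (a minimal sketch, with the constant value taken from the text above):

C#

    [DefaultExecutionOrder(NetworkRig.EXECUTION_ORDER)]
    public class NetworkRig : NetworkBehaviour
    {
        public const int EXECUTION_ORDER = 100;
        // ...
    }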

Headset

The NetworkHeadset class is very simple: it provides access to the headset NetworkTransform for the NetworkRig class.

C#

    public class NetworkHeadset : NetworkBehaviour
    {
        [HideInInspector]
        public NetworkTransform networkTransform;
        private void Awake()
        {
            if (networkTransform == null) networkTransform = GetComponent<NetworkTransform>();
        }
    }

Hands

Like the NetworkHeadset class, the NetworkHand class provides access to the hand Network Transform for the NetworkRig class.

To synchronize the hand pose, a network structure called HandCommand has been created in the HardwareHand class.

C#

    // Structure representing the inputs driving a hand pose 
    [System.Serializable]
    public struct HandCommand : INetworkStruct
    {
        public float thumbTouchedCommand;
        public float indexTouchedCommand;
        public float gripCommand;
        public float triggerCommand;
        // Optional commands
        public int poseCommand;
        public float pinchCommand; // Can be computed from triggerCommand by default
    }

This HandCommand structure is used in the IHandRepresentation interface, which sets various hand properties, including the hand pose. The NetworkHand can have a child IHandRepresentation, to which it will forward the hand pose data.

C#

    public interface IHandRepresentation
    {
        public void SetHandCommand(HandCommand command);
        public GameObject gameObject { get; }
        public void SetHandColor(Color color);
        public void SetHandMaterial(Material material);
        public void DisplayMesh(bool shouldDisplay);
        public bool IsMeshDisplayed { get; }
        public Material SharedHandMaterial { get; }
    }

The OSFHandRepresentation class, located on each hand, implements this interface in order to modify the fingers' positions thanks to the provided hand animator (ApplyCommand(HandCommand command) function).

[Image: Fusion VR Host hand representation]
[Image: Fusion VR Host hand animator]
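
As an illustration, applying a HandCommand to the hand animator could look like the following sketch (the animator field and parameter names are hypothetical and depend on the hand rig actually used):

C#

    // Hypothetical sketch: mapping HandCommand values onto animator parameters
    public void ApplyCommand(HandCommand command)
    {
        handAnimator.SetFloat("Grip", command.gripCommand);
        handAnimator.SetFloat("Trigger", command.triggerCommand);
        handAnimator.SetFloat("ThumbTouched", command.thumbTouchedCommand);
        handAnimator.SetFloat("IndexTouched", command.indexTouchedCommand);
    }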

Now, let's see how it is synchronized.

The HandCommand structure is updated with the fingers' positions in the Update() of the HardwareHand class.

C#

    protected virtual void Update()
    {
        // update hand pose
        handCommand.thumbTouchedCommand = thumbAction.action.ReadValue<float>();
        handCommand.indexTouchedCommand = indexAction.action.ReadValue<float>();
        handCommand.gripCommand = gripAction.action.ReadValue<float>();
        handCommand.triggerCommand = triggerAction.action.ReadValue<float>();
        handCommand.poseCommand = handPose;
        handCommand.pinchCommand = 0;
        // update hand interaction
        isGrabbing = grabAction.action.ReadValue<float>() > grabThreshold;
        if (localHandRepresentation != null) localHandRepresentation.SetHandCommand(handCommand);
    }

At each NetworkRig FixedUpdateNetwork(), the hand pose data is updated for the local user, along with the other rig inputs.

C#

    public override void FixedUpdateNetwork()
    {
        base.FixedUpdateNetwork();

        // update the rig at each network tick
        if (GetInput<RigInput>(out var input))
        {
            transform.position = input.playAreaPosition;
            transform.rotation = input.playAreaRotation;
            leftHand.transform.position = input.leftHandPosition;
            leftHand.transform.rotation = input.leftHandRotation;
            rightHand.transform.position = input.rightHandPosition;
            rightHand.transform.rotation = input.rightHandRotation;
            headset.transform.position = input.headsetPosition;
            headset.transform.rotation = input.headsetRotation;
            // we update the hand pose info. It will trigger on network hands OnHandCommandChange on each client, and update the hand representation accordingly
            leftHand.HandCommand = input.leftHandCommand;
            rightHand.HandCommand = input.rightHandCommand;

            leftGrabber.GrabInfo = input.leftGrabInfo;
            rightGrabber.GrabInfo = input.rightGrabInfo;
        }
    }

The NetworkHand component, located on each hand of the user prefab, manages the hand representation update.

To do so, the class contains a HandCommand networked structure. A ChangeDetector is used to call UpdateHandRepresentationWithNetworkState() for each player every time the networked structure is changed by the state authority (the host), and updates the hand representation accordingly.

C#

    [Networked]
    public HandCommand HandCommand { get; set; }
    ChangeDetector changeDetector;

    public override void Render()
    {
        base.Render();
        if (IsLocalNetworkRig)
        {
        ...
        }
        else
        {
            foreach (var changedNetworkedVarName in changeDetector.DetectChanges(this))
            {
                if (changedNetworkedVarName == nameof(HandCommand))
                {
                    // Will be called when the local user changes the hand pose structure
                    // We trigger here the actual animation update
                    UpdateHandRepresentationWithNetworkState();
                }
            }
        }
    }

    // Update the hand representation each time the network structure HandCommand is updated
    void UpdateHandRepresentationWithNetworkState()
    {
        if (handRepresentation != null) handRepresentation.SetHandCommand(HandCommand);
    }

Similarly to what NetworkRig does for the rig part positions, during the Render(), NetworkHand also handles the extrapolation and update of the hand pose, using the local hardware hands.

C#

    public override void Render()
    {
        base.Render();
        if (IsLocalNetworkRig)
        {
            // Extrapolate for local user : we want to have the visual at the good position as soon as possible, so we force the visuals to follow the most fresh hand pose
            UpdateRepresentationWithLocalHardwareState();
        }
        else
        {
        ...
        }
    }

    // Update the hand representation with the local hardware state
    void UpdateRepresentationWithLocalHardwareState()
    {
        if (handRepresentation != null) handRepresentation.SetHandCommand(LocalHardwareHand.handCommand);
    }

Teleport & locomotion

[Image: Fusion VR Host teleport]

The RayBeamer class located on each hardware rig hand is in charge of displaying a ray when the user pushes a button. When the user releases the button, if the ray target is valid, then an event is triggered.

C#

    if (onRelease != null) onRelease.Invoke(lastHitCollider, lastHit);

This event is listened to by the Rig Locomotion class located on the hardware rig.

C#

    beamer.onRelease.AddListener(OnBeamRelease);

Then, it calls the rig teleport coroutine…

C#

    protected virtual void OnBeamRelease(Collider lastHitCollider, Vector3 position)
    {
        ...
        if (ValidLocomotionSurface(lastHitCollider))
        {
            StartCoroutine(rig.FadedTeleport(position));
        }
    }

This coroutine updates the hardware rig position and asks a Fader component, available on the hardware headset, to fade the view in and out during the teleport (to avoid cybersickness).

C#

    public virtual IEnumerator FadedTeleport(Vector3 position)
    {
        if (headset.fader) yield return headset.fader.FadeIn();
        Teleport(position);
        if (headset.fader) yield return headset.fader.WaitBlinkDuration();
        if (headset.fader) yield return headset.fader.FadeOut();
    }

    public virtual void Teleport(Vector3 position)
    {
        Vector3 headsetOffset = headset.transform.position - transform.position;
        headsetOffset.y = 0;
        transform.position = position - headsetOffset;
    }

As seen previously, this modification of the hardware rig position will be synchronized over the network thanks to the OnInput callback.

The same strategy applies to the rig rotation, where CheckSnapTurn() triggers a rig modification.

C#

    IEnumerator Rotate(float angle)
    {
        timeStarted = Time.time;
        rotating = true;
        yield return rig.FadedRotate(angle);
        rotating = false;
    }

    public virtual IEnumerator FadedRotate(float angle)
    {
        if (headset.fader) yield return headset.fader.FadeIn();
        Rotate(angle);
        if (headset.fader) yield return headset.fader.WaitBlinkDuration();
        if (headset.fader) yield return headset.fader.FadeOut();
    }

    public virtual void Rotate(float angle)
    {
        transform.RotateAround(headset.transform.position, transform.up, angle);
    }

Grabbing

Overview

The grabbing logic here is separated into two parts:

  • the local, non-networked part, which detects the actual grabbing and ungrabbing when the hardware hand triggers a grab action over a grabbable object (Grabber and Grabbable classes)
  • the networked part, which ensures that all players are aware of the grabbing status, and which manages the actual position change to follow the grabbing hand (NetworkGrabber and NetworkGrabbable classes).

Note: the code contains a few lines allowing the local part to manage the follow movement itself when used offline, for instance in use cases where the same components are used for an offline lobby. This document focuses on the actual networked usage.

[Image: Fusion VR Host grabbing logic]

Two different kinds of grabbing are available in this sample (Grabbable and NetworkGrabbable are abstract classes, with subclasses implementing each specific logic):

  • grabbing for kinematic objects: their position simply follows the position of the grabbing hand. They cannot have physics interactions with other objects. Implemented in the KinematicGrabbable and NetworkKinematicGrabbable classes.
  • grabbing for physics objects: their velocity is changed so that they follow the grabbing hand. They can have physics interactions with other objects, and can be thrown. Implemented in the PhysicsGrabbable and NetworkPhysicsGrabbable classes.

Note: even though it is possible to give kinematic objects a release velocity (to launch them), this was not added in this sample, as it would require additional code (this kind of grab is here to demonstrate a very simple code base for simple grab use cases), and the physics grabbing, also provided here, would in any case give a more logical implementation and more accurate results for this kind of use case.

[Image: Fusion VR Host grabbing classes]

Grab triggering and transfer

The HardwareHand class, located on each hand, updates the isGrabbing bool at each update: the bool is true when the user presses the grip button. Please note that the updateGrabWithAction bool is used to support the desktop rig, a version of the rig that can be driven with the mouse and keyboard (this bool must be set to false for desktop mode, true for VR mode).

C#

    protected virtual void Update()
    {
        // update hand pose
        handCommand.thumbTouchedCommand = thumbAction.action.ReadValue<float>();
        handCommand.indexTouchedCommand = indexAction.action.ReadValue<float>();
        handCommand.gripCommand = gripAction.action.ReadValue<float>();
        handCommand.triggerCommand = triggerAction.action.ReadValue<float>();
        handCommand.poseCommand = handPose;
        handCommand.pinchCommand = 0;

        // update hand interaction
        if (updateGrabWithAction) isGrabbing = grabAction.action.ReadValue<float>() > grabThreshold;
        if (localHandRepresentation != null) localHandRepresentation.SetHandCommand(handCommand);
    }

To detect collisions with grabbable objects, a simple box collider is located on each hardware hand, used by a Grabber component placed on this hand: when a collision occurs, the method OnTriggerStay() is called.

Note that in the host topology, some ticks will be forward ticks (actual new ticks), while others are resimulations (replaying past moments). Grabbing and ungrabbing should only be detected during forward ticks, which correspond to the current positions. So OnTriggerStay() returns immediately during resimulation ticks.

C#

private void OnTriggerStay(Collider other)
{
    if (rig && rig.runner && rig.runner.IsResimulation)
    {
        // We only manage grabbing during forward ticks, to avoid detecting past positions of the grabbable object
        return;
    }

First, OnTriggerStay checks if an object is already grabbed. For simplification, multiple grabbing is not allowed in this sample.

C#

    // Exit if an object is already grabbed
    if (grabbedObject != null)
    {
        // It is already the grabbed object or another, but we don't allow shared grabbing here
        return;
    }

Then it checks that:

  • the collided object can be grabbed (it has a Grabbable component)
  • the user presses the grip button

If these conditions are met, the grabbed object is asked to follow the hand through the Grabbable Grab method.

C#

    Grabbable grabbable;

    if (lastCheckedCollider == other)
    {
        grabbable = lastCheckColliderGrabbable;
    }
    else
    {
        grabbable = other.GetComponentInParent<Grabbable>();
    }
    // To limit the number of GetComponent calls, we cache the latest checked collider grabbable result
    lastCheckedCollider = other;
    lastCheckColliderGrabbable = grabbable;
    if (grabbable != null)
    {
        if (grabbable.currentGrabber != null)
        {
            // We don't allow multihand grabbing (it would have to be defined), nor hand swap (it would require tracking hovering and not allowing grabbing while the hand is already close - or any other mechanism to avoid infinite swapping between the hands)
            return;
        }
        if (hand.isGrabbing) Grab(grabbable);
    }

The Grabbable Grab() method stores the grabbing position offset

C#

    public virtual void Grab(Grabber newGrabber)
    {
        // Find grabbable position/rotation in grabber referential
        localPositionOffset = newGrabber.transform.InverseTransformPoint(transform.position);
        localRotationOffset = Quaternion.Inverse(newGrabber.transform.rotation) * transform.rotation;
        currentGrabber = newGrabber;
    }

Similarly, when the object is not grabbed anymore, the Grabbable Ungrab() call stores some details about the object.

C#

    public virtual void Ungrab()
    {
        currentGrabber = null;
        if (networkGrabbable)
        {
            ungrabPosition = networkGrabbable.networkTransform.InterpolationTarget.transform.position;
            ungrabRotation = networkGrabbable.networkTransform.InterpolationTarget.transform.rotation;
            ungrabVelocity = Velocity;
            ungrabAngularVelocity = AngularVelocity;
        }
    }

Note that depending on the grabbing type subclass actually used, some fields are not relevant (the ungrab positions are not used for physics grabbing for instance).

All this data about the grabbing (the network id of the grabbed object, the offsets, the eventual release velocity and position) is then shared in the input transfer through the GrabInfo structure.

C#

    // Store the info describing a grabbing state
    public struct GrabInfo : INetworkStruct
    {
        public NetworkBehaviourId grabbedObjectId;
        public Vector3 localPositionOffset;
        public Quaternion localRotationOffset;
        // We want the local user accurate ungrab position to be enforced on the network, and so shared in the input (to avoid the grabbable following "too long" the grabber)
        public Vector3 ungrabPosition;
        public Quaternion ungrabRotation; 
        public Vector3 ungrabVelocity;
        public Vector3 ungrabAngularVelocity;
    }

When building the input, the grabber is asked to provide the up-to-date grabbing info:

C#


    public void OnInput(NetworkRunner runner, NetworkInput input)
    {
        RigInput rigInput = new RigInput();
        ...

        rigInput.leftGrabInfo = leftHand.grabber.GrabInfo;
        rigInput.rightGrabInfo = rightHand.grabber.GrabInfo;
        input.Set(rigInput);
    }

    public GrabInfo GrabInfo
    {
        get
        {
            if (resetGrabInfo)
                return default;

            if (grabbedObject)
            {
                _grabInfo.grabbedObjectId = grabbedObject.networkGrabbable.Id;
                _grabInfo.localPositionOffset = grabbedObject.localPositionOffset;
                _grabInfo.localRotationOffset = grabbedObject.localRotationOffset;
            } 
            else
            {
                _grabInfo.grabbedObjectId = NetworkBehaviourId.None;
                _grabInfo.ungrabPosition = ungrabPosition;
                _grabInfo.ungrabRotation = ungrabRotation; 
                _grabInfo.ungrabVelocity = ungrabVelocity;
                _grabInfo.ungrabAngularVelocity = ungrabAngularVelocity;
            }

            return _grabInfo;
        }
    }

Then, when the host receives them in NetworkRig, it stores them in the NetworkGrabber GrabInfo [Networked] var.

There, on each client, during FixedUpdateNetwork(), the class checks whether the grabbing info has changed. This is done only in forward ticks, to avoid replaying the grab/ungrab during resimulations, by calling HandleGrabInfoChange to compare the previous and current grab status of the hand. When needed, it then triggers the actual Grab and Ungrab methods on the NetworkGrabbable.

C#

    public override void FixedUpdateNetwork()
    {
        base.FixedUpdateNetwork();

        if (Runner.IsForward)
        {
            // We only detect grabbing changes in forward, to avoid multiple Grab calls (that would have side effects in current implementation)
            foreach (var changedPropertyName in changeDetector.DetectChanges(this))
            {
                if (changedPropertyName == nameof(GrabInfo))
                {
                    // Grab info is filled by the NetworkRig, based on the input, and the input are filled with the Hardware rig Grabber GrabInfo
                    HandleGrabInfoChange(GrabInfo);
                }
            }
        }
    }

To grab a new object, the method first finds the grabbed NetworkGrabbable by searching for it with its network id, using Object.Runner.TryFindBehaviour.

C#

    void HandleGrabInfoChange(GrabInfo newGrabInfo)
    {
        if (grabbedObject != null)
        {
            grabbedObject.Ungrab(this, newGrabInfo);
            grabbedObject = null;
        }

        // We have to look for the grabbed object as it has changed
        // If an object is grabbed, we look for it through the runner with its Id
        if (newGrabInfo.grabbedObjectId != NetworkBehaviourId.None && Object.Runner.TryFindBehaviour(newGrabInfo.grabbedObjectId, out NetworkGrabbable newGrabbedObject))
        {
            grabbedObject = newGrabbedObject;

            if (grabbedObject != null)
            {
                grabbedObject.Grab(this, newGrabInfo);
            }
        }
    }

The actual network grabbing, ungrabbing, and following of the network grabber differ depending on which grabbing type was chosen.

Kinematic grabbing type

For the kinematic grabbing type, it is not necessary to change the input authority to move a grabbed object. The position change is only done on the host (the state authority), and then the NetworkTransform ensures that all players receive the position updates.

Follow

For the kinematic grabbing type (implemented in the KinematicGrabbable and NetworkKinematicGrabbable classes), following the current grabber simply means teleporting the grabbed object to the actual hand position (done by the state authority).

C#

    public void Follow(Transform followingtransform, Transform followedTransform)
    {
        followingtransform.position = followedTransform.TransformPoint(localPositionOffset);
        followingtransform.rotation = followedTransform.rotation * localRotationOffset;
    }

FixedUpdateNetwork

When online, the following code is called during FixedUpdateNetwork() calls

C#

    public override void FixedUpdateNetwork()
    {
        // We only update the object position if we have the state authority
        if (!Object.HasStateAuthority) return;

        if (!IsGrabbed) return;
        // Follow grabber, adding position/rotation offsets
        grabbable.Follow(followingtransform: transform, followedTransform: currentGrabber.transform);
    }

Render

Regarding the extrapolation done during Render() (the NetworkKinematicGrabbable class has a [DefaultExecutionOrder(NetworkKinematicGrabbable.EXECUTION_ORDER)] attribute with EXECUTION_ORDER = NetworkGrabber.EXECUTION_ORDER + 10, to override the NetworkTransform interpolation if needed), two cases have to be handled here:

  • extrapolation while the object is grabbed: the object's expected position is known, it should be at the hand position.
  • extrapolation when the object has just been ungrabbed: the network transform interpolation is still not the same as the extrapolation done while the object was grabbed. So for a short moment, the extrapolation has to continue (i.e. the object has to stay still, at its ungrab position), otherwise the object would briefly jump back into the past.

C#

    public override void Render()
    {
        if (IsGrabbed)
        {
            // Extrapolation: Make visual representation follow grabber visual representation, adding position/rotation offsets
            // We extrapolate for all users: we know that the grabbed object should follow the grabber accurately, even if the network position might be a bit out of sync
            grabbable.Follow(followingtransform: transform, followedTransform: currentGrabber.transform);
        } 
        else if (grabbable.ungrabTime != -1)
        {
            if ((Time.time - grabbable.ungrabTime) < ungrabResyncDuration)
            {
                // When the local user just ungrabbed the object, the network transform interpolation is still not the same as the extrapolation 
                //  we were doing while the object was grabbed. So for a few frames, we need to ensure that the extrapolation continues
                //  (ie. the object stay still)
                //  until the network transform offers the same visual conclusion that the one we used to do
                // Other ways to determine this extended extrapolation duration do exist (based on interpolation distance, number of ticks, ...)
                transform.position = grabbable.ungrabPosition;
                transform.rotation = grabbable.ungrabRotation;
            }
            else
            {
                // We'll let the NetworkTransform do its normal interpolation again
                grabbable.ungrabTime = -1;
            }
        }
    }

Note: some additional extrapolation could be done for additional edge cases, for instance on the client grabbing the object, between the actual grab and the first tick where the [Networked] vars are set: the hand visual could be followed a bit earlier (a few milliseconds) than it otherwise would be.

Physics grabbing type

Unlike the kinematic grabbing type, physics grabbing requires having the input authority on the grabbed object. Physics grabbing (implemented in the PhysicsGrabbable and NetworkPhysicsGrabbable classes) is based on applying forces to (or changing the velocity of) grabbed objects during each tick so that they follow the grabbing hand.

Grabbable simulation

To apply this physics properly, including on proxies when they collide with locally grabbed objects (so that they "resist" properly due to their attraction to the grabbing hand), the grabbed object has to be simulated everywhere, so that the physics is applied for each player during FixedUpdateNetwork.

By default, FixedUpdateNetwork is only called on the host and on the input authority. To run it on proxies as well, SetIsSimulated has to be called on the grabbable objects:

C#

Runner.SetIsSimulated(Object, true);
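
This call is typically done when the grabbable is spawned (the comment in the next snippet refers to Spawned); a minimal sketch:

C#

    public override void Spawned()
    {
        base.Spawned();
        // Make sure the grabbable is simulated on every client, so the physics also runs on proxies
        Runner.SetIsSimulated(Object, true);
    }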

Note that when the input authority of the grabbable changes (due to another user grabbing it), this setting is reset to its default value, so we have to set it again:

C#

    #region IInputAuthorityLost
    public void InputAuthorityLost()
    {
        // When using Object.AssignInputAuthority, SetIsSimulated will be reset to false. As we want the object to remain simulated (see Spawned), we have to set it back
        Runner.SetIsSimulated(Object, true);
    }
    #endregion

Grabber simulation

Besides, to apply the physics properly, we need the current position of the grabber (hand) object to follow during FixedUpdateNetwork calls, so the grabber position has to be set at its proper position during each tick, including resimulations. So SetIsSimulated is also applied on the grabbers (the hands).

By default, this would lead the hand interpolation to be based on locally predicted ticks, even on proxies, which have no info to predict hand movements, so the hands would stutter a bit. To avoid that, even though the remote hands are simulated, we ask to display them based on the remote (original) timeframe for proxies:

C#

if (supportedGrabbingKind == GrabbingKind.PhysicsAndKinematic)
{
    // The hands need to be simulated to be at the appropriate position during FUN when a grabbable follow them (physics grabbable are fully simulated)
    Runner.SetIsSimulated(Object, true);
    if (Object.HasInputAuthority == false)
    {
        // As the object is now simulated, the render time frame will take the locally simulated ticks by default.
        // But we don't really have data to guess in advance the position of the hand for remote users
        // So we still want to interpolate between the state we received from the server (the remote time frame)
        Object.RenderTimeframe = RenderTimeframe.Remote;
    }
}

Grab logic

The NetworkGrabber will trigger the Grab call on the NetworkPhysicsGrabbable. This will mostly trigger the input authority change, so that the input data used to determine the grabbing state is based on the grabbing user's input.

Then, determining whether an object is grabbed is based on the inputs for the host and the input authority (the grabbing player), while it is based on a DetailedGrabInfo networked var (containing the grabber and the grabbing details) for the proxies.
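
The full DetailedGrabInfo declaration is not reproduced here; based on how it is used in the FixedUpdateNetwork() code below, it presumably looks like this:

C#

    // Presumed layout, reconstructed from its usages below
    public struct DetailedGrabInfo : INetworkStruct
    {
        public PlayerRef grabbingUser;        // PlayerRef.None when the object is not grabbed
        public NetworkBehaviourId grabberId;  // id of the NetworkGrabber currently (or last) grabbing the object
        public GrabInfo grabInfo;             // offsets and ungrab details, as defined above
    }

    [Networked]
    public DetailedGrabInfo DetailedGrabInfo { get; set; }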

Input authority transfer

Due to the input authority transfer, for a short duration, there will be a transition where the player is grabbing, but does not have the input authority yet.

It is handled by storing temporary data (a willReceiveInputAuthority bool, who "will be" the grabber, the grabbing details, ...) that overrides the normal behavior of the grabbable during this transition time.

C#

    // Will be called by the host and by the grabbing user (the input authority of the NetworkGrabber) upon NetworkGrabber.GrabInfo change detection
    //  For other users, will be called by the local NetworkGrabbable.DetailedGrabInfo change detection
    public override void Grab(NetworkGrabber newGrabber, GrabInfo newGrabInfo)
    {
        if (Object.InputAuthority != newGrabber.Object.InputAuthority)
        {
            if (newGrabber.Object.InputAuthority == Runner.LocalPlayer)
            {
                // Store data to handle the grab while the input authority transfer is pending
                willReceiveInputAuthority = true;
                inputAuthorityChangeRequestTime = Time.time;
                inputAuthorityChangeRequestTick = Runner.Tick;
                incomingGrabInfo = newGrabInfo;
                incomingGrabber = newGrabber;
            }

            // Transferring the input authority of the cube is in fact not strictly required here (as the object is fully simulated on all clients)
            if (Object.HasStateAuthority)
            {
                Object.AssignInputAuthority(newGrabber.Object.InputAuthority);
            }
        }
        cachedGrabbers[(newGrabber.Object.InputAuthority, newGrabber.hand.side)] = newGrabber;
    }

This ensures the best reactivity for the local player when they are not the host.

Render timeframe

As we force the proxy simulation with SetIsSimulated (so that the physics runs locally), the object is always simulated, even on proxies. So by default, during Render, the interpolation would be done between locally simulated ticks.

But while applying physics is important to handle collisions with other simulated objects, the position is not perfectly predicted: the simulated hand position for the remote user does not move during those predicted ticks, while in reality it probably does. This would lead to slight stutter in the interpolation (the grabbed object would not move for a bit, then jump, and so on).

So, while the FixedUpdateNetwork() position is used for local physics computation, for the final rendering of this object we prefer to use the remote timeframe, which will interpolate between states where the hand was properly positioned to trigger the follow.

C#

    void AdaptRenderTimeFrame(NetworkGrabber grabber)
    {
        if (!grabber) return;
        if (grabber.HasInputAuthority || willReceiveInputAuthority)
        {
            Object.RenderTimeframe = RenderTimeframe.Local;
        }
        else
        {
            Object.RenderTimeframe = RenderTimeframe.Remote;
        }
    }

Ungrab release velocity

In virtual reality, hand movements are pretty fast and accurate. Ungrabbing an object will also, most of the time, occur between ticks, not precisely on a tick. There may therefore be a significant difference in velocity direction between the last recorded tick and the sub-tick moment where the object was actually ungrabbed.

To provide a movement that matches the grabbing user's expectation as closely as possible, the ungrabbing velocity is stored in the input, and then in the DetailedGrabInfo networked var for proxies, so that the accurate release velocity can be applied everywhere, making sure the physics expected by the grabbing user is replayed everywhere.
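
The velocity captured at the ungrab moment therefore has to be tracked at frame rate, not only at tick rate. A hypothetical sketch of such a per-frame estimate (the actual sample may compute it differently):

C#

    // Hypothetical per-frame velocity estimate used when the object is released
    Vector3 lastPosition;
    public Vector3 Velocity { get; private set; }

    protected virtual void Update()
    {
        if (Time.deltaTime > 0)
        {
            Velocity = (transform.position - lastPosition) / Time.deltaTime;
        }
        lastPosition = transform.position;
    }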

FixedUpdateNetwork

FixedUpdateNetwork will always be called everywhere (including on proxies) due to the SetIsSimulated configuration.

In NetworkPhysicsGrabbable, the method mainly handles:

  • determining if a hand is grabbing the object (through inputs for the state and input authorities, through DetailedGrabInfo for the proxies)
  • applying physics effects to the grabbable when it is grabbed, so that it follows the grabbing hand
  • storing DetailedGrabInfo so that the proxies know an object is grabbed and can apply the physics effects too
  • applying the release velocity on ticks where the object has been ungrabbed (either forward or resim ones)
  • handling the special case of the input authority transition (when an object has already been grabbed locally, but the grabbing user does not have the input authority yet)
  • triggering grab/ungrab events (using a change detector, only refreshed in forward stages, to avoid having several events triggered due to resims)

C#

    public override void FixedUpdateNetwork()
    {
        // ---- Handle waiting for input authority reception
        if (willReceiveInputAuthority && Object.HasInputAuthority)
        {
            // Authority received
            willReceiveInputAuthority = false;
        }
        if (willReceiveInputAuthority && (Time.time - inputAuthorityChangeRequestTime) > 1)
        {
            // Authority not received (quickly grabbed by someone else ?)
            willReceiveInputAuthority = false;
        }

        // ---- Reference previous state (up to date for host / input authority only - proxies grab info will always remain at the last confirmed value)
        bool wasGrabbed = DetailedGrabInfo.grabbingUser != PlayerRef.None;
        var previousGrabberId = DetailedGrabInfo.grabberId;

        // ---- Determine grabber/grab info for this tick
        bool isGrabbed = false;
        GrabInfo grabInfo = default;
        NetworkGrabber grabber = null;
        bool grabbingWhileNotYetInputAuthority = willReceiveInputAuthority && Runner.Tick > inputAuthorityChangeRequestTick;
        if (grabbingWhileNotYetInputAuthority)
        {
            // We are taking the input authority: we anticipate the grab before being able to read GetInput, by setting "manually" the grabber
            grabInfo = incomingGrabInfo;
            grabber = incomingGrabber;
        }
        else if (GetInput<RigInput>(out var input))
        {
            // Host or input authority: we use the input to replay the exact moment of the grab/ungrab in resims
            isGrabbed = false;
            if (input.leftGrabInfo.grabbedObjectId == Id)
            {
                isGrabbed = true;
                grabInfo = input.leftGrabInfo;
                PlayerRef grabbingUser = Object.InputAuthority;
                grabber = GrabberForSideAndPlayer(grabbingUser, RigPart.LeftController);
                previousGrabbingSide = RigPart.LeftController;
            }
            else if (input.rightGrabInfo.grabbedObjectId == Id)
            {
                isGrabbed = true;
                // one-hand grabbing only in this implementation
                grabInfo = input.rightGrabInfo;
                PlayerRef grabbingUser = Object.InputAuthority;
                grabber = GrabberForSideAndPlayer(grabbingUser, RigPart.RightController);
                previousGrabbingSide = RigPart.RightController;
            }
            else if (wasGrabbed && previousGrabbingSide != RigPart.None)
            {
                grabInfo = previousGrabbingSide == RigPart.LeftController ? input.leftGrabInfo : input.rightGrabInfo;
            }
        }
        else
        {
            // Proxy
            isGrabbed = DetailedGrabInfo.grabbingUser != PlayerRef.None;
            // one-hand grabbing only in this implementation
            grabInfo = DetailedGrabInfo.grabInfo;
            if (isGrabbed) grabber = GrabberForId(DetailedGrabInfo.grabberId);
        }

        // ---- Apply following move based on grabber/grabinfo
        if (isGrabbed)
        {
            AdaptRenderTimeFrame(grabber);
            grabbable.localPositionOffset = grabInfo.localPositionOffset;
            grabbable.localRotationOffset = grabInfo.localRotationOffset;
            Follow(followedTransform: grabber.transform, elapsedTime: Runner.DeltaTime, isColliding: IsColliding);
        }

        // ---- Store DetailedGrabInfo changes
        if (isGrabbed && (wasGrabbed == false || previousGrabberId != grabber.Id))
        {
            // We do not store data as proxies, unless if we are waiting for the input authority
            if (Object.IsProxy == false || grabbingWhileNotYetInputAuthority)
            {
                DetailedGrabInfo = new DetailedGrabInfo
                {
                    grabbingUser = grabber.Object.InputAuthority,
                    grabberId = grabber.Id,
                    grabInfo = grabInfo,
                };
            }
        }
        if (wasGrabbed && isGrabbed == false)
        {
            // We do not store data as proxies, unless if we are waiting for the input authority
            if (Object.IsProxy == false || grabbingWhileNotYetInputAuthority)
            {
                DetailedGrabInfo = new DetailedGrabInfo
                {
                    grabbingUser = PlayerRef.None,
                    grabberId = previousGrabberId,
                    grabInfo = grabInfo,
                };
            }

            // Apply release velocity (the release timing is probably between tick, so we stored in the input the ungrab velocity to have sub-tick accuracy)
            grabbable.rb.velocity = grabInfo.ungrabVelocity;
            grabbable.rb.angularVelocity = grabInfo.ungrabAngularVelocity;
        }

        // ---- Trigger callbacks and release velocity
        // Callbacks are triggered only during forward tick to avoid triggering them several time due to resims.
        // If we are waiting for input authority, we do not check (and potentially trigger) the callbacks, as the DetailedGrabInfo will temporarily be erased by the server, and so that might trigger twice the callbacks later
        if (Runner.IsForward && grabbingWhileNotYetInputAuthority == false)
        {
            TriggerCallbacksOnForwardGrabbingChanges();
        }

        // ---- Consume the isColliding value: it will be reset in the next physics simulation (used in PID based moves)
        IsColliding = false;
    }

Follow

For the physics grabbing type, following the current grabber implies changing the velocity of the grabbed object so that it eventually rejoins the grabber. It can either be done by changing the velocity directly, or by using forces to do so, depending on the kind of overall physics desired. The sample provides a PID mode and a direct velocity mode. The default mode is Velocity, but it can be changed for each object on the PhysicsGrabbable component.

C#

    public virtual void VelocityFollow(Transform followedTransform, float elapsedTime)
    {
        // Compute the requested velocity to joined target position during a Runner.DeltaTime
        rb.VelocityFollow(target: followedTransform, localPositionOffset, localRotationOffset, elapsedTime);

        // To avoid a too aggressive move, we attenuate and limit a bit the expected velocity
        rb.velocity *= followVelocityAttenuation; // followVelocityAttenuation = 0.5F by default
        rb.velocity = Vector3.ClampMagnitude(rb.velocity, maxVelocity); // maxVelocity = 10f by default
    }

C#

    public static void VelocityFollow(this Rigidbody followerRb, Transform target, Vector3 positionOffset, Quaternion rotationOffset, float elapsedTime)
    {
        followerRb.VelocityFollow(target.TransformPoint(positionOffset), target.rotation * rotationOffset, elapsedTime);
    }

    public static void VelocityFollow(this Rigidbody followerRb, Vector3 targetPosition, Quaternion targetRotation, float elapsedTime)
    {
        Vector3 positionStep = targetPosition - followerRb.transform.position;
        Vector3 velocity = positionStep / elapsedTime;

        followerRb.velocity = velocity;
        followerRb.angularVelocity = followerRb.transform.rotation.AngularVelocityChange(newRotation: targetRotation, elapsedTime: elapsedTime);
    }
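
The PID mode is not detailed in this document. As a rough, hypothetical illustration (not the sample's implementation), a PID-based follow replaces the direct velocity assignment with a force computed from the position error:

C#

    // Hypothetical PID-style follow, applying a force instead of overriding the velocity
    Vector3 integratedError = Vector3.zero;
    Vector3 previousError = Vector3.zero;
    public float pGain = 20f, iGain = 0.2f, dGain = 1f;

    public virtual void PIDFollow(Transform followedTransform, float elapsedTime)
    {
        Vector3 targetPosition = followedTransform.TransformPoint(localPositionOffset);
        Vector3 error = targetPosition - rb.position;
        integratedError += error * elapsedTime;
        Vector3 derivative = (error - previousError) / elapsedTime;
        previousError = error;

        rb.AddForce(pGain * error + iGain * integratedError + dGain * derivative, ForceMode.Force);
    }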

Render

To avoid messing with the position interpolation resulting from the physics computation, unlike for kinematic grabbing, the Render() logic here does not force the grabbed object's visual position onto the hand's visual position.

Several options are available (including doing nothing, which gives results that may be relevant choices - the hand would pass through the grabbed object when colliding, for instance).

The current implementation in the sample uses the following Render logic:

  • instead of the grabbed object's visual staying on the hand visual, it is the hand visual position that is forced to follow the grabbed object's visual position
  • in case of collision, this can lead to some differences between the real-life hand position and the displayed hand position. To make it comfortable, a "ghost" hand is displayed at the position of the real-life hand
  • to make the user feel this dissonance (especially during collisions), the controller sends a vibration proportional to the distance between the displayed hand and the actual hand. It provides a slight feeling of resistance.
  • no effort is made when releasing the object to restore the hand position smoothly (but it could be added if needed)

C#

    public override void Render()
    {
        base.Render();
        if (Object.InputAuthority != Runner.LocalPlayer)
        {
            // Allow to prevent local hardware grabbing of the same object
            grabbable.isGrabbed = IsGrabbed;
        }

        // We don't place the hand on the object while we are waiting to receive the input authority as the timeframe transitioning might lead to erroneous hand repositioning
        if (IsGrabbed && willReceiveInputAuthority == false)
        {
            var handVisual = CurrentGrabber.hand.transform;
            var grabbableVisual = networkRigidbody.InterpolationTarget.transform;

            // On remote user, we want the hand to stay glued to the object, even though the hand and the grabbed object may have various interpolation
            handVisual.rotation = grabbableVisual.rotation * Quaternion.Inverse(grabbable.localRotationOffset);
            handVisual.position = grabbableVisual.position - (handVisual.TransformPoint(grabbable.localPositionOffset) - handVisual.position);

            // Add pseudo haptic feedback if needed
            ApplyPseudoHapticFeedback();
        }
    }
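
ApplyPseudoHapticFeedback() is not detailed here. As a hedged sketch of the idea described above (member names besides those already shown are assumptions, and the actual sample probably differs), the vibration amplitude can be derived from the offset between the displayed hand and the real hand:

C#

    // Hypothetical sketch: vibrate the local controller proportionally to the hand/visual dissonance
    void ApplyPseudoHapticFeedback()
    {
        var hand = CurrentGrabber.hand;
        // Only the user actually holding the object should feel the feedback
        if (hand.IsLocalNetworkRig == false || hand.LocalHardwareHand == null) return;

        float dissonance = Vector3.Distance(hand.transform.position, hand.LocalHardwareHand.transform.position);
        float amplitude = Mathf.Clamp01(dissonance / 0.2f); // full vibration at 20 cm of offset (arbitrary threshold)

        var node = hand.side == RigPart.LeftController ? UnityEngine.XR.XRNode.LeftHand : UnityEngine.XR.XRNode.RightHand;
        var device = UnityEngine.XR.InputDevices.GetDeviceAtXRNode(node);
        if (device.isValid) device.SendHapticImpulse(0, amplitude, Time.deltaTime);
    }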

Next

Here are some suggestions for modifications or improvements that you can practice on this project:
