This page is being edited. Updates may be pending.

Fusion Expo

The Fusion Expo sample is currently only available to users with an active Photon Industries Circle subscription.
Your Industries Circle membership gives you the complete suite plus exclusive license options.

Overview

The Fusion Expo sample demonstrates an approach to developing a social application for up to 100 players with Fusion.

Each player is represented by an avatar and can talk to other players located in the same chat bubble, thanks to the Photon Voice SDK.

Some of the highlights of this sample are:

  • First, the player customizes their avatar on the avatar selection screen.
  • Then, they can join the Expo scene. If the player launches the sample on a PC or Mac, they can choose between Desktop mode (using keyboard & mouse) and VR mode (Meta Quest headset).
  • Players can talk to each other if they are located in the same static chat bubble. In each static chat bubble, a lock button is available to prevent new players from entering.
  • Also, if two players are close to each other, a dynamic chat bubble is created around the player with the lower velocity.
  • Some 3D pens are available to create 3D drawings. Each 3D drawing can be moved using its anchor.
  • Also, a classic whiteboard is available. Each drawing can be moved using its anchor.

More technical details are provided directly in the code comments.

Fusion Expo

Technical Info

  • This sample uses the Shared Mode topology, but the core is compatible with both the Shared and Host Mode topologies.
  • Builds are available for PC, Mac & Meta Quest.
  • The project has been developed with Unity 2021.3.3f1.
  • Unity XR Interaction Toolkit 2.0.2 compatibility.
  • Two avatar solutions are supported (homemade simple avatars & Ready Player Me 1.9.0 avatars).

Before You Start

To run the sample:

  • Create a Fusion AppId in the PhotonEngine Dashboard and paste it into the App Id Fusion field in Real Time Settings (reachable from the Fusion menu).
  • Create a Voice AppId in the PhotonEngine Dashboard and paste it into the App Id Voice field in Real Time Settings.
  • Then load the AvatarSelection scene and press Play.

Download

The Fusion Expo sample is currently only available to users with an active Photon Industries Circle subscription. For more information and access, please email developer@photonengine.com.

Handling Input

Desktop

Keyboard

  • Move: WASD or ZQSD to walk
  • Rotate: Q/E or A/E to rotate
  • Bot spawn: press “B” to spawn one bot. Keep it pressed for longer than 1 second to create 50 bots on release

Mouse

  • Move: left-click with the mouse to display a pointer. You will teleport to any accepted target on release
  • Rotate: keep the right mouse button pressed and move the mouse to rotate the point of view
  • Move & rotate: keep both the left and right mouse buttons pressed to move forward. You can still move the mouse to rotate
  • Grab & use (3D pens): put the mouse over the object and grab it using the left mouse button. Then you can use it with the space key

Meta Quest

  • Teleport: press A, B, X, Y, or any stick to display a pointer. You will teleport to any accepted target on release
  • Touch (e.g. the chat bubbles’ lock buttons): simply put your hand over a button to toggle it
  • Grab: first put your hand over the object, then grab it using the controller’s grab button
  • Bot spawn: press the menu button on the left controller to spawn one bot. Keep it pressed for longer than 1 second to create 50 bots on release

Folder Structure

The main folder /Expo contains all elements specific to this sample.

The main folder /FusionXR contains elements that can be shared with other projects. It has a subfolder called Integrations which manages compatibility with third-party solutions like Ready Player Me and the Unity XR Interaction Toolkit. The FusionXR components are compatible with both the shared and host topologies; since the Expo sample is shared-only, some parts of FusionXR are not required here.

The /Photon folder contains the Fusion and Photon Voice SDKs.

The /Plugins folder contains the Ready Player Me SDK.

The /StreamingAssets folder contains prebuilt Ready Player Me avatars. It can be removed freely if you don’t want to use those prebuilt avatars.

The /XRI and /XR folders contain configuration files for virtual reality.

Network Connection And Application Lifecycle

The ConnexionManager launches a Fusion session in shared mode topology, and spawns a user prefab for each connected user.

The SessionEventsManager observes the Fusion and Photon Voice connection status to alert the components interested in those events, notably the SoundManager, which handles various sound effects.
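
As an illustration of this flow, here is a minimal sketch of starting a shared-mode Fusion session and spawning a user prefab for the local player. It is not the sample's actual ConnexionManager code; the prefab references and session name are placeholders.

```csharp
using Fusion;
using UnityEngine;

public class ConnexionSketch : MonoBehaviour
{
    [SerializeField] private NetworkRunner runnerPrefab; // placeholder references,
    [SerializeField] private NetworkObject userPrefab;   // not the sample's actual fields

    private async void Start()
    {
        NetworkRunner runner = Instantiate(runnerPrefab);
        runner.ProvideInput = true;

        // Join (or create) a shared-mode session.
        await runner.StartGame(new StartGameArgs
        {
            GameMode = GameMode.Shared,
            SessionName = "expo-room", // hypothetical session name
            SceneManager = runner.gameObject.AddComponent<NetworkSceneManagerDefault>()
        });

        // In shared mode, each client spawns (and keeps state authority over) its own user object.
        runner.Spawn(userPrefab, Vector3.zero, Quaternion.identity, runner.LocalPlayer);
    }
}
```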

Audio

The VoiceConnection and FusionVoiceBridge components start the Photon Voice audio connection alongside the Fusion session, while the Recorder component captures the microphone input.

For Oculus Quest builds, additional user authorisations are required; these requests are managed by the MicrophoneAuthorization script.
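
A request like this can be made with Unity's Android permission API. The following is only a minimal sketch, not necessarily how the sample's MicrophoneAuthorization script is written.

```csharp
using UnityEngine;
#if UNITY_ANDROID && !UNITY_EDITOR
using UnityEngine.Android;
#endif

public class MicrophonePermissionSketch : MonoBehaviour
{
    private void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // On Quest (Android), the RECORD_AUDIO permission must be granted at runtime.
        if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
        {
            Permission.RequestUserPermission(Permission.Microphone);
        }
#endif
    }
}
```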

The user prefab contains a Speaker and a VoiceNetworkObject placed on its head, so that incoming voice is projected as spatialized sound.

For more details on Photon Voice integration with Fusion, see this page: https://doc.photonengine.com/en-us/voice/current/getting-started/voice-for-fusion

Rigs

In an immersive application, the rig describes all the mobile parts required to represent a user: usually both hands, a head, and the play area (the personal space that can be moved, for instance when a user teleports).

In a networked session, every user is represented by a networked rig, whose various parts’ positions are synchronized over the network.

We chose to represent a user by a single NetworkObject, with several nested NetworkTransforms, one for each rig part.

For the specific case of the networked rig representing the local user, the rig has to be driven by the hardware input. To simplify this process, a separate, non-networked rig has been created, called the “hardware rig”. It uses classic Unity components to collect the hardware input (like TrackedPoseDriver). Then, for the networked rig associated with the local user, every networked rig part simply follows the matching hardware rig part.

The XRNetworkedRig component, located on the user prefab, manages this tracking for all the nested rig parts.
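
The idea can be summarized with the following sketch, where a networked rig part copies the pose of its matching hardware rig part during the network tick (field and class names are illustrative, not the actual XRNetworkedRig API):

```csharp
using Fusion;
using UnityEngine;

public class NetworkRigPartSketch : NetworkBehaviour
{
    // Assigned at runtime for the local user only (hypothetical field).
    public Transform hardwareRigPart;

    public override void FixedUpdateNetwork()
    {
        // Only the client with authority over this rig pushes the hardware pose;
        // the nested NetworkTransform then replicates it to the other clients.
        if (Object.HasStateAuthority && hardwareRigPart != null)
        {
            transform.SetPositionAndRotation(hardwareRigPart.position, hardwareRigPart.rotation);
        }
    }
}
```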

Rig logic for the local user

Aside from sharing the rig parts’ positions with other players during FixedUpdateNetwork(), the XRNetworkRig component also handles extrapolation: during Render(), the interpolation targets, which handle the graphical representation of the various rig parts’ NetworkTransforms, are moved to ensure that the local user always sees their hands at the most recent position, even between network ticks.
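
A minimal sketch of this Render-time extrapolation, assuming the NetworkTransform's interpolation target is simply snapped to the latest hardware pose for the local user (names are illustrative):

```csharp
using Fusion;
using UnityEngine;

public class LocalRigExtrapolationSketch : NetworkBehaviour
{
    public NetworkTransform networkTransform; // the rig part's NetworkTransform
    public Transform hardwareRigPart;         // hypothetical reference to the hardware part

    public override void Render()
    {
        // For the local user, move the graphical representation to the most recent
        // hardware pose, even between network ticks.
        if (Object.HasStateAuthority && hardwareRigPart != null)
        {
            networkTransform.InterpolationTarget.SetPositionAndRotation(
                hardwareRigPart.position, hardwareRigPart.rotation);
        }
    }
}
```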

To easily find the XRHardwareRig and the matching XRNetworkRig related to the local user, the RigInfo component registers the XRHardwareRig and all XRNetworkedRig instances for later use.

RigInfo content

Interaction Stack

This sample allows the player to move, grab objects, and touch surfaces.

All these manipulations are done locally (on the hardware rig), independently of the network, and are then sent through Fusion input to the network rig.

We did it this way to ensure Unity XR Interaction Toolkit (XRIT) compatibility, as doing it the other way around with XRIT would be more complex here.
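
As an illustration, the kind of input structure relayed through Fusion could look like the following sketch; the actual struct and fields used by the sample are certainly different.

```csharp
using Fusion;
using UnityEngine;

// Hypothetical input structure: poses collected on the hardware rig plus grab state.
public struct RigInputSketch : INetworkInput
{
    public Vector3 LeftHandPosition;
    public Quaternion LeftHandRotation;
    public Vector3 RightHandPosition;
    public Quaternion RightHandRotation;
    public NetworkBool LeftGrab;
    public NetworkBool RightGrab;
}

public class NetworkHandSketch : NetworkBehaviour
{
    public override void FixedUpdateNetwork()
    {
        // The network rig consumes the input collected by the hardware rig.
        if (GetInput(out RigInputSketch input))
        {
            // Apply the hand pose and forward the grab state to the grabbing logic here.
        }
    }
}
```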

XR Interaction Toolkit

In order to use XR Interaction Toolkit (XRIT) with Fusion, some adaptations are required.

Depending on the topology and the physics settings you choose for Fusion, the XR Interaction Toolkit and Fusion may try to handle the same elements, so for them to work seamlessly together, the following changes were made:

  • Some XRIT components split their processing between the start of the Unity physics phase, during it, and at the end of it. Since Fusion can manage the physics simulation, the execution order may be unexpected for XRIT, so subclasses of the XRIT classes were created to make them aware of the Fusion execution order. Note: some of these modifications are not needed for all topologies, but the current version illustrates a way to cohabit with XRIT that works in every case.
  • When grabbing an object in shared mode, both Fusion and XRIT may want to edit the isKinematic property of its Rigidbody. The components ensure that XRIT is aware of the actual values, available in Fusion properties (see the sketch below).
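
A minimal sketch of the isKinematic synchronization idea, assuming a [Networked] property with a change callback (the class and property names are illustrative, not the sample's actual components):

```csharp
using Fusion;
using UnityEngine;

public class KinematicSyncSketch : NetworkBehaviour
{
    [SerializeField] private Rigidbody rb;

    // Networked source of truth for the kinematic state.
    [Networked(OnChanged = nameof(OnIsKinematicChanged))]
    public NetworkBool NetworkedIsKinematic { get; set; }

    public static void OnIsKinematicChanged(Changed<KinematicSyncSketch> changed)
    {
        // Keep the Rigidbody (and therefore XRIT) aligned with the networked value.
        changed.Behaviour.rb.isKinematic = changed.Behaviour.NetworkedIsKinematic;
    }
}
```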

Locomotion

In virtual reality, locomotion is provided through ray-based teleportation, with additional snap-turn available on the joystick. The locomotion is managed by XR Interaction toolkit, with some modification to enable the teleport ray only when requested.

In the desktop version, the user can either move with the keyboard, or mouse left click on the ground to teleport there. The DesktopController and MouseTeleport components manage the locomotion, while the MouseCamera component manages the camera movement using the mouse right-click movements.

For remote users, the play area movement is smoothed, to replace teleportation by a progressive move to the target place (unless the instantPlayareaTeleport option is set to true).

Locomotion Restrictions

This sample application sometimes prevents the user from going to some places (for example, it is not possible to enter a chat bubble if its max capacity is reached).

To do so, every component that wants to move the user relies on both the generic locomotion validation system, and the constraints associated with a specific locomotion mode.

Locomotion Validation System

To determine whether the user is trying to move to a forbidden zone, every locomotion system first asks the XRHardwareRig if it can move to the target position, with the CanMove() method. To answer, the XRHardwareRig first checks that the move is valid with all of its ILocomotionValidator children, and with all the ILocomotionValidator children of the XRNetworkedRig instance representing the local user on the network.
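
A minimal sketch of this validation pattern (interface and method names are simplified and not the exact sample API):

```csharp
using System.Collections.Generic;
using UnityEngine;

public interface ILocomotionValidatorSketch
{
    // Returns false if moving the head to this position should be refused.
    bool CanMoveHeadTo(Vector3 targetHeadPosition);
}

public class LocomotionValidationSketch : MonoBehaviour
{
    private readonly List<ILocomotionValidatorSketch> validators = new List<ILocomotionValidatorSketch>();

    public void Register(ILocomotionValidatorSketch validator) => validators.Add(validator);

    // Called by every locomotion system (teleport ray, desktop controller, ...) before moving.
    public bool CanMove(Vector3 targetHeadPosition)
    {
        foreach (var validator in validators)
        {
            if (!validator.CanMoveHeadTo(targetHeadPosition)) return false;
        }
        return true;
    }
}
```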

Locomotion system

Additionally, if a user puts their head in a forbidden zone, the view will fade, to prevent them from “cheating”.

Other Restrictions

Additionally, locomotion is limited by other factors, depending on the locomotion system used:

  • XRIT locomotion: a user can only teleport onto an object that has a TeleportArea component
  • DesktopController locomotion: the controller checks that the head position after a move would be valid, i.e. that it would not be inside a collider and that a walkable navigation mesh point would be under it after moving (see the sketch below)
  • Bots: the bot logic ensures that bots stay on the walkable navigation mesh
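
The DesktopController checks mentioned above could look like the following sketch, using a collider overlap test and a NavMesh query; the radii and distances are illustrative.

```csharp
using UnityEngine;
using UnityEngine.AI;

public static class DesktopMoveCheckSketch
{
    public static bool IsValidHeadPosition(Vector3 targetHeadPosition)
    {
        // Reject positions where the head would overlap scene geometry.
        if (Physics.CheckSphere(targetHeadPosition, 0.15f)) return false;

        // Require a walkable NavMesh point somewhere below the head.
        return NavMesh.SamplePosition(targetHeadPosition + Vector3.down,
                                      out _, 2f, NavMesh.AllAreas);
    }
}
```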

Touchable

To trigger events on finger touch, the hands contain a Toucher component, while some objects have a Touchable one. The event is handled entirely on the hardware rig and has no automatic link to the network: you have to handle it in the components called by Touchable.OnTouch.

Note that the desktop controls allow you to touch Touchable objects with the mouse pointer.
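
A minimal sketch of the Toucher/Touchable pair, assuming a simple trigger-based detection (the sample's actual components are richer than this):

```csharp
using UnityEngine;
using UnityEngine.Events;

public class TouchableSketch : MonoBehaviour
{
    // Hook your reaction here (e.g. toggling a chat bubble lock).
    public UnityEvent onTouch;

    public void Touch() => onTouch.Invoke();
}

public class ToucherSketch : MonoBehaviour
{
    // Placed on the fingertip, with a trigger collider.
    private void OnTriggerEnter(Collider other)
    {
        var touchable = other.GetComponentInParent<TouchableSketch>();
        if (touchable != null) touchable.Touch();
    }
}
```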

Grabbing

In this sample, grabbing is only used for manipulating the pens and drawings.

The grabbing system is based on the Tracker class. It allows a networked object to follow another one (like the hands), with various tracking logics (instantaneous move, force-based move, …). It also handles extrapolation when needed, natural movement upon authority changes, collisions with objects when the tracking logic supports it, and so on.

The tracking system has been built to support both the shared and host topologies, and only a small part of its capabilities is used here. Notably, it is only used for instantaneous tracking, not force-based tracking.

Here, the actual grabbing is done locally, using the XR Interaction Toolkit. The fact that the grabbed object now tracks (“follows”) the grabbing hand is then shared over the network by the Tracker component.

The input system is used to relay this information from the grabbing hardware hand to the network hand, which finally notifies the Tracker.
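
The instantaneous tracking used here can be summarized by the following sketch: once a networked flag says the object is tracked, it follows the grabbing hand with a stored local offset. Property and field names are illustrative, not the sample's actual Tracker API.

```csharp
using Fusion;
using UnityEngine;

public class TrackerSketch : NetworkBehaviour
{
    [Networked] public NetworkBool IsTracked { get; set; }
    [Networked] public Vector3 LocalPositionOffset { get; set; }
    [Networked] public Quaternion LocalRotationOffset { get; set; }

    // Set locally to the grabbing network hand when the grab information is received.
    public Transform followedTransform;

    public override void FixedUpdateNetwork()
    {
        // Instantaneous tracking: snap to the followed transform, keeping the grab offset.
        if (IsTracked && followedTransform != null)
        {
            transform.SetPositionAndRotation(
                followedTransform.TransformPoint(LocalPositionOffset),
                followedTransform.rotation * LocalRotationOffset);
        }
    }
}
```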

Avatar

Users are represented by a graphical representation called an avatar. This sample supports two kinds of avatars: those from Ready Player Me and a simpler custom one.

Gazer

To offer a more natural avatar representation, a dedicated system handles automatic eye tracking: when an object of interest is presented in front of an avatar, its eyes will find this target and follow it. If a closer target is presented, the eye focus will change.

If no target is available, the eyes will move randomly from time to time.

Finally, all the avatar systems available in this sample handle eye blinking, also to appear more natural.

To be a potential eye target, a GameObject must have a GazeTarget component. All avatars' eyes and heads have such components by default in the prebuilt prefabs. Those GazeTarget components register with the GazeInfo manager.

The Gazer component drives the eyes by asking the GazeInfo for a list of potential GazeTarget components (those with a valid resulting eye angle and target distance), sorted by distance.

The sorting process is quite heavy when many targets are available nearby, so it is done in background threads.
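
As an illustration, a single-threaded version of this target selection could look like the following sketch; the real Gazer works on GazeTarget components and offloads the sorting to background threads.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class GazeSelectionSketch
{
    // Returns the closest candidate within the given distance and eye-angle limits, or null.
    public static Transform PickTarget(Transform eyes, IEnumerable<Transform> candidates,
                                       float maxDistance = 5f, float maxAngle = 60f)
    {
        return candidates
            .Where(t => Vector3.Distance(eyes.position, t.position) <= maxDistance)
            .Where(t => Vector3.Angle(eyes.forward, t.position - eyes.position) <= maxAngle)
            .OrderBy(t => Vector3.Distance(eyes.position, t.position))
            .FirstOrDefault();
    }
}
```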

Gaze system

Hand

The hand models used here come from the Oculus Sample Framework (released under a BSD-3 license by Facebook Technologies, LLC and its affiliates).

The input actions driving the hand movement are collected locally and shared over Fusion to display hand movements on all clients.

The hand movement is slightly discretized on remote clients to decrease the frequency of changes.

The hand color matches the skin color of the avatar representation used.

AvatarRepresentation

Each avatar stores an avatarURL, shared through a Fusion [Networked] variable in the XRNetworkedRig component.

Upon change, this URL is parsed by the AvatarRepresentation component to determine whether it represents a simple avatar or a Ready Player Me avatar (see the following sections).
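
A minimal sketch of such a networked URL with a change callback (names and string capacity are illustrative):

```csharp
using Fusion;
using UnityEngine;

public class AvatarUrlSketch : NetworkBehaviour
{
    [Networked(OnChanged = nameof(OnAvatarUrlChanged))]
    public NetworkString<_128> AvatarURL { get; set; }

    public static void OnAvatarUrlChanged(Changed<AvatarUrlSketch> changed)
    {
        // Parse the new URL and rebuild the avatar representation here.
        Debug.Log($"Avatar URL changed to {changed.Behaviour.AvatarURL}");
    }
}
```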

Simple Avatar

The simple avatar system offers a simple and inexpensive avatar model.

Mouth animation is based on volume detection, with no accurate lip synchronization.

Ready Player Me

The Ready Player Me avatar system can display any avatar provided by https://readyplayer.me .

The mouth animation provides lip synchronization, based on the Oculus Lip Sync library, released under the Oculus Audio SDK license (https://developer.oculus.com/licenses/audio-3.3/).

To optimize avatar downloading and loading, for a given avatar URL, the avatar object can be loaded in several ways:

  • If this URL has already been used for an active avatar, the existing avatar is cloned.
  • If this URL is associated with a prefab, the prefab is spawned instead of downloading and parsing the glb file.
  • If those options are not relevant, the glb file is downloaded and parsed. Note that a URL can describe a glb file in the StreamingAssets folder, to skip the download, by using the syntax %StreamingAssets%/Woman1.glb for the URL (see the resolution sketch below). If you do so, you should place the associated Woman1.json metadata file provided by Ready Player Me next to Woman1.glb (to download it, simply replace the .glb extension with .json in the original URL provided by Ready Player Me).
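
Resolving the %StreamingAssets% shortcut could be as simple as the following sketch; the sample's actual parsing code may differ.

```csharp
using UnityEngine;

public static class AvatarUrlResolverSketch
{
    private const string StreamingAssetsToken = "%StreamingAssets%";

    // Turns "%StreamingAssets%/Woman1.glb" into a path under Application.streamingAssetsPath.
    public static string Resolve(string avatarUrl)
    {
        return avatarUrl.StartsWith(StreamingAssetsToken)
            ? avatarUrl.Replace(StreamingAssetsToken, Application.streamingAssetsPath)
            : avatarUrl;
    }
}
```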

LOD

In sessions with a large number of users (or bots), avatars can take a heavy toll on performance.

To avoid this, the avatar representation manages several LODs:

  • The regular avatar (simple avatar or Ready Player Me)
  • A low-poly version of the simple avatar, with hair/skin/cloth colors matching those of the regular avatar
  • A billboard always facing the user camera.

Chat Bubble

This sample demonstrates chat bubbles. The scene includes 4 static chat bubbles. The application can also create a dynamic chat bubble, when 2 users come close to each other.

Static Chat Bubble

Static Chat Bubble

By default, in the expo scene, users don’t hear each other.

But when they enter the same static chat bubble, they can talk to each other with spatialized sound over Photon Voice.

Zone system

In the sample, a zone system keeps track of ZoneUser components whose XRNetworkedRig is close to a Zone. When this occurs, the ZoneUser enters the zone and triggers the various Zone and ZoneUser listeners.

A Zone can have a Photon Voice interest group associated with it, so that the ZoneAudioInterestChanger component can change the audio listening and recording groups of the local user when they enter or exit a zone.

The static chat bubbles are automatically locked when they have reached their maximum number of users, but it is also possible to lock them before that by touching the lock button available in every static chat bubble.
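
As an illustration of the interest group switch, a sketch like the following could be used. It assumes Photon Voice's Recorder.InterestGroup property and the Realtime OpChangeGroups call; the group numbering is purely illustrative and this is not necessarily how ZoneAudioInterestChanger is implemented.

```csharp
using Photon.Voice.Unity;
using UnityEngine;

public class VoiceGroupSwitcherSketch : MonoBehaviour
{
    [SerializeField] private VoiceConnection voiceConnection;
    [SerializeField] private Recorder recorder;

    public void EnterZone(byte zoneGroup)
    {
        recorder.InterestGroup = zoneGroup;                                // transmit to the zone's group
        voiceConnection.Client.OpChangeGroups(null, new[] { zoneGroup });  // listen to the zone's group
    }

    public void ExitZone(byte zoneGroup)
    {
        recorder.InterestGroup = 0;                                        // illustrative default group
        voiceConnection.Client.OpChangeGroups(new[] { zoneGroup }, null);  // stop listening to it
    }
}
```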

Dynamic Chat Bubble

The prefab representing users contains a DynamicZoneSource component.

Dynamic zone system

This component registers with a DynamicZonePool component, so that it can then check whether another existing DynamicZoneSource is in proximity, as defined by DynamicZonePool.size. In that case, the DynamicZonePool provides a zone that is spawned (or reused) to be used by both users.

Other users can join them later, up to the maximum capacity of the zone.

Dynamic zone spawn

Bots

To demonstrate how an expo full of people would be supported by the sample, it is possible to create bots in addition to regular users.

Bots are regular networked prefabs, whose voice has been disabled, and which are driven by a navigation mesh instead of user inputs.

The Bot class also uses the locomotion validation system and is thus aware of chat bubbles. Besides, as they cannot speak, bots are forbidden from entering any zone, locked or not.
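
A bot wandering on the walkable navigation mesh could look like the following sketch; the sample's Bot class additionally goes through the locomotion validation system and Fusion networking.

```csharp
using UnityEngine;
using UnityEngine.AI;

public class BotWanderSketch : MonoBehaviour
{
    [SerializeField] private NavMeshAgent agent;
    [SerializeField] private float wanderRadius = 10f;

    private void Update()
    {
        // Pick a new random destination once the current one is (almost) reached.
        if (!agent.pathPending && agent.remainingDistance < 0.5f)
        {
            Vector3 candidate = transform.position + Random.insideUnitSphere * wanderRadius;
            if (NavMesh.SamplePosition(candidate, out NavMeshHit hit, 2f, NavMesh.AllAreas))
            {
                agent.SetDestination(hit.position);
            }
        }
    }
}
```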

Drawing

3D Drawing

The expo scene contains 3D pens that can create 3D drawings: a group of line renderers, with a common handle that can be grabbed and moved.

The 3D pen holds a Drawer component, which spawns a drawing prefab holding a Draw component. The Draw component ensures that all the drawn points are synchronized over Fusion, through a [Networked] variable. As this variable cannot hold an infinite number of points, the drawing is split into several parts when needed, each secondary Draw following the first Draw when any user moves it, so that it appears as a single drawing.
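
A minimal sketch of the networked point storage, assuming a fixed-capacity NetworkArray (capacity and names are illustrative; as described above, the real Draw component splits the drawing when its capacity is reached):

```csharp
using Fusion;
using UnityEngine;

public class DrawSketch : NetworkBehaviour
{
    [Networked, Capacity(128)] public NetworkArray<Vector3> Points { get; }
    [Networked] public int PointCount { get; set; }

    // Called by the pen while drawing (only meaningful for the client with authority).
    public void AddPoint(Vector3 localPoint)
    {
        if (PointCount >= Points.Length) return; // full: the real sample starts a secondary Draw
        Points.Set(PointCount, localPoint);
        PointCount++;
    }
}
```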

2D Drawing

The board in the expo scene can be drawn on with the pens placed around it, and the resulting drawing can then be moved on the board with a handle similar to the 3D pen drawings’ handles.

These are in fact special 3D drawings, using a layer invisible to the player camera but visible to a camera associated with the board. That camera renders the drawing onto a render texture that is displayed on the board.

For performance purposes, this camera is only enabled when the pens near the board are used, or when the drawings created with them are moved.

Third Party Components

Known Issues

  • Oculus Quest: it is not possible to exit the application when the headset has been in standby mode for a long time; a headset reboot is then required.
  • If a user connects while the authority of a drawing is disconnecting, the new user won’t receive the drawing data.
