
Frequently Asked Questions

Which Photon product is the right one for me?

This is a difficult question to answer, as it depends on the game you are making and the specifics of your project.
However, we can recommend the following:

Feel free to reach out to us if you still have questions.

What is the difference between Photon Realtime and PUN?

Photon Realtime wraps up all the generic features needed for Photon's load balancing. It is both a product and a defined workflow across the Name Server, Master Server and Game Servers. Photon Realtime (a.k.a. LoadBalancing) is the basis for many games using Photon.

While Photon Realtime is independent from Unity, PUN adds many comfortable features for Unity and makes Realtime (the lower level) even easier to use.

Both products share the same backend, the same server applications, the same low-level protocols and the same core concepts.
At first, PUN was meant to be a better UNet (the old Unity Networking): it preserved a similar API on a more solid backend, with richer features.
It then slowly diverged and became the number one solution for multiplayer in Unity.

While we do have a Photon Realtime Unity SDK, PUN has more high level out-of-the-box features like:

  • Magic Unity callbacks
  • Extra Unity components that serialize and sync networked objects for you, most importantly PhotonView
  • PunRPC
  • Offline mode
  • ...

Read more here.

However, while PUN supports webhooks and persisting room states, it still cannot fully restore networked objects' state in the scene(s) when loading a saved game.
Read more here.

What is the difference between LoadBalancing API and Photon Realtime?

LoadBalancing API and Photon Realtime are two names for the same thing.
The LoadBalancing API or the LoadBalancing Client API is the programming interface available in the client SDKs we provide for the Photon Realtime product.

Photon Cloud

Is Photon Cloud down?

You can check Photon Cloud status here.

What is the default Photon region?

Clients should be able to connect to the Photon Cloud as long as at least one region is available.
To guarantee this, a default value is used when the developer does not explicitly set a region or choose the "Best Region" option.
The default value can vary by client SDK.
In the native SDKs, it is the value at index 0 of the region list returned by the server in OpGetRegions.
In the Unity and DotNet SDKs, the default region is "EU".

Is it possible to disable some regions?

Yes.
It works the other way around: you define a list of allowed regions.
Read more about the "Dashboard Regions Filtering".

Photon Voice

How to save conversations into files?

We will answer this question in two parts:

First, incoming voice streams:

Photon Voice streams are uniquely identified using the pair: PlayerId and VoiceId.
So given this pair you can identify the origin of the remote voice stream: which player and which recorder.
You can subscribe to three events for remote streams:

  • VoiceConnection.RemoteVoiceAdded(RemoteVoiceLink): a new remote voice stream is created (started transmission) with information received.
  • RemoteVoiceLink.FloatFrameDecoded(float[]): an audio frame is received from a specific remote voice stream.
  • RemoteVoiceLink.RemoteVoiceRemoved: a remote voice stream has ended and is destroyed (stopped transmission).

If you want to capture an entire incoming remote voice stream, you can:

  1. Create and open a file for the stream in the RemoteVoiceAdded handler.
  2. Write each frame of audio data in the FloatFrameDecoded handler.
  3. Save and close the file in the RemoteVoiceRemoved handler.

Alternatively, you can open and close the file on user input and update the FloatFrameDecoded handler accordingly.
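The three steps above can be sketched as follows. This is a minimal, hedged example: WavWriter is a hypothetical helper that appends float frames to an audio file, and the exact event signatures and the Info property names may differ between Photon Voice SDK versions.

C#

    using System.Collections.Generic;
    using Photon.Voice.Unity;
    using UnityEngine;

    public class IncomingStreamSaver : MonoBehaviour
    {
        [SerializeField] private VoiceConnection voiceConnection;
        // one writer per active remote stream, keyed by (PlayerId, VoiceId)
        private readonly Dictionary<RemoteVoiceLink, WavWriter> writers =
            new Dictionary<RemoteVoiceLink, WavWriter>();

        private void OnEnable()
        {
            voiceConnection.RemoteVoiceAdded += OnRemoteVoiceAdded;
        }

        private void OnRemoteVoiceAdded(RemoteVoiceLink link)
        {
            // 1. open a file for this stream, named after PlayerId and VoiceId
            var writer = new WavWriter(
                string.Format("p{0}_v{1}.wav", link.PlayerId, link.VoiceId),
                link.Info.SamplingRate, link.Info.Channels);
            writers[link] = writer;

            // 2. append every decoded audio frame
            link.FloatFrameDecoded += frame => writer.Write(frame);

            // 3. close the file when the stream ends
            link.RemoteVoiceRemoved += () =>
            {
                writer.Close();
                writers.Remove(link);
            };
        }
    }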

Second, outgoing voice streams:

For the outgoing audio stream, you can create a custom processor by extending Voice.LocalVoiceAudio&lt;T&gt;.IProcessor.
You receive the locally recorded audio frame in IProcessor.Process.
To intercept the PhotonVoiceCreated Unity message, a component attached to the same GameObject as the Recorder is needed.
Inside that method, insert the custom processor into the local voice processing pipeline using LocalVoice.AddPreProcessor (before transmission) or LocalVoice.AddPostProcessor (after transmission).
See "WebRtcAudioDsp.cs" for an example.
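A rough sketch of such a component, under the assumption that the local voice uses float frames; the exact IProcessor members and the PhotonVoiceCreated parameter type may vary by SDK version, so treat this as an outline rather than a drop-in implementation:

C#

    using Photon.Voice;
    using Photon.Voice.Unity;
    using UnityEngine;

    [RequireComponent(typeof(Recorder))]
    public class OutgoingStreamTap : MonoBehaviour
    {
        // custom processor: receives each outgoing frame before/after transmission
        private class Tap : LocalVoiceAudio<float>.IProcessor
        {
            public float[] Process(float[] buf)
            {
                // write buf to a file here (or inspect/modify it),
                // then return a buffer to pass down the pipeline
                return buf;
            }
            public void Dispose() { }
        }

        // Unity message sent by the Recorder once the local voice exists
        private void PhotonVoiceCreated(PhotonVoiceCreatedParams p)
        {
            var voice = (LocalVoiceAudio<float>)p.Voice;
            voice.AddPreProcessor(new Tap()); // or AddPostProcessor for after transmission
        }
    }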

How to use a custom audio source?

If you want the Recorder to transmit audio produced by your own custom audio source:

First approach: data stream is driven by consumer

AudioClipWrapper is a sample of this approach: it streams the audio clip assigned to Recorder.AudioClip.

  1. Create a class that reads your audio source and implements the Photon.Voice.IAudioReader interface, e.g. MyAudioReaderSource.

  2. Set Recorder.SourceType to Factory in editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of Recorder):

    C#

    // MyAudioReaderSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioReaderSource(); 
    
  4. As long as the client is connected to a voice room and the Recorder is transmitting, the IAudioReader.Read(float[] buffer) method will be called on your custom audio source instance (e.g. MyAudioReaderSource).
    Call frequency and buffer size are adjusted to meet the sampling rate returned by the IAudioReader.SamplingRate property of your custom audio source instance.
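A minimal hypothetical MyAudioReaderSource that fills each requested buffer with a 440 Hz sine tone. It assumes the generic form of the interface, Photon.Voice.IAudioReader&lt;float&gt;, whose members (SamplingRate, Channels, Error, Read, Dispose) may differ slightly between SDK versions:

C#

    using Photon.Voice;
    using UnityEngine;

    public class MyAudioReaderSource : IAudioReader<float>
    {
        private int pos; // running sample position for phase continuity

        public int SamplingRate { get { return 48000; } }
        public int Channels { get { return 1; } }
        public string Error { get { return null; } }
        public void Dispose() { }

        // Called by Photon Voice whenever it needs the next buffer of samples.
        public bool Read(float[] buffer)
        {
            for (int i = 0; i < buffer.Length; i++)
            {
                buffer[i] = Mathf.Sin(2f * Mathf.PI * 440f * pos++ / SamplingRate);
            }
            return true; // false would signal "no data available right now"
        }
    }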

Second approach: data stream is driven by producer

ToneAudioPusher in "AudioUtil.cs" is a sample of this approach.

  1. In this case it may be more convenient to implement the Photon.Voice.IAudioPusher interface instead, e.g. MyAudioPusherSource.
    You only need to implement the IAudioPusher.SetCallback method, which mainly stores the given callback.

  2. Set Recorder.SourceType to Factory in editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of the Recorder):

    C#

    // MyAudioPusherSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioPusherSource(); 
    
  4. During streaming, simply call the callback set via IAudioPusher.SetCallback periodically (e.g. from MonoBehaviour.OnAudioFilterRead) with as many samples as you have.
    Photon Voice will do all the buffering work for you.
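A hypothetical MyAudioPusherSource illustrating the producer-driven flow: store the callback in SetCallback, then push samples from OnAudioFilterRead. The SetCallback signature (in particular its second parameter) varies between Photon Voice versions, so check your SDK's IAudioPusher definition and the ToneAudioPusher sample:

C#

    using System;
    using Photon.Voice;
    using UnityEngine;

    public class MyAudioPusherSource : MonoBehaviour, IAudioPusher<float>
    {
        private Action<float[]> push; // callback handed to us by Photon Voice

        public int SamplingRate { get { return AudioSettings.outputSampleRate; } }
        public int Channels { get { return 1; } }
        public string Error { get { return null; } }
        public void Dispose() { }

        // Photon Voice calls this once; we only need to remember the callback.
        public void SetCallback(Action<float[]> callback, ObjectFactory<float[], int> bufferFactory)
        {
            push = callback;
        }

        // Unity's audio thread delivers samples; forward them to Photon Voice,
        // which handles all buffering. Note: data may be interleaved if stereo.
        private void OnAudioFilterRead(float[] data, int channels)
        {
            if (push != null)
            {
                push(data);
            }
        }
    }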

Billing

Do you have special offers for students, hobbyists or indies?

All our products have a free tier and a one-off entry plan.
We also usually take part in Unity's Asset Store sales and occasionally give out vouchers.

Can I combine more than one 100 CCU plan for a single Photon application?

No.
The 100 CCU plans are not stackable and can be applied only once per AppId.
If you purchase multiple PUN+ asset seats, you must redeem each free 100 CCU for a separate AppId.
If you need more CCU for a single app, the next higher plan is the 500 CCU one.
If you subscribe to a monthly or yearly plan, you will still keep the 100 CCU for 12 months, in addition to the CCU from that plan.
