This document is about: VOICE 2

Frequently Asked Questions

Which Photon product is the right one for me?

The answer depends mostly on your project and team. Generally, we suggest using either Fusion or Quantum, which are our most advanced client solutions.

For a quick overview, both product sheets contain the product picker "Quadrant".

Additionally, this page discusses whether to use the Photon Cloud or Photon Server.

Feel free to reach out to us for any questions.

Photon Cloud

Is Photon Cloud down?

Our Photon Cloud Status Page shows the current and past status per product. Information about incidents is also published on Twitter: @photon_status.

Is there a default Photon Cloud region?

There is no default region. Clients know the Name Server address for the Photon Cloud. The Name Servers are global and provide an up-to-date region list for the given AppId.

Clients will ping each region and identify the "Best Region", which has the lowest latency.

If none of the regions can be pinged successfully, the first region of the list is used.

Can we get a list of all Cloud servers / IPs?

Such a list does not exist as the Photon Cloud is changing too frequently. Servers get added or removed and even new regions show up from time to time. This means it is impossible to add the Photon Cloud (as a whole) to an allow-list.

This is handled differently for an Enterprise Cloud; we'd discuss the details via mail.

Apps within the Photon Industries Circle can rely on the host name for allow-listing: *

Is it possible to disable some regions?

Yes, though it works the other way around: you define a list of allowed regions. Read more about this under "Dashboard Regions Filtering".

Photon Voice

How to save conversations into files?

We will answer this question in two parts:

First, incoming voice streams:

Photon Voice streams are uniquely identified by the pair PlayerId and VoiceId. Given this pair, you can tell the origin of a remote voice stream: which player and which Recorder. You can subscribe to three events for remote streams:

  • VoiceConnection.RemoteVoiceAdded(RemoteVoiceLink): a new remote voice stream was created (transmission started), with its stream information.
  • RemoteVoiceLink.FloatFrameDecoded(float[]): an audio frame was received from a specific remote voice stream.
  • RemoteVoiceLink.RemoteVoiceRemoved: a remote voice stream has ended (transmission stopped) and is destroyed.

If you want to capture an entire incoming remote voice stream, you can:

  1. Create and open a file for the stream in the RemoteVoiceAdded handler.
  2. Write each frame of audio data in the FloatFrameDecoded handler.
  3. Save and close the file in the RemoteVoiceRemoved handler.

Alternatively, you can open and close the file on user input and update the FloatFrameDecoded handler accordingly.
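As a rough sketch, the three handlers could be wired up as follows. This assumes the Photon Voice 2 Unity API (VoiceConnection, RemoteVoiceLink) with the event signatures listed above; the raw 32-bit float PCM output and the file naming are purely illustrative, and newer releases may deliver frames in a wrapper type instead of a plain float[].

    // Sketch only, assuming Photon Voice 2's VoiceConnection / RemoteVoiceLink API.
    // Writes each remote stream as raw 32-bit float PCM; file naming is illustrative.
    using System.IO;
    using Photon.Voice.Unity;
    using UnityEngine;

    public class VoiceStreamSaver : MonoBehaviour
    {
        public VoiceConnection voiceConnection; // assign in the inspector

        void Start()
        {
            voiceConnection.RemoteVoiceAdded += OnRemoteVoiceAdded;
        }

        void OnRemoteVoiceAdded(RemoteVoiceLink link)
        {
            // 1. One file per stream, identified by the PlayerId/VoiceId pair.
            var writer = new BinaryWriter(
                File.Create(string.Format("voice_p{0}_v{1}.raw", link.PlayerId, link.VoiceId)));

            // 2. Append every decoded audio frame.
            link.FloatFrameDecoded += frame =>
            {
                foreach (var sample in frame)
                {
                    writer.Write(sample);
                }
            };

            // 3. Flush and close when the stream ends.
            link.RemoteVoiceRemoved += () => writer.Close();
        }
    }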

Second, outgoing voice streams:

For an outgoing audio stream, you can create a custom processor by extending Voice.LocalVoiceAudio&lt;T&gt;.IProcessor. You get the locally recorded audio frame in IProcessor.Process. To insert the processor, you need a component attached to the same GameObject as the Recorder to intercept the PhotonVoiceCreated Unity message. Inside that method, insert the custom processor into the local voice processing pipeline using LocalVoice.AddPreProcessor (before transmission) or LocalVoice.AddPostProcessor (after transmission). See "WebRtcAudioDsp.cs" for an example.
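A minimal pass-through example, a gain processor inserted via the PhotonVoiceCreated message, could look like the sketch below. The IProcessor members and the PhotonVoiceCreatedParams type are assumed from the Photon Voice 2 Unity API; verify the exact signatures against your version and against "WebRtcAudioDsp.cs".

    // Sketch only: IProcessor members and PhotonVoiceCreatedParams are assumed
    // from the Photon Voice 2 API; verify against your version.
    using Photon.Voice;
    using Photon.Voice.Unity;
    using UnityEngine;

    // A trivial processor that scales each locally recorded frame.
    class GainProcessor : LocalVoiceAudio<float>.IProcessor
    {
        public float Gain = 1.0f;

        public float[] Process(float[] buf)
        {
            for (int i = 0; i < buf.Length; i++)
            {
                buf[i] *= Gain;
            }
            return buf;
        }

        public void Dispose() { }
    }

    // Attach this to the same GameObject as the Recorder.
    [RequireComponent(typeof(Recorder))]
    public class GainProcessorInjector : MonoBehaviour
    {
        // Unity message sent by Photon Voice once the local voice exists.
        void PhotonVoiceCreated(PhotonVoiceCreatedParams p)
        {
            var voice = (LocalVoiceAudio<float>)p.Voice;
            voice.AddPreProcessor(new GainProcessor());
        }
    }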

How to use a custom audio source?

If you want the Recorder to transmit audio produced by your own custom audio source:

First approach: data stream is driven by consumer

AudioClipWrapper is a sample of this approach. It streams the audio clip assigned to Recorder.AudioClip.

  1. Create a class that reads your audio source and implements the Photon.Voice.IAudioReader interface, e.g. MyAudioReaderSource.

  2. Set Recorder.SourceType to Factory in editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of Recorder):


    // MyAudioReaderSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioReaderSource(); 
  4. As long as the client is connected to a voice room and the Recorder is transmitting, the IAudioReader.Read(float[] buffer) method will be called on your custom audio source instance (e.g. MyAudioReaderSource). The call frequency and buffer size are adjusted to match the sampling rate returned by its IAudioReader.SamplingRate property.
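Putting the steps together, a minimal IAudioReader source might look like the following sketch. MyAudioReaderSource and the sine-tone generation are purely illustrative; the interface members shown (Read, SamplingRate, Channels, Error) are assumed from the Photon Voice 2 API.

    // Sketch only: a mono 440 Hz tone source. The interface members shown
    // (Read, SamplingRate, Channels, Error) are assumed from Photon Voice 2.
    using System;
    using Photon.Voice;

    public class MyAudioReaderSource : IAudioReader<float>
    {
        public int SamplingRate { get { return 48000; } }
        public int Channels { get { return 1; } }
        public string Error { get { return null; } }

        private long samplePos;

        // Called by Photon Voice at a pace matching SamplingRate.
        public bool Read(float[] buffer)
        {
            for (int i = 0; i < buffer.Length; i++)
            {
                buffer[i] = (float)Math.Sin(2 * Math.PI * 440 * samplePos++ / SamplingRate);
            }
            return true; // false signals "no data available right now"
        }

        public void Dispose() { }
    }

With Recorder.SourceType set to Factory, wiring it up is then just the InputFactory assignment from step 3.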

Second approach: data stream is driven by producer

ToneAudioPusher in "AudioUtil.cs" is a sample of this approach.

  1. In this case, it may be more convenient to implement the Photon.Voice.IAudioPusher interface instead, e.g. MyAudioPusherSource. You only need to implement the IAudioPusher.SetCallback method, which mainly stores the given callback.

  2. Set Recorder.SourceType to Factory in editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of the Recorder):


    // MyAudioPusherSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioPusherSource(); 
  4. During streaming, simply call the callback set via IAudioPusher.SetCallback periodically (e.g. from MonoBehaviour.OnAudioFilterRead) with as many samples as you have. Photon Voice will do all the buffering work for you.
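A minimal pusher, fed from a producer such as Unity's audio thread, could be sketched as below. The exact IAudioPusher.SetCallback signature differs between Photon Voice versions (newer ones pass additional parameters), so treat this as an outline rather than a drop-in implementation.

    // Sketch only: SetCallback's exact signature varies by Photon Voice version;
    // a single Action<float[]> parameter is assumed here.
    using System;
    using Photon.Voice;

    public class MyAudioPusherSource : IAudioPusher<float>
    {
        private Action<float[]> pushCallback;

        public int SamplingRate { get { return 48000; } }
        public int Channels { get { return 1; } }
        public string Error { get { return null; } }

        // Photon Voice hands us the callback; we only store it.
        public void SetCallback(Action<float[]> callback)
        {
            pushCallback = callback;
        }

        // Call this from your producer, e.g. a MonoBehaviour's OnAudioFilterRead,
        // with as many samples as you have; Photon Voice buffers them.
        public void Push(float[] samples)
        {
            if (pushCallback != null)
            {
                pushCallback(samples);
            }
        }

        public void Dispose() { }
    }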


Do you have special offers for students, hobbyists or indies?

All our products have a free tier and a one-off entry plan. We also usually take part in Unity's Asset Store sales and occasionally give out vouchers.

Can I combine more than one 100 CCU plan for a single Photon application?

No. The 100 CCU plans are not stackable and can be applied only once per AppId. If you purchase multiple PUN+ asset seats, you must redeem each 100 free CCU for a separate AppId. If you need more CCU for a single app, the next higher plan is the 500 CCU one. If you subscribe to a monthly or yearly plan, you will still keep the 100 CCU for 12 months, in addition to the CCU from your monthly/yearly plan.
