
Frequently Asked Questions

Which Photon product is the right one for me?

The answer to this depends mostly on your project and team. Generally, we suggest using either Fusion or Quantum, which are our most advanced client solutions.

For a quick overview, both product sheets contain the product picker "Quadrant".

Additionally, this page discusses whether to use the Photon Cloud or Photon Server.

Feel free to reach out to us with any questions.

Photon Cloud

Is Photon Cloud down?

You can check the Photon Cloud status here or follow @photon_status on Twitter to get notified about status updates.

What is the default Photon region?

Clients should be able to connect to Photon Cloud as long as at least one region is available. To guarantee this, a configured default value is used when the developer does not explicitly set a region or choose the "Best Region" option. The default value can vary by client SDK: in the native SDKs, it is the value at index 0 of the region list returned by the server in OpGetRegions; in the Unity and DotNet SDKs, the default region is "EU".

Is it possible to disable some regions?

Yes. It works the other way around: you define a list of allowed regions. Read more about the "Dashboard Regions Filtering".

Photon Voice

How to save conversations into files?

We will answer this question in two parts:

First, incoming voice streams:

Photon Voice streams are uniquely identified by the pair PlayerId and VoiceId. Given this pair, you can tell the origin of a remote voice stream: which player and which Recorder. You can subscribe to three events for remote streams:

  • VoiceConnection.RemoteVoiceAdded(RemoteVoiceLink): a new remote voice stream has been created (transmission started); the received stream information is available via the RemoteVoiceLink.
  • RemoteVoiceLink.FloatFrameDecoded(float[]): an audio frame was received from a specific remote voice stream.
  • RemoteVoiceLink.RemoteVoiceRemoved: a remote voice stream has ended (transmission stopped) and is destroyed.

If you want to capture an entire incoming remote voice stream, you can:

  1. Create and open a file for the stream in the RemoteVoiceAdded handler.
  2. Write each frame of audio data in the FloatFrameDecoded handler.
  3. Save and close the file in the RemoteVoiceRemoved handler.

Alternatively, you can open and close the file on user input and update the FloatFrameDecoded handler accordingly.
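
The three steps above can be sketched as a single Unity component. This is a hedged sketch only: the event and member names are taken from this page, but the exact delegate signatures, the available RemoteVoiceLink members, and the file format chosen here (headerless raw 32-bit float samples) are assumptions to verify against your SDK version.

    C#

    // Hedged sketch: saves each incoming remote voice stream to its own file
    // of raw 32-bit float samples. Check the exact event signatures in your
    // Photon Voice SDK version before using.
    using System.IO;

    public class RemoteStreamSaver : UnityEngine.MonoBehaviour
    {
        public Photon.Voice.Unity.VoiceConnection voiceConnection;

        void Start()
        {
            this.voiceConnection.RemoteVoiceAdded += this.OnRemoteVoiceAdded;
        }

        private void OnRemoteVoiceAdded(Photon.Voice.Unity.RemoteVoiceLink link)
        {
            // 1. one file per (PlayerId, VoiceId) pair
            string path = string.Format("voice_p{0}_v{1}.f32", link.PlayerId, link.VoiceId);
            var writer = new BinaryWriter(File.Create(path));

            // 2. append each decoded frame as it arrives
            link.FloatFrameDecoded += frame =>
            {
                foreach (float sample in frame)
                {
                    writer.Write(sample);
                }
            };

            // 3. flush and close the file when the stream ends
            link.RemoteVoiceRemoved += () => writer.Close();
        }
    }

Note that the resulting file is headerless PCM; to get a playable WAV file you would additionally need to write a WAV header using the stream's sampling rate and channel count.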

Second, outgoing voice streams:

For the outgoing audio stream, you can create a custom processor by extending Voice.LocalVoiceAudio&lt;T&gt;.IProcessor. You get the locally recorded audio frame in IProcessor.Process. A component attached to the same GameObject as the Recorder is needed to intercept the PhotonVoiceCreated Unity message. Inside that method, insert the custom processor into the local voice processing pipeline using LocalVoice.AddPreProcessor (before transmission) or LocalVoice.AddPostProcessor (after transmission). See "WebRtcAudioDsp.cs" for an example.
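
As a sketch of that pipeline (assuming a float audio format, and assuming the PhotonVoiceCreated message receives a PhotonVoiceCreatedParams object exposing the created voice via a Voice property; verify both against your SDK version and "WebRtcAudioDsp.cs"):

    C#

    // Hedged sketch of a custom outgoing-audio processor. Member names beyond
    // Process, AddPreProcessor and PhotonVoiceCreated are assumptions.
    public class GainProcessor : Photon.Voice.LocalVoiceAudio<float>.IProcessor
    {
        public float Gain = 0.5f; // example: attenuate the outgoing audio

        public float[] Process(float[] buf)
        {
            // this is where you could also copy buf to a file
            for (int i = 0; i < buf.Length; i++)
            {
                buf[i] *= this.Gain;
            }
            return buf;
        }

        public void Dispose() { }
    }

    // Attach this next to the Recorder on the same GameObject.
    public class OutgoingStreamTap : UnityEngine.MonoBehaviour
    {
        // Unity message sent by Photon Voice when the local voice is created.
        void PhotonVoiceCreated(Photon.Voice.Unity.PhotonVoiceCreatedParams p)
        {
            var voice = (Photon.Voice.LocalVoiceAudio<float>)p.Voice;
            voice.AddPreProcessor(new GainProcessor()); // before transmission
        }
    }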

How to use a custom audio source?

If you want the Recorder to transmit audio produced by your own custom audio source:

First approach: data stream is driven by consumer

AudioClipWrapper is a sample of this approach. It streams the audio clip assigned to Recorder.AudioClip.

  1. Create a class that reads your audio source and implements the Photon.Voice.IAudioReader interface, e.g. MyAudioReaderSource.

  2. Set Recorder.SourceType to Factory in the editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of Recorder):

    C#

    // MyAudioReaderSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioReaderSource(); 
    
  4. As long as the client is connected to a voice room and the Recorder is transmitting, the IAudioReader.Read(float[] buffer) method will be called on your custom audio source instance (e.g. MyAudioReaderSource). The call frequency and buffer size are adjusted to match the sampling rate returned by the IAudioReader.SamplingRate property of your custom audio source instance.
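
A minimal consumer-driven source might look like the following. Only Read and SamplingRate are named on this page; the generic IAudioReader&lt;float&gt; form and the Channels, Error, and Dispose members are assumptions based on the interface family and may differ in your SDK version. The 440 Hz tone is just placeholder content.

    C#

    // Hedged sketch: a consumer-driven source producing a 440 Hz sine tone.
    // Photon Voice calls Read whenever it needs the next buffer of samples.
    public class MyAudioReaderSource : Photon.Voice.IAudioReader<float>
    {
        private double phase;

        public int SamplingRate { get { return 48000; } }
        public int Channels { get { return 1; } }
        public string Error { get { return null; } }

        public bool Read(float[] buffer)
        {
            for (int i = 0; i < buffer.Length; i++)
            {
                buffer[i] = (float)System.Math.Sin(this.phase);
                this.phase += 2 * System.Math.PI * 440 / this.SamplingRate;
            }
            return true; // false would signal that no data is available
        }

        public void Dispose() { }
    }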

Second approach: data stream is driven by producer

ToneAudioPusher in "AudioUtil.cs" is a sample of this approach.

  1. In this case it may be more convenient to implement the Photon.Voice.IAudioPusher interface instead, e.g. MyAudioPusherSource. You only need to implement the IAudioPusher.SetCallback method, which mainly stores the given callback.

  2. Set Recorder.SourceType to Factory in the editor (or in code).

  3. Create an instance of your class somewhere during app initialization (before creation of the Recorder):

    C#

    // MyAudioPusherSource is just an example, replace with your own class name and constructor
    recorder.InputFactory = () => new MyAudioPusherSource(); 
    
  4. During streaming, simply call the callback set via IAudioPusher.SetCallback periodically (e.g. from MonoBehaviour.OnAudioFilterRead) with as many samples as you have. Photon Voice will do all the buffering work for you.
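
A producer-driven source could be sketched as follows (compare ToneAudioPusher in "AudioUtil.cs"). This is a sketch under assumptions: the single-argument SetCallback signature follows this page, but newer SDK versions may pass an extra buffer-factory parameter, and the Channels, Error, and Dispose members are assumed from the interface family.

    C#

    // Hedged sketch: a producer-driven source that stores the callback given
    // by Photon Voice and pushes Unity's audio filter buffer into it.
    public class MyAudioPusherSource : UnityEngine.MonoBehaviour, Photon.Voice.IAudioPusher<float>
    {
        private System.Action<float[]> pushCallback;

        public int SamplingRate { get { return UnityEngine.AudioSettings.outputSampleRate; } }
        public int Channels { get { return 1; } } // note: OnAudioFilterRead data may be interleaved stereo
        public string Error { get { return null; } }

        public void SetCallback(System.Action<float[]> callback)
        {
            this.pushCallback = callback; // just store it; call it whenever data is ready
        }

        // Unity calls this on the audio thread with the mixed audio buffer.
        void OnAudioFilterRead(float[] data, int channels)
        {
            if (this.pushCallback != null)
            {
                this.pushCallback(data); // Photon Voice buffers and resamples as needed
            }
        }

        public void Dispose() { }
    }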

Billing

Do you have special offers for students, hobbyists or indies?

All our products have a free tier and a one-off entry plan. We also usually take part in Unity's Asset Store sales and occasionally give away vouchers.

Can I combine more than one 100 CCU plan for a single Photon application?

No. The 100 CCU plans are not stackable and can be applied only once per AppId. If you purchase multiple PUN+ asset seats, you must redeem each 100 free CCU for a separate AppId. If you need more CCU for a single app, the next higher plan is the 500 CCU one. If you subscribe to a monthly or yearly plan, you will still keep the 100 CCU for 12 months on top of the CCU included in that plan.
