Performance Tips

Performance is vital for a fluid, seamless integration of multiplayer components into your application. So we assembled a list of tips you should keep in mind when developing with Photon.

If you are running a local Photon Server and notice a low event rate, this is most likely caused by logging. For instance, we noticed that enabling both Photon Server logging and Windows Defender can slow the event rate down considerably. That is why we recommend disabling all virus scanners and firewalls while Photon logging is enabled, so you get the expected event rate. You can re-enable the virus scanners and firewalls once you stop logging on the Photon Server.

Call Service Regularly

The client libraries rely on regular calls to LoadBalancingPeer.Service to keep in touch with the server. Longer pauses between service calls can lead to a timeout disconnect, as the client can't keep the connection alive.

Loading data is a common situation in which the main loop runs fewer updates per second. Make sure that Service is called despite the loading, or the connection might suffer and be closed. If overlooked, this problem is hard to identify and reproduce.
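As a sketch, a loading routine can interleave its work with service calls. Only Service() is part of the Photon API here; the loop structure and the LoadNextChunk method are hypothetical placeholders for your own incremental loading code.

```csharp
// Sketch: keep the connection alive during a long loading phase.
// "peer" is an already connected LoadBalancingPeer.
void LoadWithServiceCalls(LoadBalancingPeer peer)
{
    bool loadingDone = false;
    while (!loadingDone)
    {
        loadingDone = LoadNextChunk(); // hypothetical: one small slice of loading work
        peer.Service();                // send/receive so the server still hears from us
    }
}
```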

Updates vs. Traffic

Ramping up the number of updates per second makes a game more fluid and up-to-date. On the other hand, traffic might increase dramatically. Keep in mind that each operation you call may create events for other players.

On a mobile client, 4 to 6 operations per second are fine. Some 3G devices use surprisingly slow networking implementations, so it might in fact be faster to send fewer updates per second.

PC based clients can go a lot higher. The target frame rate should be the limit for these clients.
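One common way to cap the send rate independently of the frame rate is a simple timer in the game loop. This is a minimal sketch; the interval value and the SendPositionUpdate method are assumptions standing in for your own operation calls.

```csharp
// Sketch: decouple the send rate from the frame rate.
// A mobile build might use ~5 sends per second, a PC build more.
float sendInterval = 1f / 5f;   // 5 operations per second (assumed target)
float timeSinceSend = 0f;

void Update(float deltaTime)
{
    timeSinceSend += deltaTime;
    if (timeSinceSend >= sendInterval)
    {
        SendPositionUpdate();   // hypothetical: your own operation call
        timeSinceSend = 0f;
    }
}
```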

Producing and Consuming Data

Related to the "traffic" topic is the problem of producing only as much data as can be consumed on the receiving end. If performance or frame rate can't keep up with incoming events, those events are outdated before they are executed.

In the worst case, one side produces so much data that it breaks the receiving end. Keep an eye on the queue length of your clients while developing.
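During development, a periodic log line is usually enough to spot a growing backlog. This sketch assumes the peer exposes QueuedIncomingCommands and QueuedOutgoingCommands counters; check your SDK version for the exact property names.

```csharp
// Sketch: watch the peer's queue lengths while developing.
// Steadily growing numbers mean the client produces or receives
// more than it can consume.
void LogQueues(LoadBalancingPeer peer)
{
    Console.WriteLine("queued in: " + peer.QueuedIncomingCommands +
                      "  queued out: " + peer.QueuedOutgoingCommands);
}
```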

Limiting Execution of Unreliable Commands

Even if a client doesn't dispatch incoming messages for a while (e.g. while loading), it will still receive and buffer everything. Depending on the activity of the other players, a client might have a lot to catch up with.

To keep things lean, a client will automatically cut the unreliable messages to a certain length. The idea is that you get the latest info faster and missing updates will be replaced by new, up-to-date messages soon.

This limit is set via LoadBalancingPeer.LimitOfUnreliableCommands, which has a default of 20 (in PUN, too).
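Adjusting the limit is a one-line change. The value 10 below is only an example for a client that should catch up faster by dropping more of the buffered unreliable updates.

```csharp
// Sketch: cut buffered unreliable commands more aggressively
// than the default of 20. "peer" is your LoadBalancingPeer.
peer.LimitOfUnreliableCommands = 10; // example value, tune per game
```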

Datagram Size

The content size of datagrams is limited to 1200 bytes so that they can be transported on all devices.

These 1200 bytes include all the overhead from headers (see "Binary Protocol") as well as size and type information (see "Serialization in Photon"), so the actual pure payload is significantly smaller. Although it varies depending on how the data is structured, we can safely assume that pure payload data below 1 kB fits into a single datagram.

Operations and events bigger than 1200 bytes get fragmented and are sent in multiple commands. These automatically become reliable, so the receiving side can reassemble and dispatch the bigger data chunks once they are complete.

Bigger data "streams" can considerably affect latency, as they need to be reassembled from many fragments before they are dispatched. They can be sent in a separate channel, so they don't delay the "throw away" position updates of a lower channel number.
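For example, a large blob can be raised as an event on its own sequence channel while position updates stay on channel 0. This sketch assumes a RaiseEventOptions-style parameter with a SequenceChannel field (naming varies between SDK versions); the event code and byte array are placeholders.

```csharp
// Sketch: send a large, reliable chunk on channel 1 so it doesn't
// delay the unreliable position updates sent on channel 0.
byte bigDataEventCode = 1;                     // hypothetical event code
byte[] largeByteArray = LoadBigBlob();         // hypothetical payload source

var options = new RaiseEventOptions { SequenceChannel = 1 };
peer.OpRaiseEvent(bigDataEventCode, largeByteArray, true, options);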
