Performance is vital for a fluid and seamless integration of multiplayer components into your application. So we assembled a list of tips to keep in mind when developing with Photon.
Call Service Regularly
The client libraries are built to send messages only when the app logic triggers it. This way, the clients can aggregate several operations and avoid network overhead.
To trigger sending queued data, a main loop must call PhotonPeer.SendOutgoingCommands() frequently. The bool return value is true if some data is still queued. If so, call SendOutgoingCommands again (but not more than three times in a row).
Service and SendOutgoingCommands also send acknowledgements and pings, which are important to keep a connection alive. Avoid long pauses between calls to either. In particular, make sure that Service is still called while loading.
If overlooked, this problem is hard to identify and reproduce. The C# library has a ConnectionHandler class, which can help.
To avoid local lag, call SendOutgoingCommands right after the game loop has written its network updates.
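Putting the advice above together, a per-frame send routine might look like this (a minimal sketch in C#; `peer` stands for your connected PhotonPeer, and listener and connection setup are omitted):

```csharp
// Called once per frame, after the game logic wrote its network updates.
void SendUpdates(PhotonPeer peer)
{
    // Dispatches incoming messages, sends outgoing commands and keeps the connection alive.
    peer.Service();

    // If data is still queued, send again - but not more than three times in a row.
    int attempts = 0;
    while (peer.SendOutgoingCommands() && ++attempts < 3)
    {
    }
}
```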
Updates Vs. Traffic
Ramping up the number of updates per second makes a game more fluid and up-to-date. On the other hand, traffic might increase dramatically. Also, random lag and loss cannot be avoided, so receivers of updates should always be capable of interpolating important values.
Keep in mind that many operations you call will create events for other players, so it might in fact be faster to send fewer updates per second.
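For example, instead of raising an update every frame, you could cap the send rate (a sketch assuming a Unity MonoBehaviour; SendInterval and SendPositionUpdate are illustrative names, not Photon API):

```csharp
// Cap outgoing position updates at 10 per second instead of one per frame.
const float SendInterval = 0.1f;
float lastSendTime;

void Update()
{
    if (Time.time - lastSendTime >= SendInterval)
    {
        lastSendTime = Time.time;
        SendPositionUpdate(); // hypothetical helper that raises the update event
    }
}
```

Receivers then interpolate between the less frequent updates, which usually looks smoother than dispatching a flood of near-identical positions.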
Producing And Consuming Data
Related to the "traffic" topic is the problem of producing only as much data as can be consumed on the receiving end. If performance or frame rate can't keep up with incoming events, they become outdated before they are executed.
In the worst case, one side produces so much data that it breaks the receiving end. Keep an eye on the queue length of your clients while developing.
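While developing, you can poll the peer's queue counters to spot this early (a sketch; verify the exact property names against your client library version):

```csharp
// Warn during development if the incoming queue grows faster than we dispatch.
// The threshold of 100 is arbitrary - tune it for your game.
void CheckQueues(PhotonPeer peer)
{
    if (peer.QueuedIncomingCommands > 100)
    {
        Debug.LogWarning("Incoming queue is piling up: " + peer.QueuedIncomingCommands);
    }
}
```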
Limiting Execution Of Unreliable Commands
Even if a client doesn't dispatch incoming messages for a while (e.g. while loading), it will still receive and buffer everything. Depending on the activity of the other players, a client might have a lot to catch up with.
To keep things lean, a client will automatically cut the unreliable messages to a certain length. The idea is that you get the latest info faster and missing updates will be replaced by new, up-to-date messages soon.
This limit is set via LoadbalancingPeer.LimitOfUnreliableCommands, which has a default of 20 (in PUN, too).
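If your game legitimately produces many unreliable updates per player, you could raise the limit (a one-line sketch; `loadBalancingPeer` stands for your LoadbalancingPeer instance):

```csharp
// Allow up to 40 buffered unreliable commands before older ones get cut (default is 20).
loadBalancingPeer.LimitOfUnreliableCommands = 40;
```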
The content size of datagrams is limited to 1200 bytes by default.
These 1200 bytes include all the overhead from headers (see "binary protocol") plus size and type information (see "serialization in photon"), so the limit for actual pure payload is somewhat lower. While the exact number varies depending on how data is structured, we can safely assume that payloads below 1 KB fit into a single datagram.
Operations and events bigger than 1200 bytes get fragmented and are sent in multiple commands. These automatically become reliable, and the receiving side can only reassemble and dispatch those bigger data chunks once all fragments have arrived.
Bigger data "streams" can considerably affect latency, as they must be reassembled from many packets before they are dispatched. They can be sent in a separate channel, so they don't affect the "live" position updates of a (lower) channel number.
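For example, a large payload could be raised on a higher channel number so it doesn't hold back the default channel (a sketch using the Realtime API; `client` stands for a connected LoadBalancingClient, and the event code, channel count setup and `bigByteArray` are assumptions for illustration):

```csharp
// Send a big byte[] on channel 1; "live" updates stay on the default channel 0.
var sendOptions = new SendOptions { Reliability = true, Channel = 1 };
client.OpRaiseEvent(1, bigByteArray, RaiseEventOptions.Default, sendOptions);
```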
The C# clients receive events via OnEvent(EventData ev). By default, each EventData is a new instance, which causes some extra work for the garbage collector.
In many cases, it is easy to reuse the EventData and avoid the overhead. This can be enabled via the