Performance is vital for a fluid, seamless integration of multiplayer components into your application. So we assembled a list of tips you should keep in mind when developing with Photon.
Call Service Regularly
The client libraries are built to send messages only when the application logic triggers it. This way, the clients can aggregate several operations and avoid network overhead.
To trigger sending any queued data, a main loop must call PhotonPeer.SendOutgoingCommands() frequently. Its bool return value is true if some data is still queued. If so, call SendOutgoingCommands again (but not more than three times in a row).
Service and SendOutgoingCommands also send acknowledgements and pings, which are important to keep a connection alive. Avoid longer pauses between calls to either one. In particular, make sure Service is still being called while the game loads scenes or assets.
If overlooked, this problem is hard to identify and reproduce. The C# library has a ConnectionHandler class, which can help.
To avoid local lag, you could call SendOutgoingCommands after the game loop wrote network updates.
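The send loop described above can be sketched in a language-agnostic way (the Photon client API itself is C#). `MockPeer` and `flush_outgoing` are hypothetical stand-ins for `PhotonPeer` and your game loop's flush step; only the "retry while data is queued, but not more than three times in a row" rule comes from the text:

```python
class MockPeer:
    """Stand-in for PhotonPeer: it queues outgoing commands and
    flushes a limited batch per SendOutgoingCommands() call."""

    def __init__(self, queued_commands):
        self.queued = queued_commands

    def send_outgoing_commands(self):
        # Each call flushes up to one datagram's worth of commands;
        # like the real API, it returns True while data is still queued.
        self.queued = max(0, self.queued - 1)
        return self.queued > 0


def flush_outgoing(peer, max_calls=3):
    """Call send_outgoing_commands repeatedly while data is queued,
    but at most max_calls times in a row, as recommended above."""
    calls = 0
    while calls < max_calls:
        calls += 1
        if not peer.send_outgoing_commands():
            break
    return calls
```

With this sketch, a peer holding little data is flushed in one or two calls, while a peer with a large backlog stops after three calls and continues on the next game-loop iteration.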
Updates Vs. Traffic
Ramping up the number of updates per second makes a game more fluid and up to date. On the other hand, traffic might increase dramatically. Also, random lag and loss cannot be avoided, so receivers of updates should always be capable of interpolating important values.
Keep in mind that many operations you call will create events for other players and that it might in fact be faster to send fewer updates per second.
You can usually send less to avoid traffic issues. There are several approaches to doing so:
Don't Send More Than What's Needed
Exchange only what is strictly necessary. Send only relevant values and derive as much as you can from them. Think about what you send and how often, and optimize based on the context. Non-critical data should be recomputed on the receiving side, either from the synchronized data or from what is happening in the game, instead of being forced via synchronization.
In an RTS, you could send "orders" for a bunch of units when they happen. This is much leaner than sending position, rotation and velocity for each unit ten times a second. Good read: 1500 archers.
In a shooter, send a shot as position and direction. Bullets generally fly in a straight line, so you don't have to send individual positions every 100 ms. You can clean up a bullet when it hits anything or after it has travelled "so many" units.
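As a sketch of the derivation above (a hypothetical helper, not Photon API): given the single shot message, every client can compute the bullet's position locally instead of receiving it over the network:

```python
def bullet_position(origin, direction, speed, elapsed):
    """Derive a bullet's position from one shot message
    (origin + normalized direction + known speed) instead of
    synchronizing positions every 100 ms.
    origin/direction are (x, y, z) tuples; elapsed is seconds."""
    return tuple(o + d * speed * elapsed
                 for o, d in zip(origin, direction))
```

The receiving side only needs the timestamp of the shot to replay the straight-line trajectory and to decide locally when to clean the bullet up.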
Don't send animations. Usually you can derive all animations from the input and actions of a player. There is a good chance that a sent animation gets delayed, and playing it too late usually looks awkward anyway.
Use delta compression. Send values only when they have changed since they were last sent. Use interpolation to smooth values on the receiving side. This is preferable to brute-force synchronization and will save traffic.
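A minimal sketch of delta compression as described above, assuming state is exchanged as key/value pairs (the function names are illustrative, not Photon API):

```python
def delta_update(last_sent, current):
    """Sender side: return only the key/value pairs that changed
    since the last send. An empty dict means nothing has to go
    on the wire this tick."""
    return {k: v for k, v in current.items() if last_sent.get(k) != v}


def apply_delta(known_state, delta):
    """Receiver side: merge the delta into the last known state
    to reconstruct the full current state."""
    merged = dict(known_state)
    merged.update(delta)
    return merged
```

In practice you would combine this with interpolation on the receiver, so values that arrive less often are still displayed smoothly.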
Don't Send Too Much
Optimize exchanged types and data structures.
- Use bytes instead of ints for small integers, and ints instead of floats where possible.
- Avoid exchanging strings at all costs and prefer enums/bytes instead.
- Avoid exchanging custom types unless you are totally sure about what gets sent.
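To illustrate the list above, here is a hedged sketch of packing a small integer into one byte and a quantized angle into two bytes, instead of sending two 4-byte values. Python's `struct` module stands in for whatever serializer you use, and the particular fields (health, heading) are assumptions:

```python
import struct


def pack_state(health, angle_deg):
    """Health is 0-100, so it fits a single byte; a 0-360 heading
    is scaled to a 16-bit integer. Total: 3 bytes instead of 8."""
    angle_q = int(angle_deg / 360.0 * 65535) & 0xFFFF
    return struct.pack("<BH", health, angle_q)


def unpack_state(data):
    """Reverse the packing; the angle loses a tiny amount of
    precision (about 0.005 degrees), which is invisible in-game."""
    health, angle_q = struct.unpack("<BH", data)
    return health, angle_q * 360.0 / 65535
```

The same idea applies to any value with a known, bounded range: quantize it to the smallest type that still gives acceptable precision.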
Use another service to download static or bigger data (e.g. maps). Photon is not built as a content delivery system, and it's often cheaper and easier to maintain HTTP-based content systems. Anything bigger than the Maximum Transfer Unit (MTU) will be fragmented and sent as multiple reliable packages (all fragments have to arrive to assemble the full message again).
Don't Send Too Often
Lower the send rate; go below 10 updates per second if possible. This depends on your gameplay, of course, but it has a major impact on traffic. An adaptive or dynamic send rate, based on the user's activity or the exchanged data, also helps a lot.
Send unreliable when possible. You can use unreliable messages in most cases where the next update follows shortly anyway. Unreliable messages never cause a repeat. Example: In an FPS, player positions can usually be sent unreliably.
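The send-rate advice can be sketched as a simple gate in the game loop. The rates and the idle/active split are illustrative assumptions, not Photon defaults:

```python
def send_interval(is_moving, active_hz=10, idle_hz=2):
    """Adaptive send rate: update often while the player is active,
    rarely while idle. Returns the minimum seconds between sends."""
    return 1.0 / (active_hz if is_moving else idle_hz)


def should_send(now, last_send, is_moving):
    """Called every frame: only True when enough time passed since
    the last network update, regardless of the frame rate."""
    return (now - last_send) >= send_interval(is_moving)
```

A gate like this decouples the network send rate from the render frame rate, so running at 144 fps does not mean sending 144 updates per second.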
Producing And Consuming Data
Related to the "traffic" topic is the problem of producing only the amount of data that can be consumed on the receiving end. If performance or frame rate don't keep up with incoming events, those events are outdated before they are executed.
In the worst case, one side produces so much data that it breaks the receiving end. Keep an eye on the queue length of your clients while developing.
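A quick way to reason about this, as a sketch: if events arrive faster than the client dispatches them per frame, the backlog only grows. The numbers here are illustrative:

```python
def simulate_backlog(produced_per_frame, consumed_per_frame, frames):
    """Queue length after N frames when one side produces events
    faster than the other dispatches them. A growing result means
    the receiver falls further behind every frame."""
    backlog = 0
    for _ in range(frames):
        backlog = max(0, backlog + produced_per_frame - consumed_per_frame)
    return backlog
```

Watching the equivalent real queue length on your clients during development tells you whether your send rate is sustainable for the slowest receiver.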
Limiting Execution Of Unreliable Commands
Even if a client doesn't dispatch incoming messages for a while (e.g. while loading), it will still receive and buffer everything. Depending on the activity of the other players, a client might have a lot to catch up with.
To keep things lean, a client will automatically cut the queue of unreliable messages to a certain length. The idea is that you get the latest info faster, and missing updates will be replaced by new, up-to-date messages soon.
This limit is set via LoadbalancingPeer.LimitOfUnreliableCommands, which has a default of 20 (in PUN, too).
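The trimming behavior can be sketched as follows. This is a simplification of what the library does internally, and the `(is_reliable, payload)` tuple representation is an assumption for illustration:

```python
def trim_unreliable(queued, limit=20):
    """Keep all reliable commands, but only the newest `limit`
    unreliable ones; older unreliable updates are dropped because
    newer ones supersede them anyway."""
    unreliable_idx = [i for i, (reliable, _) in enumerate(queued)
                      if not reliable]
    drop = set(unreliable_idx[:-limit]) if len(unreliable_idx) > limit else set()
    return [cmd for i, cmd in enumerate(queued) if i not in drop]
```

After a loading pause, a client that buffered 25 unreliable position updates would dispatch only the latest 20, catching up to the live game state faster.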
The content size of datagrams is limited to 1200 bytes by default.
These 1200 bytes include all the overhead from headers (see "binary protocol") and from size and type information (see "serialization in photon"), so the number for actual pure payload is somewhat lower. In fact, even though it varies depending on how data is structured, we can safely assume that pure payload data smaller than 1 kB fits into a single datagram.
Operations and events bigger than 1200 bytes get fragmented and are sent in multiple commands. These automatically become reliable, and the receiving side can only reassemble and dispatch the bigger data chunk once all fragments have been received.
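A back-of-the-envelope sketch of the fragment count for a large message. The usable payload per datagram is an assumed value here, since the exact header overhead varies with how the data is structured:

```python
def fragment_count(payload_bytes, usable_per_datagram=1150):
    """Estimate how many reliable fragments a message needs:
    ceiling division of the payload by the usable space per
    1200-byte datagram (1150 is an assumed post-overhead value)."""
    return max(1, -(-payload_bytes // usable_per_datagram))
```

Every fragment must arrive before the message can be dispatched, which is why large messages add latency and are best kept off the channel used for live position updates.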
Bigger data "streams" can considerably affect latency as they need to be reassembled from many packages before they are dispatched. They can be sent in a separate channel, so they don't affect the "live" position updates of a (lower) channel number.
The C# clients receive events via OnEvent(EventData ev). By default, each EventData is a new instance, which causes some extra work for the garbage collector.
In many cases, it is easily possible to reuse the EventData and avoid the overhead. This can be enabled via the