Multiple players on same network cause intermittent packet loss after Drifter release patch

friendlyfred56

New member
Hello,
Since the Drifter release update, whenever there is more than one person in a Deadlock match on my network, both players receive intermittent packet loss of up to the high 90%s. The obvious answer would be that our network can't handle multiple people playing, but it should be more than enough to run 50 people playing Deadlock at once, it has never been a problem in the past, and:

1) Pings to various other IP addresses (Google, steamcommunity.com, etc.) during the in-game packet loss do not drop any packets.
2) The issue does not occur in CS2 or any other multiplayer game I've tested under the exact same conditions.
3) If anything, the network is being used less since the problem began.

The problem is easily reproducible: a custom match with 2 players and bots will trigger it.
The packet loss gets worse the more is going on around the player in-game, e.g. 6 people in close proximity make it worse.
Here are some snippets from the in-game console log during the packet loss; I'm not entirely sure which parts are relevant:

[Networking] [server @ =[SERVER_ADDRESS]] high frame misdelivery 93.5%, 145/0 of 155 frames dropped/reordered, 0ms ping

[SteamNetSockets] [#331902081 SDR server steamid:STEAMID vport 0 'server'] decode pkt 112062 abort. Reliable stream already has 26 fragments, first is [12514372,12524009), last is [12871986,12876435). We don't want to fragment [12636425,12644675) with new segment [12637637,12637853)

[Client] CL: CNetworkGameClient::ProcessTick server tick 45083 went backward from previous of 45346, resetting previous
[Client] Server is reporting is received command 44877 on tick 45083 (first), but we don't have a timestamp when we sent that

[Prediction] Prediction time (1418.340820) is less that sim time (1418.715454)? Clamping offset
[Prediction] Trying to store prediction history for command 45233, but first slot is for command 45245. Going backwards or lurching prevents us from smoothing over prediction errors!


I have more logs, as well as logs from network-monitoring applications, that I can provide if needed.
Thanks