Hello guys,
we were testing the ENSL Plugin on the server we are going to use in the USA (the Xtinction server), and we noticed that the plugin didn't allow us to set certain values of cl_updaterate. When we started testing, we noticed that increasing cl_updaterate raised our ms by about 80 ms (in net_graph). So we tried lowering it to cl_updaterate 10 and verified that the ms dropped dramatically.
If we set cl_updaterate to what the ENSL Plugin requires, our ping is equal to the ms (200 ping, 200 ms). With a lower updaterate, the ms, which is the response time to the server, drops a lot. For example, Snail has 210 latency to the USA, and with updaterate 10 he gets 120 ms, while with the value the ENSL Plugin requires he gets 210.
We tested the game and its playability with both values and the difference is clearly noticeable.
When you play with high ping, it is common that if somebody bumps into another player, either marine or skulk, the camera "jams", I mean it seems to get stuck.
The most incredible thing is that with cl_updaterate 10 this doesn't happen, or only rarely. Another advantage is that the reg is more accurate; bullets and bites don't have as much delay.
We also know that updaterate is related to the amount of data, the packets the game requests from the server, so we thought it could cause problems when there is a lot of activity, like 5 marines, chambers, hive, and aliens all together, but snail checked on ablens and didn't notice any problem.
We invite you to play in this server and test the difference between updaterate 10 and 40.
The password is "match", and the server: 173.208.131.162:27015 Xtinction Pug Server
We would like to be allowed to play with a cl_updaterate lower than 40, for the reasons mentioned above.
The topic is open for debate.
I have no idea why people think 1000 fps servers are better. There really shouldn't be a difference between an HLDS running at, say, a constant 300 fps vs a constant 1000, but I digress
cl_updaterate is a cvar that sets how many update packets per second your client requests from the server
rate is a cvar that sets the upper limit, in bytes per second, on what you receive from the server
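As a rough sketch of how those two cvars interact (back-of-envelope arithmetic, not engine source; the function name and the rate value are mine):

```python
# 'rate' caps bytes per second, while cl_updaterate sets how many update
# packets per second you request, so the average byte budget available
# per update packet is simply rate / updaterate.

def bytes_per_update(rate_bytes_per_s: float, updaterate: int) -> float:
    """Average bytes available for each server update packet."""
    return rate_bytes_per_s / updaterate

# With an illustrative rate of 20000: cl_updaterate 40 leaves about
# 500 bytes per update, while cl_updaterate 10 leaves about 2000.
print(bytes_per_update(20000, 40))  # 500.0
print(bytes_per_update(20000, 10))  # 2000.0
```

A lower updaterate means fewer packets per second and less per-packet overhead, though each snapshot then has to carry a bigger delta.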
as far as I know, ping in HL measures round-trip time for a packet, not one-way
with the above in mind, if you are lowering cl_updaterate and your latency is dropping significantly, to me that means one of these three things is happening:
1) the server is sending you packets more slowly due to the extra load from a high cl_updaterate (not very likely if it is a 1000hz-capable server with good bandwidth)
2) your downstream connection to the server sucks, meaning somewhere along the router path from the server to you, the increased packet load cannot be handled as efficiently as a decreased load (more likely)
3) HL starts reporting one-way latency when cl_updaterate is set below a certain threshold
So what I recommend is to go to speedtest.net and pingtest.net, run tests to somewhere near where the server is located (I think it's in Kansas City, MO), and then report your results back here. You might need to adjust your 'rate' cvar if your download speed to American servers sucks the dong
also, I would be happy to let you use cl_updaterate 10 if you were my opponent because it would put you at a huge disadvantage
I live about 900 miles from Kansas (near Chicago), which isn't so far away that I can't make a decent case here
I've used pingtest.net to measure my average ping to their Argentinean servers and I cannot get less than 220ms of latency. If you got less than 180ms of ping to Kansas, I think it is artificial and HL is reporting it to you wrong
btw, it seems you are the high-ping experts, but hey, we are the ones playing with high ping, and what a coincidence, we are the ones who feel the difference
i say it again, you are welcome to test the difference between the updaterate values, or you can keep talking about supposed factors that you don't even really know how they work (btw, we aren't experts either, but at least we tested..)
updaterate values make a difference when you have high ping and a delayed response from the server. if i play in arg i can play with any updaterate value and get 0 ms. i think you guys really need to test the difference before you keep talking
It's quite possible you'll get lower ping if your connection, or even some router between you and the server, cannot handle traffic over 50 packets per second, for example. I think you should test this in practice. Even though 10 updates per second means a lot less accuracy, a 120ms ping with 10 updates per second could well be a lot better than a spiking 200-250ms ping with 100 updates. Back in the 1.04 days many people actually had a 10-30 updaterate/cmdrate.
With 10 updates per second, you'll miss about 100ms of input from the player. It's not that much, but if your ping is spiking between 200 and 250ms even with 100 updates per second, your hitboxes will likely move randomly, or SPORADICALLY, making aiming at you much more of an issue.
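To put numbers on that 100ms figure (simple arithmetic, not engine code; the function name is made up):

```python
def update_interval_ms(updaterate: int) -> float:
    """Time between consecutive server snapshots, in milliseconds."""
    return 1000.0 / updaterate

# cl_updaterate 10  -> 100.0 ms between snapshots
# cl_updaterate 40  ->  25.0 ms
# cl_updaterate 100 ->  10.0 ms
print(update_interval_ms(10))   # 100.0
print(update_interval_ms(40))   # 25.0
print(update_interval_ms(100))  # 10.0
```

So at updaterate 10 the client can go up to a full 100 ms without hearing where anyone is, which is where the jumpy hitboxes come from.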
It's up to tom and the other season admins whether they want to allow this. All you have to do is enable match configs and then amx_enslcvar ensl_mincmdrate 30 or amx_enslcvar ensl_minupdaterate 30.
HLSW has a very good traceroute feature, use it if you want to see real routing.
I'm looking at your pingtest and that's why I'm doubting that you actually got 120ms to Apollo's Kansas server. What does your ping to the server say in the steam server browser?
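One way to check the real round-trip time independently of the HL client would be to time a raw server query yourself. A minimal sketch, assuming the server answers the standard A2S_INFO query that GoldSrc servers generally respond to (the function name is mine, and this needs a live, reachable server):

```python
import socket
import time

# Standard A2S_INFO request packet (Valve server query protocol).
A2S_INFO = b"\xff\xff\xff\xffTSource Engine Query\x00"

def query_rtt(host: str, port: int, timeout: float = 2.0) -> float:
    """Round-trip time in ms for one A2S_INFO query/response."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(A2S_INFO, (host, port))
        s.recv(4096)  # any reply is enough to measure the round trip
        return (time.monotonic() - start) * 1000.0

# e.g. query_rtt("173.208.131.162", 27015)
```

Since this bypasses the game client entirely, the number it reports can't be affected by cl_updaterate, which would settle whether the 120 ms is real or a reporting artifact.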
I told fitox to make this post because I'd think that many euro players would rather have 30 updaterate on american servers vs argentina. I'd go lower than 30 if there was good evidence to support it!
As it stands:
I see mixed opinions about the minimum updaterate. I know we moved the minimum updaterate up to 40 for a reason, but I also know that this reason was largely based on euro servers, and I don't pretend to be a rates expert, or even close!