I hope you will quickly find a solution, as this began 10 days ago and nothing has improved: sometimes there is more than a minute of waiting when loading a new thread.
There is surely another problem somewhere beyond just a lot of downloads.
The server is advertising a smaller-than-normal window of around 28k instead of the usual 65k.
Well, I know it may not seem like it, but things really are gradually improving (much more slowly than I anticipated).
Over the last few days, the time of day at which the network interface becomes saturated has been getting steadily later (by a few hours already), and I hope it keeps moving that way... That doesn't mean we aren't still considering options to mitigate the excess load more effectively, though.
As for the window sizes you're seeing @GeekyDeaks: I'm definitely no expert on TCP window stuff, but I understand that it will resize dynamically, and I suspect that the server behaviour you saw is likely to be a reflection of the fact that the NIC is saturated and the stack is therefore maintaining a fairly small window. If you conduct the same test during the (western European) morning, you should see a much larger window (way in excess of 64 kB), adequate to permit >100 Mb/s downloads with several tens of milliseconds of RTT. (At the time when you wrote your post, which was perhaps shortly after you checked the window size, I can confirm that the NIC was saturated.)
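(For anyone wondering where that ">100 Mb/s" figure comes from, below is a back-of-the-envelope bandwidth-delay-product sketch; the 30 ms RTT is just an illustrative assumption, not a measured value.)

```python
# Bandwidth-delay product: roughly how big a window is needed to sustain
# a given throughput over a given round-trip time.
# Illustrative assumptions only: 100 Mb/s target, 30 ms RTT.

target_mbps = 100        # desired throughput, megabits per second
rtt_ms = 30              # round-trip time, milliseconds

window_bytes = (target_mbps * 1e6 / 8) * (rtt_ms / 1000)
print(f"window needed ≈ {window_bytes / 1024:.0f} kB")   # ~366 kB, well above 64 kB
```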
 
At the time when you wrote your post, which was perhaps shortly after you checked the window size, I can confirm that the NIC was saturated.
Yeah, but not only did I grab the window size right before the post, I also downloaded LA Canyons twice and it took less than 60 seconds on each attempt, so I think it's interesting to note that I still received decent throughput during saturation. I do think there are some other shenanigans going on here, especially looking at that trace earlier, which shows Twelve99 causing half of the delay in one hop.
 
Yeah, but not only did I grab the window size right before the post, I also downloaded LA Canyons twice and it took less than 60 seconds on each attempt, so I think it's interesting to note that I still received decent throughput during saturation. I do think there are some other shenanigans going on here, especially looking at that trace earlier, which shows Twelve99 causing half of the delay in one hop.
Ooooook, that's pretty strange. Your post from this afternoon was not *very* long after the server NIC became saturated, and I don't think that the system behaviour is necessarily ultra "fair" about how to dish out packets to clients when the link is congested (but this is definitely something I would like to learn more about) so I guess it's not totally strange that you managed to get what must have been in the 55+ Mb/s zone. Getting that speed when the server is "deeply" saturated would be more tricky I expect, but then again, I'm not even sure how to judge how deep the saturation is (or if the concept of depth of saturation is even the right way to look at it :)).
However, the window behaviour has me very puzzled.

In the last 40+ minutes, the NIC hasn't been saturated, so I carried out a test download of LAC myself, getting full wire speed (which for me is around 70 Mb/s). When I review the TCP packets from the download though, I'm seeing nonsensical (to me) window sizes of typically 1452 bytes. This basically appears to mean that I understand TCP even less well than I thought, because with my ping time of around 20 ms, that window size should be totally incompatible with the download speed I actually obtained...
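(To put numbers on why that looked so wrong to me: a quick sketch, using only the figures quoted above, of the throughput a genuine unscaled 1452-byte window would allow at ~20 ms RTT.)

```python
# Throughput ceiling implied by an unscaled advertised window.
# Numbers from the post above: 1452-byte window, ~20 ms RTT.

window_bytes = 1452
rtt_s = 0.020

ceiling_mbps = window_bytes * 8 / rtt_s / 1e6
print(f"ceiling ≈ {ceiling_mbps:.2f} Mb/s")   # ~0.58 Mb/s, nowhere near 70 Mb/s
```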
On balance, I'm inclined to maintain the view (until I learn otherwise :roflmao:) that the window sizes we're seeing probably aren't a clue to any underlying problems - this is because I can see very clearly from the server logging that we're simply hitting our bandwidth limit quite a lot recently. This wasn't the case rather earlier in this thread of course, which was why the comments back then were asking people to consult their ISPs. Anyone trying a download right now and NOT getting 50+ Mb/s is likely to be experiencing a different problem, unconnected to server congestion.

Getting back to the delays in the post above, the Twelve99 delay steps up by around 70 ms going from New York to London, which seems more or less sensible to me (it's not a factor of 2 too high, for example).
 
Hi John.
Apologies for not replying sooner. I just took a moment to check the server status at the time of your post, and (given that it was in the small hours for us Europeans) I was not surprised to see that the network load wasn't very high. So you should have been able to get a very fast download (likely full wire speed for you, or close). Was the download attempt at much the same time as your post? (In other words, after 02:00 UTC?) If so then I think the problem lies between your ISP and i3D and I'm sorry to say that the standard advice of "consult your ISP" is probably the only way forward :(
I'm not aware of Mediacom being the source of problems for other users though.
 
When I review the TCP packets from the download though, I'm seeing nonsensical (to me) window sizes of typically 1452 bytes.
Do you still have the raw packets? The window size might be scaled, so you'd need to also check the option bits and then shift the window bits accordingly.
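(Roughly what I mean, as a sketch rather than anything polished: pull the WScale shift out of the server's SYN/ACK and apply it to the raw window field of the later packets. The pcap file name and server address below are just placeholders, and it assumes scapy is available and the capture includes the handshake.)

```python
# Sketch: recover the effective (scaled) window advertised by the server.
# Placeholders/assumptions: download.pcap, SERVER_IP, capture includes the handshake.
from scapy.all import rdpcap, IP, TCP

SERVER_IP = "203.0.113.10"   # placeholder, substitute the real server address
wscale = 0                   # shift count from the server's SYN/ACK, if any

for pkt in rdpcap("download.pcap"):
    if not pkt.haslayer(TCP) or pkt[IP].src != SERVER_IP:
        continue
    tcp = pkt[TCP]
    if tcp.flags.S:
        # Window Scale option: the value is a shift count (7 means multiply by 128)
        for name, value in tcp.options:
            if name == "WScale":
                wscale = value
    else:
        # The raw 16-bit window field only means what it says after applying the shift
        print(f"raw={tcp.window:>6}  effective={tcp.window << wscale:>9} bytes")
```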
 
Do you still have the raw packets? The window size might be scaled, so you'd need to also check the option bits and then shift the window bits accordingly.
Yeah, Wireshark is allegedly doing the multiplication for me though. I can't say I entirely trust it but it does display both the raw window size and the scaled window size for plenty of other packets in the dump.
After reviewing some packets, I was unable to see anything on a per-packet basis which indicated whether or not scaling is turned on; my initial conclusion is that it's either agreed and turned on when the connection is established and then remains on for all packets or it's just off forever. In my case, it's being turned on (with WS=128) for the downstream packets in the handshake.
 
Yeah, Wireshark is allegedly doing the multiplication for me though. I can't say I entirely trust it but it does display both the raw window size and the scaled window size for plenty of other packets in the dump.
Well, the window size is actually a receiver thing anyway and is used to tell the sender how many un-ACKed bytes you can have in flight, so when you see the window going down, it usually means the receiver is not draining its recv buffer fast enough (i.e. it's kind of saying, yeah, I got that last thing you sent, but I'm not able to accept as much right now, so slow down a bit). I was only really interested in the server's SYN window size, as it was smaller than normal and could be an indication that it's allocating a smaller-than-normal send buffer, but that's not an absolute given. The only way to really get a feel for the send buffer size on the server is to calculate the difference between the outgoing SEQ and incoming ACK and see if it plateaus around a certain number while the receiver is still advertising a much bigger window. The problem with this is that you can only make a reasonable guess at the state of the TCP buffers if you capture close to the device you are interested in, and we are capturing at the other end, so you also have to try and factor in the packet flight time.
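(A very rough sketch of that SEQ-minus-ACK idea, with the same lazy-guesswork caveats: file name and server address are placeholders, retransmissions and SACK are ignored, and it doesn't correct for the flight-time problem mentioned above.)

```python
# Sketch: estimate the server's unacknowledged bytes in flight from a client-side
# capture, by tracking the highest SEQ seen from the server against the highest
# ACK sent back to it. A plateau hints at a send-buffer (or cwnd) limit.
# Placeholders/assumptions: download.pcap, SERVER_IP; sequence wraparound,
# retransmissions and SACK are ignored.
from scapy.all import rdpcap, IP, TCP

SERVER_IP = "203.0.113.10"
highest_seq = None   # far edge of data the server has put on the wire
highest_ack = None   # far edge of data we have acknowledged back to it

for pkt in rdpcap("download.pcap"):
    if not pkt.haslayer(TCP):
        continue
    tcp, payload = pkt[TCP], len(pkt[TCP].payload)
    if pkt[IP].src == SERVER_IP and payload:
        end = tcp.seq + payload
        highest_seq = end if highest_seq is None else max(highest_seq, end)
    elif pkt[IP].dst == SERVER_IP and tcp.flags.A:
        highest_ack = tcp.ack if highest_ack is None else max(highest_ack, tcp.ack)
    if highest_seq is not None and highest_ack is not None:
        print(f"in flight ≈ {highest_seq - highest_ack} bytes")
```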

EDIT: I should probably point out that this is all lazy guesswork and nonsense at this stage, so take it with the proverbial :D - it's just that in the 2-3 years I have been using the site, I have not once experienced slow performance, but I'm just down the road from the server and I have lost count of the number of 'bizarre network issues' I have investigated throughout the years that were down to small buffers and high RTTs
 
Well, the window size is actually a receiver thing anyway and is used to tell the sender how many un-ACKed bytes you can have in flight, so when you see the window going down, it usually means the receiver is not draining its recv buffer fast enough (i.e. it's kind of saying, yeah, I got that last thing you sent, but I'm not able to accept as much right now, so slow down a bit). I was only really interested in the server's SYN window size, as it was smaller than normal and could be an indication that it's allocating a smaller-than-normal send buffer, but that's not an absolute given. The only way to really get a feel for the send buffer size on the server is to calculate the difference between the outgoing SEQ and incoming ACK and see if it plateaus around a certain number while the receiver is still advertising a much bigger window. The problem with this is that you can only make a reasonable guess at the state of the TCP buffers if you capture close to the device you are interested in, and we are capturing at the other end, so you also have to try and factor in the packet flight time.

EDIT: I should probably point out that this is all lazy guesswork and nonsense at this stage, so take it with the proverbial :D - it's just that in the 2-3 years I have been using the site, I have not once experienced slow performance, but I'm just down the road from the server and I have lost count of the number of 'bizarre network issues' I have investigated throughout the years that were down to small buffers and high RTTs
Doh. I somehow managed to get the receiver/sender windows back to front while reviewing dumps. V embarrassing! :redface:
It's also entirely possible that while reviewing multiple dumps, I managed to check some of the window sizes in a capture that had missed the handshake (thus no multiplier). Will continue this in your PM thread ;) :thumbsup:
 
EU Prime Time on this platform for me is like: Just avoid it. Seriously, pings have been all fine but the response of the site kills it then. Can check RD at all other times but yeah. To be blunt, if I keep reading excuses on and on (should be better in "place random time here") I wonder if I should extend the premium when the time comes.
 
There are no excuses; it's a Deutsche Telekom issue, as we have been repeating over and over again. All your connected IP addresses are from DTAG as well.

This site consumes 250 terabytes of data a month, and it seems our ISP and DTAG are not aligned on actually using the internet properly, so the best advice is to contact Deutsche Telekom and ask why they are cutting you off.
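(For a sense of scale, purely as back-of-the-envelope arithmetic and assuming the traffic were spread evenly over a 30-day month:)

```python
# 250 TB/month expressed as a sustained average bit rate (even spread assumed).
bytes_per_month = 250e12
seconds_per_month = 30 * 24 * 3600

avg_mbps = bytes_per_month * 8 / seconds_per_month / 1e6
print(f"average ≈ {avg_mbps:.0f} Mb/s")   # roughly 770 Mb/s, before any peak-time factor
```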
 
if I keep reading excuses on and on (should be better in "place random time here")
OK that feels a little unfair. You are already well aware that your problem is not connected with our site being overwhelmed, and so your problem is NOT going to get better in a few days.
There are no excuses; it's a Deutsche Telekom issue, as we have been repeating over and over again. All your connected IP addresses are from DTAG as well.
This, for sure.
Customers of DTAG and a few other ISPs in various parts of the world have had pretty clear feedback from us about this in threads like this one.

Just to help clarify it for everyone, there are essentially two distinct causes for people having download-speed issues on RD.
Firstly, there are those with connectivity problems (like DTAG customers) for whom the problems have often been long-standing, and to whom the only advice we can really offer is to put pressure on their ISP to explain/solve the issue, because we can't. These users will generally have bad download speeds whenever the relevant bit of infrastructure between them and us is congested, which clearly may be hard for them to distinguish from the RD site being slow since it will also tend to happen at peak times, but this is why we've provided them with feedback; we can see when the site server is being slow and when it isn't.
Secondly, from time to time the RD site itself is simply overwhelmed by traffic (e.g. at holiday periods or when some event causes traffic to spike), which of course brings a lot more people to threads like this one; this one is on us for sure, though ultimately it's a transient problem, and as I've said before, we're still looking at ways to mitigate it since the site is continuing to grow in popularity.
 
OK that feels a little unfair. You are already well aware that your problem is not connected with our site being overwhelmed, and so your problem is NOT going to get better in a few days.
When DTAG was bottlenecking, the pings were all over the place; now DL speed and pings are fine, but the response time of the site itself is slow as hell at prime times. I could probably draw the page by hand in the time it takes for all the elements to load.
 
Encountered my first problem with RD today. I have always had good download speeds but for some reason, today a 500MB file takes 40 min to download. I am certain it is not my connection or the web browser since I have downloaded lots of other things today at normal speeds. I have tried to solve the issue but with no luck and I'm now wondering if it is RD that is causing the problem.
 
When DTAG was bottlenecking, the pings were all over the place; now DL speed and pings are fine, but the response time of the site itself is slow as hell at prime times. I could probably draw the page by hand in the time it takes for all the elements to load.
When the RD site's network link is at full capacity (as it is right at this moment for example) the fact that DTAG and i3D don't play nicely together becomes irrelevant. Once we get the network link load down a wee bit, the other issues will (I fully expect) rear their heads again.
Encountered my first problem with RD today. I have always had good download speeds but for some reason, today a 500MB file takes 40 min to download. I am certain it is not my connection or the web browser since I have downloaded lots of other things today at normal speeds. I have tried to solve the issue but with no luck and I'm now wondering if it is RD that is causing the problem.
Yup, see above, it'll likely be our network capacity limiting your download speeds today (and for the last several days).
 
That's unfortunate. I just got a new PC and want to download everything I had on my old one to this one, so I guess it's just gonna take some time then. At least I know I'm not the one with the problem now, so thanks for the quick response :)
 
