Slow wired upload speed vs Linux on same hardware

Dawid Oosthuizen 26 Reputation points
2020-09-09T05:18:31.053+00:00

Intel® Ethernet Controller X550-AT2, 10G network interface on ASRock Rack ROMED8-2T with AMD EPYC 7232P processor.
Windows 10 Pro for Workstations, 2004.
Latest Windows updates and Intel drivers installed as of 09/09/2020.

This machine dual-boots the above Windows version and Ubuntu 20.04.
When doing a speed test, I get good performance from Ubuntu, but very poor uploads from Windows. This is on the same machine, the exact same hardware.

The WAN link is 1000Mbps down, 50Mbps up.

This is the Windows speedtest result:
23320-windows-10-speedtest.png

This is the Ubuntu speedtest result:
23325-ubuntu-2004-speedtest.png

I have tried to tweak the adapter's advanced driver settings in Windows, such as disabling LSO, etc. No luck, performance remains poor.

I've also noticed it on another PC running Windows 10 Pro and a laptop running Windows 10 Pro for Workstations; both give the same poor upload performance. My other Ubuntu 20.04 Server machine, and also my phone connected via Wi-Fi, are getting good upload speeds.

I have even taken the Windows laptop and plugged it straight into my incoming WAN connection (bypassing router), and it still gets poor upload speeds.

Incidentally, when the speed test is running, I can see that the upload looks bursty on Windows, as if it is only getting chunks of data here and there, while on Linux and Android it looks the same as the download: the graph is drawn at a consistently high rate and with consistently high values.


50 answers

  1. Sunny Qi 10,906 Reputation points Microsoft Vendor
    2020-09-10T07:29:43.223+00:00

    Hi,

    Thanks for posting here.

    In general, there are several reasons that may cause slow upload speeds.

    I would suggest you try the following methods to see whether the upload speed returns to normal.

    Option 1

    Open Control Panel > Network and Sharing Center > Change adapter settings > right-click the adapter you need to check > click Properties > click Configure > on the Advanced tab, locate Speed & Duplex > select the value 10 Mbps Half Duplex > click OK to see if the issue is resolved.

    23629-image-1.jpg
    23646-image-2.jpg
    23698-image2.jpg

    Option 2

    Press Windows+R to open Run, type gpedit.msc and click OK to open the Local Group Policy Editor > Computer Configuration > Administrative Templates > Network > QoS Packet Scheduler > right-click Limit reservable bandwidth > click Edit > select Enabled and set Bandwidth limit to 0 > click Apply and OK.

    23616-image3.jpg
    23699-image4.jpg

    Option 3

    Click Start, type cmd to open a Command Prompt window with administrator privileges, then enter the following command and press Enter:

    netsh interface tcp set global autotuning=disabled

    Then, in the same window, run regedit to open the Registry Editor.

    Navigate to HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\services\AFD\Parameters

    Right-click to create a new DWORD value, Value name: DefaultSendWindow, Value data: 1640960, Decimal.

    Click OK and reboot the machine to see if the issue is resolved.

    23589-image6.jpg
    23743-image7.jpg

    Option 4

    Repeat the first two steps of Option 3 to open the Registry Editor.

    Navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings

    Right-click to create a new DWORD value, Value name: SocketSendBufferLength, Value data: 4000, Hexadecimal.

    Click OK and reboot the machine to see if the issue is resolved.

    23735-image8.jpg

    Best Regards,
    Sunny Qi

    =======================================================

    If the Answer is helpful, please click "Accept Answer" and upvote it.
    Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.

    2 people found this answer helpful.

  2. Gary Nebbett 5,721 Reputation points
    2020-11-28T12:47:23.54+00:00

    Hello @Egoist ,

    Sorry, I was too quick to cast aspersions on the "Large Send Offload" functionality. After longer consideration, all of the problems seem to arise outside of Windows.

    Let's look at some of the interesting behaviour.

    11:39:17.343617 10.132.84.7.58616 > 24.220.62.52.8080: P 63:124(61) ack 76 win 1026 (DF)  
    11:39:17.343635 truncated-ip 0  
    11:39:17.343654 truncated-ip 0  
    11:39:17.343662 truncated-ip 0  
    11:39:17.343670 10.132.84.7.58616 > 24.220.62.52.8080: . 12412:13872(1460) ack 76 win 1026 (DF)  
    11:39:17.348628 24.220.62.52.8080 > 10.132.84.7.58616: . ack 63 win 251 <nop,nop,sack 4220:5680> (DF)  
    11:39:17.349441 24.220.62.52.8080 > 10.132.84.7.58616: . ack 63 win 274 <nop,nop,sack 7140:8316 4220:5680> (DF)  
    11:39:17.349442 24.220.62.52.8080 > 10.132.84.7.58616: . ack 63 win 297 <nop,nop,sack 9776:11236 7140:8316 4220:5680> (DF)  
    11:39:17.349442 24.220.62.52.8080 > 10.132.84.7.58616: . ack 63 win 320 <nop,nop,sack 9776:12412 7140:8316 4220:5680> (DF)  
    11:39:17.349442 24.220.62.52.8080 > 10.132.84.7.58616: . ack 63 win 343 <nop,nop,sack 9776:13872 7140:8316 4220:5680> (DF)  
    11:39:17.349604 24.220.62.52.8080 > 10.132.84.7.58616: . ack 124 win 343 <nop,nop,sack 9776:13872 7140:8316 4220:5680> (DF)  
    

    First, 61 bytes are sent, then 3 large segments (each carrying 4096 bytes - which are sent as two 1460 byte packets and one 1176 byte packet by the network adapter) and then a 1460 byte packet.

    Those first 61 bytes are first acknowledged in the sixth ack response - long after later packets have been acknowledged with a "sack" (selective acknowledgement). The subsequent acks also indicate that packets are being received by the server "out of order".

    "Out of order" reception typically results from packet loss plus retransmission, but no retransmission is taking place at this time and some packets seem to be "overtaking" others as they cross the network.

    The frequent packet loss quickly reduces the "congestion window", which starts at 10 * MSS and soon falls to just one MSS:

    SeqNo = 648939448, BytesSent = 0, CWnd = 14600  
    SeqNo = 648939449, BytesSent = 0, CWnd = 14600  
    SeqNo = 648939449, BytesSent = 62, CWnd = 14600  
    SeqNo = 648939511, BytesSent = 61, CWnd = 14662  
    SeqNo = 648939572, BytesSent = 4096, CWnd = 14662  
    SeqNo = 648943668, BytesSent = 4096, CWnd = 14662  
    SeqNo = 648947764, BytesSent = 4096, CWnd = 14662  
    SeqNo = 648951860, BytesSent = 2313, CWnd = 14662  
    SeqNo = 648953320, BytesSent = 6815, CWnd = 17643  
    SeqNo = 648942492, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648942492, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648960135, BytesSent = 2920, CWnd = 2920  
    SeqNo = 648945128, BytesSent = 2920, CWnd = 2920  
    SeqNo = 648948048, BytesSent = 1460, CWnd = 4380  
    SeqNo = 648949508, BytesSent = 2922, CWnd = 4382  
    SeqNo = 648953320, BytesSent = 4383, CWnd = 4383  
    SeqNo = 648953320, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648953320, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648953320, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648963055, BytesSent = 4380, CWnd = 4380  
    SeqNo = 648963055, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648967435, BytesSent = 2920, CWnd = 2920  
    SeqNo = 648964515, BytesSent = 2920, CWnd = 2920  
    SeqNo = 648967435, BytesSent = 1460, CWnd = 4380  
    SeqNo = 648964515, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648970355, BytesSent = 2920, CWnd = 3066  
    SeqNo = 648968895, BytesSent = 3066, CWnd = 3066  
    SeqNo = 648971815, BytesSent = 1460, CWnd = 4380  
    SeqNo = 648968895, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648973275, BytesSent = 4526, CWnd = 4526  
    SeqNo = 648977655, BytesSent = 1607, CWnd = 4527  
    SeqNo = 648974735, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648974735, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648974735, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648979115, BytesSent = 4380, CWnd = 4380  
    SeqNo = 648983495, BytesSent = 4383, CWnd = 4383  
    SeqNo = 648987875, BytesSent = 4386, CWnd = 4386  
    SeqNo = 648992255, BytesSent = 1467, CWnd = 4387  
    SeqNo = 648993715, BytesSent = 1861, CWnd = 4781  
    SeqNo = 648990795, BytesSent = 1460, CWnd = 1460  
    SeqNo = 648995175, BytesSent = 2920, CWnd = 2920  
    SeqNo = 648992255, BytesSent = 4380, CWnd = 4380  
    SeqNo = 648998095, BytesSent = 4381, CWnd = 4381  
    SeqNo = 649002475, BytesSent = 4384, CWnd = 4384  
    SeqNo = 649006855, BytesSent = 1977, CWnd = 4897  
    SeqNo = 649003935, BytesSent = 1460, CWnd = 1460  
    SeqNo = 649008315, BytesSent = 3427, CWnd = 3427  
    SeqNo = 649011235, BytesSent = 3429, CWnd = 3429  
    SeqNo = 649014155, BytesSent = 4041, CWnd = 4041  
    SeqNo = 649017075, BytesSent = 4954, CWnd = 4954  
    SeqNo = 649021455, BytesSent = 5867, CWnd = 5867  
    SeqNo = 649027295, BytesSent = 2215, CWnd = 6595  
    SeqNo = 649029510, BytesSent = 1460, CWnd = 6595  
    SeqNo = 649030970, BytesSent = 1460, CWnd = 6595  
    SeqNo = 649022915, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649032430, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649033890, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649035350, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649036810, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649038270, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649039730, BytesSent = 1460, CWnd = 5840  
    SeqNo = 649022915, BytesSent = 1460, CWnd = 1460  
    SeqNo = 649041190, BytesSent = 948, CWnd = 5548  
    SeqNo = 649042139, BytesSent = 0, CWnd = 5549  
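
    The collapse is easy to confirm from the log itself. A small sketch (using a few representative lines from the trace above, and assuming the same `SeqNo = ..., CWnd = ...` format throughout) extracts the CWnd column:

```python
# Sketch: extract the CWnd values from a few representative lines of the trace
# above and show how far the congestion window collapses from its initial value.

trace = """\
SeqNo = 648939448, BytesSent = 0, CWnd = 14600
SeqNo = 648953320, BytesSent = 6815, CWnd = 17643
SeqNo = 648942492, BytesSent = 1460, CWnd = 1460
SeqNo = 648960135, BytesSent = 2920, CWnd = 2920
"""

cwnds = [int(line.rsplit("=", 1)[1]) for line in trace.splitlines()]
mss = 1460
print(cwnds[0] // mss)   # 10 -> initial window of 10 * MSS
print(min(cwnds) // mss)  # 1 -> collapses to a single MSS
```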
    

    The impact that this has on performance can be seen clearly here:

    11:39:18.271541 24.220.62.52.8080 > 10.132.84.7.58616: . ack 8600 win 594 <nop,nop,sack 7140:8316 15332:23607 9776:13872> (DF)  
    11:39:18.275841 24.220.62.52.8080 > 10.132.84.7.58616: . ack 13872 win 616 <nop,nop,sack 9776:10060 15332:23607> (DF)  
    11:39:18.275916 10.132.84.7.58616 > 24.220.62.52.8080: . 13872:15332(1460) ack 76 win 1026 (DF)  
    11:39:18.587669 10.132.84.7.58616 > 24.220.62.52.8080: . 13872:15332(1460) ack 76 win 1026 (DF)  
    11:39:19.196868 10.132.84.7.58616 > 24.220.62.52.8080: . 13872:15332(1460) ack 76 win 1026 (DF)  
    11:39:20.399855 10.132.84.7.58616 > 24.220.62.52.8080: . 13872:15332(1460) ack 76 win 1026 (DF)  
    11:39:20.406807 24.220.62.52.8080 > 10.132.84.7.58616: . ack 23607 win 639 (DF)  
    

    Windows is trying to fill the gap at 13872:15332 and can only send one MSS at a time because the congestion window is now that size (i.e. Windows has to wait until this data is acked before it can send any more). In fact it sends this packet 4 times at ever-increasing intervals until, after roughly two seconds, it is finally acknowledged.
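
    The intervals between those four transmissions (timestamps taken from the trace above) show the classic retransmission-timeout doubling:

```python
# Sketch: the four (re)transmissions of segment 13872:15332 in the trace above.
# Each gap between attempts is roughly double the previous one -- classic
# exponential backoff of the retransmission timer.

times = [18.275916, 18.587669, 19.196868, 20.399855]  # seconds within 11:39
gaps = [b - a for a, b in zip(times, times[1:])]
ratios = [b / a for a, b in zip(gaps, gaps[1:])]
print([round(g, 2) for g in gaps])          # [0.31, 0.61, 1.2]
print(all(1.8 < r < 2.2 for r in ratios))   # True
```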

    The Windows system always seems to be the victim of the network behaviour without itself having put a foot wrong - there appears to be nothing that can be done from the Windows side to improve the situation.

    Gary

    1 person found this answer helpful.

  3. Gary Nebbett 5,721 Reputation points
    2020-11-28T14:33:27.187+00:00

    Hello @Egoist ,

    You may well be right about BBR. This image from the presentation that you mentioned may well be the explanation:

    43299-image.png

    "When to use and not use BBR" mentions "To make things worse, BBR does not treat packet loss as a congestion signal, [...]". The high rate of packet loss in your sample trace really constrains the CUBIC congestion window (since CUBIC does treat packet loss as a strong congestion signal) and leads to the terrible throughput.
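
    The asymmetry can be illustrated numerically. This is only a sketch with illustrative values (not taken from the trace): CUBIC applies a multiplicative decrease (beta = 0.7, per RFC 8312) on every loss signal, while a BBR-style sender ignores loss and keeps its window pinned to the measured bandwidth-delay product:

```python
# Illustrative sketch, not a real congestion-control implementation:
# CUBIC shrinks its window by beta = 0.7 on each loss signal, so a handful of
# loss events in quick succession leaves it near a single MSS. A BBR-style
# window, which does not react to loss, would be unchanged.

def cubic_after_losses(cwnd, losses, beta=0.7):
    for _ in range(losses):
        cwnd *= beta
    return cwnd

mss = 1460
start = 10 * mss
print(round(cubic_after_losses(start, 6)))  # 1718 -> close to a single MSS
print(start)                                # BBR-style window: unchanged
```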

    Gary

    1 person found this answer helpful.

  4. Gary Nebbett 5,721 Reputation points
    2020-12-24T13:31:04.887+00:00

    Hello @mensa84 ,

    Thanks for trying.

    I am coming to the opinion that there is no settable option to improve the situation - functionality seems to be missing from the Microsoft TCP/IP implementation.

    The functionality in question is RACK ("Recent ACKnowledgment"; see The RACK-TLP loss detection algorithm for TCP, https://datatracker.ietf.org/doc/draft-ietf-tcpm-rack/?include_text=1). The RFC for this functionality is still in draft status - I had been looking at draft 14, and I just noticed that draft 15 was published two days ago (2020-12-22). RACK in the Windows TCP/IP implementation is mentioned in this c't article from 2016 (the year of the first draft): https://www.heise.de/select/ct/2016/17/1471609544700372

    It would be useful to try to identify which device is responsible for the high degree of out-of-order delivery. When I trace the route to the speedtest server that you used, this is what I see:

    tracert -w 200 84.116.34.253

    Tracing route to 84.116.34.253 over a maximum of 30 hops

    1 5 ms 3 ms 3 ms 192.168.0.1
    2 * * * Request timed out.
    3 10 ms 14 ms 13 ms riehen-sw1-po-1.gw.imp.ch [157.161.254.101]
    4 24 ms 11 ms 10 ms bsl-mes-sw1-vlan-2140.gw.imp.ch [157.161.251.110]
    5 12 ms 11 ms 13 ms bsl-dsp-sw1-vlan-2100.gw.imp.ch [157.161.251.42]
    6 10 ms 10 ms 15 ms prt-imp-sw7-vlan-2302.gw.imp.ch [157.161.251.50]
    7 11 ms 14 ms 26 ms prt-hea-sw1-vlan-2303.gw.imp.ch [157.161.251.53]
    8 63 ms 48 ms 31 ms prt-cbl-sw1-vlan-2003.gw.imp.ch [157.161.251.9]
    9 11 ms 11 ms 11 ms prt-cbl-core2-ve-3020.gw.imp.ch [157.161.254.153]
    10 10 ms 11 ms 12 ms te0-0-2-1.nr11.b021978-0.bsl01.atlas.cogentco.com [149.6.34.41]
    11 11 ms 10 ms 11 ms te0-0-2-3.rcr11.bsl01.atlas.cogentco.com [154.25.4.221]
    12 13 ms 11 ms 14 ms te0-2-0-1.ccr51.zrh02.atlas.cogentco.com [130.117.2.145]
    13 13 ms 12 ms 12 ms liberty.zrh01.atlas.cogentco.com [130.117.14.230]
    14 29 ms 32 ms 30 ms ch-zrh03a-rc1-ae-9-0.aorta.net [84.116.134.141]
    15 44 ms 43 ms 30 ms de-fra02a-rc1-ae-27-0.aorta.net [84.116.132.1]
    16 28 ms 29 ms 28 ms at-vie01b-rc2-ae-3-0.aorta.net [84.116.136.117]
    17 31 ms * 28 ms at-vie01b-rc1-ae-41-0.aorta.net [84.116.130.26]
    18 29 ms 28 ms 30 ms at-vie01a-ra5-dc-ae-1-306.aorta.net [84.116.138.33]
    19 28 ms 28 ms 27 ms 84.116.34.253

    How many hops are there from your system to the first common hop that we share on the path to the speedtest server?

    Gary

    1 person found this answer helpful.

  5. Gary Nebbett 5,721 Reputation points
    2021-01-04T14:11:21.527+00:00

    Hello @mensa84 ,

    I would guess that the "re-ordering" takes place as the traffic traverses the 5G part of the connection. Further guesses are that there is nothing that can be done short-term to improve the situation and that the Microsoft developers responsible for the TCP/IP stack are aware of the problem (but it may take a long time before any improvement is made available).

    Here is a detailed explanation of what is happening:

    22:18:14.936343 192.168.1.10.50647 > 84.116.34.253.8080: . 107727:109147(1420) ack 917 win 1022 (DF)
    22:18:14.936344 192.168.1.10.50647 > 84.116.34.253.8080: . 109147:110567(1420) ack 917 win 1022 (DF)
    22:18:14.936344 192.168.1.10.50647 > 84.116.34.253.8080: . 110567:111987(1420) ack 917 win 1022 (DF)
    22:18:14.936344 192.168.1.10.50647 > 84.116.34.253.8080: . 111987:113407(1420) ack 917 win 1022 (DF)
    22:18:14.936344 192.168.1.10.50647 > 84.116.34.253.8080: . 113407:114827(1420) ack 917 win 1022 (DF)
    22:18:14.936345 192.168.1.10.50647 > 84.116.34.253.8080: . 114827:116247(1420) ack 917 win 1022 (DF)
    22:18:14.936345 192.168.1.10.50647 > 84.116.34.253.8080: . 116247:117667(1420) ack 917 win 1022 (DF)
    22:18:14.936345 192.168.1.10.50647 > 84.116.34.253.8080: P 117667:118453(786) ack 917 win 1022 (DF)
    22:18:14.936377 192.168.1.10.50647 > 84.116.34.253.8080: . 118453:119873(1420) ack 917 win 1022 (DF)
    22:18:14.936377 192.168.1.10.50647 > 84.116.34.253.8080: . 119873:121293(1420) ack 917 win 1022 (DF)
    22:18:14.936377 192.168.1.10.50647 > 84.116.34.253.8080: . 121293:122713(1420) ack 917 win 1022 (DF)
    22:18:14.936378 192.168.1.10.50647 > 84.116.34.253.8080: . 122713:124133(1420) ack 917 win 1022 (DF)
    22:18:14.936378 192.168.1.10.50647 > 84.116.34.253.8080: . 124133:125553(1420) ack 917 win 1022 (DF)
    22:18:14.936378 192.168.1.10.50647 > 84.116.34.253.8080: . 125553:126973(1420) ack 917 win 1022 (DF)
    22:18:14.936378 192.168.1.10.50647 > 84.116.34.253.8080: . 126973:128393(1420) ack 917 win 1022 (DF)
    22:18:14.936379 192.168.1.10.50647 > 84.116.34.253.8080: . 128393:129813(1420) ack 917 win 1022 (DF)
    22:18:14.936379 192.168.1.10.50647 > 84.116.34.253.8080: . 129813:131233(1420) ack 917 win 1022 (DF)
    22:18:14.936379 192.168.1.10.50647 > 84.116.34.253.8080: . 131233:132653(1420) ack 917 win 1022 (DF)
    22:18:14.936379 192.168.1.10.50647 > 84.116.34.253.8080: P 132653:133055(402) ack 917 win 1022 (DF)
    22:18:14.936443 192.168.1.10.50647 > 84.116.34.253.8080: . 133055:134475(1420) ack 917 win 1022 (DF)
    22:18:14.936444 192.168.1.10.50647 > 84.116.34.253.8080: . 134475:135895(1420) ack 917 win 1022 (DF)
    22:18:14.936444 192.168.1.10.50647 > 84.116.34.253.8080: . 135895:137315(1420) ack 917 win 1022 (DF)
    22:18:14.936444 192.168.1.10.50647 > 84.116.34.253.8080: . 137315:138735(1420) ack 917 win 1022 (DF)
    22:18:14.936444 192.168.1.10.50647 > 84.116.34.253.8080: . 138735:140155(1420) ack 917 win 1022 (DF)
    22:18:14.936445 192.168.1.10.50647 > 84.116.34.253.8080: . 140155:141575(1420) ack 917 win 1022 (DF)
    22:18:14.936445 192.168.1.10.50647 > 84.116.34.253.8080: . 141575:142995(1420) ack 917 win 1022 (DF)
    22:18:14.936445 192.168.1.10.50647 > 84.116.34.253.8080: . 142995:144415(1420) ack 917 win 1022 (DF)
    22:18:14.936445 192.168.1.10.50647 > 84.116.34.253.8080: . 144415:145835(1420) ack 917 win 1022 (DF)
    22:18:14.936445 192.168.1.10.50647 > 84.116.34.253.8080: . 145835:147255(1420) ack 917 win 1022 (DF)
    22:18:14.936446 192.168.1.10.50647 > 84.116.34.253.8080: . 147255:148675(1420) ack 917 win 1022 (DF)
    22:18:14.936446 192.168.1.10.50647 > 84.116.34.253.8080: P 148675:149461(786) ack 917 win 1022 (DF)
    22:18:14.936474 192.168.1.10.50647 > 84.116.34.253.8080: . 149461:150881(1420) ack 917 win 1022 (DF)
    22:18:14.936475 192.168.1.10.50647 > 84.116.34.253.8080: . 150881:152301(1420) ack 917 win 1022 (DF)
    22:18:14.936475 192.168.1.10.50647 > 84.116.34.253.8080: . 152301:153721(1420) ack 917 win 1022 (DF)
    22:18:14.936475 192.168.1.10.50647 > 84.116.34.253.8080: . 153721:155141(1420) ack 917 win 1022 (DF)
    22:18:14.936475 192.168.1.10.50647 > 84.116.34.253.8080: . 155141:156561(1420) ack 917 win 1022 (DF)
    22:18:14.936475 192.168.1.10.50647 > 84.116.34.253.8080: . 156561:157981(1420) ack 917 win 1022 (DF)
    22:18:14.936476 192.168.1.10.50647 > 84.116.34.253.8080: . 157981:159401(1420) ack 917 win 1022 (DF)
    22:18:14.936476 192.168.1.10.50647 > 84.116.34.253.8080: . 159401:160821(1420) ack 917 win 1022 (DF)
    22:18:14.936476 192.168.1.10.50647 > 84.116.34.253.8080: . 160821:162241(1420) ack 917 win 1022 (DF)
    22:18:14.936476 192.168.1.10.50647 > 84.116.34.253.8080: . 162241:163661(1420) ack 917 win 1022 (DF)
    22:18:14.936477 192.168.1.10.50647 > 84.116.34.253.8080: . 163661:165081(1420) ack 917 win 1022 (DF)
    22:18:14.936477 192.168.1.10.50647 > 84.116.34.253.8080: P 165081:165867(786) ack 917 win 1022 (DF)
    22:18:14.936503 192.168.1.10.50647 > 84.116.34.253.8080: . 165867:167287(1420) ack 917 win 1022 (DF)
    22:18:14.936503 192.168.1.10.50647 > 84.116.34.253.8080: . 167287:168707(1420) ack 917 win 1022 (DF)

    Now some acknowledgements are made available to the TCP stack, all grouped together (as a result of the "Interrupt Moderation" that we discussed earlier):

    22:18:14.962886 84.116.34.253.8080 > 192.168.1.10.50647: . ack 102047 win 525 <nop,nop,sack 117667:118453> (DF)
    22:18:14.967281 84.116.34.253.8080 > 192.168.1.10.50647: . ack 102047 win 525 <nop,nop,sack 104887:106307 117667:118453> (DF)
    22:18:14.967282 84.116.34.253.8080 > 192.168.1.10.50647: . ack 106307 win 548 <nop,nop,sack 117667:118453> (DF)
    22:18:14.967282 84.116.34.253.8080 > 192.168.1.10.50647: . ack 107727 win 570 <nop,nop,sack 117667:118453> (DF)
    22:18:14.967282 84.116.34.253.8080 > 192.168.1.10.50647: . ack 107727 win 570 <nop,nop,sack 109147:110567 117667:118453> (DF)
    22:18:14.967282 84.116.34.253.8080 > 192.168.1.10.50647: . ack 107727 win 570 <nop,nop,sack 111987:113407 109147:110567 117667:118453> (DF)
    22:18:14.967282 84.116.34.253.8080 > 192.168.1.10.50647: . ack 107727 win 570 <nop,nop,sack 109147:113407 117667:118453> (DF)
    22:18:14.967283 84.116.34.253.8080 > 192.168.1.10.50647: . ack 113407 win 592 <nop,nop,sack 117667:118453> (DF)
    22:18:14.967283 84.116.34.253.8080 > 192.168.1.10.50647: . ack 114827 win 614 <nop,nop,sack 117667:118453> (DF)
    22:18:14.967283 84.116.34.253.8080 > 192.168.1.10.50647: . ack 114827 win 614 <nop,nop,sack 116247:118453> (DF)

    The last "ack" acknowledging 107727 also selectively acknowledges 118453 - something that was sent 7 (seven!) segments later than 107727 but which arrived earlier. This causes the TCP stack to conclude that the segment has been lost and to retransmit it (shown below). Because a group of acknowledgements were delivered at once, the trace above shows the segment at 107727 finally being acknowledged before the segment was retransmitted; this is because the stack is still "working through" the group and had not seen/processed the crucial acknowledgement at the time that the decision to retransmit was made.

    22:18:14.967327 192.168.1.10.50647 > 84.116.34.253.8080: . 168707:170127(1420) ack 917 win 1022 (DF)
    22:18:14.967336 192.168.1.10.50647 > 84.116.34.253.8080: . 107727:109147(1420) ack 917 win 1022 (DF)

    Later, the TCP stack is unambiguously informed that the retransmitted segment was received twice via the "D-SACK block" mechanism:

    22:18:14.997584 84.116.34.253.8080 > 192.168.1.10.50647: . ack 168707 win 786 <nop,nop,sack 107727:109147 171547:172967> (DF)

    The Windows TCP stack seems to be able to cope with segments being delivered out of order provided the out-of-order "distance" is below 3 or so segments, but 7 segments is well beyond its capabilities.
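
    That "3 or so segments" limit is the classic duplicate-ACK heuristic. As a simplified sketch (real stacks are more subtle than this): each later segment that arrives ahead of a delayed one generates one duplicate ack for it, and three duplicate acks are conventionally treated as loss:

```python
# Simplified sketch of the duplicate-ACK loss heuristic. Each segment that
# overtakes a delayed one produces a duplicate ack for the delayed segment;
# reaching the conventional threshold of 3 dup-acks triggers a (here needless)
# retransmission.

DUPACK_THRESHOLD = 3

def spurious_retransmit(reorder_distance):
    """True if a segment overtaken by `reorder_distance` later segments
    would be retransmitted even though it was merely delayed, not lost."""
    return reorder_distance >= DUPACK_THRESHOLD

print(spurious_retransmit(2))  # False -> small reordering is tolerated
print(spurious_retransmit(7))  # True  -> the 7-segment case in the trace
```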

    The retransmission causes the "congestion window" to be reduced, and it is this that reduces the throughput. This pattern of retransmission occurs many times during the performance test.

    The trace of the Microsoft-Windows-TCPIP provider shows that the receipt of the "D-SACK block" has no impact - it could/should reopen (perhaps just partially) the "congestion window" but it does nothing.

    The trace also records a variable shown as "RackReoWind" which sounds as though it could be used to increase the size of the "re-ordering window", but it is just reported as zero throughout the trace.
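
    For comparison, the core idea of RACK from the draft cited earlier can be sketched like this (the times and reo_wnd values are illustrative, and this omits RACK's dynamic adaptation of the window): a segment is marked lost only when a segment sent more than a reordering window (reo_wnd) *after* it has already been delivered, rather than after a fixed count of duplicate acks:

```python
# Hedged sketch of the RACK loss-detection idea (illustrative values only):
# a segment is considered lost when the most recently delivered segment was
# sent more than reo_wnd seconds after it. A large enough reo_wnd absorbs
# reordering that the dup-ack heuristic would misclassify as loss.

def rack_lost(seg_send_time, newest_delivered_send_time, reo_wnd):
    return newest_delivered_send_time - seg_send_time > reo_wnd

# Segment sent at t=0.0; a segment sent 5 ms later has just been acked.
print(rack_lost(0.0, 0.005, reo_wnd=0.001))  # True  -> tiny window: retransmit
print(rack_lost(0.0, 0.005, reo_wnd=0.010))  # False -> larger window tolerates it
```

    A non-zero, adaptive reo_wnd is precisely what would absorb the 7-segment reordering seen here, which is why the always-zero "RackReoWind" in the trace is suspicious.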

    Gary

    1 person found this answer helpful.