infinitenothing Posted July 14, 2022

Has anyone done a bandwidth test to see how much data they can push through a 10GbE connection? I'm currently seeing ~2 Gbps, with one logical processor at 100%. I could try to push more, but I'm wondering what other people have seen out there. I'm using a packet structure similar to STM. I bet jumbo frames would help.

Processor on the PC that transmits the data: Intel(R) Xeon(R) CPU E3-1515M v5 @ 2.80GHz, 2808 MHz, 4 Core(s), 8 Logical Processor(s)
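For context, STM-style framing boils down to a size header followed by the payload. A minimal sketch of the idea in Python, assuming a plain 4-byte big-endian length prefix (the real STM library puts more in its header, so details differ):

```python
import socket
import struct

def send_packet(conn: socket.socket, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the payload itself.
    conn.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(conn: socket.socket, n: int) -> bytes:
    # Keep reading until exactly n bytes have arrived.
    buf = bytearray()
    while len(buf) < n:
        data = conn.recv(n - len(buf))
        if not data:
            raise ConnectionError("peer closed mid-packet")
        buf += data
    return bytes(buf)

def recv_packet(conn: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(conn, 4))
    return recv_exact(conn, length)
```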
hooovahh Posted July 14, 2022

I'd first ignore LabVIEW and just look at the achievable limit using command-line tools. I used iperf in the past between two computers, with one set up to be the server and one the client. The fact that you see 100% makes me think there is a different bottleneck.
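For anyone repeating this, a baseline iperf3 run looks like the following (standard iperf3 flags; the address is a placeholder). If iperf3 can't fill the pipe either, the bottleneck is below LabVIEW, in the NIC, driver, or TCP stack.

```
# On the receiving machine:
iperf3 -s

# On the sending machine, pointing at the server's IP, for 30 seconds:
iperf3 -c 192.168.1.10 -t 30
```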
Rolf Kalbermatter Posted July 15, 2022

13 hours ago, infinitenothing said: Has anyone done a bandwidth test to see how much data they can push through a 10GbE connection? [...]

I definitely echo hooovahh's remark. The LabVIEW TCP nodes may limit the effectively reachable throughput, since they do their own intermediate buffering that adds some delay to the read and write operations, but they use select() calls to asynchronously control the socket, which should do a highly efficient yield on the CPU when there is nothing to do yet for a socket. And the buffer copies themselves should not be able to max out your CPU: 2 Gbps comes down to 250 MB/s, which, even if you account for double buffering (once in LabVIEW and once in the socket), should not cause a 100% CPU load. Or did you somehow force your TCP server and client VIs into the UI thread? That could have pretty adverse effects, but it would also be noticeable in that your LabVIEW GUI starts to get very sluggish.
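As a cross-check outside LabVIEW, a bare-socket sender gives an upper bound on what the stack will carry. A minimal sketch in Python, assuming a listener is already running at the placeholder address and port:

```python
import socket
import time

HOST, PORT = "192.168.1.10", 6000   # placeholder server address
BLOCK = b"\x00" * (256 * 1024)      # 256 KiB writes, comparable to a large TCP Write

sock = socket.create_connection((HOST, PORT))
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sent = 0
start = time.perf_counter()
while time.perf_counter() - start < 10.0:   # run for ~10 seconds
    sock.sendall(BLOCK)
    sent += len(BLOCK)

elapsed = time.perf_counter() - start
print(f"{sent * 8 / elapsed / 1e9:.2f} Gbps")
sock.close()
```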
infinitenothing Posted July 15, 2022

I got similar performance from iperf. My question then is: what knobs do I have to tweak to get closer to 10 Gbps?

I attached my benchmark code if anyone is curious about the 100% CPU. I see that on the server, which is receiving the data, not on the client, which is sending. The client's busiest logical processor is at 35% CPU. The server still has one logical processor at 100% use, and as far as I can tell, it's all used in the TCP Read primitive.

Attachment: tcp bandwidth test Folder.zip
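The receiving side is where the pegged core is, so here is a matching receive loop for comparison (Python sketch; port and chunk size are placeholders, matching the sender above). A plain blocking read loop like this should sit near 0% CPU while waiting, which makes read size one of the first knobs to check: reading in small chunks inflates per-call overhead.

```python
import socket
import time

PORT = 6000                      # must match the sender sketch above
CHUNK = 256 * 1024               # read in large chunks; tiny reads inflate per-call cost

srv = socket.create_server(("", PORT))
conn, _ = srv.accept()

received = 0
start = time.perf_counter()
while True:
    data = conn.recv(CHUNK)
    if not data:                 # sender closed the connection
        break
    received += len(data)

elapsed = time.perf_counter() - start
print(f"{received * 8 / elapsed / 1e9:.2f} Gbps")
conn.close()
srv.close()
```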
Rolf Kalbermatter Posted July 17, 2022

100% CPU load on the server would indicate some form of "greedy" loop. If you create a loop in LabVIEW that has no means of throttling its speed, it will consume 100% of the CPU core it is assigned to, even if there is nothing in the loop and it effectively does nothing, very fast. More precisely, that loop will consume whatever is left of that core after other VI clumps have had their chance to snoop some time off of it.
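To make the pattern concrete in text form (Python here, since a LabVIEW diagram can't be pasted), the difference between a greedy and a throttled read loop is whether the call blocks:

```python
import socket

def greedy_reader(conn: socket.socket) -> None:
    """Zero-timeout polling: spins flat out between packets and pegs a
    core, much like a LabVIEW loop with no throttling."""
    conn.settimeout(0.0)          # non-blocking mode
    while True:
        try:
            data = conn.recv(65536)
        except BlockingIOError:
            continue              # nothing yet; loop again immediately
        if not data:
            return                # peer closed the connection

def blocking_reader(conn: socket.socket) -> None:
    """Blocking read: the thread sleeps in the kernel until data
    arrives, so CPU use stays near zero while waiting."""
    conn.settimeout(None)         # fully blocking mode
    while True:
        data = conn.recv(65536)
        if not data:
            return
```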
infinitenothing Posted July 19, 2022

I submitted my code. I don't think I had any greedy loops. I wonder if there's a greedy section inside TCP Read, though.
ShaunR Posted July 19, 2022

1 hour ago, infinitenothing said: I submitted my code. I don't think I had any greedy loops. [...]

Running both client and server on localhost, the CPU usage is about 30% across all cores, with one of the cores maxing out at about 70%. I would suggest you try the same and see if you still get 100%. Doing this isolates the network layers from the software. I suspect you will find that your 2 Gbps limit is also present on localhost.
infinitenothing Posted July 19, 2022

Localhost is a little faster: 3.4 Gbps, seeing 80% CPU use on the busiest logical processor and 35% averaged over all processors.

iperf on localhost with default options isn't doing great: ~900 Mbps, but with very little CPU use. I suspect there's a better tool for Windows.
ShaunR Posted July 20, 2022

12 hours ago, infinitenothing said: Localhost is a little faster: 3.4 Gbps

That's not good at all. With some fiddling you might get 4-5 Gb/s, but that's what you'd expect from a low-end laptop. Are you sure it's Gb/s and not GB/s? You should be hoping for at least 20 Gb/s+.

On 7/14/2022 at 8:34 PM, infinitenothing said: I bet jumbo frames would help.
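On the jumbo-frames idea: before benchmarking with a 9000-byte MTU, it's worth confirming that every hop actually passes jumbo frames. On Windows, one quick check is a don't-fragment ping sized just under the MTU (the address is a placeholder; 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header):

```
ping -f -l 8972 192.168.1.10
```

If that fails or reports that fragmentation is needed, jumbo frames aren't enabled end to end and the NIC setting alone won't help.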
Phillip Brooks Posted July 20, 2022

15 hours ago, infinitenothing said: Localhost is a little faster: 3.4 Gbps [...] I suspect there's a better tool for Windows.

Are you using the parallel streams option in iperf3? This will help saturate the link.

https://documentation.meraki.com/General_Administration/Tools_and_Troubleshooting/Troubleshooting_Client_Speed_using_iPerf#Parallel_Streams_2
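For reference, parallel streams in iperf3 are a single flag, e.g. four streams for 30 seconds against a placeholder address:

```
iperf3 -c 192.168.1.10 -P 4 -t 30
```

Each stream is its own TCP connection, so per-connection limits (window size, a single receive path) are spread across several connections instead of one.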
infinitenothing Posted July 20, 2022

10 hours ago, ShaunR said: That's not good at all. [...] Are you sure it's Gb/s and not GB/s? You should be hoping for at least 20 Gb/s+.

I tested two other computers. Interestingly, I found that on those computers the consumer looking for the end condition couldn't keep up. I would think a U8 comparison would be reasonably speedy, but once I stopped checking the whole array, I could get 11 Gbps. The video was pretty useless, as the manufacturer doesn't have recommended settings as far as I know. I don't know if I have the patience to fine-tune it on my own.

6 hours ago, Phillip Brooks said: Are you using the parallel streams option in iperf3? [...]

Parallel helps, and I don't understand why: the improvement is a few times greater than you'd expect from multiplying the single-worker rate by the number of workers. I'm more interested in single connections at this time, though.
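On the end-condition check: the usual fix is exactly what's described above, scanning only the newly read bytes rather than the whole accumulated array, which turns an O(n²) loop into O(n). A sketch of the idea in Python (the terminator byte and chunk size are made-up placeholders; the actual benchmark is LabVIEW code):

```python
import socket

TERMINATOR = 0xFF                # hypothetical end-of-stream marker byte
CHUNK = 256 * 1024

def read_until_terminator(conn: socket.socket) -> bytearray:
    buf = bytearray()
    while True:
        data = conn.recv(CHUNK)
        if not data:             # peer closed before sending a terminator
            break
        buf += data
        # Scan only the chunk that just arrived, not the whole buffer;
        # rescanning buf every pass is what makes the consumer fall behind.
        if TERMINATOR in data:
            break
    return buf
```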