
Optimising data streaming over TCP/IP - switch in the way


Hi All,

I thought I'd posted on this before, but can't find anything so I may have imagined it.

I have a client-server application whereby the remote server acquires waveforms (typically 1000-16000 data points long) and fires them up on demand to the client. The waveforms are actually sent as a "header" cluster and then a 1D array of U8 integers (i.e. the waveform values). The actual data acquisition takes approximately 4ms (which involves getting the data off a digitiser in the server computer), but the time taken to send the waveform "up the line" seems to vary hugely.

In some circumstances I want to stream these waveforms up from the server to the client. I do this by putting the client "request waveform" VI into a loop and allowing it to run as quickly as it likes (although I have tried limiting the max rate with a wait ms multiple inside the loop). I very seldom get a good, constant rate of waveform transfer between the two PCs. The waveforms seem to stutter on the client PC, running smoothly for 10-100 waveforms or so at a good 50 per second, then stuttering down to 1 per second at worst.

The PCs are networked either by a crossover network cable (with both PCs' Ethernet adapters set to 100Mb/s, full duplex), or via a fibre-optic to Ethernet adapter (again limited to 100Mb/s). In this setup, the performance is at least bearable, but if I then try to run the setup with a network switch in between (Netgear FS105, for info) the situation worsens to where I only get 1 or 2 waveforms a second.

I tried chopping the waveforms up into smaller packets, sending them up, and then rebuilding the waveform. This did have a positive effect (and in fact is how I get acceptable performance without the switch in place). Is there any way of optimising the process of sending data over a TCP connection?
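For what it's worth, the usual pattern behind "chopping and rebuilding" is explicit message framing: prefix each piece with its length so the receiver knows exactly how much to read, since TCP is a byte stream with no message boundaries. A minimal Python sketch (plain sockets, not the LabVIEW STM VIs; the function names are made up for illustration):

```python
import socket
import struct

def send_waveform(sock: socket.socket, header: bytes, samples: bytes) -> None:
    """Length-prefix each part so the receiver knows exactly how much to read."""
    for part in (header, samples):
        sock.sendall(struct.pack(">I", len(part)) + part)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """TCP is a byte stream: a single recv() may return a partial message."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf.extend(chunk)
    return bytes(buf)

def recv_waveform(sock: socket.socket) -> tuple[bytes, bytes]:
    """Read back one header + one U8 sample array, whole, in order."""
    header = recv_exact(sock, struct.unpack(">I", recv_exact(sock, 4))[0])
    samples = recv_exact(sock, struct.unpack(">I", recv_exact(sock, 4))[0])
    return header, samples
```

This is essentially what the STM VIs do internally; the point is that the framing, not the chunk size, is what lets the receiver rebuild the waveform reliably.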

Any ideas on this would be gratefully received!




Check the MTU (maximum transmission unit) settings throughout (various IP components will adapt to smaller settings) and send packets that produce maximum packet sizes (Nagle's algorithm could be involved).

Set up WireShark on another PC and ... tell us what you find.

MS has a white paper somewhere talking about their implementation of the "TCP/IP" stack.

Please let us know what you find.


PS: Do we know your aggregate bandwidth?


Hi Ben,

Thanks for the reply. I should first point out that this is my first foray into the world of writing LabVIEW applications which send data across a network via TCP/IP. I have used the LabVIEW STM (Simple TCP Messaging) VIs in building my application; they simply put some wrappers around the built-in LV TCP VIs to help manage the commands and data types sent over the network. Similarly, I'm not overly experienced with detailed network settings: I can set up a PC to run on a specific IP/subnet etc., and in this case I have had to manually set the link speed of the NICs on the PCs (the fibre link will only work at 100Mb/s and won't allow the PCs to autonegotiate speed).

I guess the simplest question, which I should start with, is:

should I expect to have to manage the size (i.e. length) of the data string I write to the "TCP Write" VI? Or should I expect LabVIEW/Windows to take whatever I write to that VI and send it in the most appropriate way over the network?

(Basically, I want to keep my life and this code as simple as possible whilst making it work as well as it can!)

I looked at the Wikipedia info on Nagle's algorithm (seemed like as good a place to start as any!) and it mentions issues arising from doing lots of writes to the TCP port and then reading. Might it be sensible for me to split my "commands" and "responses" within my code, so I use one port for the "commands" from the client and a separate port for the "responses" from the server? Might that help?

Next then: might I need to make changes to the specific PC networking setups (I saw something about being able to disable the Nagle algorithm in Windows 7 via the registry)?

Finally, I still don't really understand why sticking an unmanaged switch between the two PCs (the two PCs are the only devices on the switch) causes such a marked difference in behaviour...

Any more thoughts? I've not used wireshark before, what information would I be looking to get from it?

Thanks again for your help!



You will need more help than I can give personally, so here are some suggestions.

Go to the Dark-Side and search for TCP/IP and "nathand". Nathan has taken over threads on performance where I left off. Review all of his posts on the subject.

A Google search on TCP and speed should give you a lot of hits from gamers trying to minimize lag. It feels funny reading gaming posts at work, but hey, there is knowledge in those posts.

WireShark may be overwhelming if you don't know networking. WireShark has tutorials and ready-built filters for TCP, so you can start to get an idea of what is happening if you apply the TCP filter. The things you will be looking for are the size of the TCP packets coming from your machine and the time-stamps.

Again, have fun and please report back. All of the corners of our networked world have not been fully explored and documented, so all reports from adventurers like yourself are of great value.

Take care,



Thanks for the info Ben, I will endeavour to read up on it more through the Dark-Side, although right now I have bigger fish to fry on this project - some juddering on waveform transfer is better than the thing not turning on in the first place!

I found this http://b.snapfizzle.com/2009/09/windows-7-nagles-algorithm-and-gaming/ following your first post and gave it a go. It disables the Nagle algorithm in Win 7, and it seems to have improved things significantly. The only minor thing to note is that it seems you need this set on both machines in my setup before you can get the two computers to even talk... (I found this out since I'd set it up on machines A and B and got it working, then replaced B with C before I'd modified the settings on C, and the network didn't even get going...) Since in this project the PCs are likely to be isolated from any other network environments, I'm not worried about it impacting anything else... Otherwise I might have done a bit more research before trying it!

Thanks again for your help



It disables the Nagle algorithm in Win 7. It seems to have improved things significantly.

If you look carefully at some of the LabVIEW posts regarding Nagle's Algorithm, you can find references to (or actual) VIs that you can use to toggle Nagling on a PER CONNECTION BASIS within your LabVIEW code. You certainly can make changes in Windows (via the registry) but this will affect all apps.
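For reference, in plain BSD-style sockets (outside LabVIEW) the per-connection equivalent is the `TCP_NODELAY` socket option, which disables Nagling on that one socket while leaving every other application untouched. A minimal Python sketch (the helper name is hypothetical):

```python
import socket

def connect_no_delay(host: str, port: int) -> socket.socket:
    """Open a TCP connection with Nagle's algorithm disabled for this
    socket only, leaving every other application's sockets untouched."""
    sock = socket.create_connection((host, port))
    # TCP_NODELAY=1: send small writes immediately instead of coalescing them.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

This is the same knob the LabVIEW "TCP NoDelay" VIs flip, just expressed at the OS socket level rather than through the registry.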

Edited by Phillip Brooks


I just found the "TCP NoDelay" property in the "Instr" property node, but you have to wire a VISA name into that property node. I'm using the TCP VIs and only have a TCP refnum. How do I get from one to the other?!

Thanks in advance for any info!



Are you sure you need to go low-level?

We used DataSocket for such issues, and it works great; DataSocket takes care of the actual connection, and we had it running as fast as possible (we started to see throttling at the effective 100 Mbit/s rate). Since then we have moved to gigabit over optical fibre.

In our setup we published the data at the client PC, leaving our ADC-PC as lean as possible.

One thing to check: is it LV or is it the network? You can test this by copying a large file with a tool that shows the copy rate (like Total Commander). Make sure you have good cables; bad cables can kill your transfer rate.
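Another way to take LabVIEW out of the equation is a raw socket throughput test between the two PCs. A rough Python sketch (the function name and defaults are illustrative; run it against any listener that drains the incoming data):

```python
import socket
import time

def measure_throughput(host: str, port: int, total_mb: int = 100) -> float:
    """Blast total_mb of zero bytes at a listening peer and report the
    achieved rate in MB/s, taking LabVIEW out of the measurement."""
    payload = b"\x00" * 65536
    target = total_mb * 1024 * 1024
    sent = 0
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        while sent < target:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.perf_counter() - start
    return sent / elapsed / 1e6
```

If this hits the expected ~10 MB/s on the 100Mb/s link but the LV application doesn't, the bottleneck is in the application layer rather than the cabling or switch.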



I guess that by saying you are using the "Client Request Waveform" VI, you are actually polling for the waveforms rather than streaming. This is really bad for performance, since you get double the amount of network latency.

16000 data points (assuming double-precision numbers) equates to about 125kB. On a 100Mb/s connection you should be getting in the region of 60-80 updates per second (10MB/s ÷ 125kB = 80).

For an example of streaming waveforms, take a look at Transport.lvlib
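Outside LabVIEW, the streaming idea amounts to the server pushing length-prefixed waveforms in a loop with no per-waveform request from the client. A sketch in Python (not the Transport.lvlib API; `acquire` is a stand-in for the digitiser read, and returning `None` is an assumed end-of-acquisition signal):

```python
import socket
import struct

def stream_waveforms(conn: socket.socket, acquire) -> None:
    """Server-side push loop: send length-prefixed waveforms until
    acquire() signals the end by returning None. The client just reads;
    it never pays a request/response round trip per waveform."""
    while True:
        samples = acquire()  # stand-in for reading the digitiser
        if samples is None:
            break
        conn.sendall(struct.pack(">I", len(samples)) + samples)
```

Because nothing waits on a request between waveforms, the only per-waveform cost is the one-way transfer time, not a full round trip.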

Edited by ShaunR

I guess that by saying you are using the "Client Request Waveform" VI, you are actually polling for the waveforms rather than streaming. This is really bad for performance, since you get double the amount of network latency.

That's a good point. Also, is it critical that you get 100% delivery of the waveform? UDP is faster than TCP because there's no ACK involved, if you can tolerate the possibility of missing parts of the waveform.
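If loss were tolerable, a UDP version is little more than one datagram per waveform plus a sequence number so the client can detect drops. A minimal sketch (illustrative only; note that datagrams larger than the path MTU get IP-fragmented, and losing one fragment loses the whole datagram):

```python
import socket

def send_waveform_udp(sock: socket.socket, addr, seq: int, samples: bytes) -> None:
    """One datagram per waveform. The 4-byte sequence number lets the
    receiver notice gaps, since UDP gives no delivery guarantee."""
    sock.sendto(seq.to_bytes(4, "big") + samples, addr)
```

On the receiving side, a jump in the sequence number means a waveform was dropped; the client can then decide whether to ignore the gap or re-request over a separate TCP channel.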


Thanks for all of the replies. I'm going to make a couple of tweaks to the code now and see what happens.

@ShaunR - you're right, I am polling - and I know this will cause a reduction in the possible transfer rate, but what I was seeing (prior to disabling the Nagle algorithm) was a plenty fast enough transfer rate which would sporadically get, say, 10-20 waveforms through fast (>30 waveforms a second) then stall to 2-3 per second (which ties in with the time delay of the ACK etc.). The situation was manageable when the two PCs were connected only with a crossover Ethernet cable, but when I then put a standard switch in between them the fast transfer pretty much died.

In hindsight it is possible that proper streaming of the data might help rather than polling, but I don't need "that much" speed, and I prefer the polling approach in general for this application (live updates are only part of what we need to do). Also, just for info, each point on the waveform is one U8 integer and it gets scaled at the client end.

@asbo - for this application I have other things which I need to be sure arrive properly as well as the waveforms, so I prefer the TCP approach over UDP for that reason.

Thanks again for all the insight, it's been very useful. I'll post back when I've finished my tweaks and let you know how I got on!



Hi All,

So I got the Nagle algorithm switched off using the VIs in asbo's link above. I also got rid of a couple of short command/response items which were being sent to get waveforms, and finally I took out the chopping of data into chunks, and everything now seems to work very smoothly. No stalling or juddering at all.

So, that looks to be the best way for my application (for now at least) so thanks all for your input!




I'll have to confess to having not measured it. I was very pushed for time so just made sure it looked quick and smooth! I don't have access to the hardware to actually do any characterisation now either. When I next get it back I'll have a look at it.


