
Shared Variable transmit buffer has a large impact



Hi,

I am trying to achieve as high a speed as possible when sending data over the network using shared variables.

I have created a small test application as seen below.

The test sequence consists of writing an array of I32 values to a shared variable (so far located on my local machine) and then measuring the time it takes before the same array can be read back from the variable.

When I run the tests I observe a behavior related to the size of the array: when writing an array of more than 2033 elements the average read+write time is around 0.8 ms, but with a smaller array this time increases to a whopping 31 ms.

This behavior must be related to the built-in transmit buffer, which holds 8 kB, but why is the difference over 30 ms? Even if the buffer isn't full, shouldn't the data still be sent within 10 ms? There has to be some additional delay related to smaller amounts of data.
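For reference, the 2033-element threshold does line up with the buffer size, assuming 4-byte I32 elements and an 8 kB (8192-byte) buffer: 2033 elements × 4 bytes/element = 8132 bytes ≈ 8 kB. So any larger array fills the transmit buffer on a single write, while smaller writes apparently sit waiting for something else.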

In my project the data will rarely fill the buffer, and the Flush VI is not available to me in LV 8.5.1.

Is there another way to speed up the transfer?

[attached image: post-10866-1210683498.png]


If it's throughput you seek, shared variables may not offer the best solution. The way I see it, you have two options.

1. Pad your data so that each write is at least 2033 elements, and make sure the first element holds the number of real elements you wish to process on the other side (see the sketch after option 2).

2. Switch to raw TCP and make your own protocol.
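To illustrate option 1 (sketched in Python rather than LabVIEW, purely to show the idea; MIN_ELEMENTS and the helper names are my own placeholders):

MIN_ELEMENTS = 2033  # threshold observed above; writes at least this big fill the 8 kB buffer

def pad_for_write(data):
    # First element carries the real length, the rest is zero-padded.
    padded = [len(data)] + list(data)
    padded += [0] * max(0, (MIN_ELEMENTS + 1) - len(padded))
    return padded

def unpack_after_read(padded):
    # Receiving side: use the first element to strip the padding again.
    real_length = padded[0]
    return padded[1:1 + real_length]

payload = list(range(10))
assert unpack_after_read(pad_for_write(payload)) == payload

On the LabVIEW side the equivalent would presumably be an Initialize Array plus Replace Array Subset before the Shared Variable write, and an Array Subset after the read.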

I agree that your timing results are troubling. I've seen similar spooky things with network shared variables.


The intended use is to monitor data from a CompactRIO 9012 on a PC.

Unfortunately, the use of Shared Variables is no longer optional. Padding the data might be a solution, although not the prettiest one.

Our original goal was to detect data changes on the cRIO within 10ms on the PC but this might have to be revised.

Thanks


QUOTE (vestman @ May 13 2008, 09:40 AM)

...Our original goal was to detect data changes on the cRIO within 10ms on the PC but this might have to be revised.

Thanks

Ben cast a skeptical eye over the top of his glasses....

A 30 ms goal would make me write up an exhaustive memo about unrealistic goals on Windows.

100 ms would only get a footnote in the spec.

Have fun,

Ben


Thank you for your replies.

I understand that SVs might not be the fastest way to communicate, but the giant increase in time still puzzles me.

Earlier tests have shown that both read and write operations take virtually no time at all, so having to wait that long for the data to arrive is just frustrating.

I also built an .exe of my test VI and saw no improvement at all.


QUOTE (BrokenArrow @ May 13 2008, 01:41 PM)

So Dan, you have experience seeing TCP being faster than Shared Variables? ;)

I have seen the somewhat counter-intuitive behaviour of the TCP routines being a lot faster than Shared Variables in development mode, but once an EXE was made, the TCP approach only yielded a modest speed advantage. There's a lot going on under the hood of Shared Variables (variant VIs and whatnot), but maybe once it is compiled.... ?

I'm not sure about shared variables, but TCP can be made fast in LabVIEW, and you do not even need to go down to the raw socket level. Just get a small VI from the NI site to disable the Nagle algorithm for a TCP network refnum, and you are done: no more delays for small data packets making command-acknowledge type protocols slow.
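If you are curious what that VI boils down to, disabling Nagle is just the TCP_NODELAY socket option; here is the idea in plain Python (the address and message are placeholders, not anything NI-specific):

import socket

# Connect and disable the Nagle algorithm so small packets go out immediately
# instead of being coalesced, which is what slows command-acknowledge protocols.
sock = socket.create_connection(("192.168.1.10", 5000))  # placeholder address
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.sendall(b"PING")     # small request is sent without waiting to batch more data
reply = sock.recv(1024)   # read whatever the peer sends back
sock.close()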

As to being compiled: as far as LabVIEW is concerned, there should be little difference between development system and run-time system performance. If there were a big improvement, the application builder would have to be doing something at the SV engine level, which would be very spooky at best.

Rolf Kalbermatter


QUOTE (rolfk @ May 14 2008, 04:25 AM)

As to being compiled, as far as LabVIEW is concerned there should be little difference between development system and runtime system performance.

Rolf Kalbermatter

Agreed! I wonder if the benchmarking routine could be to blame? Maybe "Tick Count" works differently in an EXE than in the dev environment, and what I'm seeing is the overhead of the Tick Count calls rather than any real time difference in the code between the ticks. (?)
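One way to check would be to time the timer itself, back to back, in both environments; in Python terms the idea looks like this (the loop count is arbitrary):

import time

# Estimate the cost of the timestamp call itself by calling it repeatedly;
# any real benchmark difference should dwarf this per-call overhead.
N = 100_000
start = time.monotonic()
for _ in range(N):
    t = time.monotonic()
end = time.monotonic()
print("per-call timer overhead: %.3f us" % ((end - start) / N * 1e6))

In LabVIEW terms that would just be two Tick Count nodes wired back to back around an empty loop.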

