TCP vs. UDP Question


Problem: For a project, I am testing possible methods of data transfer between applications. Sparing you the details, the three methods being tested are TCP, UDP, and DLLs.

Approach: I designed a basic LabVIEW VI that passes packets of data to itself via each of the three methods. Packets are usually 64 bytes, and the test usually loops 1,000 to 20,000 times. The elapsed time is measured so the methods can be compared for speed and accuracy.
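For reference, this kind of loopback timing test looks roughly like the following in plain (non-LabVIEW) socket terms. This is a minimal Python sketch: the packet size and loop count follow the numbers above, while everything else (port choice, payload contents) is illustrative.

```python
# Minimal sketch of a loopback UDP timing test (Python, not LabVIEW).
import socket
import time

N = 10_000           # transfers per run; the test above uses 1,000-20,000
PAYLOAD = b"x" * 64  # 64-byte packet, as above

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))   # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = rx.getsockname()

start = time.perf_counter()
for _ in range(N):
    tx.sendto(PAYLOAD, dest)
    rx.recvfrom(2048)       # read each packet back before sending the next
elapsed = time.perf_counter() - start
print(f"{elapsed / N * 1e6:.1f} us per transfer")

tx.close()
rx.close()
```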

Results: I have read that UDP is a faster protocol, often at the expense of accuracy. Like others working on the project, I have found the LabVIEW DLL functions to be very unstable, sometimes crashing LabVIEW altogether. My concern is that the results show TCP to be about 5 times faster than UDP, yet only about half the speed of DLL transfers (i.e. DLL = 0.1 ms per transfer, TCP = 0.2 ms per transfer, UDP = 1.0 ms per transfer).

Is this reasonable? Any ideas why these results contradict general expectations? All suggestions would be appreciated.

- Philip

The performance difference may be related to how you are sending the packets. If you look at the UDP examples, the UDP Sender has a Broadcast/Remote Host Only boolean. Setting the address to 0xFFFFFFFF (boolean true; broadcast) forces the packets out through the hardware onto the wire. Specifying a hostname of localhost resolves to 127.0.0.1, and the OS loops the data back before it reaches the physical interface.
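In plain BSD-socket terms, the two destination choices look like this. A hedged Python sketch: the port number is arbitrary, and 0xFFFFFFFF is 255.255.255.255.

```python
# Loopback vs. broadcast destinations for a UDP sender (Python sketch).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# localhost / 127.0.0.1: the OS loops the datagram back before the NIC.
sock.sendto(b"ping", ("127.0.0.1", 61557))

# Broadcast (0xFFFFFFFF = 255.255.255.255): requires SO_BROADCAST and
# forces the packet out onto the wire to every host on the segment.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"ping", ("255.255.255.255", 61557))

sock.close()
```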

TCP always includes a source and destination address, so the TCP packets are likely looped back before the physical interface (in the OS). You should really try to do these tests with two computers and distinct IP addresses.

Look at the TX/RX LEDs on your Ethernet controller, and listen for people complaining that the network is slow when you're performing UDP tests. I managed to knock some people out of their database while I was testing my UDP implementation :oops:

If you're setting the UDP packet max size to 64, this could also be a problem. Leave this input unwired. From the online help: "max size is the maximum number of bytes to read. The default is 548. Windows If you wire a value other than 548 to this input, Windows might return an error because the function cannot read fewer bytes than are in a packet."

UDP datagrams have a header that indicates the data size. If the length of the data received by the OS does not match this header, the packet is invalid and will never be passed to LabVIEW.

Place the UDP Read function in a tight while loop and pass the output to a queue. As soon as the data is stuffed into the queue, the while loop will try to retrieve the next message. The OS can buffer received UDP messages. As an example, try setting the UDP Sender example VI to 1000 messages and change the diagram's wait to 1 ms. Change the UDP Receiver example VI to 10 ms on the block diagram. Run the receiver, then the sender.

Note! Don't open and close the UDP Socket between reads or you will thrash the OS! Open the socket, place the handle ID in a shift register, and then close the handle outside the while loop. To avoid memory hogging, set an upper limit for the number of elements in the queue based on your expected receive rate and the interval that you intend to process the data.
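The same read-loop-plus-queue pattern, sketched in Python rather than LabVIEW (the port, buffer size, and queue bound are illustrative, not from this thread):

```python
# Tight read loop feeding a bounded queue; socket opened once, closed once.
import queue
import socket
import threading

PORT = 61557
inbox = queue.Queue(maxsize=10_000)  # bounded, so a slow consumer can't hog memory

def reader(stop: threading.Event) -> None:
    # Open the socket once -- the analogue of keeping the connection ID in a
    # shift register instead of opening/closing around every read.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    sock.settimeout(0.1)             # lets the loop notice the stop flag
    try:
        while not stop.is_set():
            try:
                data, _ = sock.recvfrom(2048)
            except socket.timeout:
                continue             # no datagram this interval; read again
            inbox.put(data)          # hand off, then immediately read the next
    finally:
        sock.close()                 # close once, outside the loop

stop = threading.Event()
threading.Thread(target=reader, args=(stop,), daemon=True).start()
# ...a consumer drains inbox.get() at whatever interval suits it...
```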

I've successfully read UDP messages twice your size at a 400 uSec rate. The data included a U32 counter to monitor for dropped messages. On a dedicated segment, I never experienced a missed UDP message.

Thanks a lot for the help, pbrooks. :thumbup:

Firstly, your advice led me to find a fault in my code: for the UDP address, I had put "localhost" (as in the examples?). I suspected it was broadcasting (as you said), and by changing it to "127.0.0.1" it now transfers about 10x faster than before! (about 100 uSec for an 8-byte packet)

I think you addressed my new problem (I am losing UDP packets bigger than 8 bytes or so); I think the loop is going too fast. Is that what you meant by "Place the UDP Read function in a tight while loop and pass the output to a queue"? I don't really understand what you mean by that paragraph.

Thanks again, cheers,

Philip

I think you addressed my new problem (I am losing UDP packets bigger than 8 bytes or so); I think the loop is going too fast. Is that what you meant by "Place the UDP Read function in a tight while loop and pass the output to a queue"? I don't really understand what you mean by that paragraph.

Remove any Wait or Wait Until Next ms Multiple calls from your while loop. The UDP demo has a fixed 10 ms wait between reads. Do not specify a max size for your messages; leave this connector unwired. The OS determines whether a UDP datagram is valid, and delivers it to LabVIEW regardless of its length. If you're sourcing 8-byte UDP messages from another VI, then the receiving VI will receive 8-byte messages.
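A quick Python illustration of that last point: asking for a generous number of bytes still returns exactly one datagram of its original length (port and payload are made up).

```python
# UDP preserves datagram boundaries: a large read size returns one datagram.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"12345678", rx.getsockname())  # one 8-byte datagram

data, _ = rx.recvfrom(2048)  # ask for up to 2048 bytes...
print(len(data))             # ...and still get exactly 8

tx.close()
rx.close()
```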

For the timeout, use whatever is reasonable for you; start with 100 ms. If the timeout occurs, the function will exit and the error cluster will return a code of 56. This simply means that no data was received. Check the code, and ignore it if error = 56. If a UDP datagram arriving on the listening port (regardless of length) passes its checksum, the OS will pass it to LabVIEW, and LabVIEW will immediately exit the Read function. I can't understand why you would be running "too fast" or "dropping messages", considering that the OS buffers UDP messages.

Make sure that you are not closing and re-opening the UDP session handle after each UDP read. This would likely result in your CPU utilization reaching 100% and datagrams being missed.

I've attached an example that shows the technique I've used successfully. (LV 6.1)

Download File:post-949-1138374261.vi

pbrooks,

I have tried all your advice, including the code you posted. Unfortunately, my problem remains the same in either case. It appears to me that the messages might not be buffering properly, and hence that it is 'sending too fast.' I may well be wrong.

It is possible my code has some logical error. Maybe you could take a look at it below:

post-3833-1138623426.gif?width=400

The code has 5 steps: open connections, start timing, execute transfers, stop timing, close connections. Its sole purpose is to send n packets as quickly as possible.

Any ideas?

It is possible my code has some logical error. Maybe you could take a look at it below:

The only thing that catches my eye is that you are comparing the Expected Data Arrival Size (Bytes), which I assume to be static, to a value that increases with every UDP receive (shift register + length of string) :question: . I think you want to wire the equality check to the output of the String Length function, not to the running sum of the lengths read. You could also multiply the iteration value by the Expected Data Arrival Size (Bytes) and compare that to your shift register's current value.

You could also multiply the iteration value by the Expected Data Arrival Size (Bytes) and compare that to your shift register's current value.

Sorry, that was unclear. "The Expected Data Arrival Size (Bytes)" is already, as you said, the product of the string length and the iteration value. So sending 8 bytes 10,000 times requires 80,000 bytes to be received. This is not achieved; instead, the number of bytes read is a seemingly random value, usually ranging from 20k to 60k.

This method works 100% for TCP, and also works for UDP when I put a 1 ms Wait timer inside the send loop.
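One mechanism consistent with this symptom, though not stated in the thread: the OS does buffer UDP datagrams, but only up to the socket's receive buffer (SO_RCVBUF); datagrams arriving while that buffer is full are silently dropped. A Python sketch that reproduces the effect (counts are illustrative):

```python
# Blast datagrams without reading; anything beyond the OS receive buffer
# (SO_RCVBUF) is silently dropped -- consistent with the losses above.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(0.1)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = rx.getsockname()

for _ in range(10_000):
    tx.sendto(b"x" * 8, dest)  # send as fast as possible, never reading

received = 0
try:
    while True:                # now drain whatever the buffer actually held
        rx.recvfrom(64)
        received += 1
except socket.timeout:
    pass
print(f"received {received} of 10000 datagrams")

tx.close()
rx.close()
```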

This method works 100% for TCP, and also works for UDP when I put a 1 ms Wait timer inside the send loop.

1 ms is the smallest delay you can place in a loop. Your real application won't run locally like your test does, so I suggest that you connect two computers and pass some traffic over a real network segment. There will be delays in your source data and network connection that can't be simulated the way you are trying.

The best you could try is to use a Quotient & Remainder function in your send loop: divide the index counter by some multiple and check whether the remainder is zero. Include a conditional case that waits 1 ms after every, say, 20 messages to throttle the send loop the way a real network and data source would.
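That throttle might look like this in Python; a sketch in which the destination, message count, and the every-20-messages figure are illustrative:

```python
# Quotient-and-remainder throttle: pause 1 ms after every 20th message.
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = ("127.0.0.1", 61557)  # hypothetical receiver
payload = b"x" * 8

for i in range(10_000):
    sock.sendto(payload, dest)
    if i % 20 == 19:         # remainder of index / 20 hits the pattern...
        time.sleep(0.001)    # ...so wait 1 ms to pace the send loop

sock.close()
```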

Otherwise, TCP may be the way to go for you!
