Zyl

TCP write / read problem, disable write buffer?


Hi everybody,

 

I'm currently running into a problem with a TCP connection between 2 cRIOs.

One cRIO is a server which writes 76-byte messages every 10 ms (today, but it can be anything between 1 ms and 1 s) using the STM Write VI (so in the end it pushes an 82-byte message into the TCP Write function). I want the message to be sent only if the client has time to read it, so I set the timeout to 0.

The other cRIO is the client, which tries to read on the TCP link at 1000 Hz (1 ms, with Wait Until Next ms Multiple used to ensure the loop doesn't run faster). I use the STM Read VI to get the data sent from the other cRIO. The read function has a timeout of 100 ms.

What I expected is that the client loop would actually run at the 10 ms rate (the server's writing rate) thanks to the 100 ms timeout. And if the server writes faster, the client would follow the server's rate. If the server's period is greater than 100 ms, error 56 would be fired, and I would handle it.

What happens is that the server writes the 82 bytes every 10 ms, but the client loop is always getting data and runs at 1 ms, which means that the timeout is not respected! I disabled the Nagle algorithm on the server side to be sure that the message is sent when requested, but it didn't help. The client acts as if there were always data in the read buffer. Even if that could be true for the first iterations, I would expect that, running at a 1 ms rate, the client would empty the buffer rapidly, but it seems that it never ends... Moreover, the longer the server writes, the longer it takes for the client to see an empty buffer (timeout reached again and error 56) once the server is stopped (but the connection not closed).

Has somebody already run into this issue?

Any idea on how I can solve this ?

 

The server code is attached to the post. Two TCP connections are established between the server and the client (same IP address, but different ports), but only one is used (the upper loop). The other opens and closes immediately because the EnableStream boolean is always false.

server.PNG


Yeah, what you're describing doesn't make sense. You can't have a server send 82 bytes every 10 ms and have a client receive 82 bytes every 1 ms. Maybe have the server send unique data and verify the messages at the client to see where the extra data is coming from.

This won't fix your problem, but I recommend replacing the timed loop with a while loop. TCP code in a timed loop makes no sense to me.


Hi,

I already tried replacing the loop; it doesn't help at all.

From what I see, it is as if the server were writing as fast as possible (but the loop monitoring says that it is really looping at 10 ms) and the read buffer were always filled with something... So the read loop never respects the timeout because it always has something to read... Is there a way to see the number of elements in the buffers?


You've misunderstood something about TCP communication: you will almost never see a timeout on a TCP write. You can't use it for throttling. It sounds like you're hoping that the TCP Write will only succeed if the receiving side is currently waiting on a TCP Read, but that's not how it works. The operating system receives and buffers incoming messages. If you call TCP Read and there's enough data in the buffer, you'll get that data immediately. If there isn't enough data in the buffer, then TCP Read waits for either enough data to arrive or for the timeout period to elapse. If you want to send data only as fast as the receiving side can handle it, then you need the receiving side to send a request for the data instead of sending it at regular intervals.
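The OS-level buffering described above is easy to demonstrate outside LabVIEW. Here is a minimal Python sketch (an illustration, not the thread's cRIO code): the sender keeps writing successfully even though the receiving side never performs a read, because the operating system accepts the data into the socket buffers.

```python
import socket
import threading

# Server that accepts one connection but never reads from it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

conns = []
t = threading.Thread(target=lambda: conns.append(srv.accept()[0]))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
t.join()

msg = b"x" * 82          # same size as the STM message in this thread
accepted = 0
cli.setblocking(False)   # stop instead of blocking once the buffers fill
try:
    # Even though nobody is reading, these sends succeed immediately:
    # the OS simply queues the data in the socket buffers.
    for _ in range(1000):
        cli.send(msg)
        accepted += 1
except BlockingIOError:
    pass                 # send and receive buffers finally full
print("messages buffered by the OS with no reader:", accepted)
cli.close(); conns[0].close(); srv.close()
```

Only once both the send and receive buffers fill does the write stop succeeding, which is why a 0 ms TCP Write timeout cannot throttle the sender.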

On the client, it makes no sense to try to run the loop faster than the TCP Read timeout. Don't use a timed loop or any other timing mechanism in the loop; just let the TCP Read handle the loop timing. Can you show your client code? Are you sure you're reading all the bytes on each pass through the client loop? If you're reading fewer bytes than you write, you could see something like the situation you describe.


Hi Ned,

Indeed, I didn't know about that behavior of TCP Write. The timeout of 0 was to ensure that I couldn't write data if the reader hadn't got the previous message. I'll change that.

However, with the settings I have, the TCP writer runs 10x more slowly than the client.

In fact, the client shouldn't run faster than the server, because the timeout of the TCP Read is quite 'big' (100 ms) compared to the writing rate. 1 ms is the value given to the Wait Until Next ms Multiple function inside the reading loop; it just ensures that the loop doesn't run faster than 1 ms. Most of the time it should just wait for the timeout or for the data to be received.

So in my case the server writes every 10 ms, and the client should also loop every 10 ms, right? But this is not what I see! In my case the client runs at 1 ms... just as if the pause granted by the timeout were not respected, which would mean that there is always data in the receiving buffer. But my server writes 10x more slowly than the max read time...

1 hour ago, Zyl said:

Hi,

I already tried replacing the loop; it doesn't help at all.

From what I see, it is as if the server were writing as fast as possible (but the loop monitoring says that it is really looping at 10 ms) and the read buffer were always filled with something... So the read loop never respects the timeout because it always has something to read... Is there a way to see the number of elements in the buffers?

No, from the application level I don't know of a way to see how many elements are in the buffer. It's unusual to leave anything in the buffer with TCP. Normally you read as fast as possible without any throttle. You can then implement your own buffer, in a queue for example. Then you get to use the queue tools to see what's in the buffer.
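The "read as fast as possible into your own buffer" idea sketched in Python (illustrative names, not the STM API): a reader loop drains the connection into a queue, and the application can then inspect the queue depth, which the socket buffer never exposes.

```python
import queue
import threading

buf = queue.Queue()   # application-level buffer replacing the opaque socket buffer

def reader():
    # Stand-in for a loop calling TCP/STM Read as fast as possible.
    for _ in range(10):
        buf.put(b"\x00" * 82)   # one 82-byte message per iteration

t = threading.Thread(target=reader)
t.start()
t.join()

# Unlike the socket buffer, the queue's depth is visible to the application.
depth = buf.qsize()
print("messages waiting:", depth)   # -> 10

# The consumer drains it at its own pace.
while not buf.empty():
    buf.get()
```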

I think your problem points to an application issue where something is writing faster than you think or your reader isn't really reading. Maybe it's throwing an error or something. What do you think about my "unique data" idea? You could slow the server down to once a second and just send 1, then 2, then 3. You should see that show up on your client. Also, you might want to post your client code.


Again, can you show your client code? Which TCP Read mode are you using? Is there any chance you're doing a zero-length TCP Read on many loop iterations? If so, I think it will return immediately with no error (and no data), which could lead to something like the situation you describe.

12 hours ago, Zyl said:

I already tried replacing the loop; it doesn't help at all.

Definitely don't put TCP code in a timed loop, regardless of whether or not you saw a difference here. Just don't. I know there have been actual bugs/crashes in the past, and my current understanding is that the underlying calls that perform the TCP transfer happen at a much lower priority than the thing shoving data into them (your timed loop).

13 hours ago, Zyl said:

I want that message to be sent only if the client has time to read it, so I set the timeout to 0.

If you don't care about delivery, why not use UDP? STM is polymorphic and will happily accept a UDP socket instead of a TCP one.
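For comparison, a minimal Python UDP sketch (illustrative only, not the STM/VeriStand code): each sendto is one datagram, message boundaries are preserved, and nothing is acknowledged or retried, so a slow reader simply misses messages instead of backing up a buffer.

```python
import socket

# Receiver bound to an ephemeral localhost port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(5)
addr = rx.getsockname()

# Sender fires one 76-byte datagram with no delivery guarantee.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"\x00" * 76, addr)

data, _ = rx.recvfrom(1024)   # datagram boundaries are preserved
print("received one datagram of", len(data), "bytes")
tx.close(); rx.close()
```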

Edited by smithd


Hi everybody,

Thank you all for the advice! I'll try to answer each question.

Why not UDP? Because even if I don't mind missing a message when my reader is not fast enough, if I send it I expect to receive it, and without any error in the bit stream.

The TCP read mode I'm using is Standard (I didn't modify the STM lib). From what I see there is no chance that I'm reading 0 bytes, because the bit stream is then unflattened to a LV type. With 0 bytes read, my Unflatten function would return an error.

Now that I've answered the main questions, here is what happened! I was using the Ticks (us) function to measure the time elapsed between two parts of the code. It appears that when you probe the code, this function doesn't behave as it should at all!!! I moved my cRIO code from the RT target to the Windows host and ran DETT to see what was going on. DETT found out that my reading loop was running at... 10 ms!! With probes only, the probed values were saying that my loop ran at 1 ms and that reading took 50 us. With probes + DETT: DETT was saying 10 ms, the probes 6000 us; after stopping the DETT trace, the probes were saying ~50 us!

I changed Ticks (us) to Ticks (ms): the probes were then displaying 10 ms!!

As you may notice from the reader code, the development is actually a custom device used in VeriStand. It appears that if I use the same technique to measure the time elapsed between two parts of my code and put the result in a channel, the same thing happens if everything is based on us (ticks and Wait Until Next ms Multiple). If one of these two functions is in ms, then the monitoring returns the expected value.

I don't know exactly what happens under the hood with us ticks, but it seems that there is an interaction with my code which makes them misbehave when you monitor their value in some way... Maybe a quantum effect? :-P

reader.png

Inside the reader VI.PNG


You need a different protocol.

Have the reader send an "I'm waiting" packet to the writer, and have the writer simply wait until one of these is present in its receive buffer before sending. This is duplex communication and requires two TCP ports, but it should throttle as you require.
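The "I'm waiting" handshake can be sketched in Python over a single full-duplex connection (a socketpair stands in for the TCP link; all names are illustrative): the writer blocks until the reader's ready token arrives, so no message is ever sent faster than the reader consumes them.

```python
import socket
import threading

def writer(conn, n):
    # Wait for a one-byte "ready" token from the reader before each send.
    for i in range(n):
        token = conn.recv(1)
        if not token:
            break
        # 76-byte payload: 4-byte sequence number plus padding.
        conn.sendall(i.to_bytes(4, "big") + b"\x00" * 72)

def reader(conn, n, out):
    for _ in range(n):
        conn.sendall(b"R")            # "I'm waiting" token
        msg = b""
        while len(msg) < 76:          # read exactly one whole message
            chunk = conn.recv(76 - len(msg))
            if not chunk:
                return
            msg += chunk
        out.append(int.from_bytes(msg[:4], "big"))

a, b = socket.socketpair()            # full duplex over a single connection
received = []
tw = threading.Thread(target=writer, args=(a, 5))
tr = threading.Thread(target=reader, args=(b, 5, received))
tw.start(); tr.start(); tw.join(); tr.join()
print(received)                       # -> [0, 1, 2, 3, 4]
a.close(); b.close()
```

As pointed out below, TCP is full-duplex, so the token and the data can share one connection.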

1 hour ago, shoneill said:

Have the reader send an "I'm waiting" packet to the writer, and have the writer simply wait until one of these is present in its receive buffer before sending. This is duplex communication and requires two TCP ports, but it should throttle as you require.

Why would this require 2 TCP ports? TCP is full-duplex over a single port.


I'm still not sure I understand what the communication goal is. You don't want to lose any messages, but you only want to send a message if the receiver has time to process it right then and there? What's the purpose of such a scheme?


I would imagine not flooding the buffer would be one; trying to synchronise sender and receiver is another. If the listener is the "master", then the protocol needs to be implemented this way.

So if it can be done with 1 TCP port, even better.


Exactly, shoneill is right. The main purpose is to give some breathing room to the reading loop and keep some synchronisation between server and client.

3 hours ago, shoneill said:

I would imagine not flooding the buffer would be one

What's wrong with flooding the buffer? (They are allocated for each connection.) TCP/IP connections block when the buffer is full (that's why there is a timeout on the write). If you set the buffer to, say, 10 x the message size, then it will fill up with 10 messages and wait until at least one has been read off (ack'd), and then write another.



How do you set the TCP buffer? It's typically a driver setting, not a LV setting, at least AFAIK. If it's possible to limit the receive buffer of an ethernet card, I'd be interested to know. My experience (based on others' experience, I must admit) is that this can't be controlled from within LV. If you fill the receive or transmit buffer of an ethernet card, you typically lose the connection. We see this sometimes when our host software can't keep up with our RT system sending data at 20 kHz. Buffer overflow, lost connections, chaos ensues.

1 hour ago, shoneill said:

How do you set the TCP buffer? It's typically a driver setting, not a LV setting, at least AFAIK. If it's possible to limit the receive buffer of an ethernet card, I'd be interested to know. My experience (based on others' experience, I must admit) is that this can't be controlled from within LV. If you fill the receive or transmit buffer of an ethernet card, you typically lose the connection. We see this sometimes when our host software can't keep up with our RT system sending data at 20 kHz. Buffer overflow, lost connections, chaos ensues.

It's a call to setsockopt, just like disabling Nagle. There are some VIs in Transport.lvlib.
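For reference, here is what that setsockopt call looks like in BSD-socket terms, sketched in Python (the cRIO side would go through Transport.lvlib instead): request a small receive buffer and disable Nagle. Note that the OS treats the requested size as a hint, not a hard value.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a small receive buffer; the OS may round it up
# (Linux, for instance, typically doubles the request).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)

# Disable the Nagle algorithm, as discussed earlier in the thread.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("requested 4096 bytes, OS granted", actual)
s.close()
```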

 

Edited by ShaunR


So fine control of the buffer (setting it to one or two messages) would force synchronous messaging at the TCP driver level? That's rather useful.

22 hours ago, Zyl said:

Exactly, shoneill is right. The main purpose is to give some breathing room to the reading loop and keep some synchronisation between server and client.

The reader will be using minimal resources while it's waiting for a new message, so there's no need for additional breathing room. I still think you need to see where the extra messages are coming from by creating well-known unique messages. Also, your error handling in the reader terminates at the RT FIFO, so I don't think you're catching errors (which seems like a likely culprit at this point). You can OR the error with the timeout from the read to make sure you don't go on processing invalid messages.


You're definitely trying to abuse a feature of the TCP communication here in order to fit square pegs into a round hole. Your requirements make little sense.

1) You don't care about losing data from the sender (not sending it is also losing it), but you insist on using a reliable transport protocol (TCP/IP).

2) The client should control what the server does, but it does not do so by explicitly telling the server; instead you rely on the buffer-full condition at the client side propagating back to the server, hoping that that will work.

For 1), the use of UDP is definitely useful. For 2), the buffering in TCP/IP is neither meant for nor reliable for this purpose. The buffering in TCP/IP is designed to never allow the possibility that data gets lost on the way without generating an error on at least one side of the connection. Its design is in fact pretty much orthogonal to your requirement to use it as a throttling mechanism.

While you could set the buffer size to sort of make it behave the way you want, by only allowing a buffer for one message on both the client and server side, this is a pretty bad idea in general. First, you would still have to send at least two buffers' worth, with one being stored in the client socket driver and the other in the server socket driver. Allocating only half the message as buffer size, so that only one full message could be stored, would likely not work at all and would generate errors all the time.

But it gets worse: a particular socket implementation is not required to honor your request exactly. What it is required to do is guarantee that a message up to the buffer size cannot get corrupted or spuriously lost due to some buffer overflow, but it is absolutely free to reserve a bigger buffer than you specify, for performance reasons for instance, or by always reserving a buffer whose size is a power of 2. It also requires your client to know in advance what the message length is, limits your protocol to only work in the intended way when every transmission is exactly this size, and, believe me, at some time in the future you will change that message length on the server side and forget to make the corresponding correction on the client side.

Sit down and think about your intended implementation. It may seem to involve more work to implement an explicit client-to-server message that tells the server to start or stop sending periodic updates (a single command with the interval as a parameter would already be enough; an interval of -1 could then mean to stop sending data), but this is a much more reliable and future-proof implementation than what you describe.
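The suggested command could be as small as a fixed six-byte record. Here is a hypothetical encoding in Python (the field layout is invented purely for illustration): a 2-byte command ID followed by a 4-byte signed interval in ms, where -1 means "stop sending".

```python
import struct

CMD_SET_RATE = 1   # hypothetical command ID

def encode_set_rate(interval_ms):
    # Big-endian: 2-byte command ID, 4-byte signed interval.
    # interval_ms = -1 means "stop sending periodic updates".
    return struct.pack(">hi", CMD_SET_RATE, interval_ms)

def decode(cmd):
    cmd_id, interval = struct.unpack(">hi", cmd)
    return cmd_id, interval

print(decode(encode_set_rate(10)))   # -> (1, 10): send every 10 ms
print(decode(encode_set_rate(-1)))   # -> (1, -1): stop sending
```

The client sends this once when its needs change, instead of relying on buffer back-pressure on every message.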

Jumping through hoops in order to fit square pegs into round holes is never a solution.

Edited by rolfk

17 hours ago, rolfk said:

You don't care about losing data from the sender (not sending it is also losing it), but you insist on using a reliable transport protocol (TCP/IP).

The use of TCP/IP is because it is acknowledged and ordered. Don't forget this is for when the producer is faster than the consumer - an unfortunate edge case.

17 hours ago, rolfk said:

The client should control what the server does, but it does not do so by explicitly telling the server; instead you rely on the buffer-full condition at the client side propagating back to the server, hoping that that will work.

No. The client is effectively DoSing the server (causing the disconnects). TCP/IP already has a mechanism to acknowledge data, and it even retries if packets are lost. This is just using the designed features to rate-limit.

17 hours ago, rolfk said:

First, you would still have to send at least two buffers' worth, with one being stored in the client socket driver and the other in the server socket driver. Allocating only half the message as buffer size, so that only one full message could be stored, would likely not work at all and would generate errors all the time.

The receiver side can have as much buffer as it likes. There is no need to "match" each endpoint. We just want to rate-limit the send/write so as not to overwhelm the receiver (I'm not going to use client/server terminology here because that is just confusing).

17 hours ago, rolfk said:

and, believe me, at some time in the future you will change that message length on the server side and forget to make the corresponding correction on the client side.

As I said earlier, they don't have to be matched. If you are really worried about it, you can modify the buffer size on the fly. You are getting bogged down in being able to set the buffer to exactly the message size. It doesn't have to be that exact, only small enough that the receiver doesn't get overwhelmed with backlog and gets occasional room to breathe. It's simple, fast, reliable, and far more bandwidth-efficient than handling it at Layer 7.

 

Edited by ShaunR

2 hours ago, ShaunR said:

The receiver side can have as much buffer as it likes. There is no need to "match" each endpoint. We just want to rate-limit the send/write so as not to overwhelm the receiver (I'm not going to use client/server terminology here because that is just confusing).

That still won't work as intended by the OP. As long as the receiver socket has free buffer space, it will accept and acknowledge packets, so the sender socket will never time out on a write! This is not UDP, where a message datagram is considered a unique object that will be delivered to the receiver as a single unit, even if the receiver requests a larger buffer, and even if there are in fact more message datagrams in the socket buffer that could fit into the requested buffer.

TCP/IP is a stream protocol. No matter how many small data packets you send (not talking about Nagle for the moment), as long as the receiver socket has buffer space available, it will copy them into that buffer, appending to any data already waiting there, and the receiver can then read it all in one single go, or in any sized parts it desires. So if the receiver has a 4 kB buffer, it will cache about 53 packets of 76 bytes each from the sender before refusing further packets from the sender socket. Only then will the write start to time out on the sender side, after having filled its own outgoing socket buffer too.

And then you need to read those 53 packets at the client before you get the first fairly recent packet. That doesn't sound like a very reliable throttling mechanism to me!

Of course, you could make the sender close the connection once it sees a TCP Write timeout error, which will eventually give a "connection aborted by peer" error on the receiver side. But assuming the 4 kB receive buffer from the example above and a 100 ms interval for sending packets, it will take more than 5 s for the sender to see that the receiver is not reading the messages anymore and to be able to abort. If the receiver starts to read more data in that time, it will still see old data and have to read it all, until the TCP Read function times out, to be sure to have the latest value.

And that assumes a 4 kB buffer. Typical socket implementations nowadays use 64 kB buffers and more. Modern Windows versions actually use an adaptive buffer size, meaning the buffer will grow beyond the configured default value as needed for fast data transfer. That is unlikely to come into play here, as sending 76-byte chunks every few ms is not fast data at all, but it shows that the receive buffer size of a socket is, on many modern systems, more of a recommendation than a hard limit.

2 hours ago, rolfk said:

That still won't work as intended by the OP. As long as the receiver socket has free buffer space, it will accept and acknowledge packets, so the sender socket will never time out on a write!

Indeed. I was replying to shoneill. The OP doesn't have a problem with the producer being faster than the consumer.

 

