santi122 Posted November 5, 2009

Hi, at the moment I am working on a project where I have to transfer a lot of data via TCP/IP and handle it via RT FIFOs. So far so good. But I have had a lot of problems with the cRIO TCP functions: if I send too much data, data gets lost or FIFO references get lost. Maybe the cRIO TCP buffer overflows... Does anyone here have experience with this behaviour? I am stumped...

System: cRIO-9014 RT
Software: NI-RIO 3.2.0, LabVIEW 8.6.1
Dean Mills Posted November 5, 2009

Hi, a couple of things. What is the timeout on the RT FIFOs? If it is not 0, you need to make it 0 and handle the case where there is no data; otherwise they can use most of the CPU while waiting. I try to avoid RT FIFOs unless they are absolutely necessary and tend to just use queues.

What is the timeout on the TCP Read and Write functions? It needs to be appropriate for the application. This is usually trial and error, but I generally start with about 1 second.

In the past, while using TCP on cRIOs, I experienced a lot of lost data. I had to implement a scheme of attaching a unique id to the data, sending it to the PC, and waiting for a confirmation that the data was received before I sent the next chunk. In my experience, TCP functions should not be in timed loops; they should reside in their own subVI without any time-critical loops. I also only send data about every 250 ms, which for me creates about 8 kB of data per send.

Dean
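Since LabVIEW diagrams can't be posted as text, here is a rough sketch of Dean's id-plus-confirmation scheme in Python. All names (`send_with_ack`, the `>II` header layout) are made up for illustration; the thread's actual LabVIEW code will look different, but the stop-and-wait idea is the same: tag each chunk with a sequence id and don't send the next one until the peer echoes that id back.

```python
import socket
import struct

def send_with_ack(sock: socket.socket, seq: int, payload: bytes,
                  timeout: float = 1.0) -> None:
    """Send one chunk tagged with a sequence id, then block until the
    peer acknowledges it by echoing the id back."""
    sock.settimeout(timeout)
    # Frame: 4-byte sequence id + 4-byte payload length, then the payload.
    sock.sendall(struct.pack(">II", seq, len(payload)) + payload)
    # The receiver is expected to echo the 4-byte sequence id as the ack.
    ack = sock.recv(4)
    if len(ack) < 4 or struct.unpack(">I", ack)[0] != seq:
        raise IOError("chunk %d was not acknowledged; resend or reconnect" % seq)
```

The cost is throughput (one round trip per chunk), which is why batching ~250 ms of data per send, as Dean does, matters.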
santi122 Posted November 5, 2009 (edited)

Hi Dean, thanks for your reply! I have a complete protocol with header and data. The RT FIFOs' write mode is blocking, so they don't waste CPU power while waiting. The timeouts are 100 ms for the header and 10 ms for the data at TCP Read, and 100 ms at TCP Write. I also have the problem that I sometimes lose my RT FIFO (error -2206). I have a checksum in my protocol to avoid working with damaged data, but I have no idea how to handle the loss of my RT FIFOs, or what the reason for it is. This behaviour is quite unreproducible. All tasks are running in timed loops with different priorities. I also have the option (via conditional disable structures) to use queues, so I will try whether the app is more stable with queues.

Greetings from Austria,
chris

Edited November 5, 2009 by santi122
PaulL Posted November 5, 2009

I don't know if this will be a suitable solution for your case, but you might also consider using RT-FIFO-enabled networked shared variables. We have used these quite successfully on multiple projects with cRIOs and have not encountered the difficulties you describe.
Tom Limerkens Posted November 5, 2009

Hi, we've had the same problems with the cRIO platform when NI switched the CPU from Intel to PowerPC and the RT OS from ETS to VxWorks. Apparently, when you access a TCP packet received by the cRIO in two separate reads, it takes about 200 ms to access it the second time, depending on the CPU type and load.

Our typical RT architecture looked like this before (UDP):

Sender side:
- Prepare the data to send in a cluster
- Flatten the cluster to a string
- Get the string length
- Typecast the string length to 4 bytes
- Prepend the string length to the data

Receiver side:
- Wait for UDP data, 4 bytes long (the string length)
- Read the 4 bytes and format them as an I32 string length
- Read the remaining string length
- Unflatten the data string to a cluster and process it

While this worked flawlessly on Windows, and under RT on FieldPoint, Compact FieldPoint, and the first (Intel-based) series of cRIOs, the code suddenly broke when NI switched its OS to VxWorks on the newer cRIO targets. Perfectly working code suddenly had to be rewritten, just because a newer type of cRIO was selected. But anyhow, we found a workaround that proved to work. General description:

Sender side:
- Only use TCP; UDP proved to be too unreliable on cRIO
- Prepend the string length to the flattened string, and send it with a single 'send' call
- If the total string size exceeds 512 bytes, split the string into 512-byte packets and send them with separate 'send' calls

Receiver side:
- Access every received packet only once, to keep cycle times fast
- Read the TCP bytes with the 'immediate' option; compare the received packet size against the expected packet size (the first 4 bytes)
- If more bytes are expected than were received, read the remaining bytes with the 'standard' option

For us this proved to be a working solution, which does not take 200 ms to read a packet from the TCP/UDP stream. But it affects the real-time behaviour of the OS, since TCP takes more priority in the OS scheduler than UDP did, causing more jitter on loop times.

I'm still trying to figure out how to explain this kind of thing to customers who choose cRIO as their product/industrial platform. Try to explain that the manufacturer changes the TCP stack behaviour between two different cRIO versions; that the underlying OS has changed is of no interest to them. It suggests NI has little idea of how automation is done in a mature industrial market, as opposed to a research environment or prototyping work. But let's not turn this into a rant on NI ;-)

I'm sorry I can't send you any code, since it is under company copyright. If you have any questions, let me know and I'll try to help.

Tom
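The length-prefix framing Tom describes maps directly onto any byte-stream socket API. Here is a minimal Python sketch of the same pattern; the function names and the 512-byte chunk size constant are illustrative (512 is the per-send limit Tom reports working on his cRIOs, not a documented NI figure).

```python
import socket
import struct

MAX_CHUNK = 512  # per-send size Tom found safe on the VxWorks cRIOs

def send_message(sock: socket.socket, data: bytes) -> None:
    """Prepend a 4-byte big-endian length, then send in <=512-byte pieces."""
    framed = struct.pack(">I", len(data)) + data
    for i in range(0, len(framed), MAX_CHUNK):
        sock.sendall(framed[i:i + MAX_CHUNK])

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Loop until exactly n bytes arrive; TCP gives no message boundaries."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError("connection closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket) -> bytes:
    """Read the 4-byte length header, then exactly that many payload bytes."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

Note this sketch does the two-read receive that Tom says is slow on VxWorks; his workaround corresponds to replacing the first `recv_exact` with a single non-blocking ('immediate') read of whatever has arrived, then fetching only the shortfall.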
Christian_L Posted November 9, 2009

Quoting Tom: "We've had the same problems with the cRIO platform when NI switched the CPU from Intel to PowerPC and the RT OS from ETS to VxWorks. Apparently, when you access a TCP packet received by the cRIO in two separate reads, it takes about 200 ms to access it the second time, depending on the CPU type and load. ... I'm sorry I can't send you any code, since it is under company copyright. If you have any questions, let me know and I'll try to help."

Tom, your message caught my eye, as the Simple TCP Messaging (STM) protocol that we have published on ni.com does basically the same thing. We have tested and benchmarked it on different LV versions and controllers and can achieve at least 2 ms update rates per packet. The STM sender prepends the size of the packet as a 4-byte integer, and the STM reader first performs one read operation to get the 4-byte header and then a second read to get the payload of the packet. STM is implemented as polymorphic VIs supporting both UDP and TCP under the hood.

I just retested this using LV 8.6 and a cRIO-9012 VxWorks controller and was able to achieve better than 2 ms loop times for sending/receiving two packets per loop (4 individual read operations on the cRIO), using both UDP and TCP. I realize you already have a working solution, but if you're interested I would be available to help determine what caused the behavior and low performance that you saw, which is not typical of TCP/UDP on the VxWorks RT platform.

http://zone.ni.com/d...a/tut/p/id/4095
http://zone.ni.com/d...a/epd/p/id/2739
santi122 Posted November 11, 2009

Hi, thanks a lot for the interesting discussion about my topic. I solved the problem with the following actions:

- A checksum in my header to detect TCP input buffer overflows
- When I close the connection on the cRIO side, I wait until the Read and Write functions have finished, plus an extra 100 ms. I found out that I lose the connection to the RIO and cannot build up a new one if I kill the connection during a running TCP Read or Write, even if they are just in a timeout
- I use queues in place of RT FIFOs

At the moment my app is stable, and I hope it stays that way...

greetz,
chris
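For readers wondering what "a checksum in my header" amounts to in practice, here is one possible shape of such a packet in Python. This is a guess at the idea, not santi122's actual protocol: a 4-byte length plus a CRC32 of the payload, so a receiver that has fallen out of sync (e.g. after a buffer overflow dropped bytes) detects garbage instead of silently processing it.

```python
import struct
import zlib

def build_packet(payload: bytes) -> bytes:
    """Header: 4-byte payload length + 4-byte CRC32 of the payload."""
    return struct.pack(">II", len(payload), zlib.crc32(payload)) + payload

def check_packet(packet: bytes) -> bytes:
    """Return the payload, or raise if the stream was corrupted or truncated."""
    length, crc = struct.unpack(">II", packet[:8])
    payload = packet[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("checksum mismatch - discard and resynchronise")
    return payload
```

On a checksum failure the only safe recovery on a byte stream is usually to drop the connection and reconnect, which matches the careful close/reopen sequence described above.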
i2dx Posted November 13, 2009

Quoting santi122: "When I close the connection on the cRIO side, I wait until the Read and Write functions have finished, plus an extra 100 ms. I found out that I lose the connection to the RIO and cannot build up a new one if I kill the connection during a running TCP Read or Write..."

If you lose the connection (for whatever reason), you have to close it on both sides using the TCP Close primitive. Then you have to open a new listener on the server side and connect again on the client side, and you may want to give both sides a little wait time to allow the TCP stack to call its cleanup routines (50 ms should do...). The client receives error 66 when the server closes the connection (e.g. due to an error); you have to handle that one, and just ignore error 56 if the server has not yet sent data...

I have been using the Simple TCP Messaging protocol for years now on cRIOs, PXIs, and RT desktops and never had any problems, except these two:

- you should not "bomb" a cRIO target with 2 simultaneous connection requests if you use a multi-connection protocol --> use the error cluster to open the connections (ports) one after another
- you should (of course) NOT try to send more data than the physical layer can handle; even with crappy hardware you are on the safe side with data rates of 5 MB per second on a 100 MBit link and 50 MB per second on a Gigabit link

cheers, CB
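The close/reopen discipline i2dx describes translates to other socket APIs as well. Below is a hedged Python sketch of one connect-read-close cycle: LabVIEW's error 66 (peer closed) roughly corresponds to a zero-length `recv` or `ConnectionResetError` here, and error 56 (timeout with no data yet) to `socket.timeout`. The function name and timeout values are illustrative.

```python
import socket

def read_until_closed(host: str, port: int, handle) -> None:
    """One connect -> read -> close cycle. Returns when the server closes
    the connection (the analogue of LabVIEW error 66)."""
    try:
        with socket.create_connection((host, port), timeout=5.0) as sock:
            sock.settimeout(0.1)
            while True:
                try:
                    data = sock.recv(4096)
                    if not data:          # orderly close by the peer (~error 66)
                        return
                    handle(data)
                except socket.timeout:    # no data yet (~error 56): just ignore
                    continue
    except ConnectionResetError:          # abortive close also ends the cycle
        return
```

A supervising loop would call this repeatedly, sleeping ~50 ms between attempts as i2dx suggests, so the stack on both ends has time to clean up before the next connect.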
Rolf Kalbermatter Posted November 13, 2009

Quoting Christian_L: "...I would be available to help determine what caused the behavior and low performance that you saw, which is not typical of TCP/UDP on the VxWorks RT platform."

Might the problem be more on the sender side? I ask because 200 ms sounds a lot like the default TCP/IP Nagle algorithm delay. But that is applied on the sender side, to avoid sending lots and lots of small TCP/IP frames over the network. So reading 4 bytes and then the data might be no problem at all, but trying to do the same on the sender side might be. It's also my experience that on the reading side you can usually chop up a packet into as many reads as you want (of course performance will suffer if you do a separate TCP read for every byte, but that is beside the point). On the other hand, it is usually a good idea to combine as much data as possible into one string and send it with a single TCP Write. That is at least how I usually do TCP/IP communication.

Another option I have at times used is to enable the TCP_NODELAY socket option, but I have to admit I have never done that on an embedded controller. I'm not even sure how to do that on a VxWorks controller, as its API is not really standard.
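For reference, on platforms with a BSD-style socket API, disabling Nagle looks like the Python sketch below. As Rolf notes, whether and how this can be done from LabVIEW on a VxWorks cRIO is a separate question; this only illustrates the standard socket option itself.

```python
import socket

def open_nodelay_connection(host: str, port: int) -> socket.socket:
    """Open a TCP connection with Nagle's algorithm disabled, so small
    writes (like a 4-byte length header) are sent immediately instead of
    being coalesced with later data for up to ~200 ms."""
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

The trade-off is more, smaller frames on the wire, which is exactly what Nagle exists to prevent; batching writes into one TCP Write, as Rolf recommends, avoids the delay without the option.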
vattic Posted July 12, 2010

(quoting Christian_L's STM post above)

Hi, I'm sorry if I interrupt this thread, but when you mentioned STM I had a question. I have the newest STM library, version 2.0, and LabVIEW 2009. When I connect a server and client, it works fine without any problem. I should thank the people who made this library, but I found one bug in it: whenever I remove the LAN connection, the STM Read Message on the server side, which is supposed to give an error (66), does not give any error saying the connection is lost. What is the reason behind this? If you have any idea, please tell me how to go on with this.
JackHamilton Posted July 27, 2010

Sorry to chime in late here. I have done lots of cRIO TCP/IP data streaming apps. Some tips:

1. Use queues to buffer data between the acquisition loop and the TCP/IP send loop.
2. Code the TCP/IP yourself and avoid the RT FIFOs - they work, but not very hard, and they don't expose all error conditions.
3. Download the *free* 'Robust TCP/IP' from www.labuseful.com; it's a proven robust TCP/IP send and receive model.
4. Be aware that you can CPU-starve threads in the cRIO system! You're not in Windows anymore; you have to write very clean LV code without a lot of VI Server tricks.

Regards, Jack Hamilton
santi122 Posted July 28, 2010

(quoting Jack's tips above)

Hi Jack, thanks for your post! At the moment I have a very robust TCP/IP send and receive model. Do you have any information about the best packet size for a cRIO 9104 controller?

Regards, Christian Santer
FixedWire Posted November 7, 2018

In case someone needs Jack's code, here it is, resaved to work in the latest LabVIEW: Robust TCP-IP (LV2016).llb