mr2fastva Posted March 12, 2007

What is the correct method for using UDP communications in RT? I have deployed a simple test program to my CompactRIO: the communications loops for a time-critical application using both the FPGA and the RT processor. The communications loops alone take 45% CPU and 60% memory, leaving little headroom for the actual time-critical decision-making code.

Here is what I've tried:

Try 1: The incoming-message loop contained a UDP Read with an infinite timeout, writing shared variables upon receipt of a UDP datagram (this was turned off). Outgoing message strings were put in a queue of size 5, and a Dequeue with an infinite timeout emptied this queue onto the UDP port. I preallocated all arrays and strings, did any complex cluster-to-string bit/byte packing outside the loop, and just used Replace String Subset inside the loop. Later I figured the cluster-to-string conversion was eating processor time, so I made a *.conf file to contain the strings and just loaded them in.

Try 2: I replaced the incoming-message loop with a timed loop running at 25 ms and gave the UDP Read a 1 ms timeout. I likewise replaced the outgoing-message loop with a timed loop at 25 ms and gave the Dequeue a 1 ms timeout.

I receive messages at about 10 Hz (40 Hz in the future) and send at about 20 Hz. All data is replace-previous, but I do examine a byte to determine which data to replace. I cannot use Shared Variables, since I am implementing a robotics standard (JAUS, www.mr2fast.net/jaus) that uses UDP and TCP right now. This RIO application must communicate with 5 other executables running on Linux and Windows, developed by several organizations.

Other RT considerations I've made after reading the NI.com guidelines for RT programming:
- I've put my loops inside subVIs, and eliminated subVIs where possible.
- I preallocate strings of maximum length and then just use "Replace String Subset".
- Most of my VI terminals are "Required", so the VI doesn't have to check whether to use the default value.

I've searched this on NI.com and LavaG without much luck. To my knowledge my app is broken into the two suggested loops, "Time-Critical" and "Communications". Hopefully others have had the same problem. Thanks!
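(The "try 2" pattern above is a polled read rather than a blocking one. As a rough textual analogue of that LabVIEW timed loop, here is a Python sketch; the 25 ms period, 1 ms read timeout, and replace-previous-keyed-on-one-byte behavior are from the post, while the port number is the UDP port registered for JAUS, and all names are illustrative, not the poster's actual code.)

```python
import socket
import time

RX_PORT = 3794        # UDP port registered for JAUS; substitute your own
PERIOD_S = 0.025      # 25 ms timed-loop period, as in "try 2"

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", RX_PORT))
rx.settimeout(0.001)  # 1 ms UDP read timeout, as in "try 2"

latest = {}           # replace-previous storage, keyed by a message-type byte

def poll_once():
    """One iteration of the incoming-message loop: read at most one datagram."""
    try:
        datagram, _addr = rx.recvfrom(512)
    except socket.timeout:
        return False                    # nothing this cycle; go back to sleep
    latest[datagram[0]] = datagram      # examine one byte, replace the previous value
    return True

def run(cycles):
    """Timed loop: poll, then sleep out the remainder of the 25 ms period
    (on RT hardware the timed loop does this scheduling for you)."""
    for _ in range(cycles):
        start = time.monotonic()
        poll_once()
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - start)))
```

Note the trade-off this pattern makes: a short-timeout read polled every 25 ms costs a wakeup per cycle even when no traffic arrives, whereas the "try 1" blocking read only wakes on an actual datagram.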
Mellroth Posted March 12, 2007

QUOTE(ruelvt @ Mar 11 2007, 04:32 AM) what is the correct method for using UDP communications in RT? i have deployed a simple test program to my CompactRIO... the communications loops alone take 45% CPU and 60% memory, leaving little headroom for the actual time-critical decision-making code.

Hi,

To get more help, could you please specify:
* cRIO model? (IP performance is really boosted in the cRIO-9012, mainly due to a change of chipset)
* LabVIEW version?
* It would also be easier to help you if you could upload an example showing what you have tried so far.

We have, however, used cRIO-9012 units to act as distributed I/O. In this case the cRIO units received user commands through UDP, and also used UDP to synchronize data buffers with the master RT target. If no data was received (UDP timeout set to 1000 ms), the load of the cRIO was around 5%, and when doing performance measurements (i.e. sending commands that were just looped back), the processor load was about 40-50%.

I haven't really checked the memory usage, but what I have seen is that about 50% of the available memory is already taken by the OS, leaving us with around 30 MB for our applications.

Another issue we have encountered in cRIO systems is that using Queues/RT-FIFOs/Notifiers/LV2 globals to transfer data between processes takes on average 270 µs per transfer, i.e. the time from adding data until it is read by the receiver. This is probably not relevant to your current problem, but it is good to keep in mind when using a cRIO.

/J
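(The 270 µs figure above is the kind of number you can measure yourself: timestamp at enqueue, timestamp at dequeue in the consumer, average the difference. A minimal Python sketch of that measurement, with ordinary threads and `queue.Queue` standing in for LabVIEW loops and RT-FIFOs; the absolute numbers on a desktop OS will of course differ from a cRIO.)

```python
import queue
import threading
import time

q = queue.Queue()
samples = []          # enqueue-to-dequeue latencies, in seconds

def receiver(n):
    for _ in range(n):
        sent_at = q.get()                          # blocks until the producer enqueues
        samples.append(time.perf_counter() - sent_at)

N = 200
t = threading.Thread(target=receiver, args=(N,))
t.start()
for _ in range(N):
    q.put(time.perf_counter())                     # payload is the enqueue timestamp
    time.sleep(0.001)                              # pace the producer
t.join()

avg_us = 1e6 * sum(samples) / len(samples)
print(f"average enqueue-to-dequeue latency: {avg_us:.0f} us")
```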
wallyabcd Posted May 15, 2007

Hi,

Without some sort of architectural diagram it's a bit hard to follow exactly what you're doing, so I will just give you some advice based on a similar thing that I am doing. I am running an RT-based system with FPGA, motion, and vision. Essentially, UDP and TCP are used for both communications and logging of status. Every command received or processed by the instrument sequencer, internal command, or feedback is logged to a file and also transmitted via UDP. This works so fast that transferring small files to the instrument from the desktop appears instantaneous. The only difference here is that I don't use shared variables. Be very careful with shared variables. This system generates quite a bit of data and has no problems with the communications eating lots of memory.

Make sure you're not reading or writing to synchronous controls or indicators anywhere in your program unintentionally. What I would suggest:
- Put your communications loop into a separate thread (real easy in LV).
- In your communication thread, put your sender and receiver in separate loops.
- Use a bigger queue.
- Set the loop rate to about 40 ms.
- Give the thread normal priority.
- Replace the UDP Read's timeout with 1000 ms.
- Make your communications module almost like an independent state machine, self-regulating.

In essence, try to have your code multitask. You can make a quick test to see where the problem may be by lowering the priority of your communications loop to see if anything changes. Post the code for more.

Good luck,
Walters Spinx

QUOTE(ruelvt @ Mar 11 2007, 03:32 AM) what is the correct method for using UDP communications in RT? i have deployed a simple test program to my CompactRIO... the communications loops alone take 45% CPU and 60% memory, leaving little headroom for the actual time-critical decision-making code.
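(The list of suggestions above - sender and receiver in separate loops inside a normal-priority communications thread, a bigger queue, a 1000 ms blocking read - can be sketched as follows. This is a hedged Python analogue, not LabVIEW: the peer address, port, and queue depth are made-up illustration values, and `threading` stands in for LV's parallel loops.)

```python
import queue
import socket
import threading

PEER = ("127.0.0.1", 9000)          # hypothetical peer address
OUT_QUEUE = queue.Queue(maxsize=64)  # "use a bigger queue" than size 5

def sender_loop(sock, stop):
    """Outgoing-message loop in its own thread, blocking on the queue."""
    while not stop.is_set():
        try:
            msg = OUT_QUEUE.get(timeout=0.04)   # ~40 ms loop rate when idle
        except queue.Empty:
            continue                            # self-regulating: idle cheaply
        sock.sendto(msg, PEER)

def receiver_loop(sock, stop, on_msg):
    """Incoming-message loop in its own thread with a long read timeout."""
    sock.settimeout(1.0)                        # 1000 ms, as suggested above
    while not stop.is_set():
        try:
            datagram, _addr = sock.recvfrom(512)
        except socket.timeout:
            continue                            # no traffic; just loop again
        on_msg(datagram)                        # hand off to the state machine
```

The point of the long receive timeout is that the thread spends almost all its time blocked in the OS rather than spinning, which is what keeps a comms module like this cheap on the CPU.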