mwebster Posted December 14, 2011

LV2011 RT, cRIO-9076, using NSVs to share data back to the PC and send commands to the RIO. I'm getting very slow reads of the NSVs on the RIO (on the order of 250-350 ms). The shared variable engine is hosted on the RIO.

I've got a time-critical timed loop reading from FIFOs and writing to scan engine I/O variables, and reading from scan engine I/O and writing to FIFOs. I have a non-time-critical communication loop that reads from NSVs and writes to FIFOs, and reads from FIFOs and writes to NSVs. The time-critical loop spins like a top: 1-2 ms tops reading/writing FIFOs to/from I/O variables. Reading a FIFO and writing to an NSV is slow (30-40 ms), but not nearly as slow as reading from an NSV and writing to a FIFO.

Things I have tried:
- Disconnected typedefs from all the NSVs. This was necessary to deploy a built executable to the RIO (and have it work, that is); some new bug in LV2011 according to NI.
- Changed all typedefs to variants and recast them on the RIO. This sped things up by a third or so. I split the work up into subVIs for analysis purposes, and the casting and writing to the FIFO is very fast; it's definitely the read operation that's being pokey.

Current workaround: I'm using an Updated boolean to tell the RIO when to actually read the NSVs so that my average loop time doesn't suffer so much. This works, but I want to know the whys and wherefores of this being so slow.

Further details, exactly what I'm reading:
- 15-element boolean array (this is very fast by itself, and I'm now reading it on every loop, not just when Updated is true, with no problem)
- 4 "position command" clusters (currently cast to variant):
  - Enum target
  - Enum controlMode
  - Enum Channel
  - Enum PV
  - Double command
  - Double man control
  - Cluster Control_Parameters:
    - Enum PV_Type
    - Cluster PID_gains: Double Kc, Double Ti, Double Td
    - Cluster Setpoint_range: Double high, Double low
    - Cluster output_range: Double high, Double low
- 1 "test command" cluster (currently cast to variant):
  - Enum trigger channel
  - Enum trigger direction
  - Double lowerLimit
  - Double upperLimit
  - Boolean Start
  - Boolean Stop

Those enums are 16-bit, so we're talking about 2624 bits in the 4 position command clusters + 162 bits in the test command + 15 bits in the boolean array = 2801 bits, about 350 bytes. Call it 500 with some structure padding from the cluster organization (see the size check sketched below). Why would it take 300+ ms to read less than 1 KB of data?

Best regards,
Mike
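A quick check of that size estimate, with Python used only as a calculator. The real flattened size on the wire will be somewhat larger, since the variant casts carry type descriptors and the clusters add padding, but the raw field total matches the figures in the post:

# Back-of-the-envelope size check for the NSV payload described above.
ENUM_BITS = 16      # LabVIEW U16 enums
DOUBLE_BITS = 64
BOOL_BITS = 1       # counting booleans as single bits, as in the post

# One "position command" cluster: 5 enums + 9 doubles (including the nested
# Control_Parameters, PID_gains, Setpoint_range, and output_range fields)
position_cmd_bits = 5 * ENUM_BITS + 9 * DOUBLE_BITS                # 656
# One "test command" cluster: 2 enums + 2 doubles + 2 booleans
test_cmd_bits = 2 * ENUM_BITS + 2 * DOUBLE_BITS + 2 * BOOL_BITS    # 162

total_bits = 4 * position_cmd_bits + test_cmd_bits + 15 * BOOL_BITS
print(total_bits, "bits =", total_bits / 8, "bytes")               # 2801 bits, ~350 bytes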
Jon Kokott Posted December 14, 2011

My experience with network shared variables is that they are ridiculously slow. We've always used a TCP connection to share data instead, and found several orders of magnitude better performance achievable.
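To make the raw-TCP idea concrete, here is a minimal sketch in Python of "flatten a command cluster to a fixed-size binary message and push it over a plain TCP connection." The port number, endianness, and field layout are assumptions for illustration only; a LabVIEW version would use the TCP Open/Write/Read primitives with Flatten To String / Unflatten From String rather than anything shown here:

import socket
import struct

# "test command" cluster from the first post: 2 x U16 enum, 2 x double,
# 2 x boolean (stored as U8 here). Little-endian, fixed layout by assumption.
TEST_CMD_FMT = "<HHddBB"
TEST_CMD_SIZE = struct.calcsize(TEST_CMD_FMT)   # 22 bytes with this packing

def send_test_command(sock, trig_chan, trig_dir, lower, upper, start, stop):
    """Pack the cluster into a fixed-size binary message and send it."""
    msg = struct.pack(TEST_CMD_FMT, trig_chan, trig_dir, lower, upper,
                      int(start), int(stop))
    sock.sendall(msg)

def recv_exact(sock, n):
    """Read exactly n bytes (TCP may deliver them in pieces)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_test_command(sock):
    """Receive and unpack one fixed-size test-command message."""
    data = recv_exact(sock, TEST_CMD_SIZE)
    trig_chan, trig_dir, lower, upper, start, stop = struct.unpack(TEST_CMD_FMT, data)
    return trig_chan, trig_dir, lower, upper, bool(start), bool(stop)

# Hypothetical usage from the PC side, assuming a command listener on the RIO:
# with socket.create_connection(("rio-hostname", 55000)) as s:
#     send_test_command(s, trig_chan=2, trig_dir=1, lower=-1.0, upper=1.0,
#                       start=True, stop=False)

Because the message size is fixed and known on both ends, the reader never has to parse a type descriptor, which is part of why this approach tends to beat variant-typed NSVs.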
JamesMc86 Posted December 16, 2011

Hmm, I'm not sure of the exact reasoning for this, but I would suggest reducing what you are using shared variables for and maybe looking at TCP or network streams for the command elements.

A couple of considerations: the rate at which the variable engine can run is going to depend on the RIO, and if your program is using most of the CPU, that will impact the performance of the engine. Also ensure your shared variable accesses are sequenced; each one has to wait for access to the engine, since it is a shared resource.

Hope it gives a few things to look at.

Cheers,
James
mwebster Posted December 19, 2011 (Author)

The sequencing was done already; without it, the performance was substantially worse. The time-critical code is taking up ~65% of the CPU cycles, but it is "interrupting" every 5 ms to do its work. Maybe it's inefficient context switching that's killing it.

I may try redoing this with straight TCP/IP in the future, but I just wanted to ask around whether anyone else had experienced this. You are able to pass a lot more data as straight doubles a lot faster. Something about the variant/cluster packaging just makes the SVE reads so much slower...

Mike
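For what the "straight doubles" workaround looks like in the abstract, here is a short sketch (again in Python, purely as illustration) of flattening one position-command cluster into a fixed-length array of doubles, with the enums stored as their numeric values. The slot order is an assumption for the sketch, not the thread's actual typedef layout:

import array

# Fixed slot assignments for one position-command cluster (14 values used).
TARGET, CONTROL_MODE, CHANNEL, PV, COMMAND, MAN_CONTROL, PV_TYPE, \
KC, TI, TD, SP_HIGH, SP_LOW, OUT_HIGH, OUT_LOW = range(14)

def pack_position_cmd(cmd: dict) -> array.array:
    """Flatten a command into a fixed-size double array so the shared
    variable carries a plain numeric array instead of a variant."""
    flat = array.array("d", [0.0] * 14)
    flat[TARGET] = float(cmd["target"])
    flat[CONTROL_MODE] = float(cmd["controlMode"])
    flat[CHANNEL] = float(cmd["channel"])
    flat[PV] = float(cmd["pv"])
    flat[COMMAND] = cmd["command"]
    flat[MAN_CONTROL] = cmd["man_control"]
    flat[PV_TYPE] = float(cmd["pv_type"])
    flat[KC], flat[TI], flat[TD] = cmd["pid_gains"]
    flat[SP_HIGH], flat[SP_LOW] = cmd["setpoint_range"]
    flat[OUT_HIGH], flat[OUT_LOW] = cmd["output_range"]
    return flat

The reader indexes the same constants to reconstruct the cluster, so no type descriptor ever crosses the wire.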
JamesMc86 Posted December 19, 2011

Hi Mike,

If your time-critical code is running at 65%, it is possible that the CPU requirements of the engine are causing the issue. What is your total usage? Ideally we would aim for about 80%, and the engine can take quite a lot.

http://zone.ni.com/devzone/cda/tut/p/id/4679

I don't think much of it will be new to you now, but I always refer to this document for a quick overview of how the engine works.

What might be worth doing is using RT FIFOs on your variables, if your cluster's data type supports it. That will ensure your time-critical code is not affected.