Stobber Posted August 19, 2016

I'm using a Keithley instrument driver that works over RS-232 on a cRIO. Our app is leaking memory somewhere, and the component that uses this library is highly suspect. I've been refactoring a bunch of it to use shift registers and IPE structures, and I just found these two doozies (attached, LV 14.1). Together they build a string byte-by-byte as it's read from VISA and then convert it to an array of DBL element-by-element. Here's a screenshot of one of them:

What's the best way to modify these two so they're better citizens of LVRT (i.e. no memory copies, or as few as absolutely possible)?

Query Command.vi
Real 64 Converter.vi
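For readers without the attached VIs in front of them, the pattern being described can be sketched in Python (the function names here are illustrative, not from the Keithley driver): growing a string one byte at a time forces repeated reallocations, and unpacking the result one element at a time adds a small copy per element.

```python
import struct

def read_bytewise(read_one, n):
    """Build the response byte-by-byte; each += may reallocate the buffer,
    so total copying is O(n^2) in the worst case."""
    buf = b""
    for _ in range(n):
        buf += read_one()
    return buf

def to_doubles_elementwise(buf):
    """Convert the flat string to DBLs one 8-byte element at a time,
    one small slice copy per element."""
    out = []
    for i in range(0, len(buf) - len(buf) % 8, 8):
        out.append(struct.unpack(">d", buf[i:i + 8])[0])
    return out
```

This is only a behavioral analogue of the two VIs, but it makes the allocation pattern visible: one reallocation per byte read, plus one small copy per converted element.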
ShaunR Posted August 20, 2016 (edited)

They are not the source of your memory leak. The query could be tidied up a bit, but with VISA there aren't many options. A slight change with a small advantage would be to build the concatenation at the loop edge (an array of strings) and then convert it to a single string with the Build String primitive. Real gains can only be made with fixed-size buffers and replace elements, but then you need all the ancillary capabilities (setting the size of the buffer, and different modes of operation as with the TCP/IP primitives) to make it generic. I'd leave it alone if it's working.

Optimizing the second one is a no-brainer, though. Since I don't know what the licence is for the Keithley VIs, I'll just show the alternative solution rather than post the VI.

Edited August 20, 2016 by ShaunR
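The "fixed-size buffer plus replace elements" idea maps onto a preallocated buffer that is written in place rather than concatenated. A hedged Python stand-in (the class and method names are mine, not LabVIEW's):

```python
class FixedBuffer:
    """Preallocated buffer filled with in-place 'replace element' writes
    instead of concatenation -- a stand-in for the LabVIEW idea above."""
    def __init__(self, size):
        self.buf = bytearray(size)   # allocated once, up front
        self.used = 0

    def append(self, chunk):
        n = len(chunk)
        if self.used + n > len(self.buf):
            raise BufferError("fixed buffer full")
        # Equal-length slice assignment replaces bytes in place,
        # no reallocation of self.buf.
        self.buf[self.used:self.used + n] = chunk
        self.used += n

    def contents(self):
        return bytes(self.buf[:self.used])   # one copy out to the caller
```

As the post notes, making this generic also means exposing the buffer size and overflow behavior to the caller, which is the ancillary machinery that makes it more work than it first appears.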
Rolf Kalbermatter Posted August 22, 2016

Yet another possible solution:
ShaunR Posted August 22, 2016 (edited)

26 minutes ago, rolfk said: Yet another possible solution:

Nice! Does it ignore the terminating char because of the data type? That's something to be aware of.

I think there is a bug in the original code, since "Swapped" and "Little Endian" are the same. From experience, "swapped" is the following and is not supported by the primitive:

I forget now, but Flatten is quite an expensive operation on large data. Is there a performance consideration between the two approaches?

Edited August 22, 2016 by ShaunR
Rolf Kalbermatter Posted August 22, 2016

1 hour ago, ShaunR said: Does it ignore the terminating char because of the data type?

Unflatten From String happens to consume as many bytes from the stream as make up complete outgoing data elements. It even generates error 74 if the string is too small to create an array with at least one element.

Quote: I think there is a bug in the original code since "Swapped" and "Little Endian" are the same.

It all depends on the original endianness of the data stream. If it comes from a big-endian device, then swapped is the same as little endian. It's the classical problem of defining what is swapped with respect to the source (device) or the target (LabVIEW), which is always debatable. Your new code has one big problem: it doesn't swap the 16-bit entities anymore. And both versions have another problem: double floats are 64-bit values, and depending on the target they can be swapped in their 32-bit entities too, but LabVIEW doesn't have a Swap 32-bit element. There are two big-endian standards for 64-bit values: one where only the bytes within each 32-bit part are swapped, and one where the 32-bit elements are swapped as well. Depending on which of the two the device uses, Unflatten From String may not be the correct choice, but it should be consistent with the full reversal used in the original code. (Your first version doesn't swap the 32-bit values, and your second one doesn't even swap the 16-bit elements.)
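Both points in this post can be demonstrated with Python's `struct` module (the helper names below are mine): unflattening consumes only complete elements, and the two "big endian" 64-bit layouts differ in whether the 32-bit halves of each double are also exchanged.

```python
import struct

def unflatten_doubles(raw):
    """Consume as many complete 8-byte elements as the string holds,
    mimicking LabVIEW's Unflatten From String; fail if not even one
    element fits (compare LabVIEW error 74)."""
    n = len(raw) // 8
    if n == 0:
        raise ValueError("string too small for one element")
    return list(struct.unpack(">%dd" % n, raw[:n * 8]))

def swap_words_64(raw):
    """Exchange the two 32-bit halves of each 8-byte element -- the
    second of the two big-endian 64-bit conventions described above."""
    out = bytearray(len(raw))
    for i in range(0, len(raw), 8):
        out[i:i + 4] = raw[i + 4:i + 8]
        out[i + 4:i + 8] = raw[i:i + 4]
    return bytes(out)
```

Interpreting word-swapped data as straight big endian silently produces garbage values rather than an error, which is why the choice between the two conventions has to match the device.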
ShaunR Posted August 22, 2016

36 minutes ago, rolfk said: Your new code has one big problem, it doesn't swap the 16 bit entities anymore, and in both versions another problem.

The second one is just a different frame. It's not a different version, just a separation of "Swapped" from "Little Endian" as defined in the original drop-down list. The 64-bit point is a valid criticism, though, and I'll leave it as an exercise for the OP if he decides to go that route (unlikely).
Stobber Posted August 22, 2016 (Author)

Thanks, guys. The instrument's data width is fixed all over the API, probably a fixed memory width baked into the hardware design, so there's no apparent need to code for the 64-bit case.

Here's an attempt to make the serial query VI a little less moronic. I'm pulling into a fixed-size buffer while reading, then doing a single copy afterward to get the string I need. The buffer grows only when a bigger string is called for and never shrinks. This should limit the number of allocations caused by serial queries.
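As a sanity check on the approach, the grow-only buffer described here looks roughly like this in Python terms (a sketch only; `source` stands in for the VISA session, and the names are hypothetical):

```python
class GrowOnlyBuffer:
    """Read buffer that is enlarged only when a larger read is requested
    and never shrinks, limiting allocations across repeated queries."""
    def __init__(self):
        self.buf = bytearray()

    def read_into(self, source, count):
        if count > len(self.buf):
            self.buf = bytearray(count)          # the only allocation path
        # Fill the front of the existing buffer in place, no temp copy:
        n = source.readinto(memoryview(self.buf)[:count])
        return bytes(self.buf[:n])               # single copy to the caller
```

The trade-off is the one copy on the way out; the win is that steady-state reads of the same size allocate nothing.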
ShaunR Posted August 22, 2016 (edited)

Hmm. Isn't the string terminated? The read byte count may be a red herring and not behave as you would expect. It depends on whether the port was configured to use term chars in the serial init.

Edited August 22, 2016 by ShaunR
Stobber Posted August 22, 2016 (Author)

2 hours ago, ShaunR said: Isn't the string terminated? It depends if the port was configured to use term chars in the serial init.

Had to swim far upstream to find the instrument config for this app. Yes, it's terminated by 0xA, as usual. Their query function is definitely looking for a byte count, though. I don't want to have to understand the design goals of the entire API, so I'll stick with the edit I have as long as it doesn't cause any regressions or new bugs.
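The distinction being discussed, reading until the terminator versus reading an exact byte count, can be sketched like this (hypothetical helpers, not the Keithley driver's API; 0x0A is the terminator mentioned above):

```python
def read_until_term(source, term=b"\n", limit=4096):
    """Stop at the terminating character (or at limit/EOF) -- what a
    term-char-enabled port effectively does."""
    out = bytearray()
    while len(out) < limit:
        b = source.read(1)
        if not b:
            break
        out += b
        if b == term:
            break
    return bytes(out)

def read_count(source, count):
    """Read exactly count bytes, terminator or not -- what a query
    driven by a byte count does."""
    return source.read(count)
```

If the port is configured with term chars enabled, a read can return early at 0x0A even when asked for more bytes, which is why the byte count alone can be a red herring.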