Everything posted by Mellroth

  1. I don't think the create/dispose methods are that critical, since you only call them once, whereas the get/set methods are called far more often. To make the create method more or less independent of the number of elements in the array, one can implement a linked list that holds the indices of the free elements, each pointing to the next. Creating a new pointer then just removes the first element from that list, i.e. no search for the next free element, and dispose adds the freed index to the end (see the free-list sketch after this list). I'll see if I can find some old linked list implementation on my disks. /J
  2. I could not agree more. A general native way to use pointers/references in LabVIEW has been on my wish list from day one. Especially now with LabVIEW RT and platforms like cRIO, a reference system would be great in order to minimize memory usage. /J
  3. I just quickly checked v1.1.1 and realized that dispose and obtain_p_from_p need to be updated. These cases do not retrieve the array index correctly. Dispose uses the correct index on the counter array, but not on the data and boolean arrays. Maybe also add a check that the boolean element is set to true in all cases where data is accessed? /J
  4. This thread, as well as others, shows that users are forced to build their own by-reference implementation on top of LVOOP to make use of it. The performance gained is then more or less lost? If the performance of LVOOP with a by-reference implementation like yours (which looks promising, by the way) is higher than that of, say, dqGOOP, then performance cannot be the answer, since an NI-native reference version could do even better in terms of performance. So I agree with bsvingen that I haven
  5. I think you are sending commands at a rate the instrument can't handle. Try increasing the loop time, or make your program use GPIB interrupts (SRQ). By using SRQ, the instrument can assert the SRQ line when it is ready for the next command. /J
  6. Hi, if you are using the frame-API of NI-CAN in LabVIEW, you might want to enable the reentrancy property (look in VI Properties -> Execution) on some of the NI-CAN methods, e.g. ncReadObj and ncWriteObj. Otherwise you will have a dependency between your CAN networks, e.g. the time it takes to output a frame on one network delays all other outputs on any network. If you are using the Channel-API, all methods are already reentrant. /J
  7. LVOOP is efficient because it doesn't require any conversion, at least according to Aristos Queue. But since LVOOP is not available on RT targets at the moment, I can't use it. I do not know exactly how variants are stored in memory, but I think you are pretty close: the data should be flattened and some type info added. I really would like to see NI implement a genericDataType, together with data-to-reference functions. The genericDataType should accept anything without any conversion, and would be very similar to a variant; this feature basically already exists, but only for LVOOP classes. The data-to-reference functions should only mark a wire so that the data is not copied when forked (and should also change the wire's appearance). /J
  8. I liked the way you converted the queue reference to a variant, and then extracted the information out of the variant. The purpose of the test was to see how much overhead, compared to jimi's original post (with type added), was introduced by passing any queue reference in a variant this way. With that in mind, I think the results I posted today are relevant. As I said previously, I still like DataLogRefnums better due to the wiring protection, e.g. in _4 and _5 there is no check that the variant actually holds a queue reference. /J
  9. Try using "Unbundle By Name" / "Bundle By Name" instead of Bundle/Unbundle. With Bundle/Unbundle the cluster order is important, and the terminals will change if you add new elements to the cluster. The "By Name" primitives don't care about cluster order or added elements, as long as the referenced name still exists in the cluster. /J
  10. If you mean my first post, I don't understand it either, but the result of my second test is more what I would expect. A typecast should just change the way a piece of memory is interpreted, whereas conversion to a variant must involve data copying, since the size of the variant differs from that of the reference (see the typecast-versus-variant sketch after this list). There are also 4 VIs that must run on the variant reference, and these add to the overall timings. /J Download File:post-5958-1159251387.vi
  11. I repeated my test from yesterday, this time with the queue size set to 1 in order to rule out memory issues. The test was run 1000 times, and the results were a bit different from yesterday: variants = 100 ms, typecast = 2 ms. Which means that in terms of performance variants do reasonably well compared to typecasting (at least for references). I don't know the reason for the strange result I got yesterday, sorry for that post :headbang: . /J
  12. I too think that those numbers were strange, but I didn't have time to restart my computer to perform the test again. I did restart LabVIEW and ran the test with similar results. The purpose of my test was to confirm that typecasting is faster than variant conversion, not to reject the variant datatype. Actually I had not seen the VIs you used on the variant reference, and I do see them as handy. I will run the test again tomorrow, so stay tuned... /J
  13. I agree that variants can store any type of data and that you can use variants as references. But with DataLogRefnums you get broken wires if you accidentally connect an unsupported wire to the reference input; with variants you won't. Since variants accept all data types, your VI will still run, but with an error, and in some cases you will have a hard time finding that error (see the typed-handle sketch after this list). Regarding performance, typecasting will outperform the variants, even with the additional type info. I put a loop around the enqueue/dequeue operations (reducing the element size to 100). Looping 1000 times takes 62000 ms using variants, and only 3 ms with typecasting. /J Forgot the modified version... Download File:post-5958-1159186325.vi
  14. Jimi, I think that was a pretty cool example, you actually typecast the values contained in the queue by typecasting the reference! The garbage output in the example comes from inputting 64-bit elements and extracting 32-bit elements, hence the strange result with every second element set to 0 (see the DBL-to-I32 sketch after this list). Change the representation from I32 to I64 and the output will be an I64 array, where each element is the DBL value typecast to I64. In my opinion you should stick with the typecasting, mainly because I prefer the protection you get from the DataLogRefnum. /J
  15. I think this is another bug. To solve this it might be enough to add a check of the "Free" boolean in the get/set cases. The bug I was aiming at was that a reused pointer could cause strange errors. That bug also still exists, since the counter will wrap at 256. Using 64-bit values as pointers, with 32 bits as the counter, raises the wrap count to 2^32. /J
  16. The counter is now included in Global Storage__P.vi. This limits the heap size to 2^24 elements (24 bits), but in practice I do not think this should be a problem (see the handle-packing sketch after this list). I also updated the part where the heap is increased, so that 100 additional free elements are added together with the new element. /J Download File:post-5958-1158902910.zip
  17. If I remember correctly, LabVIEW will automatically dispose of the memory allocated by the DSxxx functions when the VI is done. Instead you could use the Application Zone (AZ) functions, as application zone data is kept from call to call. Hope this helps. /J
  18. I agree that it is a serious programming error, but I do not see why we shouldn't try to protect the programmer from making such mistakes. If you use the upper 8-bit value as a counter, the user is more or less protected. The only drawback is that it takes another 8 bits of memory for each pointer storage, plus some additional checks to see whether the pointer is valid. I'll see if I can dig up some old stuff that adds that counter to your implementation, if I find the time that is...
  19. Even though you are correct in that it is a natural feature of pointers, I still think it is an important aspect, as we work with wires in LabVIEW instead of variables; basically each fork copies the pointer value. Each time we try to dereference a pointer I would like to know whether it points to the correct data (in the current implementation you can actually return the data, empty, of a disposed pointer without any error). Compare with C: if you create a variable in C and get the associated pointer/memory address, that pointer will always refer to the same value as long as the variable is valid. When the variable is no longer valid, it is not likely that the next variable will reuse the same memory address, and acting on a pointer that is not valid might lead to a crash. To get around this you could let the highest byte in the pointer be a counter (wrapping at 256), indicating usage order. Check the pointer against this usage counter, and you will not see the same pointer value again within 256 turns. Or use I64 values as pointers, with the upper I32 as the counter and the lower I32 as the pointer value (see the handle-packing sketch after this list). /J
  20. Hi, I think there is a "bug" in that pointer implementation. The problem is that pointers are generated by finding the next free index. This means that if you dispose of pointer p and immediately obtain a new pointer q, p and q might end up with the same pointer value. Dereferencing the disposed pointer p would then still be possible, and would return the value of pointer q. Please see the attached *.llb. /J Download File:post-5958-1158762581.zip
  21. Hi, I think you're on the right track. With the delimiter set to ',' and the format string set to %d it should work (see the parsing sketch after this list). Please try this little VI; I also removed the first line from the data since it looked like header info. To present this in a graph you should be able to just wire the 2D array to a waveform graph, but transpose the array first so that the X, Y and Z values are seen as different data sets. If you do not want the first column (0, 1, 2, ...), just remove that column. /J Download File:post-5958-1157550703.vi
  22. In LabVIEW 8.20 you can wire TRUE to the "UTC format" input of the "Format Date/Time String" function. Just be aware that the format string does not work if your time difference is greater than or equal to 24 h; in that case the hours will wrap. /J
  23. I don't know if anyone has mentioned it, but you can save memory by using subVIs, assuming that a subVI is used more than once in your system. With a decreased memory footprint, you will also notice that the load time of large applications decreases. /J
  24. Would it be unreasonable to also assume that you'll be releasing some kind of "by reference" mechanism at the same time? From my perspective, a good "by reference" model is more or less required on the RT targets, since memory handling there is of much greater importance than on desktop targets. Have you thought of creating two complementary methods (ToReference and FromReference) that could take anything (not limited to a LV class) as input and return a reference to that data, much like Flatten/Unflatten, ToVariant/FromVariant, etc.? In principle I think you already have something very similar in the Queue and Notifier implementations. /J
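
Editor's sketch for post 1: the free-list idea can be expressed in C as a minimal sketch (the original is of course a LabVIEW diagram; the array names and the fixed heap size are illustrative only). Obtaining a pointer pops the first free index and disposing appends it to the end, so neither operation has to search the data array:

```c
#include <stdio.h>

#define HEAP_SIZE 8

static double data[HEAP_SIZE];        /* the actual storage                 */
static int    next_free[HEAP_SIZE];   /* linked list of free indices        */
static int    free_head = 0;          /* first free index, -1 when heap full */
static int    free_tail = HEAP_SIZE - 1;

static void heap_init(void)
{
    /* chain every element onto the free list: 0 -> 1 -> ... -> N-1 */
    for (int i = 0; i < HEAP_SIZE - 1; i++)
        next_free[i] = i + 1;
    next_free[HEAP_SIZE - 1] = -1;
    free_head = 0;
    free_tail = HEAP_SIZE - 1;
}

/* O(1) "create": pop the first index from the free list */
static int heap_obtain(void)
{
    if (free_head < 0)
        return -1;                    /* heap exhausted */
    int idx = free_head;
    free_head = next_free[idx];
    if (free_head < 0)
        free_tail = -1;
    return idx;
}

/* O(1) "dispose": append the freed index to the end of the free list */
static void heap_dispose(int idx)
{
    next_free[idx] = -1;
    if (free_tail >= 0)
        next_free[free_tail] = idx;
    else
        free_head = idx;
    free_tail = idx;
}

int main(void)
{
    heap_init();
    int p = heap_obtain();
    data[p] = 3.14;
    printf("pointer %d holds %g\n", p, data[p]);
    heap_dispose(p);
    return 0;
}
```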
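Editor's sketch for post 10 (typecast versus variant): the point that a typecast only reinterprets existing bytes while a variant conversion has to copy the data and attach type information can be illustrated as follows. The variant_t struct is a made-up stand-in for illustration, not how LabVIEW actually stores variants:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A made-up "variant": a type tag plus a heap copy of the flattened data. */
typedef struct {
    int      type_tag;
    size_t   size;
    uint8_t *flat;
} variant_t;

static variant_t to_variant(const void *src, size_t size, int type_tag)
{
    variant_t v;
    v.type_tag = type_tag;
    v.size     = size;
    v.flat     = malloc(size);
    memcpy(v.flat, src, size);        /* allocation plus a full data copy */
    return v;
}

int main(void)
{
    uint32_t refnum = 0xDEADBEEF;     /* pretend queue reference */

    /* "Typecast": reinterpret the same 4 bytes as another type,
       no heap allocation and no extra type information.          */
    int32_t as_i32;
    memcpy(&as_i32, &refnum, sizeof as_i32);

    /* "To Variant": allocate, copy and tag - inherently more work. */
    variant_t v = to_variant(&refnum, sizeof refnum, /*type_tag=*/42);

    printf("typecast view: %d, variant payload: %zu bytes + tag\n", as_i32, v.size);
    free(v.flat);
    return 0;
}
```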
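Editor's sketch for post 13 (wiring protection): a rough C analogue of the DataLogRefnum-versus-variant argument is a dedicated handle type, which the compiler checks, versus a void * interface, which accepts anything and only fails at run time. The handle types and functions below are invented for illustration:

```c
#include <stdio.h>

/* Distinct handle types: mixing them up is a compile-time error,     */
/* much like getting a broken wire on a typed DataLogRefnum input.    */
typedef struct { int id; } queue_ref;
typedef struct { int id; } notifier_ref;

static void enqueue(queue_ref q, double value)
{
    printf("enqueue %g on queue %d\n", value, q.id);
}

/* A "variant-like" interface: accepts any reference, no compile-time check. */
static void enqueue_any(void *ref, double value)
{
    queue_ref *q = (queue_ref *)ref;   /* a wrong ref type is only caught, if at all, at run time */
    printf("enqueue %g on queue %d\n", value, q->id);
}

int main(void)
{
    queue_ref    q = { 1 };
    notifier_ref n = { 2 };

    enqueue(q, 1.0);
    /* enqueue(n, 1.0);     <- the compiler rejects this, like a broken wire */
    enqueue_any(&n, 1.0);   /* compiles and runs, but silently uses the wrong reference */
    return 0;
}
```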
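Editor's sketch for post 14 (DBL viewed as I32): the "every second element is 0" effect is easy to reproduce outside LabVIEW, because a small integer-valued DBL has one of its two 32-bit words equal to zero. Which word comes first depends on byte order (LabVIEW's Type Cast uses big-endian flattening; the sketch below just shows the machine's native layout):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double  dbl[4] = { 1.0, 2.0, 3.0, 4.0 };  /* 4 x 64-bit elements          */
    int32_t i32[8];                           /* reinterpreted as 8 x 32 bits */

    /* "Typecast" the memory: same bytes, smaller element size. */
    memcpy(i32, dbl, sizeof dbl);

    for (int i = 0; i < 8; i++)
        printf("i32[%d] = %d\n", i, i32[i]);

    /* For small integer-valued doubles the low 32 mantissa bits are all   */
    /* zero, so every second I32 prints as 0 - the "garbage" in the post.  */
    return 0;
}
```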
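Editor's sketch for posts 15, 16, 18 and 19 (handle packing): these posts revolve around embedding a usage counter in the pointer value so that a disposed-and-reused index can be detected. One way to express that idea is to pack an 8-bit counter and a 24-bit index into a single 32-bit handle, which is exactly what limits the heap to 2^24 elements; a 64-bit handle with the counter in the upper I32 raises the wrap count to 2^32. The names below are illustrative, not taken from the posted VIs:

```c
#include <stdint.h>
#include <stdio.h>

#define INDEX_BITS 24u
#define INDEX_MASK ((1u << INDEX_BITS) - 1u)     /* heap limited to 2^24 elements */

/* Pack an 8-bit usage counter and a 24-bit index into one 32-bit handle. */
static uint32_t make_handle(uint32_t index, uint8_t counter)
{
    return ((uint32_t)counter << INDEX_BITS) | (index & INDEX_MASK);
}

static uint32_t handle_index(uint32_t h)   { return h & INDEX_MASK; }
static uint8_t  handle_counter(uint32_t h) { return (uint8_t)(h >> INDEX_BITS); }

/* Per-element usage counters, bumped every time an index is disposed. */
static uint8_t usage[256];   /* small demo heap */

/* A handle is valid only if its embedded counter matches the current one. */
static int handle_valid(uint32_t h)
{
    return handle_counter(h) == usage[handle_index(h)];
}

int main(void)
{
    uint32_t idx = 5;

    uint32_t p = make_handle(idx, usage[idx]);   /* obtain pointer p       */
    usage[idx]++;                                /* dispose p ...          */
    uint32_t q = make_handle(idx, usage[idx]);   /* ... then reuse index 5 */

    printf("p valid: %d, q valid: %d\n", handle_valid(p), handle_valid(q));
    /* p is rejected even though it carries the same index as q, so a     */
    /* stale handle cannot silently return q's data (wrapping at 256).    */
    return 0;
}
```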
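Editor's sketch for post 21 (parsing): the advice there (comma delimiter, %d format, drop the header line, ignore the first column) corresponds to something like the C sketch below; the file name and the four-column layout are assumptions made for the example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define COLS 4   /* index, X, Y, Z - assumed layout from the post */

int main(void)
{
    FILE *fp = fopen("data.csv", "r");   /* hypothetical input file */
    if (!fp) { perror("data.csv"); return 1; }

    char line[256];
    /* Skip the first line, which looked like header info. */
    if (!fgets(line, sizeof line, fp)) { fclose(fp); return 1; }

    while (fgets(line, sizeof line, fp)) {
        long row[COLS];
        int  col = 0;
        /* Comma-delimited integers, i.e. delimiter "," and format %d. */
        for (char *tok = strtok(line, ","); tok && col < COLS; tok = strtok(NULL, ","))
            row[col++] = strtol(tok, NULL, 10);

        /* Drop the first column (0, 1, 2, ...) and keep X, Y, Z. */
        if (col == COLS)
            printf("X=%ld Y=%ld Z=%ld\n", row[1], row[2], row[3]);
    }
    fclose(fp);
    return 0;
}
```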