Mellroth

Posts posted by Mellroth

  1. I could not agree more.

    A general native way to use pointers/references in LabVIEW has been on my wish list from day one.

    Especially now with LabVIEW RT and platforms like cRIO, a reference system would be great in order to minimize memory usage.

    /J

    Of course it could be. NI has implemented by-value classes that should be able to provide good performance when one doesn't need by-reference objects. These classes can take full advantage of dataflow programming optimizations (or at least they could). If one needs by-reference classes, then instead of implementing all classes by-reference, I think that a general by-reference system for both data and object buffers should be created. This way you could take advantage of by-value classes when they provide enough flexibility for your purposes. When this is not the case you could create a reference to the class using a "REF" operator. Because in a concurrent multithreading system like LabVIEW, using references requires concurrency control such as semaphores to serialize simultaneous access to the reference, the same performance cannot be achieved as with dataflow objects.

    So I don't see any reason why NI should force developers to use by-reference objects if better-performing by-value objects suffice for a particular application. I rather think that there should be a way, similar to C++ OOP, to have both by-value objects and variables as well as by-reference objects and variables. So what I really hope is that NI will implement a general reference framework for all buffers, controls and constants, no matter whether these are objects or normal variables.

  2. I just quickly checked v1.1.1 and realized that dispose and obtain_p_from_p need to be updated.

    These cases do not retrieve the array index correctly. Dispose uses the correct index on the counter array, but not on the data and boolean arrays.

    Maybe also add a check that the boolean element is set to true in all cases where data is accessed?

    /J

  3. This thread, as well as others, shows that users are forced to build their own by-reference implementation on top of LVOOP to make use of it. Isn't the performance gained then more or less lost?

    If the performance of LVOOP with a by-reference implementation like yours (which looks promising, by the way) is higher than that of, say, dqGOOP, then performance cannot be the answer, since an NI-native reference version could do even better in terms of performance.

    So I agree with bsvingen that I haven

  4. Hi,

    if you are using the frame-API of NI-CAN in LabVIEW, you might want to enable the reentrancy property (look in VI-properties->Execution) on some of the NI-CAN methods, e.g. ncReadObj and ncWriteObj. Otherwise you will have a dependency between your CAN networks, e.g. the time it takes to output a frame on one network delays all other outputs on every network.

    If you are using the Channel-API, all methods are already made reentrant.

    /J

  5. LVOOP is efficient because it doesn't require any conversion, at least according to Aristos Queue.

    But since LVOOP is not available on RT targets at the moment, I can't use it.

    I do not know how variants are stored in memory, but I think you are pretty close. Data should be flattened and some info is added.

    I really would like to see NI implement a genericDataType, and data to reference functions.

    The genericDataType should accept anything without any conversion, and should be very similar to a variant. This feature basically already exists, but only for LVOOP classes.

    Data to reference functions should only mark a wire so that data is not copied when forked (should also change appearance).

    /J

    My experience with LVOOP for simple data is that it's actually more efficient than typecasting (and variants). I don't know why; maybe the reason is that with LVOOP there is no typecasting at all? It would be nice to know exactly what is going on. I imagine it works like this:

    Typecast: data is first flattened, then unflattened to the actual type

    Variant: data is flattened and the type info is stored along with the flattened data ??

    LVOOP: data is just set into the cluster ??

  6. I liked the way you converted the queue reference to variant, and then extracted the information out of the variant.

    The purpose of the test was to see how much overhead was introduced, compared to jimi's original post (with type added), by using this approach to pass any queue reference in a variant.

    With that in mind, I think the results I posted today are relevant.

    As I said previously, I still like DataLogRefnums better due to the wiring protection; e.g. in _4 and _5 there is no check that the variant actually holds a queue reference.

    /J

    But this will be like comparing apples and oranges; besides, you are not sending only a ref anymore. Look at my examples 4 and 5, which are a much better comparison.
  7. Try using the "Unbundle By Name" / "Bundle By Name" instead of the Bundle/Unbundle.

    With Bundle/Unbundle the cluster order is important, and the terminals will change if you add new elements to the cluster.

    The "By Name" primitives don't care about cluster order or newly added elements, as long as the referenced name still exists in the cluster.

    /J

    Hello Ton.

    Thanks for your reply. At first I tried to apply what you said in my program and got lost. Then I went back to your display picture and tried to set that up.

    I do have a question about your block diagram though.

    How did you get your cluster to have the type display "Status 2"? Mine will only display abc or I32 for example (see my block diagram).

    Thanks,

    Philip

  8. If you mean my first post, I don't understand it either, but the result of my second test is more in line with what I would expect.

    A typecast should just change the way a piece of memory is interpreted, but conversion to variants must involve data copying since the size of the variant is different from the reference.

    There are also four VIs that must be run on the variant reference; these also add to the overall timings.

    /J

    I don't understand how you get those results. Converting a queue ref to a variant is just as efficient as typecasting it. Do you do something other than converting the ref?

    Download File:post-5958-1159251387.vi

  9. I repeated my test from yesterday, this time with queue size set to 1 in order to rule out memory issues.

    The test was run 1000 times, and the results were a bit :oops: different from yesterday's.

    Variants = 100ms

    typecast = 2ms

    This means that in terms of performance, variants hold up well against typecasting (at least for references).

    I don't know the reason for the strange result I got yesterday, sorry for that post :headbang: .

    /J

  10. I too think that those numbers were strange, but I didn't have time to restart my computer to perform the test again.

    I did restart LabVIEW and ran the test, with similar results.

    The purpose of my test was to confirm that typecasting is faster than variant conversion, not to reject variant datatypes.

    Actually I had not seen the VIs you used on the variant reference, and I do see them as handy.

    I will run the test again tomorrow, so stay tuned...

    /J

  11. I agree that variants can store any type of data and that you can use variants as references. But with DataLogRefnums you get broken wires if you accidentally connect an unsupported wire to the reference input; with variants you won't.

    Since variants accept all data types, your VI will still run, but with errors, and in some cases you will have a hard time finding such an error.

    Regarding performance, typecasting will outperform the variants, even with additional type info.

    I put a loop around the enqueue/dequeue operations (reducing element size to 100).

    Looping 1000 times takes 62000ms using variants, and only 3 ms with typecasting.

    /J

    Forgot the modified version...

    With variants you get a lot of other possibilities for automation later on, for instance in conversion. See the attached VI where int32 arrays are automatically cast to double arrays (the default type). It will also be safer because you can raise errors when wrong queue/ref types accidentally come in.

    Download File:post-4885-1159183482.vi

    Download File:post-5958-1159186325.vi

  12. Jimi,

    I think that was a pretty cool example; you actually typecast the values contained in the queue by typecasting the reference!

    The garbage output in the example is due to inputting 64-bit elements and extracting 32-bit elements; hence the strange result with every second element set to 0.

    Change representation from I32 to I64 and the output will be an I64 array, where each element is the DBL value typecasted to I64.

    In my opinion you should stick with the typecasting, mainly because I prefer the protection you get from the DataLogRefnum.

    /J

  13. I think this is another bug. To solve this it might be enough to add a check of the "Free" boolean in the get/set cases.

    The bug I was aiming at was that a reused pointer could cause strange errors. This bug also still exists, since the counter will wrap at 256.

    Using 64-bit values as pointers, with 32 bits as the counter, raises the wrap number to 2^32.

    /J

    I just noticed that the "bug" is still there in some circumstances. The counter is only updated when using "Obtain pointer". This makes it possible to use a disposed pointer without an error if the get/set methods are used before obtaining a new pointer. I think this can be fixed by updating the counter when disposing the pointer.
  14. I agree that it is a serious programming error, but I do not see why we shouldn't try to protect the programmer from such mistakes.

    If you use the upper 8-bit value as a counter, the user is more or less protected.

    The only drawback is that it takes another 8 bits of memory per stored pointer, plus some additional checks to see if the pointer is valid.

    I'll see if I can dig up some old stuff that adds that counter to your implementation, if I get the time, that is...

  15. Even though you are correct that it is a natural feature of pointers, I still think it is an important aspect, since in LabVIEW we work with wires instead of variables and each wire fork copies the pointer value.

    Each time we try to de-reference a pointer I would like to know whether it is the correct data (in the current implementation you can actually return the data, now empty, of a disposed pointer without any error).

    Compare with C:

    If you create a variable in C and get the associated pointer/memory address, that pointer will always refer to the same value as long as the variable is valid. When the variable is no longer valid, it is not likely that the next variable will reuse the same memory address, and acting on an invalid pointer might lead to a crash.

    To get around this you could let the highest byte of the pointer be a counter (wrapping at 256), indicating usage order. Check the pointer against this usage counter, and you will not see the same pointer value for 256 turns. Or use I64 values as the pointer, with the high I32 as the counter and the low I32 as the pointer value.

    /J

  16. Hi,

    I think there is a "bug" in that pointer implementation.

    The problem is that pointers are generated by finding the next free index. This means that if you dispose of pointer p and directly obtain a new pointer q, p and q might have the same pointer value. Dereferencing the disposed pointer p would then still be possible, and would return the value of pointer q.

    Please see the attached *.llb.

    /J

    Download File:post-5958-1158762581.zip

  17. Hi,

    I think you're on the right track. With delimiter set to ',' and format string set to %d it should work.

    Please try this little VI, I also removed the first line from the data since it seemed like header info.

    To present this in a graph you should be able to just wire the 2D-array to a Wfm-graph, but transpose the array first so that X,Y,Z values are seen as different data-sets.

    If you do not want the first column (0, 1, 2, ...), just remove that column.

    /J

    Download File:post-5958-1157550703.vi

  18. Would it be unreasonable to also assume that you'll be releasing some kind of "by reference" mechanism at the same time?

    From my perspective, a good "by reference" model is more or less needed on the RT-targets, since the memory handling is of much greater importance than on the desktop targets.

    Have you thought of creating two complementary methods (ToReference and FromReference) that could take anything (not limited to a LV class) as input and return a reference to that data, much like Flatten/Unflatten/ToVariant/FromVariant etc.? In principle I think you already have something very similar in the Queue and Notifier implementations.

    /J

    No. At the current version LVClasses cannot be downloaded to any of LV's targets. They execute exclusively on the desktop platforms (Mac, Linux, Windows). They are completely portable across those platforms. The targets, well, ... R&D never promises features. But, in the words of one developer some years ago... "It would not be unreasonable to assume that we might be working on something like that."
  19. Aristos,

    I thought flattened data was a way to store arbitrarily complex structures in a contiguous piece of memory, but is that not the way data is stored in memory when such a complex structure is passed on a wire?

    Many flavours of GOOP floating around use some kind of flattened data as storage, and each flatten/unflatten operation requires memory allocation and copying?

    So if NI could make LabVOOP work with the native datatypes instead of flattened data, we would not see these memory allocations, with increased access speed and a smaller memory footprint as the result.

    /J

    I don't understand your question. What exactly is the difference between "flatten data" and whatever it is you think NI has access to? I'm not trying to be difficult, but I'm not sure what optimization you see.
