Posts posted by CharlesB

  1. PS> I'm a bit stuck on this at the moment because of a problem with "Variant to Data" being too "dumb" when it comes to child classes. If one has a Variant containing a Parent-class datatype holding a child-class object, and you attempt to use "Variant to Data" to cast this to a Child-class wire, it throws a type mismatch error, even though such a conversion is easily possible. This is a problem when the library user wants to use clusters containing their own child classes. There are a couple of workarounds, but both are ugly.

     

    What I do in my implementation is serialize the class type; if it is present when reading back, I instantiate the real class using LabVIEW's "Get LV Class Default Value By Name". It works neatly in my case. Serializing the class type is an option given to the "Data to JSON" VI, so classes that don't have children don't get their type serialized.

     

    A serialized class looks like this:

    {
      "Timings": {
        "@type": "ChildTimings",
        "@value": {
          "Camera delay": 0,
          "Exp time": 0.001,
          "Another field": "3.14159"
        }
      },
      "Name": "Charles"
    }

    Attached is my version of the package implementing this (don't pay attention to the version number; it was forked from a 1.3.x version). The code is in JSON Object.lvclass:Set Class Instance.vi and JSON Object.lvclass:Get as class instance.vi.
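
    For readers more at home in text-based languages, here is a rough C++ sketch of the same idea (all names here are hypothetical, not the actual code in the .vip): a registry maps the serialized "@type" name to a factory, playing the role of LabVIEW's "Get LV Class Default Value By Name":

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    // Hypothetical base class for anything rebuilt from JSON.
    struct Serializable {
        virtual ~Serializable() = default;
    };

    // Registry: "@type" string -> factory returning a default instance,
    // the analogue of "Get LV Class Default Value By Name".
    std::map<std::string, std::function<std::unique_ptr<Serializable>()>>& registry() {
        static std::map<std::string, std::function<std::unique_ptr<Serializable>()>> r;
        return r;
    }

    struct ChildTimings : Serializable {
        double cameraDelay = 0.0;
        double expTime = 0.001;
    };

    // Register the child class under the name that will be stored in "@type".
    const bool childRegistered = [] {
        registry()["ChildTimings"] = [] { return std::make_unique<ChildTimings>(); };
        return true;
    }();

    // On deserialization, look up the concrete class by name, then fill its
    // fields from the "@value" object (field filling omitted here).
    std::unique_ptr<Serializable> instantiateByName(const std::string& typeName) {
        auto it = registry().find(typeName);
        return it != registry().end() ? it->second() : nullptr;
    }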

     

    Hope you won't find it ugly :-)

     

    lava_lib_json_api-1.4.0.26.vip

  2. Here's what I've done for this in my implementation.

     

    I had the same need: big configurations with nested objects that needed to be serializable. I created a "JSON serializable" class that has "Data to JSON" and "JSON to data" methods taking JSON values as arguments. My classes inherit from it and de/serialize their private data by overriding these methods, with the option of outputting the class name along with the data. JSON.lvlib is modified to handle these classes on get/set JSON. I used the @ notation to output the class name, resulting in this kind of output:

    "Timings": {
      "@type": "Timings",
      "@value": {
        "Exp time": 0.003,
        "Piezo": {
          "Modulation shape": "Square",
          "Nb steps": 4
        }
      }
    }

    This is the code for deserializing and serializing: http://imgur.com/a/afaKj

     

    The drawback is that I need to modify JSON.lvlib, and I didn't take the time to update my fork to stay in line with the development of the JSON library. Also it needs LabVIEW 2013, because of the need for "Get LV Class Default Value By Name".
    The advantage is that the class name is serialized and loaded back at deserialization. It allows saving/loading child classes of a serializable class, giving more flexibility to the configuration.
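
    As a rough text-language analogy (hypothetical names, not the actual JSON.lvlib API), the design corresponds to a serializable base class whose methods each child overrides, with the library adding the "@type"/"@value" envelope around the child's own data:

    #include <string>

    // Hypothetical base class: children override the two conversion methods,
    // the analogues of the overridable "Data to JSON" and "JSON to data" VIs.
    struct JsonSerializable {
        virtual ~JsonSerializable() = default;
        virtual std::string typeName() const = 0;              // written to "@type"
        virtual std::string dataToJson() const = 0;            // "Data to JSON" analogue
        virtual void jsonToData(const std::string& json) = 0;  // "JSON to data" analogue
    };

    // Wrapping a serialized object together with its class name.
    inline std::string withTypeEnvelope(const JsonSerializable& obj) {
        return "{ \"@type\": \"" + obj.typeName() + "\", \"@value\": " + obj.dataToJson() + " }";
    }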

     

    If someone is interested, I can work on updating the fork with the latest version of JSON.lvlib and publish it.

  3. Could it be that generics are soon to enter the LabVIEW world?

     

    Generics may be a solution to drjdpowell's request, but as for mine, the compiler just needs to be a little smarter to figure out what type of object is in my cluster. In this case it's just compile-time type determination.
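
    As a loose analogy in a text-based language (this is not LabVIEW code, just an illustration of the compile-time idea): generic/template code propagates the concrete type wired in by the caller, even when it sits inside a composite type.

    #include <string>
    #include <utility>

    struct Parent { virtual ~Parent() = default; };
    struct Child : Parent { std::string extra; };

    // A generic pass-through keeps the caller's concrete type T, even when T
    // sits inside a composite (here a std::pair, standing in for a cluster).
    template <typename T>
    std::pair<T, int> passThrough(std::pair<T, int> cluster) {
        return cluster;  // no downcast needed afterwards
    }

    int main() {
        std::pair<Child, int> in{Child{}, 42};
        auto out = passThrough(in);       // out.first is still a Child
        out.first.extra = "still typed";  // type was determined at compile time
    }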

     

    In 8 years, that's the first time I've heard the cluster/array request. It comes in sometimes for DVRs and for VI Server class refnum types, but infrequently enough that NI has never acted on it.

    To be clear: it is doable, there just hasn't been pressure to do it.

     

    The fact that this type of request arises may be a good sign that LabVIEW users are now more mature and at ease with object-oriented programming, and thus more demanding :rolleyes:

  4. I love how useful the "Preserve runtime class" node is when I wire an object to a more generic subVI, without having to downcast it afterwards... (the first snippet is the subVI, and the second is how I use it)

     

    post-1401-0-78332200-1425475065.pngpost-1401-0-15647500-1425474979.png

     

    But how do I make it work with a cluster of objects?

     

    I tried this in the SubVI:

     

    post-1401-0-97427900-1425474565_thumb.pn

    But no success: unbundling after my subVI call gives me the base class, and I have to downcast it.

     

    post-1401-0-15237500-1425474605_thumb.pn

     

    Any thoughts on this problem?

  5. Name: Triple buffer

    Submitter: CharlesB

    Submitted: 21 Oct 2014

    Category: *Uncertified*

    LabVIEW Version: 2011

    License Type: BSD (Most common)

    (initial discussion, with other implementations here)

     

    Needing to display large images at high performance, I wanted to use triple buffering in my program. This type of acquisition allows acquiring large data in buffers and using it without copying images back and forth between producer and consumer.

     

    This way the consumer thread doesn't wait if a buffer is ready, and the producer works at max speed because it never waits or copies any data.

    If the consumer makes the request when a buffer is ready, that buffer is atomically turned into a "locked" state. If a buffer isn't ready, the consumer waits for it and atomically locks it when it is ready.

     

    This class allows a producer loop to run at its own rate, independently from the consumer. It is useful when the producer is faster than the consumer and the consumer doesn't need to process all the data (like a display).

     

    How to use

    Buffers are provided at initialization, through refnums. They can be DVRs, or IMAQ refnums, or any pointer to some memory area.

    Once initialized, the consumer gets refnums with "get latest or wait". The refnum given is locked and guaranteed to stay uncorrupted by the producer loop. If new data has been produced between two consumer calls, the call doesn't wait and returns the latest one. If not, it waits for the next data.

    At each producer iteration, the producer starts with "reserve data", which returns the refnum to fill. Once data is ready, it calls "reserved data is ready". These two calls never wait, so the producer always runs at its fastest pace.

    Implementation details

    A condition variable is shared between producer and consumer. This variable is a cluster holding the indexes "locked", "grabbing", and "ready". The condition variable has a mechanism that allows acquiring mutex access to the cluster, and atomically releasing it and waiting. When the variable is signaled by the producer, the mutex is re-acquired by the consumer. This guarantees to the consumer that the variable isn't accessed by the producer between the end of the consumer's wait and the lock by the consumer.

    Reference for CV implementation: "Implementing Condition Variables with Semaphores", Andrew D. Birrell, Microsoft Research
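
    For readers who want the flavour of Birrell's construction in a text-based language, here is a stripped-down, single-waiter sketch in C++20 (std::counting_semaphore standing in for the semaphore; this is an illustration, not the LabVIEW code in the package):

    #include <mutex>
    #include <semaphore>

    // Single-waiter condition variable built from a mutex and a semaphore.
    // wait() atomically releases the caller's mutex, blocks, then re-acquires
    // the mutex before returning, which is the property the triple buffer needs.
    class SingleWaiterCondition {
    public:
        void wait(std::unique_lock<std::mutex>& lock) {
            waiting_ = true;      // set while still holding the mutex
            lock.unlock();        // release the protected cluster...
            wakeup_.acquire();    // ...block until signalled...
            lock.lock();          // ...and re-acquire before returning
        }

        // Must be called while holding the same mutex.
        void signal() {
            if (waiting_) {
                waiting_ = false;
                wakeup_.release();
            }
        }

    private:
        bool waiting_ = false;                 // protected by the caller's mutex
        std::counting_semaphore<1> wakeup_{0};
    };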


  6.  IMAQ GetImagePixelPtr? That only retrieves a pointer. Are you then using IMAQ SetPixelValue to write individual pixels to an IMAQ ref?

     

    No, I pass the pixel pointer to the DLL function, which can then read or write the raw pixels. This is what gives the best performance.

     

    To make sure of this, I converted my program to LV2014, replacing my DLL calls with native G array operations and the new ImageToEDVR along with an IPE; performance goes down by 50%!
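
    For illustration only, the exported DLL function has roughly this shape (a hypothetical Windows/MSVC signature; the real Matrox-side code isn't shown here). The pointer and geometry come from IMAQ GetImagePixelPtr:

    #include <cstdint>
    #include <cstring>

    // Hypothetical DLL export: LabVIEW passes the pointer obtained from
    // IMAQ GetImagePixelPtr plus the image geometry, and the function writes
    // a full frame directly into the IMAQ buffer, line by line.
    extern "C" __declspec(dllexport)
    int32_t FillImageFromGrabber(uint8_t* pixels,    // IMAQ U8 pixel pointer
                                 int32_t width,      // pixels per line
                                 int32_t height,     // number of lines
                                 int32_t lineWidth)  // bytes per line (incl. border)
    {
        for (int32_t y = 0; y < height; ++y) {
            // Stand-in for the grabber's block copy of one acquired line.
            std::memset(pixels + y * lineWidth, static_cast<int>(y & 0xFF), width);
        }
        return 0;  // status code returned to the CLFN
    }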

     

    CharlesB, on 23 Oct 2014 - 3:52 PM, said:

        Everything looks like there's a global lock on IMAQ functions?

     

    Not IMAQ functions. IMAQ references. This maybe explains my confusion about corruption, yes. IMAQ handles resource locking transparently (same as global variables, local variables, and any other shared resource we use in LabVIEW), so we never have to worry about data corruption (unless we cock up the IMAQ names, of course) ;) Once you have an image inside an IMAQ ref, never, never manipulate it outside of IMAQ (use only the IMAQ functions like copy, extract, etc.). Going across the IMAQ boundary (either direction) causes huge performance hits. As AQ's signature states, "we write C++ so you don't have to".

     

    This apparent "global lock" was solved: some DLL functions were left calling in the UI thread. I have no performance issue with manipulating IMAQ data outside IMAQ functions, so I would say instead that, when done correctly, it can solve performance issues without incurring Vision runtime license costs. I concede that it has the drawback of more complexity, as it adds DLL calls and needs maintainers that know both LabVIEW and C++.

     

    If you are pixel bashing singly into an IMAQ reference, pretending it is just an array of bytes in memory somewhere, using IMAQ SetPixelValue, then you will never achieve performance. Get it in with one block copy inside the DLL and never take the data out into values or arrays. Use the IMAQ functions to manipulate and display the data. This will cure any corruption as you will only receive complete frames via your DLL. If you want, you can implement your triple buffering inside the DLL. Will it be fast enough? Maybe. This is where using NI products has the advantage, as the answer would be a resounding "easily".

     

    Anecdotally, with yours and my methods I can easily get 400 FPS using a 1024x1024 U8 greyscale image in multiple viewers. I'm actually simulating acquisition with an animation on a cruddy ol' laptop running all sorts of crap in the background. If I don't do the animation and just do a straight buffer copy from one image to another, I get thousands of frames/sec. However, I'm not trying to put it into an IMAQ ref a pixel at a time.

     

    I haven't tested the IMAQ functions for manipulating image pixels, because at design time a few years ago I didn't want to pay for the Vision runtime license on each deployment. But they probably have similar or better performance.

     

    As for your benchmark, I'm certainly doing more stuff than just a buffer copy, so it's hard to compare.

  7. Are the Matrox dll calls thread safe? Are you making any dll calls in your image processing? Is it possible they are executing in the user interface thread?

     

    Ooh, thanks! I had some processing CLFNs that were specified to run in the UI thread! Now the producer loop frequency is more independent of the display loop.

  8. Do you have 2012 or later as an option? If so, the IMAQ ImageToEDVR VI will be available.

     

    I'm not sure I understand how it would help here.

     

    This has been really bugging me in that if you have the image in an IMAQ ref (and your 3buff seems to imply you do), how are you able to write a partial frame to get corruption? Are you absolutely, super-duper, positively sure that you don't have two IMAQ images that are inadvertently identically named? That would cause corruption and may not be apparent until higher speeds.

     

    Yes, perfectly sure. Buffers are allocated with different names everywhere, and filled in DLL functions, using IMAQ GetImagePixelPtr.

     

    I have made some benchmarks, measuring both consumer and producer frequency, comparing the 3 solutions. Display is now faster, now that I have dumped the XControl I used to embed the IMAQ control, which I believe was causing corruption.

    1. Trivial solution: a 1-element queue, enqueued by the producer. The consumer previews the queue, displays, and empties the queue, blocking the producer during display
    2. Two-queue solution (by bbean)
    3. Condition variable (mine)

     

    All three solutions have similar performance in all my scenarios, except when I limit the consumer loop to 25 Hz; in this case the producer in 1. is also limited to 25 Hz. The trivial solution shows image corruption in some cases.

     

    Except for this case, I never see the producer loop being faster than the consumer; they both stay at roughly 80 Hz, even though there is some margin: when I hide the display window, the producer goes up to its max speed (200 Hz in this benchmark). When the CPU is doing other things, the rates go down to the same values at the same time, as if both loops were synchronized. This is quite strange, because in both 2. and 3. the producer loop rate should be independent from the consumer.

     

    The consumer really only does display, so there's no reason it would slow down the producer like this... Everything looks like there's a global lock on IMAQ functions? Everything is shared reentrant. The producer is part of an actor with the execution system set to "data acquisition", and the consumer is in the main VI.

  9. CharlesB...I can't figure out where your race condition is.  Also am not sure why you need all the extra mechanisms (Semaphore, DVR, Status) when you can achieve the same thing using 2 simple queues as shown in my example.  Plus the 2 queue approach guarantees you can not work on the image being displayed until it is put back in the camera/processing pipeline.  IMHO it is a simpler and easier to debug solution.  The other thing my solution does is allow you to do the image processing in "parallel" to your acquisition.

     

    It may be a bit overkill, but the DVR and semaphore are here to protect against race conditions. I actually just translated the code shown in the paper from MS Research. It's important that the operation "unlock, then wait, then re-lock" is atomic, so that the producer doesn't read data in between; otherwise you have inconsistent operation...

     

    Yes, the two-queue approach is simpler, and it also works, but it's also interesting to have a G implementation of the condition variable, as this pattern may be helpful in some cases. I agree it's not aligned with the usual paradigm in LabVIEW, but overall it was a good exercise :cool: Also, I have a small performance gain with the CV version of the triple buffer.

     

    Maybe you can implement a condition variable using fewer synchronization mechanisms; I'd have to think about it.

     

    You'll need to mark your Image Indicator as "Synchronous Display" if you want it to display before the Producer overwrites the buffer. Indicators are asynchronous by default and update at something like 60 Hz, slower than your 400 frames/second.

     

    BTW, I can’t see how this code would interface with some other process doing the main application work on all 400 frames.  What do you do with the full 400 frames?

     

    Thanks, I didn't know about the synchronous display stuff. The producer is actually doing other processing tasks with the frames, and needs to spend as little time as possible on the display, which is secondary compared to the overall acquisition rate, so I need display-related stuff to be wait-free in the producer.

  10. UPDATE: Victory!! The corruption problem wasn't related to the triple buffering but to my display, which was using an XControl. I don't know why, but it looks like my XControl was doing display stuff after setting the value; anyway, the problem is gone.

     

    Note that the solution with two queues posted by bbean works perfectly and has similar performance. Kudos! :worshippy: However, I keep my solution, which is more complex but has a fully independent producer.

     

    Triple buffering.zip

     

    How to use
     
    This class allows a producer loop to run at its own rate, independently from the consumer. It is useful when the producer is faster than the consumer and the consumer doesn't need to process all the data (like a display).
     
    Buffers are provided at initialization, through refnums. They can be DVRs, or IMAQ refnums, or any pointer to some memory area.
    Once initialized, the consumer gets refnums with "get latest or wait". The refnum given is locked and guaranteed to stay uncorrupted by the producer loop. If new data has been produced between two consumer calls, the call doesn't wait and returns the latest one. If not, it waits for the next data.
    At each producer iteration, the producer starts with "start grab", which returns the refnum to fill. Once data is ready, it calls "ready". These two calls never wait, so the producer always runs at its fastest pace.

     

    Implementation details

     

    A condition variable is shared between producer and consumer. This variable is a cluster holding the indexes "locked", "grabbing", and "ready". The condition variable has a mechanism that allows acquiring mutex access to the cluster, and atomically releasing it and waiting. When the variable is signaled by the producer, the mutex is re-acquired by the consumer. This guarantees to the consumer that the variable isn't accessed by the producer between the end of the consumer's wait and the lock by the consumer.

    Reference for CV implementation: "Implementing Condition Variables with Semaphores", Andrew D. Birrell, Microsoft Research

  11. Once again, thanks everyone for your suggestions!

     

    Scenario 2 Implementation:

    Producer Dequeues a buffer from Q2.  Fills it.  Enqueues it to Q1.

    Consumer Dequeues from Q1.  Processes it.  Enqueues it to Q2.

     

    Since Producer is pulling from Q2, there is no chance it will ever overwrite an unprocessed buffer.

     

    Q1 being empty is not a problem.  Means consumer is faster than Producer.

    If Q2 is empty, Consumer is backlogged and a loss must occur.  Producer Dequeues from Q1.

     

    Since an element can only be Dequeued once, there is no chance the Consumer is processing that buffer and it is safe to overwrite.

    But if the consumer is too slow, the producer will find Q2 empty when starting to fill, and will have to wait.
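
    For reference, here is a minimal C++ sketch of the two-queue recycling scheme being discussed (names are illustrative); the comments at the end show where the producer ends up waiting:

    #include <condition_variable>
    #include <deque>
    #include <mutex>

    // Tiny blocking queue standing in for a LabVIEW queue of buffer indexes.
    class BlockingQueue {
    public:
        void enqueue(int v) {
            { std::lock_guard<std::mutex> lk(m_); q_.push_back(v); }
            cv_.notify_one();
        }
        int dequeue() {  // blocks while empty
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            int v = q_.front();
            q_.pop_front();
            return v;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::deque<int> q_;
    };

    // Buffer recycling between the two loops (buffer indexes 0..2):
    //   at startup:  free.enqueue(0); free.enqueue(1); free.enqueue(2);
    //   Producer:    b = free.dequeue();   fill(b);    filled.enqueue(b);
    //   Consumer:    b = filled.dequeue(); display(b); free.enqueue(b);
    // The producer blocks in free.dequeue() whenever the consumer falls
    // behind, which is exactly what triple buffering is meant to avoid.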

     

    I have sketched a simple condition variable class (only one waiter allowed) that protects access to a variant and gives two main methods, signal and wait.

     

    It is used in a triple-buffer class with 3 main methods: start grab, grab ready, get latest. "get latest" doesn't wait if a buffer has become ready since the latest call, and waits otherwise. The methods "start grab" and "grab ready" never wait.

     

    I will post it as soon as it's ready.

  12. However, you can get "lossy behavior" by getting the Consumer to flush the queue and only process the last element, passing any other IMAQ refs returned back to the C->P queue.  You might need more than 3 refs if your Consumer is very slow.

     

    Forgot to answer this: if doing this, and the consumer is really slow, you have to adjust the number of buffers depending on the "slowness" of the consumer in order to prevent corruption. So for me it has the flavor of "too bad, it worked on my setup". Moreover, there is still no way for the producer to know which buffer is locked without the race condition problem.

     

    Everything we said before and use a lossy queue.

     

    Sorry to say, but a lossy queue on a shared buffer doesn't solve data corruption, as I said before.

     

    You are overthinking it because LabVIEW is a "Dataflow paradigm". Synchronisation is inherent! Double/triple buffering is a solution to get around a language problem with synchronising asynchronous processes that LabVIEW just doesn't have. If you do emulate a triple buffer, it will be slower than the solutions we are advocating, because they use dataflow synchronisation and all the critical sections, semaphores and other techniques required in other languages are not needed.

     

    C++ triple buffering is not the droid you are looking for.

    I think I fully understand the advantages of dataflow programming. My program has more than 1500 VIs, uses the actor model for communication, and doesn't have any semaphores or complex synchronization.

     

    But here we have large buffers, in a pointer-style programming, so dataflow isn't really applicable. I mean, dataflow with pointers isn't really dataflow, since they don't carry the data. These buffers are shared between two threads, and LabVIEW doesn't protect you if you don't have proper synchronization. Sorry to insist, but a triple buffer is just a simple sync pattern that fits my problem if I want the best performance; it is widely used in display-related programming.

  13. Again, check that your frame grabber isn't already buffering, with a "Get latest frame" method.  That is what I would expect it to do.  And this would greatly simplify your program as you'll only need one loop.

     

    However, you can get "lossy behavior" by getting the Consumer to flush the queue and only process the last element, passing any other IMAQ refs returned back to the C->P queue.  You might need more than 3 refs if your Consumer is very slow.

     

    Unfortunately I can't use any async framegrabber option, because I'm doing processing on sequences of images, which rules out a "get latest frame". I really need to implement triple-buffering myself, or I'll have to slow down the producer by copying the image to the display at each iteration.

     

    It seems that LabVIEW doesn't have a proper synchronization function to do this, but in C++11 it is really straightforward (condition variables are native), so I'll use a CLFN...
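
    For comparison, here is roughly what that C++11 version looks like with a native std::condition_variable: a sketch of the same locked/grabbing/ready bookkeeping (indexes refer to 3 externally owned buffers, e.g. IMAQ refs), not the exact code behind the CLFN:

    #include <condition_variable>
    #include <mutex>

    // Minimal triple buffer: the producer never blocks, and the consumer gets
    // the most recent complete buffer, waiting only if none is pending.
    class TripleBuffer {
    public:
        // Producer: index of the buffer to fill next. Never blocks.
        int reserve() {
            std::lock_guard<std::mutex> lk(m_);
            for (int i = 0; i < 3; ++i) {
                // Pick a slot neither locked by the consumer nor marked ready.
                if (i != locked_ && i != ready_) { grabbing_ = i; break; }
            }
            return grabbing_;
        }

        // Producer: the reserved buffer is now complete. Never blocks.
        void publish() {
            std::lock_guard<std::mutex> lk(m_);
            ready_ = grabbing_;
            grabbing_ = -1;
            cv_.notify_one();
        }

        // Consumer: latest complete buffer; waits only if nothing new exists.
        // The returned buffer stays locked until the next call.
        int getLatestOrWait() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return ready_ != -1; });  // atomic unlock+wait+relock
            locked_ = ready_;   // lock it for the consumer
            ready_ = -1;        // consumed: next call waits for fresh data
            return locked_;
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        int locked_ = -1;    // buffer the consumer is currently reading
        int grabbing_ = -1;  // buffer the producer is currently filling
        int ready_ = -1;     // latest complete buffer not yet consumed
    };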

     

    How fast can you run your acquire and processing loop if you do not display?

     

    So passing the imaq reference to a display loop with a notifier doesn't work because you are afraid you may be overwriting the "displayed" image in the notifier loop? 

     

    It runs at full speed if I disable display. And I already tested a simple notifier; the display gets corrupted by the acquisition.

  14. Can you share your code? Are you doing any image processing?  Can you drop frames on the image processing?

    It's a really large app, so I can't share the code, but the producer actually does acquire and process, and the consumer does only display. The camera is awfully fast (2 Mpixels at 400 FPS), and I'm doing very basic processing, so in the end image display is slower than acquisition, which is why I want to drop frames without slowing acquisition down.

  15. Uh, it ain't magic. If you can consume 90 frames a second then you can't produce 100 frames a second. In that case, a single buffer would lead to something like 45 frames/sec, as the Consumer waits about half the time for the producer. With two buffers the frame rate would still be less than 90/sec, as jitter in the Producer sometimes causes the Consumer to wait. With three buffers the jitter doesn't matter, and one gets 90 frames/sec. But you don't get 100.

    BTW, you should check to see if your Matrox frame grabber isn’t already buffering frames asynchronously, rendering additional buffering pointless.

     

    But in my case a slower consumer can drop frames (maybe I should have pointed that out before), as it's just for display. So if the producer is faster, it must not be slowed down waiting for the consumer to process every frame. Does my problem make more sense now?

  16. I think you are overthinking this. The inherent nature of a queue is your lock. Only place the IMAQ ref on the queue when the grab is complete and make the queue a maximum length of 3 (although why not make it more?). The producer will wait until there is at least one space left when it tries to place a 4th ref on the queue (because it is a fixed-length queue). If you have multiple grabs that represent 1 consumer retrieval (3 grabs then the consumer takes all three), then just pass an array of IMAQ refs as the queue element. As

     

    I wish I were overthinking, but with a simple 3-element queue, the producer has to wait when the queue is full. This means lower performance when the consumer task is longer than the producer task. In triple buffering the producer never waits; that's its strength. Take a look at http://en.wikipedia.org/wiki/Multiple_buffering#Triple_buffering if you're not convinced of the advantages.

     

    Yes. You are right. I had forgotten about those. What's the betting it's just an IMAQ refs queue ;)

     

    I don't use IMAQ for acquisition, only for display (I have a Matrox framegrabber, and make calls to their DLLs to fill IMAQ references), so I don't have access to these VIs.

     

    See my code

     

    Same remark: the producer is filling a queue, and will have to wait if it's full.

  17. Why don’t you just use two queues and three IMAQ refs?  Producer takes a ref from one queue, fills it, and puts on the other queue.  Consumer takes from that queue, reads it, and puts it back on the first queue.   Simple.  Why do you need some complex locking system?

     

    The complex locking is here to be sure that P won't overwrite a buffer locked by C in a race condition, because P won't be aware of it. The two events "C wait ends" and "C locks buffer" must be atomic, or you have this race condition.

     

    With your solution, naming the queues PQ and CQ:

    If C takes longer than P, PQ might be empty when P is ready to fill a buffer, and P has to know which buffer is held by C in order not to corrupt it. So P has to wait for C to complete its operation and fill PQ, which is against the purpose of triple buffering, where P acquires at full frequency.

  18. DVRs (for the buffers) and semaphores (LabVIEW's "condition variable").

     

    However. You only have one writer and reader, right? So push the DVRs into a queue and you will only copy a pointer. You can then either let LabVIEW handle memory by creating and destroying a DVR for each image or have a round-robin pool of permanent DVRs if you want to be fancy (n-buffering). You were right in your original approach, you just didn't use the DVR so had to copy the data.

     

    In my case buffers are IMAQ references, so I don't think I need DVRs, since IMAQ refs are already pointers to data.

     

    If I give it a try with queues, I need a P->C queue for the "ready" event.

     

    At grab start, P determines which buffer to fill, which is the one that is neither locked nor ready.

    At grab end, P pushes to the "ready" queue. If C is waiting, the just-filled buffer is marked as locked; if not, it is marked as ready (and the previous lock remains). But P also needs to know whether a buffer has been locked by C during a fill, like in the third iteration of my original diagram.

     

    So we must have another queue, C->P, that P will check at grab start. This brings the following race condition problem (P and C are in different threads) if a C request happens at the same time a grab ends:

    • B1 is ready, P is filling B2, and B3 is locked
    • C queries the "ready" queue, which already has an element (no wait): B1
    • C's thread is put on hold
    • P's thread wakes with a grab end; P pushes B2 to the "ready" queue
    • P sees no change in the "locked" queue, and chooses to fill B1, since B3 is still locked
    • C's thread finally wakes, and pushes B1 to the "locked" queue
    • But P has already chosen to fill the B1 buffer, so B1 will be corrupted when C reads the data.

    So we need a semaphore around this, so that P doesn't interrupt the sequence "ready dequeue, lock queue". The semaphore will be acquired by C before waiting on the "ready" queue, released after pushing to the "locked" queue, and P will acquire it at grab start. I don't think there's a deadlock risk, but I'm not sure...

     

    That's why condition variables are great: you have an atomic wait and lock, which prevents such race conditions. I'm pretty sure condition variables can be implemented with queues and semaphores, but I don't know how.

     

    Following on from what Shaun said, I use exactly the set up he described. I pull my images from an FPGA FIFO so I have it set up to give me a DVR which I pass through a series of queues (a messaging hierarchy). The ultimate endpoint of the DVR is the File IO process, but one of the intermediate message routers will periodically copy the data out from the DVR (at a rate of 30 Hz or similar) for display before piping the DVR onto the File IO process.

     

    But how does the original FIFO know it shouldn't write to the memory location it gave to you? If the producer acquires data at a higher rate than the consumer, there must be a mechanism to prevent data corruption if buffers aren't copied between producer and consumer.

     

    Are you sure of that?  I wouldn’t expect a queue to make a copy of a pointer-based data structure like an array or object.  Unlike Notifiers, which must make a copy on reading.  

     

    So if you put a large array in a queue, and dequeue it in another loop, no data copy is ever made?

  19. Needing to display large images at high performance, I want to use triple buffering in my program. This type of acquisition allows acquiring large data in buffers and using it without copying images back and forth between producer and consumer.

     

    This way the consumer thread doesn't wait if a buffer is ready, and the producer works at max speed because it never waits or copies.
    If the consumer makes the request when a buffer is ready, that buffer is atomically turned into a "locked" state. If a buffer isn't ready, the consumer waits for it and atomically locks it when it is ready.

    The following timing diagram shows how it goes with the 3 buffers.

     

    post-1401-0-56949400-1413476263_thumb.pn

     

    Traditional LabVIEW queues don't fit here because we have large buffers that we don't want to copy.

     

    I tried to implement it with notifiers, but there is always a risk of a race condition between the producer selecting where to fill the next acquisition and the locking by the consumer. With condition variables it is easy, because when a wait ends, a mutex is locked. There is no such synchronization primitive in LabVIEW. How can I implement this?

     

    (cross-posted on the dark-side)

  20. Oh OK, so the problem is that the wire type of the variant needs to be of the object type.

     

    The fact that it works for objects that are not inside a cluster was misleading, though.

     

    In my case, where I don't have the wire type in the diagram (because it's generic code), but only a variant of it, "preserve run-time class" can't be used, so I had to give the object variant the type of the source object, and it works correctly:

     

    post-1401-0-46009400-1403098917_thumb.pn

    Thanks a lot!

  21. I hope someone can shed some light on this problem, which arose when working on object serialization with the JSON API.

     

    Take a cluster of two elements with an object inside it, make it a variant, and cast the wire of the inner object to LVObject. For this I use OpenG's LVdata library, which allows accessing the contents of a cluster as an array of variants. Then, still with LVdata, put it back in the cluster. The variant should be compatible with the original data, but it isn't: you'll get error 91 (incompatible variant) when converting back to data.

    post-1401-0-87557400-1403008823_thumb.pn

    If you don't cast, or if you convert to the object that came after the cast, the conversion is fine.

     

    You may ask what's the point of this code: in a deserialization, it allows getting data back from a JSON string that represents a cluster with objects inside. So it would be really great if I could make it work!

     

    Is it a bug or a "feature"? What can be done to correctly get my data back?

     

    Attached the piece of code that demonstrates the problem (LabVIEW 2011).

     

    Cluster variant and objects.zip
