Everything posted by ShaunR

  1. Or you can try the NI-IMAQ for USB Cameras driver, which was made obsolete by IMAQdx.
  2. It's an old (very, very old) joke that relies on making an assumption about the first person's statement with regard to his perception of the problem. "My mum's dog won't bark at strangers." "Your mum doesn't have a dog."
  3. I don't really know that much about it (it's not in 2009, which I'm using here). I do know that this is the method they now employ for all other DAQ, such as FPGA, for high-throughput applications. I would guess that it is very efficient, as they can do DMA transfers directly from acquisition memory to application memory without having to go through the OS. Reading out of the DVR would be the bottleneck, rather than the transfer from acquisition memory ("system memory", as they call it) to LabVIEW memory, which is the case with the painfully slow IMAQ to Array and Array to IMAQ, for example. Until IMAQ ImageToEDVR, that was really the only way when you wanted to stream frames to files. With the EDVR coupled with the asynchronous TDMS API (which just so happens to take a DVR), I can see huge speed benefits for file streaming at the very least.
  4. IMAQ GetImagePixelPtr? That only retrieves a pointer. Are you then using IMAQ SetPixelValue to write individual pixels to an IMAQ ref? Not IMAQ functions; IMAQ references. This may explain my confusion about the corruption.

     Yes. IMAQ handles resource locking transparently (same as global variables, local variables, and any other shared resource we use in LabVIEW), so we never have to worry about data corruption (unless we cock up the IMAQ names, of course).

     Once you have an image inside an IMAQ ref, never, never manipulate it outside of IMAQ (use only the IMAQ functions like copy, extract, etc.). Going across the IMAQ boundary (in either direction) causes huge performance hits. As AQ's signature states, "we write C++ so you don't have to". If you are pixel-bashing singly into an IMAQ reference with IMAQ SetPixelValue, pretending it is just an array of bytes in memory somewhere, then you will never achieve performance. Get the image in with one block copy inside the DLL (sketch below) and never take the data out into values or arrays; use the IMAQ functions to manipulate and display the data. This will also cure any corruption, as you will only receive complete frames via your DLL.

     If you want, you can implement your triple buffering inside the DLL. Will it be fast enough? Maybe. This is where using NI products has the advantage, as the answer would be a resounding "easily". Anecdotally, with your method and mine I can easily get 400 FPS using a 1024x1024 U8 greyscale image in multiple viewers, and I'm actually simulating acquisition with an animation on a cruddy ol' laptop running all sorts of crap in the background. If I don't do the animation and just do a straight buffer copy from one image to another, I get thousands of frames/sec. However, I'm not trying to put it into an IMAQ ref a pixel at a time. (Attachments: Animation, Buffer copy.)
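     To make the "one block copy" point concrete, here is a minimal C sketch of what the DLL side might look like. The names (dst, dst_stride, src) are my own: dst would be the pointer obtained from IMAQ GetImagePixelPtr and dst_stride the line width in bytes that IMAQ reports (IMAQ images carry a border, so the stride usually differs from the visible width):

     #include <string.h>
     #include <stdint.h>

     /* Copy a whole camera frame into an IMAQ image buffer with one
        memcpy per line instead of one SetPixelValue call per pixel.
        `src` is assumed to be a contiguous width*height U8 frame. */
     void copy_frame(uint8_t *dst, size_t dst_stride,
                     const uint8_t *src, size_t width, size_t height)
     {
         for (size_t y = 0; y < height; y++) {
             memcpy(dst + y * dst_stride, src + y * width, width);
         }
     }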
  5. Compiled with different optimiser settings? One in LV x32, the other in x64? The girl in the next cubicle had a headache? It's a bit of a "my mum's dog won't bark at strangers". There is a performance monitor (Tools>>Profile>>Performance and Memory). Inspection of that while running may identify the VIs that are working harder than usual and may provide a hint at what is different.
  6. This has been really bugging me: if you have the image in an IMAQ ref (and your 3buff seems to imply you do), how are you able to write a partial frame and get corruption? Are you absolutely, super-duper, positively sure that you don't have two IMAQ images that are inadvertently identically named? That would cause corruption and may not be apparent until higher speeds.
  7. Do you have 2012 or later as an option? If so, the IMAQ ImageToEDVR VI will be available.
  8. I haven't played with this yet, but from reading everything about this acquisition method, it strikes me that it doesn't fit with an asynchronous buffering producer-consumer architecture. It seems to be a synchronous method targeted at efficient memory management and high throughput. It states in the docs: "You must delete this external data value reference before the driver can write new data to the specified portion of the buffer." I read that as: you must destroy the ref in order for the acquisition to acquire the next block of data. i.e. if you stick it on a queue/FIFO, you won't get another acquire until you've popped it off the queue and destroyed it, which defeats the object (sketch of the constraint below).
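     For what it's worth, this is how I picture the constraint (my reading of the docs, not NI code): each EDVR pins a region of the driver's buffer until the ref is deleted, so holding refs on a queue stalls the writer:

     #include <stdbool.h>

     #define NUM_REGIONS 3

     /* true while an EDVR to the region exists */
     bool region_pinned[NUM_REGIONS];

     /* Driver side: it may only write into an unpinned region. */
     int next_writable_region(int candidate)
     {
         if (region_pinned[candidate])
             return -1;  /* acquisition stalls until the ref is deleted */
         return candidate;
     }

     /* Consumer side: deleting the ref is what unpins the region. */
     void delete_edvr(int region)
     {
         region_pinned[region] = false;
     }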
  9. Patches only fix the known issues, but I've found they have never fixed my issues...lol. 2011 was renowned for funnies. I had one code base that would compile fine in 64-bit but not in 32-bit - the IDE just disappeared. On a couple of machines, the debug highlighting wouldn't highlight unless you set one of the VIs on the diagram to "Suspend When Called", and one of my colleagues reported that his LabVIEW ini file kept on getting wiped out. Is this a mature product? Are you still using 2011 for new projects? If you have moved on and it only happens in 2011, then I would suggest just calling the Windows dialogue directly with API calls (sketch below) and putting a note in the documentation. Even if you found a reproducible way of showing NI, it is unlikely they will address it, and the best you could probably hope for is one of the perpetual CARs to go with those from 8.x. Not very helpful, but maybe solace in shared misery with the "Stability and Performance" release that was neither.
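     Something like this is all you would need (assuming the troublesome dialogue is the file-open one - the same idea applies to any of the common dialogues). Call it from a Call Library Function Node and link against comdlg32:

     #include <windows.h>
     #include <commdlg.h>

     /* Show the native Win32 file-open dialog directly, bypassing
        LabVIEW's own dialog VI. Returns 1 if the user picked a file. */
     int open_file_dialog(char *path, DWORD path_len)
     {
         OPENFILENAMEA ofn = {0};
         path[0] = '\0';
         ofn.lStructSize = sizeof(ofn);
         ofn.lpstrFile   = path;
         ofn.nMaxFile    = path_len;
         ofn.Flags       = OFN_FILEMUSTEXIST | OFN_PATHMUSTEXIST;
         return GetOpenFileNameA(&ofn) ? 1 : 0;
     }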
  10. Still no good advice as to what to look out for (you haven't said what version and bitness you are seeing this behaviour in). One of the first things I do when I get peculiar behaviour like this is load it up and do a mass compile in the other bitness and/or do a save for previous version. Sometimes extra problems pop out, like complaints about a corrupted VI (happened last week to me), a conflict that wasn't being raised before (happened about 3 months ago), or it will go off hunting for a VI it suddenly decides it needs which you removed during the last ice age. Long shot, I know, but it has caught some things in the past.
  11. And the stage is set! In the blue corner, we have the "Triple Buffering Triceratops from Timbuktoo". In the red corner, we have the "Quick Queue Quisling from Quebec". And, at the last minute, in the green corner we have the old geezers' favourite: the "Dreaded Data Pool from Global Grange" (attachment: Dreaded Data-pool From Global Grange.llb). Tune in next week to see the results.
  12. Obviously you don't. Looking forward to your triple buffering and benchmarking it. Maybe put it in the CR?
  13. Are you currently using IMAQ Extract Buffer VI and finding it is not adequate?
  14. Everything we said before, and use a lossy queue (sketch below). You are overthinking it. Because LabVIEW is a dataflow paradigm, synchronisation is inherent! Double/triple buffering is a solution to get around a language problem with synchronising asynchronous processes that LabVIEW just doesn't have. If you do emulate a triple buffer, it will be slower than the solutions we are advocating, because they use dataflow synchronisation and all the critical sections, semaphores and other techniques required in other languages are not needed. C++ triple buffering is not the droid you are looking for.
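      For the avoidance of doubt, "lossy" just means the producer never blocks: when the queue is full, the oldest frame is thrown away. LabVIEW's Lossy Enqueue Element does this (plus the thread safety) for you; this single-threaded C sketch only shows the semantics:

      #include <stddef.h>

      #define DEPTH 3

      typedef struct {
          void  *slot[DEPTH];
          size_t head, count;
      } lossy_queue;

      /* Push a frame; if the queue is full, silently drop the oldest
         so the producer never waits. */
      void lossy_push(lossy_queue *q, void *frame)
      {
          if (q->count == DEPTH) {
              q->head = (q->head + 1) % DEPTH;  /* drop oldest */
              q->count--;
          }
          q->slot[(q->head + q->count) % DEPTH] = frame;
          q->count++;
      }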
  15. Yes. You are right. I had forgotten about those. What's the betting it's just a queue of IMAQ refs?
  16. Can you elaborate on that a bit more? Lossy, how?
  17. I think you are overthinking this. The inherent nature of a queue is your lock. Only place the IMAQ ref on the queue when the grab is complete, and make the queue a maximum length of 3 (although why not make it more?). The producer will wait until there is at least one space left when it tries to place a 4th ref on the queue (because it is a fixed-length queue) - see the sketch below. If you have multiple grabs that represent one consumer retrieval (3 grabs, then the consumer takes all three), then just pass an array of IMAQ refs as the queue element.
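      This, roughly, is the machinery a fixed-length LabVIEW queue hides from you (sketch only; error handling omitted):

      #include <pthread.h>

      #define DEPTH 3

      typedef struct {
          void           *refs[DEPTH];   /* e.g. IMAQ refs */
          int             head, count;
          pthread_mutex_t lock;
          pthread_cond_t  not_full, not_empty;
      } bounded_queue;

      void enqueue(bounded_queue *q, void *ref)
      {
          pthread_mutex_lock(&q->lock);
          while (q->count == DEPTH)            /* 4th ref: producer waits */
              pthread_cond_wait(&q->not_full, &q->lock);
          q->refs[(q->head + q->count++) % DEPTH] = ref;
          pthread_cond_signal(&q->not_empty);
          pthread_mutex_unlock(&q->lock);
      }

      void *dequeue(bounded_queue *q)
      {
          pthread_mutex_lock(&q->lock);
          while (q->count == 0)                /* empty: consumer waits */
              pthread_cond_wait(&q->not_empty, &q->lock);
          void *ref = q->refs[q->head];
          q->head = (q->head + 1) % DEPTH;
          q->count--;
          pthread_cond_signal(&q->not_full);   /* a space just opened */
          pthread_mutex_unlock(&q->lock);
          return ref;
      }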
  18. DVRs (for the buffers) and semaphores (LabVIEW's "condition variable"). However, you only have one writer and one reader, right? So push the DVRs into a queue and you will only copy a pointer. You can then either let LabVIEW handle memory by creating and destroying a DVR for each image, or have a round-robin pool of permanent DVRs if you want to be fancy (n-buffering; sketch below). You were right in your original approach; you just didn't use the DVR, so you had to copy the data.
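      The round-robin pool amounts to this (sketch; it assumes the consumer is finished with a slot by the time it comes around again, which a fixed-length queue guarantees):

      typedef struct {
          void **buf;      /* n permanent, preallocated frame buffers */
          int    n, next;
      } buffer_pool;

      /* Hand out buffers in rotation: passing the returned pointer
         through a queue copies 8 bytes, not the frame. */
      void *pool_next(buffer_pool *p)
      {
          void *b = p->buf[p->next];
          p->next = (p->next + 1) % p->n;  /* round-robin reuse */
          return b;
      }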
  19. https://www.youtube.com/watch?v=OdB2dy59Waw
  20. You don't need to do partial matching. You translate the string "The remaining time is %d Seconds" and use the Format Into String primitive (sketch below). You might want to take a look at Passa Mak. It will generate all the language files for translation and switch all the UI controls. (Good luck with the Chinese one. I'm not brave enough to attempt the East Asian translations.)
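      In C terms, the idea is simply this (the tr lookup is a hypothetical stand-in; Passa Mak or your own language file would supply the real lookup):

      #include <stdio.h>

      /* Hypothetical lookup: a real version would search the loaded
         language file; this stub just falls back to English. */
      static const char *tr(const char *msg) { return msg; }

      void show_remaining(char *out, size_t len, int seconds)
      {
          /* The whole template is translated, then the number is
             formatted into it - no partial matching needed. */
          snprintf(out, len, tr("The remaining time is %d Seconds"), seconds);
      }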
  21. I think the OP was just having difficulties figuring out how to do #5 and #6. He'll be back again when he runs into #2 between modules.
  22. I hate strict typing. It prevents so much generic code. I like the SQLite method of dynamic typing, where there is an "affinity" but you can read and write as anything (example below). I also like PHP's dynamic typing, but that is a bit looser than SQLite's and therefore a bit more prone to type-cast issues - still few and far between. That is why you sometimes see things like value+0.0, so as to make sure that the value is stored in the variable as, say, a double.

      Generally, though, I have spent a lot of time writing code to get around strict typing. A lot of my code would be a lot cleaner and more generic if it didn't have to be riddled with Variant To Data with hard-coded clusters and conversion to every datatype under the sun. You wouldn't need a polymorphic VI for every datatype, or a humongous case statement for the same. It's why I choose strings, which are the next best thing, with variants coming in 3rd.

      <rant> I mean, we have a primitive for string to int, one for string to double, another for exponential (and again, all the same in reverse). Really? Why can't we connect a string straight to an integer indicator and override the auto-guessed type if we need to? </rant>

      But yes, I think it can be done in LabVIEW. They could do it with variants. Not as good as dynamic typing, but it'd be closer. A variant knows what type the data is and you can connect anything to one, but they failed to do the other end, when you recover the value (I think it was a conscious decision). That is why I call variants "the feature that never was": they crippled them. I think recovery of data is a bit of a blind spot with NI. Classes suffer the same problem. It's always easy getting stuff into a format/type/class, but getting it out again is a bugger.
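      Here is the SQLite behaviour I mean, via the C API (sketch; error checks omitted). One stored value, three requested types, and SQLite coerces on the way out:

      #include <stdio.h>
      #include <sqlite3.h>

      int main(void)
      {
          sqlite3      *db;
          sqlite3_stmt *stmt;

          sqlite3_open(":memory:", &db);
          /* Column `v` has no declared type, so no affinity applies. */
          sqlite3_exec(db, "CREATE TABLE t(v); INSERT INTO t VALUES(42);",
                       0, 0, 0);
          sqlite3_prepare_v2(db, "SELECT v FROM t;", -1, &stmt, 0);
          sqlite3_step(stmt);

          /* Same stored value, read back as three different types. */
          printf("as text:   %s\n", (const char *)sqlite3_column_text(stmt, 0));
          printf("as double: %f\n", sqlite3_column_double(stmt, 0));
          printf("as int:    %d\n", sqlite3_column_int(stmt, 0));

          sqlite3_finalize(stmt);
          sqlite3_close(db);
          return 0;
      }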
  23. Thinking about it: if you go the microcontroller route for the detector, you might as well go for Bluetooth to get the extra range and tell people to keep their IDs visible. I once used a similar idea to upload results data to engineers' phones when they passed by inspection machines on the factory floor. It continuously scanned for Bluetooth phones (not many tablets in those days) and, if it found one, compared the MAC address to a user list. It then pushed the files to their SD card. You may remember the OPP Push software that was in the CR a while ago. That was part of it (the bit that detected and pushed the files).