Everything posted by dadreamer

  1. You could probably take a look at this: https://forums.ni.com/t5/Machine-Vision/Convert-JPEG-image-in-memory-to-Imaq-Image/m-p/3786705#M51129 For PNGs there are already the native PNG Data to LV Image and LV Image to PNG Data VIs.
  2. Another option would be for NI to get its hands on the IntervalZero RTX64 product, which is able to turn any Windows-driven computer into a real-time target. That would definitely require writing kernel drivers for NI hardware and some utilities/wrappers for LabVIEW to interact with the drivers through user-space libraries. Of course, the latter is possible now with CLFNs, but it's not that user-friendly, because it focuses mainly on C/C++ programming. Not to mention that only a limited subset of hardware is supported.
  3. Also take into account the bitness of the ActiveX libraries you're going to use. If you want to use 32-bit libraries, then you invoke "%systemroot%\SysWoW64\regsvr32.exe" in your command shell. For 64-bit libraries you invoke "%systemroot%\System32\regsvr32.exe" to register. That is true on 64-bit Windows. Better to do this manually and, of course, with administrator privileges (otherwise it may not register, or may report "fake" success).
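If you want to automate that choice, the library's bitness can be read straight out of its PE header before deciding which regsvr32 to invoke. A minimal Python sketch (the function names are mine; the 0x3C pointer and machine-field offset come from the PE format itself):

```python
import struct

def pe_machine(data: bytes) -> int:
    """Return the PE machine field (0x014C = x86, 0x8664 = x64) from raw file bytes."""
    # Offset 0x3C of the DOS header holds the offset of the "PE\0\0" signature.
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_off:pe_off + 4] == b"PE\x00\x00", "not a PE file"
    # The machine field immediately follows the 4-byte signature.
    return struct.unpack_from("<H", data, pe_off + 4)[0]

def regsvr32_path(data: bytes) -> str:
    """Pick the regsvr32 matching the library's bitness (on 64-bit Windows)."""
    machine = pe_machine(data)
    if machine == 0x014C:   # IMAGE_FILE_MACHINE_I386 -> 32-bit library
        return r"%systemroot%\SysWoW64\regsvr32.exe"
    if machine == 0x8664:   # IMAGE_FILE_MACHINE_AMD64 -> 64-bit library
        return r"%systemroot%\System32\regsvr32.exe"
    raise ValueError(f"unexpected machine type 0x{machine:04X}")
```

You would read the DLL/OCX bytes from disk and pass them in; the elevated shell invocation itself stays as described above.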
  4. You can also create the buttons at run time by means of .NET - the basic example is here (of course, you need to attach an event callback (handler) to your button(s) to be able to catch the button events).
  5. If you meant me, then no, I didn't even use your conversations with Jim Kring on the OpenG subject. Seriously, what's the joy of just rewriting the prototypes?.. I have studied those on my own, even though I have had the LV 2.5 distro for a while and do know that some Occurrence functions are exposed there (in MANAGER.H, to be more precise). Moreover, those headers don't contain the entire interface. This is all that is presented:

         /* Occurrence routines */
         typedef Private *OccurHdlr;

         #define kNonOccurrence 0L
         #define kMaxInterval 0x7FFFFFFFL

         extern uInt32 gNextTimedOccurInterval;

         typedef void (*OHdlrProcPtr)(int32);

         Occurrence AllocOccur(void);
         int32 DeallocOccur(Occurrence o);
         OccurHdlr AllocOccurHdlr(Occurrence o, OHdlrProcPtr p, int32 param);
         int32 DeallocOccurHdlr(OccurHdlr oh);
         int32 Occur(Occurrence o);
         void OccurAtTime(Occurrence o, uInt32 t);
         int32 OnOccurrence(OccurHdlr oh, boolean noPrevious);
         int32 CancelOnOccur(OccurHdlr oh);
         boolean ChkOccurrences(void);
         boolean ChkTimerOccurrences(void);

     The headers lack OnOccurrenceWithTimeout, FireOccurHdlr and some others (likely they simply don't exist in those early versions). Having said that, I admit that the Occurrence API is not that complicated and is easily reversible for a more or less experienced LV and asm programmer.
  6. Queues, Notifiers, DVRs and similar things, even when they seem to be exposed from labview.exe in some form, are totally undocumented. Of course, you could try to RE those functions and, if you're lucky enough, you could maybe use a few. But it would take significant effort on your part and wouldn't be worth it at all. To synchronize your library with LabVIEW, you'd better try the OS-native APIs (like Events, Mutexes, Semaphores or WaitableTimers on Windows) or some documented things like PostLVUserEvent or Occur from the Occurrence API. To be honest, there are more Occurrence functions exposed, but they're undocumented as well, so use them at your own risk. As for CINs, I do recall that the former Queue/Notifier implementations were built entirely on CINs. I never had a chance to study their code, and not that I really wanted to. I suppose they're no longer functional in modern LV versions, as they got replaced with better internal analogues.
  7. There's also the LV Process pipes implementation (part of the GOLPI project), which seems to work in 64-bit LabVIEW and is more or less updated. Honestly, I've never given it a serious try, and I recall it has some limitations compared to Rolf's library (e.g., the lack of stderr support, AFAICR).
  8. I think you need to organize some kind of Inter-Process Communication (IPC) between the two. As long as both apps are made in LabVIEW, you have a wide variety of ways for them to communicate: TCP/IP, UDP, Network Streams, SV, Pipes, Shared Memory etc. I don't recommend file-based IPC because it has some caveats like these. There's also an article on the other side: Inter-Application Communication (rather dated though).
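Of the options above, loopback TCP/IP is usually the quickest to prototype. A minimal sketch of the pattern in Python (the function names and the trivial ping/pong exchange are made up for illustration; in LabVIEW the same roles are played by TCP Listen, TCP Open Connection, TCP Read and TCP Write):

```python
import socket
import threading

def start_server():
    """Bind a loopback listener; return the socket and the port the OS picked."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))    # port 0 = any free port
    srv.listen(1)
    return srv, srv.getsockname()[1]

def serve_once(srv):
    """App A: handle a single request, echoing it back with a prefix."""
    conn, _ = srv.accept()
    msg = conn.recv(1024)
    conn.sendall(b"pong " + msg)
    conn.close()
    srv.close()

def ask(port, msg=b"ping"):
    """App B: connect, send a message, return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(msg)
        return cli.recv(1024)
```

In a real deployment the two sides live in separate executables and you'd add message framing (length prefix or delimiter), since TCP is a byte stream, not a message stream.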
  9. You should use the Use Default if Unwired property. I couldn't quickly come up with a good example; you may take a look at this post for a start.
  10. Maybe then you'd have more luck trying it in PowerShell (if available). Also try without the preceding .\ symbols. As I have the Python paths written into the PATH environment variable, I don't even need to launch Python's own shell - I just execute that command in the common Windows shell and it works.
  11. Take a look at https://github.com/mefistotelis/pylabview You will need Python 3 and the Pillow package. After that you proceed as follows: Unpack the .exe into a separate directory (the 7-Zip unarchiver works fine for me). Take the \.rsrc\RCDATA\2 file and put it next to readRSRC.py. Run .\readRSRC.py -x -i ./2 in the command shell. Unpack 2_LVzp.bin to get your VIs. You may also find this thread interesting to read: EXE back to buildable project
  12. Good work! Another way would be to use the MoveBlock function to read out the string data: How to determine string length when dereferencing string pointer using LabVIEW MoveBlock That way you could either read one byte at a time until you reach the NULL byte, or call StrLen first, then allocate a U8 array of the proper length and finally call MoveBlock. From what I can vaguely recall, the GetValueByPointer XNode is not as fast as LabVIEW's native internal functions (if that matters to you). Also, I'm not sure whether you should deallocate that string once you've retrieved it in LabVIEW, or whether the library deallocates it on its own (it might use GC or some other technique). Perhaps you could ask the developer about that or study the source code. If you don't care, then just check for memory leaks by repeatedly retrieving a string (in a loop) and watching the memory taken by the LabVIEW process.
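Both strategies can be sketched with Python's ctypes, where memmove plays the role of MoveBlock and the manual length scan stands in for StrLen (the function names are mine):

```python
import ctypes

def read_c_string_bytewise(ptr: int) -> bytes:
    """Strategy 1: walk the buffer one byte at a time until the NUL terminator."""
    out = bytearray()
    i = 0
    while True:
        b = ctypes.c_ubyte.from_address(ptr + i).value
        if b == 0:
            break
        out.append(b)
        i += 1
    return bytes(out)

def read_c_string_moveblock(ptr: int) -> bytes:
    """Strategy 2: measure the length first (StrLen), then copy it out
    in a single memmove call (the MoveBlock analogue)."""
    n = 0
    while ctypes.c_ubyte.from_address(ptr + n).value != 0:
        n += 1
    buf = (ctypes.c_ubyte * n)()      # like allocating a U8 array in LabVIEW
    ctypes.memmove(buf, ptr, n)       # MoveBlock(src, dst, n) equivalent
    return bytes(buf)
```

The memory-ownership question from the post applies unchanged: this only reads the buffer, and whoever allocated the original string still has to free it.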
  13. It doesn't seem so. For a row of 1048576 bytes it takes ~4 ms for Replace Array Subset and ~3 ms for MoveBlock. Not a huge difference.
  14. I did a test like yours with a For Loop, and MoveBlock is a bit faster here: I'm getting 0.03 ms for Replace Array Subset and 0.01 ms for MoveBlock. I took the Initialize Array on the MoveBlock diagram out of the Sequence, because it's just an extra operation. Also make sure you are not timing and filling the output indicator simultaneously, because the latter vastly impacts the measurements.
  15. Only when I disable wrapper generation on the CLFN do I see some small performance gain for MoveBlock over Replace Array Subset. So in all other use cases the native nodes do their job just fine, and they're much simpler to use (and safer too). This is just a PoC method to show that arrays can be worked with the "traditional way" in LabVIEW too, as in text-based languages. I'd even suppose that Replace Array Subset and the In Place Element Structure were both optimised/tweaked in some way to behave better even at dumb memory copying.
  16. Yeah, I guess it's obvious to (almost) every programmer. And it's well illustrated by the 4th method with the MoveBlock call. Looking at that, one might say this is how the replace operation is implemented internally. By the way, it's possible to speed the MoveBlock method up a little by disabling wrapper generation. But it is still inferior in speed to the native methods (i.e., Replace Array Subset and the In Place Element Structure).
  17. Maybe then you will find this VI interesting as well. I made it to compare different methods of replacing rows/columns in an array (four known at the moment).
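For readers without LabVIEW at hand, the two basic approaches being compared can be sketched in Python terms: a native slice assignment stands in for Replace Array Subset, and a raw ctypes.memmove stands in for MoveBlock (names and sizes are mine; the absolute timings will of course differ from LabVIEW's):

```python
import ctypes
import time

def replace_row_slice(arr: bytearray, width: int, row: int, new_row: bytes):
    """Native replace: slice assignment (Replace Array Subset analogue)."""
    arr[row * width:(row + 1) * width] = new_row

def replace_row_memmove(arr: bytearray, width: int, row: int, new_row: bytes):
    """Raw copy: memmove into the row's byte offset (MoveBlock analogue)."""
    src = (ctypes.c_ubyte * width).from_buffer_copy(new_row)
    dst = (ctypes.c_ubyte * len(arr)).from_buffer(arr)   # shares arr's memory
    ctypes.memmove(ctypes.addressof(dst) + row * width, src, width)

width, rows = 1 << 20, 4                 # 1048576-byte rows, as in the test above
a1 = bytearray(width * rows)
a2 = bytearray(width * rows)
new_row = bytes(range(256)) * (width // 256)

t0 = time.perf_counter()
replace_row_slice(a1, width, 2, new_row)
t1 = time.perf_counter()
replace_row_memmove(a2, width, 2, new_row)
t2 = time.perf_counter()
print(f"slice: {(t1 - t0) * 1e3:.3f} ms, memmove: {(t2 - t1) * 1e3:.3f} ms")
```

Both paths produce an identical array; the difference, as in LabVIEW, is only in how much overhead sits around the actual memory copy.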
  18. So try to pass this cluster to your DLL (Adapt to Type -> Handles by Value) and see what happens. In theory, you should receive NetworkPortNumber in the lower half (I32) of v.
  19. Just pass an 8-byte-wide element as a union (U64 / double / cluster of 8 U8s) and interpret it according to the type field after the function call. But I also see that you have to pass a struct (i.e., a cluster), not a single union, so you should bundle the order, type and label fields into your cluster as well. I don't see a definition of the valueType and valueLabel items of the Value struct. I assume they are enum (I32) and long (I32) - is that correct? I'm also not sure who is responsible for setting the return type (long, unsigned long, double or string) - the caller or the callee?.. If it's the caller and you want the return to be a string, you have to allocate the necessary amount of memory for char *s (DSNewPtr and friends) and deallocate it later. If it's the callee, then when you're getting the return as a string, you have to deallocate that string when done with it.
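A ctypes mock-up of the layout suggested above may make the union trick clearer. All field names and enum values here are assumptions, since the actual Value definition isn't shown in the thread:

```python
import ctypes

class ValueUnion(ctypes.Union):
    """8-byte union: the same storage read as U64, double, or 8 x U8."""
    _fields_ = [("u64", ctypes.c_uint64),
                ("dbl", ctypes.c_double),
                ("raw", ctypes.c_ubyte * 8)]

class Value(ctypes.Structure):
    """Hypothetical Value struct bundling order/type/label with the union,
    mirroring the cluster described above (field types are my guesses)."""
    _fields_ = [("order", ctypes.c_int32),
                ("valueType", ctypes.c_int32),    # assumed enum -> I32
                ("valueLabel", ctypes.c_int32),   # assumed long -> I32
                ("v", ValueUnion)]

TYPE_U64, TYPE_DOUBLE = 0, 1   # made-up enum values for the sketch

def decode(val: Value):
    """After the call: pick the union member the type field says is valid."""
    if val.valueType == TYPE_DOUBLE:
        return val.v.dbl
    return val.v.u64
```

In LabVIEW the same thing is a cluster whose last element is a U64 (or an 8-byte U8 cluster) that you reinterpret with Type Cast or Split/Join after checking the type field.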
  20. You could try the VIs from here: FFMPEG scripting in LabVIEW (.NET). In FFMPEG Examples v1.1.zip archive you may find yuv420p to RGB.vi, that should do the work for you. Also take a look at yuvplayer, it might be helpful to verify that you're doing the conversion properly.
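In case you'd rather do the conversion yourself, the per-pixel math is small. A sketch assuming full-range BT.601 coefficients (one pixel only; for a real yuv420p frame you'd index the U/V planes once per 2x2 block of Y samples, and the linked VI may use different coefficients):

```python
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """Convert one full-range BT.601 YUV sample (0..255 each) to RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))   # keep results in 0..255
    return clamp(r), clamp(g), clamp(b)
```

For video-range (16..235) sources the formulas gain a scale factor and an offset, which is a common source of "washed out" conversion results.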
  21. Well, it seems like I should have said that LabVIEW should also behave as the RTE does, but it does not, for some obscure reason. So, my bad phrasing there. It's not so easy to answer without knowing how the Vision internals work. I suppose it has something to do with the way Vision's memory manager allocates memory. Perhaps it's more optimized to work in LabVIEW and less (or not) optimized for EXEs. I noticed that in the IDE IMAQ Create takes nearly the same amount of time to run (0.03 to 0.06 ms), while in the RTE that amount starts at 0.03 ms and rises on each iteration. Here are the shots to illustrate. IDE: RTE: Maybe someone from NI could elaborate on these differences?.. By the way, I found two more similar issues [1, 2] and the reason behind each one was never clarified.
  22. Same behaviour here on LabVIEW 2020 64-bit, but... do you really need to create and store 10,000 images in memory at once? I'm not even surprised that both LabVIEW and the RTE go crazy trying to do that. When I take IMAQ Create out of the loops, the situation improves significantly. IDE: 0.1 s for the extracts, 0.4 s for the thresholding. EXE: 0.2 s for the extracts, 0.4 s for the thresholding. Of course, there's no reason to divide the whole processing into two separate loops in this case (because you would get the same image slice on all 10,000 iterations). Instead, do the entire processing in one loop and finalize the image after the loop with IMAQ Dispose. With this approach you'd reuse the same memory location on each iteration instead of making a new one. If you need to run the processing in several threads, just create N IMAQ images before the loop, do your work, and dispose of them when all the work is done.
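The allocate-once pattern described above can be illustrated outside of IMAQ as well; here a bytearray stands in for an image buffer (the sizes and iteration count are arbitrary):

```python
import time

W, H, N = 640, 480, 1000

# Anti-pattern: a fresh buffer per iteration (IMAQ Create inside the loop).
t0 = time.perf_counter()
for _ in range(N):
    img = bytearray(W * H)     # new allocation every time
    img[0] = 255               # stand-in for the actual image processing
t_alloc = time.perf_counter() - t0

# Pattern from the post: allocate once, reuse, release after the loop.
t0 = time.perf_counter()
img = bytearray(W * H)         # "IMAQ Create" before the loop
for _ in range(N):
    img[0] = 255               # the same memory is reused each iteration
del img                        # "IMAQ Dispose" after the loop
t_reuse = time.perf_counter() - t0

print(f"alloc-per-iteration: {t_alloc:.4f} s, reuse: {t_reuse:.4f} s")
```

The gap is far larger with IMAQ, since a Vision image carries borders, metadata and alignment on top of the raw pixel block.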
  23. I believe it is the SQL Toolkit for G, which was superseded by the SQL Compatibility VIs from the Database Connectivity Toolset, which is named the LabVIEW Database Connectivity Toolkit these days. I'm sure you could even try to replace some of your obsolete VIs with their counterparts from the \vi.lib\addons\_SQL directory, if you install the Database Connectivity Toolkit. As to where you could download that old SQL Toolkit, I'm at a loss to suggest, as it was obsoleted a long time ago (LabVIEW 5.0 or so), thus there are likely no live links to download it now.
  24. Maybe you could figure out more information if you add these lines to your labview.ini: You need to restart LabVIEW after that. Once started, it will create a DPrintf text file next to labview.exe with tons of technical information. That info is often very low-level and internal to LabVIEW development, so it may be difficult to analyze. But it's something at least.
  25. How to get a list of image buffers? When I need some piece of code to run in a few instances simultaneously, I just set unique image names based on the meaning/purpose the code is invoked for (e.g., "sensor 1 - binarization" or "scanner - edge locator" and the like). No extra magic here. And I don't even dispose of the rest of the images, as they are always reused on subsequent runs of the program (and between loop iterations too). Although I've never launched that much IMAQ code in parallel (max. 5 threads, I think).