Everything posted by dadreamer

  1. As I wrote here, the ArrayMemInfo node was introduced only in LabVIEW 2017. It simply didn't exist in 2015 — that's why it crashes. After a quick test in 2022 Q3, MoveBlock didn't crash my LabVIEW. I'll take a closer look at the code tomorrow.
  2. Whenever a library function takes a struct pointer as an input parameter, you should pass the cluster as Adapt to Type -> Handles by Value. In that case LabVIEW provides a pointer to the structure (cluster). If the struct is very complex (not in your case), you can instead pass a preallocated pointer as an Unsigned Pointer-sized Integer and take it apart after the call with the MoveBlock function. I see some inconsistencies between your struct declaration and the cluster on the diagram: the second field should be triggerCount, but the cluster names it triggerindices, and the same goes for the third field (triggerIndices -> ListTI). That might be a naming issue only. Also, what representation does that array have? If the values are ordinary doubles, allocating 8*100 bytes of memory is enough. Of course you may leave yourself some memory margin; that does no harm beyond taking extra space in RAM. After the IQSTREAM_GetIQData call you will likely want to extract the array data into a LabVIEW array, so before calling DSDisposePtr you would call MoveBlock to transfer the data. One more important note: Unsigned Pointer-sized Integers are always represented as U64 numbers on the diagram, so if you are working in 32-bit LabVIEW you should cast the value to U32 explicitly before building the cluster. Even better, use a Conditional Disable Structure with two cases, 32-bit and 64-bit, where the pointer field is a U32 or U64 number respectively.
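To make the layout concrete, here is a minimal C sketch. The struct is hypothetical (the real SDK struct may differ), and memcpy stands in for LabVIEW's MoveBlock; the point is that the third field is pointer-sized, so it occupies 4 bytes in 32-bit LabVIEW and 8 bytes in 64-bit LabVIEW.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout mirroring the cluster discussed above;
   the field names follow the struct declaration, not the real SDK. */
typedef struct {
    int32_t  recordLength;   /* assumption: some fixed-size first field */
    int32_t  triggerCount;   /* NOT "triggerindices" as in the cluster  */
    double  *triggerIndices; /* pointer-sized: U32 in 32-bit LV, U64 in 64-bit */
} IQHeader;

/* memcpy stands in for MoveBlock: copy `count` doubles out of the raw
   pointer the DLL filled into a caller-owned array, before DSDisposePtr. */
void extract_doubles(const double *src, double *dst, size_t count)
{
    memcpy(dst, src, count * sizeof(double));
}
```

The Conditional Disable Structure mentioned above corresponds to choosing U32 or U64 for the `triggerIndices` slot depending on `sizeof(void *)` in the running LabVIEW.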
  3. @mcduff Done. It's a very limited example though, as I don't have the OP's SDK and no VIs were posted to tinker with.
  4. You can deal with pointers in LabVIEW with the help of the Memory Manager and its functions. Just create an allocated pointer of 8 bytes (the size of a DBL) with DSNewPtr / DSNewPClr, build your cluster using that, and pass it to the DLL. Don't forget to free the pointer at the end with DSDisposePtr. DSNewPtr-DSDisposePtr.vi upd: Seems I read it diagonally. Now I see you need a pointer to an array of doubles, so you'd allocate enough memory to hold all the doubles (not 8 bytes, but 8 x Array Size, i.e. 8 x lNumberDiodes). After the function call you'll need to read the data out of the pointer with the MoveBlock function.
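The allocate / call / copy-out / dispose sequence can be sketched in plain C. This is only an analogy: malloc/free stand in for DSNewPtr/DSDisposePtr, memcpy for MoveBlock, and `fake_dll_fill` is a placeholder for the real SDK call, which I don't have.

```c
#include <stdlib.h>
#include <string.h>

/* Placeholder for the actual DLL function that fills the buffer. */
static void fake_dll_fill(double *buf, int n)
{
    for (int i = 0; i < n; i++)
        buf[i] = i * 0.5;
}

/* The pattern from the post: allocate 8 x lNumberDiodes bytes, let the
   DLL fill it, copy the data out, then dispose of the pointer. */
int read_diodes(double *out, int lNumberDiodes)
{
    double *ptr = malloc(sizeof(double) * lNumberDiodes); /* DSNewPtr(8*n) */
    if (!ptr)
        return -1;
    fake_dll_fill(ptr, lNumberDiodes);                    /* the DLL call  */
    memcpy(out, ptr, sizeof(double) * lNumberDiodes);     /* MoveBlock     */
    free(ptr);                                            /* DSDisposePtr  */
    return 0;
}
```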
  5. Tried that. 2009 is the latest version where the GSW had not yet been packed into an lvlib. I slightly reworked that window to suit my preferences (dark mode etc.) and used it for the 2020 and 2021 versions. But in 2022 Q3 something changed in the underlying C code supporting the GSW's behaviour, and my modded GSW stopped working normally. I've applied a few workarounds, but some quirks remain. Besides that, the '09 GSW is too ascetic; maybe I could remake it more extensively, but I decided to leave it as is.
  6. I've got used to almost everything in modern LabVIEW, but these two are driving me nuts: - the "new styled" bright white splash screen and GSW (Getting Started Window) (since LV 2021); - the online help in LabVIEW 2022 Q3 (and probably later).
  7. Yes. It also stays (and works) in the RTE when the VI is compiled, and works if saved for previous versions down to LV 8.0. In fact LV 8.0 didn't have that token in its exe code, but the call remained inlined. LV 8.6 did have the token, so I confirmed it there as well.
  8. You can squeeze some more time out of MoveBlock without generating a wrapper.
  9. Yeah, it would be nice to finally see those mysterious tokens to confirm or refute our guesses 🙂
  10. Took some time to find this old thread in the Wayback Machine, but here it is: http://web.archive.org/web/20080315135806/http://forums.lavag.org/Comments-in-Configuration-Files-t9183.html&mode=linear You want the "read_configuration_data.vi ( 78.66K )" attachment.
  11. Generally I agree with Rolf here. The ArrayMemInfo node, even though it looks neat and easy to use, could easily be removed in a future version of LabVIEW, as it's for internal use only. NI has already removed much undocumented or obsolete stuff from the core of LV 2022, including all the NXG helper functions like NCGGetOperateDataPointer. If it goes to production, I'd prefer conventional Memory Manager functions like DSNewPtr and friends.
  12. Selecting this context menu entry opens the browser and displays this page. No more built-in help?.. If so, I assume they haven't filled in all the documents yet.
  13. 2022 Q3 is already available for download from the NI website. All three OSes are in place, and the Community edition too. Subscription issues aside, nobody stops me from using a trial, and during that time I can specify "2022" in my profile 🙂
  14. What about 2022? I expect it to be out soon. I've seen some guys using the 2022 Beta in the meantime.
  15. Why do you use the Read from Text File VI instead of the Read from Binary File VI? Read from Text File does end-of-line character conversion by default. It can be disabled in the node's RMB context menu.
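To show why that conversion matters for binary data, here is a C sketch of the kind of EOL normalization a text-mode read performs (my approximation, not LabVIEW's actual implementation): CRLF pairs and lone CRs become LF. Run over binary bytes that merely happen to contain 0x0D/0x0A, it silently changes the data and its length.

```c
#include <stddef.h>

/* Sketch of EOL normalization: "\r\n" and "\r" both become "\n".
   Returns the (possibly shorter) output length. Harmless on text,
   destructive on binary payloads. */
size_t normalize_eol(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t j = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i] == '\r') {
            out[j++] = '\n';
            if (i + 1 < n && in[i + 1] == '\n')
                i++; /* swallow the LF half of a CRLF pair */
        } else {
            out[j++] = in[i];
        }
    }
    return j;
}
```

A 4-byte binary record such as `41 0D 0A 42` comes out as the 3 bytes `41 0A 42` — hence the advice to use Read from Binary File (or disable the conversion) for non-text data.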
  16. Windows Imaging Component already has a native bitmap decoder for the BMP format that works out of the box. So why reinvent all that from scratch? Of course there would be a reason if we were on Linux or macOS, but since this is Windows-only, the WinAPI decoder should already be optimized to process the formats fast enough. I doubt it would perform worse than LabVIEW's built-in instruments.
  17. It was a project with a MOXA video server that digitized an analog video signal and output it at 25 to 30 frames per second. That SDK provided only a JPEG stream, hence I had to use a helper decoder of mine. Later we switched to another video server that could output many other formats besides, including RTSP. There is one more way to make the CLFNs run in the same thread while staying set to 'any thread' all the time. I recall having a thread-unsafe DLL that I wanted to use in my project, but I absolutely didn't want it to run in the UI thread. I used a Timed Loop for that and it worked like a charm. But I have never dug into those built-in XNodes deep enough to see how they work internally.
  18. Remove this and try again. I think if your memory stays at nearly the same level (say, 180 MB) for a few hours and doesn't jump up drastically, there's no need to worry.
  19. For this you would need to: - change the type of the user event in two locations: on the diagram and in the callback code; - slightly change the callback logic, trying a few things: first, post a cluster with the data handle address instead of the array; second, don't allocate the array and copy the data into it, but post the original SDK pointer to LabVIEW directly. In fact, we don't even decode the data ourselves - the PlayM4 decoder does it for us! Since we mostly need the data type and not the actual data contents, we could ease the callback logic a little. But I neither want to tell you to do this right now, nor am I inclined to do it myself. I think Rolf is going to come up with more advanced diagnostics or solutions.
  20. Actually, I would try to eliminate one handle allocation by passing a pointer with PostLVUserEvent and dereferencing it in LabVIEW (plus deallocating it manually, of course), as I already mentioned on the first pages of this thread. It would be more of a test to see whether the large memory consumption goes away. If not, then it's reasonable to allocate the space just once (when the DLL is loaded, for example, or right on the diagram with DSNewPtr) and free it when the app ends (when the DLL is unloaded, or with DSDisposePtr on the diagram).
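The two hand-off strategies can be sketched in plain C. Everything here is a stand-in: `post_event` is a stub for PostLVUserEvent, malloc/free model DSNewPtr/DSDisposePtr, and the frame data is fabricated — the sketch only contrasts per-event allocation against a buffer allocated once.

```c
#include <stdlib.h>
#include <string.h>

static const void *g_posted;                 /* what "LabVIEW" last received */
static void post_event(const void *p) { g_posted = p; }  /* PostLVUserEvent stub */

/* Strategy A: allocate and copy on every callback invocation.
   The receiver must deallocate after dereferencing, or memory leaks. */
void callback_copy(const unsigned char *frame, size_t len)
{
    unsigned char *copy = malloc(len);       /* per-event DSNewPtr */
    if (!copy)
        return;
    memcpy(copy, frame, len);
    post_event(copy);                        /* post the pointer, not the data */
}

/* Strategy B: reuse one buffer allocated once (e.g. at DLL load),
   so there is no per-event allocation to leak. */
static unsigned char g_buf[4096];            /* "allocated once" */
void callback_reuse(const unsigned char *frame, size_t len)
{
    if (len > sizeof(g_buf))
        len = sizeof(g_buf);
    memcpy(g_buf, frame, len);
    post_event(g_buf);
}
```

Strategy A is the diagnostic step suggested above (does the big memory consumption go away?); Strategy B is the fallback if it doesn't.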
  21. Win32 Decode Img Stream.vi does not produce memory leaks of any sort. I've been running it in production for months and have never received errors from that VI. As for CoInitializeEx, it was implemented this way in the original thread on SO; I just borrowed the solution. But I checked now: CoInitializeEx always returns TRUE, no matter what, and extra resources are not allocated, so I assume it's safe enough to call it multiple times from the same thread. Still, you may easily add CoUninitialize there if you're afraid it works improperly. I'm just thinking this might not be a good idea, given the function's description: a lot of work would be done on each call. Better to do it once on app exit, or leave it to the OS when LabVIEW quits.