Posts posted by dadreamer

  1. On 7/9/2012 at 7:12 AM, flarn2006 said:


    Did you know that this option shows the "Autopreallocate arrays and strings" checkbox on the VI Properties -> Execution tab?


    Here's the excerpt from the LabVIEW Help about this setting.


    Autopreallocate arrays and strings (FPGA Module): Optimizes array and string operations. This option forces LabVIEW to preallocate memory at compile time rather than dynamically allocating memory at run time. By default, the FPGA Module displays this option for VIs under FPGA targets in the Project Explorer window. This option must be enabled before you can compile VIs that use arrays or strings for FPGA devices. LabVIEW disables the Autopreallocate arrays and strings option on installations of LabVIEW without the FPGA Module. If you create a VI in a version of LabVIEW that does not have the FPGA Module installed and you later target that VI to an FPGA device, you must explicitly place a checkmark in the Autopreallocate arrays and strings checkbox and test the behavior of the VI on the FPGA device to verify that it is operating as expected.


  2. Nice catch, Rolf! It works and I am able to pass the input/output parameters now.




    But it appears that CallInstrument wants to be run in the UI thread only, otherwise LabVIEW crashes hard. It calls the VI synchronously, waiting until it finishes. That makes me think this function is not the best choice for a callback: while the user is interacting with GUI elements, or the program is running some property/invoke nodes, the callback VI will be waiting for the UI thread to become idle, so we could see delays between the events from our callback (or even loss of earlier ones?). It would be much better to run the VI in any thread somehow, but CallInstrument doesn't support that. That's why I decided not to adapt the asm samples for that function for now. Maybe I'll be lucky enough to find some other options or overcome the threading issues somehow. Or fall back to PostLVUserEvent until some better ideas come to mind. 🙂

    On 10/3/2020 at 8:52 PM, Rolf Kalbermatter said:

    But I haven't found any information about that new interface.

    It likely has to do with these two functions: GetCIntVIServerFuncs and GetCInterfaceFunctionTable. They return tables filled with function pointers. There are InitLVClient / InitLVClient2, UninitLVClient / UninitLVClientNoDelay, WaitLVClientReady, WaitLVShuttingDown and a whole bunch of unnamed functions (safety precautions?). It'll take a serious effort to study how those work.

  3. 13 hours ago, flarn2006 said:

    Have you tried passing it a preallocated string?

    I have not, but I just tried and saw no difference in how strings are passed into the CLFN (no matter whether variable, bounded or fixed length). I think it could be some archaic setting for early LabVIEW versions before 2009. It doesn't even cause the VI to be recompiled, unlike the "Type Only Argument" option. It just adds 0x40000 to the flags field in the terminal's DCO properties.


    Having played with that flags field a little, I can conclude that it affects only the IDE behaviour and the terminal's look (whether it accepts some wire(s) or not), but has nothing to do with compiled code execution. For example, with a value of 0x30000 I'm getting this.
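    To make the bit arithmetic concrete, here is a minimal sketch; the flag name is hypothetical, only the raw values 0x40000 and 0x30000 come from the observations above:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical name for the bit the option toggles in the DCO flags word. */
    #define DCO_FLAG_OPTION 0x40000u

    int main(void)
    {
        uint32_t flags = 0x30000u;   /* the value experimented with above */
        flags |= DCO_FLAG_OPTION;    /* what checking the option does: adds 0x40000 */
        printf("flags = 0x%X\n", flags);                        /* flags = 0x70000 */
        printf("set = %d\n", (flags & DCO_FLAG_OPTION) != 0);   /* set = 1 */
        return 0;
    }
    ```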


    13 hours ago, flarn2006 said:


    Right-click string control/constant, Set String Size

    These tokens should be added to this thread for sure.

  4. 16 hours ago, ShaunR said:

    What about VIRefPrepNativeCall and VIRefFinishNativeCall? They sound interesting but maybe a red herring.

    I'm afraid we can't use them, because they don't actually run a VI, but "prepare" its state for a run. I guess they're used only for LabVIEW Adapters to be called later from TestStand. VIRefPrepNativeCall requires the VI to be in a reserved state, otherwise it returns error 1027. If we mark the VI as reserved with StatVIRefReserve, then it all goes OK, but the target VI is not executed. Something like this:


    int32_t StatVIRefReserve(uintptr_t viDS, uint32_t *pVIRef, int32_t unknown, int32_t setUnset);
    int32_t VIRefPrepNativeCall(uint32_t viRef, uintptr_t *pVIDS);
    int32_t VIRefFinishNativeCall(uint32_t viRef);
    void  StatVIRefRelease(uint32_t viRef);

    There must be something between VIRefPrepNativeCall and VIRefFinishNativeCall like NCGRunVirtualInstrument, but with the ability to pass the parameters. Of course, we could use WriteDCOTransferData before the call to set our parameters, but then the BD / assembly code becomes rather cumbersome.
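    For illustration, the intended call sequence could be sketched as below. The stub bodies merely stand in for the LabVIEW exports (in a real experiment they would be resolved with GetProcAddress from the running LabVIEW process), so this only shows the control flow and error handling, not actual VI execution:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Stubs standing in for the LabVIEW exports listed above. */
    static int32_t StatVIRefReserve(uintptr_t viDS, uint32_t *pVIRef,
                                    int32_t unknown, int32_t setUnset)
    { (void)viDS; (void)unknown; (void)setUnset; *pVIRef = 0xDEADu; return 0; }

    static int32_t VIRefPrepNativeCall(uint32_t viRef, uintptr_t *pVIDS)
    { (void)viRef; *pVIDS = 0x1000u; return 0; }  /* would return 1027 if not reserved */

    static int32_t VIRefFinishNativeCall(uint32_t viRef) { (void)viRef; return 0; }
    static void    StatVIRefRelease(uint32_t viRef)      { (void)viRef; }

    int main(void)
    {
        uint32_t  viRef = 0;
        uintptr_t viDS  = 0;
        int32_t err = StatVIRefReserve(0 /* caller's VI DS */, &viRef, 0, 1);
        if (!err) err = VIRefPrepNativeCall(viRef, &viDS);
        if (!err) {
            /* ...the missing "run" step would go here (NCGRunVirtualInstrument?)... */
            err = VIRefFinishNativeCall(viRef);
        }
        StatVIRefRelease(viRef);
        printf("err = %d\n", err);
        return 0;
    }
    ```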

  5. On 9/29/2020 at 12:54 AM, Rolf Kalbermatter said:

    These are not actual functions that exist but just something I came up with. I'm sure something similar actually exists!

    All I could find about this is just these two internal functions:

    • CallVIFromDll
    • NCGRunVirtualInstrument

    For the first one I was able to find .NET prototype only:

    CallVIFromDll.Invoke(Int32 epIndex, IntPtr lvClient, IntPtr entryPointDataSpace)

    I'm not sure how it could be used for the task at hand. It doesn't look like it accepts the VI parameters. And what do these arguments mean, exactly?..

    As to the second one, it doesn't accept the VI parameters either and must be called in the UI thread only. The prototype is as follows:

    int32_t NCGRunVirtualInstrument(uint32_t VIRef);

    I did some limited testing and it appears to work. The VI is launched with the parameters on its FP and no panel is shown. We could prepare the parameters before the call with the Control Value.Set method. Not a very flexible solution, I think.


    I saw your post from 2016, where you said you had found some functions suitable for the task. Do you remember the details?

  6. 6 minutes ago, flarn2006 said:

    I do have one question: what is it that's stopping it from working on Linux/Mac?

    A few WinAPI calls are used there. Besides that, I suppose the assembly blocks would have to be altered and the hard-coded offsets checked as well. It all takes time to debug on those platforms. I have VMs for Linux/macOS on one machine only, so... Do you really need this on Linux or macOS?

  7. It took a while to code it, but here it is finally. 😉 I have found a way to retrieve the object's pointer soon after the last post in this thread, but had to debug and test everything.


    How it works:

    1. As we don't have any public or private API to obtain the Base Cookie Jar (BCJ) pointer (which is a LabVIEW global variable), the only way is to examine some function which uses it, find the place where this global is referenced, and save the pointer. That is actually how we're getting our BCJ. To clarify, it's for Object References only, not for any other references out there.
    2. After we've got the BCJ, we call MCGetCookieInfo and get the information associated with that refnum. As far as I understand, CookieInfo appears to be a C++ class with a few methods and properties (not just an ordinary struct). One of its methods is able to extract the object's pointer from the VI DS.
    3. Then we call that unnamed method, using the hard-coded offset of 0xC (for 32-bit) / 0x18 (for 64-bit), and it returns the necessary pointer. The method has to be called using the __thiscall convention, which is why I'm using the technique described here. I decided not to write a wrapper library, so that everyone everywhere could easily browse the code and alter it when needed.
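    As a rough illustration of step 3, here is a self-contained toy in plain C: a function pointer is read at a known byte offset inside an object and invoked with the object as the explicit first argument (which is how __thiscall is usually emulated from C). The struct layout, names and payload are all made up; only the idea of the hard-coded 0xC / 0x18 offset comes from the description above:

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    /* Toy stand-in for the unnamed CookieInfo method: takes the object as the
       implicit "this" argument and returns a pointer stored inside it. */
    typedef void *(*GetObjPtrFn)(void *self);

    struct ToyCookieInfo {
        void       *padding[3];   /* 3 pointers: 0xC on 32-bit, 0x18 on 64-bit */
        GetObjPtrFn getObjPtr;    /* the method pointer at that offset */
        void       *objectPtr;    /* the payload the method hands back */
    };

    static void *toy_get_obj_ptr(void *self)
    { return ((struct ToyCookieInfo *)self)->objectPtr; }

    int main(void)
    {
        int payload = 42;
        struct ToyCookieInfo info = { {0}, toy_get_obj_ptr, &payload };

        /* Read the function pointer at a byte offset, as one would with the
           hard-coded 0xC / 0x18 offsets, then call it with "this" first. */
        size_t offset = offsetof(struct ToyCookieInfo, getObjPtr);
        GetObjPtrFn fn = *(GetObjPtrFn *)((char *)&info + offset);
        int *p = (int *)fn(&info);
        printf("%d\n", *p);   /* 42 */
        return 0;
    }
    ```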

    Currently tested on these versions of LabVIEW (both IDE and RTE):

    • 2020 (32/64);
    • 2019 (32);
    • 2018 (64);
    • 2014 (32);
    • 2013 (32);
    • 2011 (32);
    • 2010 (32);
    • 2009 (32).

    It won't work on anything earlier than LV 2009, because ExtFuncCBWrapper is absent there. Also no Linux or macOS at the moment, sorry. Oh, and it may become broken in the future LV releases. Or may not, nobody knows yet. 🙂

  8. And this is for Windows 😉

    (Screenshots: File-New-Win.jpg, GVBP-Win.jpg)

    I don't want to violate the rules, therefore I'm not going to describe how to achieve this functionality on Windows. If you really want to get it, take a closer look at those Scripting packages, find .lc there, then alter PACKAGE / INCREMENT tokens to LabVIEW_XNodeDevelopment_PKG and COMPONENTS token to LabVIEW_XNodeDevelopment in it. Sure you know what to do next.

  9. You are right, I managed to successfully activate LabVIEW XNode entries with XNodeDevelopment_LabVIEWInternalTag=True token. Here are the screenshots taken on Ubuntu w/ LV 2019 64-bit.

    (Screenshots: File-New-Lin.jpg, GVBP-Lin.jpg)

    And these are from Sierra w/ the same LV.

    (Screenshots: File-New-MacOS.jpg, GVBP-MacOS.jpg)

    In my case the preferences file was here:

    - /home/<user name>/natinst/.config/LabVIEW-<LV version>/labview.conf (on Linux);

    - /Users/<user name>/Library/Preferences/LabVIEW.app <LV version> 64-bit Preferences (on macOS).
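    Assuming the Linux config file follows the usual INI-style layout with a [LabVIEW] section (the section name here is my assumption), the token would be added like this:

    ```ini
    [LabVIEW]
    XNodeDevelopment_LabVIEWInternalTag=True
    ```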

  10. I'm still investigating things, but now I'm starting to think it's a rather complicated task. I've found no easy-to-use function in the LabVIEW internals to get that pointer. And there's another difficulty: the refnum has to be matched with the object it relates to. I see no refnum info in Heap Peek's object (and its DCO) properties. There's a UID in the very first string, so that potentially could be used to identify the needed object. In that case the list of all VI objects would have to be retrieved (from the OH or DS heap, I guess) and each object analyzed to see whether its UID matches ours. A somewhat brute-force approach, but it's the best I could come up with. Maybe someone knows a better solution...

    As to refnums, there's MCGetCookieInfo and its wrapper named BaseCookieJar::GetCookieInfo, but I don't know a reliable way to find the Cookie Jar for a concrete LabVIEW instance. And even with that, I'm unsure whether that function returns the necessary data.

  11. 10 hours ago, caleyjag said:

    I wasn't able to get the BetterFolderBrowser DLL working on my system unfortunately. 

    Could you elaborate on the problem you are facing? If you received "Error 1386: The Specified .NET Class is Not Available in LabVIEW", then you most likely need to unblock the downloaded DLL first, and only after that launch LabVIEW and run the example. See this article for the details.

  12. I'm not sure whether this could be accomplished with the common File Dialog or the underlying yellow File Dialog and its ExtFileDialog internal function. But you could switch to .NET and use one of the third-party libraries available. One of those is BetterFolderBrowser, proposed here. I have just tested it on both 32- and 64-bit LabVIEW 2019/2020 and it works great. Here's a basic example:


    Untitled 1.vi

  13. Did you have a look at VI Scripting? If not, check the following example: [LabVIEW]\examples\Application Control\VI Scripting\Creating Objects\Adding Objects.vi. To be able to create controls or indicators, you should open the BD and change the VI Server class of the "Function" constant to Generic -> GObject -> GObject. Then change the "Subtract" constant to something like "Numeric Control" and run the VI.

    Hope this helps you move further with your task.

  14. On 7/20/2020 at 2:56 PM, EvgenKo423 said:

    Apparently these private properties should do exactly what you want

    Well, they're obviously not enough to have absolute control over SH, including memory pool management as per the SH API. Unfortunately I don't see any other functions or private nodes exposed, except maybe the FreeSmartHeapPool function of mgcore, but that one crashes LabVIEW for some reason.

    I'm afraid my finding about switching mgcore is almost useless, because a compiled app (i.e. an EXE) uses lvrt.dll, which already has the mgcore stuff integrated into it, so there's no way to disable SH in lvrt, as that would require recompiling it from the sources. And I never saw any variants of the LVRT except the classic one and the Full Featured one. Honestly, I don't know why LabVIEW is shipped with 4 variants of mgcore, even though it's using only one of them.

    On 7/20/2020 at 2:56 PM, EvgenKo423 said:

    key "overanxiousMemoryDeallocation" (which doesn't help either).

    Yeah, it doesn't help much, because it's as if you had inserted an RD block at the end of every VI. In LabVIEW before 7.x there was a "Deallocate memory as soon as possible" checkbox in the settings.



    Deallocate memory as soon as possible - Deallocates the memory of every VI after it completes execution. Doing this can improve memory usage in some applications because subVIs deallocate their memory immediately after executing. However, it slows performance because G must allocate and deallocate memory more frequently and in some instances it might lead to excessive memory fragmentation.

    This setting was stored in the INI as the anxiousMemoryDeallocation token. In 7.x they removed the checkbox and likely renamed the anxiousMemoryDeallocation token to overanxiousMemoryDeallocation. LabVIEW still tries to read overanxiousMemoryDeallocation on startup, so it could be used if needed. Not that there's much sense in that, though.
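    For completeness, this is how the token would look in labview.ini (the [LabVIEW] section header is the standard one; whether the token still has any effect is as uncertain as described above):

    ```ini
    [LabVIEW]
    overanxiousMemoryDeallocation=True
    ```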

    By the way, this wiki page should be updated as well.

  15. 24 minutes ago, Yair said:

    Global Data.Set and Get

    By chance I came across those private nodes too and played with them a little. They allow you to retain data per LabVIEW process. That means you may access the data in any VI in any project. Feels like Tags that are not stored inside the VI DS. A neat feature, indeed.

  16. Thanks, Rob! Very well done research with a lot of technical details, as we all here like. 🙂 After reading and re-reading your post and the SH related documents and playing with the samples, I still have one question. Can we control SH behaviour in any way, or is it up to the LabVIEW Memory Manager completely? Say, could I make SH empty its pools and free all the cached data, thus reclaiming the occupied space? Or does it never give it back entirely? Could I disable SH somehow, or is it hardcoded to be always on?

    I found a few private properties to control Memory Manager settings, e.g. Application.Selective Deallocation, Application.Selective Deallocation.EnableForAllVIs, Application.Selective Deallocation.LargeBufferThreshold and Application.NeverShrinkBuffers, but playing around with these doesn't help much. I would say it even worsens the situation in some cases. Currently I see no way to get the occupied memory back, so LabVIEW can (and will) eat as much memory as it needs for its purposes. So we have to live with it, don't we?..

    upd: I think I found something. In [LabVIEW]\resource folder there are four variants of Memory Manager library:

    • mgcore_20_0.dll - no SH, no AT (Allocation Tracker)
    • mgcore_AT_20_0.dll - with AT
    • mgcore_AT_SH_20_0.dll - with both SH and AT
    • mgcore_SH_20_0.dll - with SH

    LabVIEW uses the SH version by default. If we switch to the "no SH, no AT" version by backing up mgcore_SH_20_0.dll and renaming mgcore_20_0.dll to mgcore_SH_20_0.dll, then the memory consumption is somewhat reduced and we get more memory back after RD is called. On the default mgcore_SH_20_0.dll I'm getting these values:

    • LabVIEW is opened and the example is running - 199 056 KB;
    • After the string array was once created (RD is on) - 779 256 KB (the peak value is at ~800 000 KB);
    • After the VI is stopped and closed - 491 096 KB.

    On mgcore_20_0.dll I'm getting these values:

    • LabVIEW is opened and the example is running - 181 980 KB;
    • After the string array was once created (RD is on) - 329 764 KB (the peak value is at ~600 000 KB);
    • After the VI is stopped and closed - 380 200 KB.

    Of course, it all needs more extensive testing. I see, however, that the "no SH, no AT" version uses less memory for the operations, so it could be preferable when the system is fairly RAM limited.

  17. 30 minutes ago, D. Ackermann said:

    If I use "Specify path on diagram" checkbox, the dll is not automatically put in the data folder when building an exe.

    All the DLLs may be added to the project manually (RMB click -> Add -> File), and in the build specification, on the Source Files tab, the DLLs should be put into the Always Included category. When the build finishes, you will have the DLLs in the 'data' folder. I just tested this with a trivial project and it worked fine.

  18. 39 minutes ago, D. Ackermann said:

    I tried using a relative path in the CLFN path field, but it always replaces it with the absolute path.

    From my own experience with CLFNs, if you set the "Specify path on diagram" checkbox in the CLFN's settings, LabVIEW always uses the path from the diagram and never the path from the "Library name or path" field. Once you set that checkbox everywhere, all you need is to construct the proper path for both 32-bit and 64-bit and pass it into your CLFN(s). Here's an article which may help: How to Configure LabVIEW to Use Relative Paths for DLLs?

    Another option might be to use an asterisk in the library name to distinguish between 32-bit and 64-bit. Refer to the Configuring the Call Library Function Node article and look for how to use the * wildcard.
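    To illustrate the naming scheme behind the wildcard (LabVIEW substitutes the asterisk with its own bitness, 32 or 64), here is a small C sketch that performs the same substitution by hand; the library name "mylib*.dll" is made up:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Replace the '*' in the pattern with the given bitness, mimicking what
       the CLFN does with "mylib*.dll" -> mylib32.dll / mylib64.dll. */
    static void resolve(const char *pattern, int bits, char *out, size_t n)
    {
        const char *star = strchr(pattern, '*');
        if (!star) { snprintf(out, n, "%s", pattern); return; }
        snprintf(out, n, "%.*s%d%s", (int)(star - pattern), pattern, bits, star + 1);
    }

    int main(void)
    {
        char name[64];
        resolve("mylib*.dll", (int)(sizeof(void *) * 8), name, sizeof name);
        printf("%s\n", name);  /* mylib32.dll or mylib64.dll, depending on the build */
        return 0;
    }
    ```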
