Everything posted by dadreamer

  1. No, that XNodeDevelopment_LabVIEWInternalTag token has no effect in LabVIEW for Windows. Moreover, the string isn't even present in the executable.
  2. And this is for Windows 😉 I don't want to violate the rules, so I'm not going to describe how to achieve this functionality on Windows. If you really want to get it, take a closer look at those Scripting packages, find the .lc file there, then change the PACKAGE / INCREMENT tokens in it to LabVIEW_XNodeDevelopment_PKG and the COMPONENTS token to LabVIEW_XNodeDevelopment. Surely you know what to do next.
  3. You are right - I managed to successfully activate the LabVIEW XNode entries with the XNodeDevelopment_LabVIEWInternalTag=True token. Here are the screenshots taken on Ubuntu with LV 2019 64-bit, and these are from Sierra with the same LV. In my case the preferences file was here:
- /home/<user name>/natinst/.config/LabVIEW-<LV version>/labview.conf (on Linux);
- /Users/<user name>/Library/Preferences/LabVIEW.app <LV version> 64-bit Preferences (on macOS).
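For reference, a minimal sketch of the relevant line in the Linux preferences file - the [LabVIEW] section header is my assumption based on the usual layout of those config files, so keep whatever sections your labview.conf already has:

[LabVIEW]
XNodeDevelopment_LabVIEWInternalTag=True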
  4. I want to remind once again that all this information is just for having fun with LabVIEW and is not intended for use in real projects. I believe you all understand that. 🙂 Not that big a discovery, and perhaps no discovery at all for some of you, but I found it interesting enough to start a thread.

As you may already know, when a library is called using a Call Library Function Node, LabVIEW enters ExtFuncWrapper first to do some guard checks, so that instead of crashing silently it can output an error message to the user. I had always considered that wrapper too boring to study, so I never looked inside. But when I once again ran into a function that I couldn't call through a CLFN and had to write my own wrapper library for, I asked myself: why can't we call code by its pointer, as in almost any well-known text-based language? So I decided to find out how ExtFuncWrapper calls the function. It turned out that ExtFuncWrapper receives the function's pointer (along with the parameters struct pointer) and later calls that pointer as-is (i.e., without any manipulation of it). So we can use it to call any function - and even any code chunk - directly from the diagram! After further research I found ExtFuncWrapper not very convenient to use, because the parameters struct has to be prepared accordingly before the call to bypass some checks. But there are many ExtFunc... wrappers in labview.exe, and ExtFuncCBWrapper is much easier to use. It has the following prototype:

int32_t ExtFuncCBWrapper(uintptr_t CodeChunk, int32_t UseTLS, void *CodeParams);

Here CodeChunk is our function / code pointer, UseTLS is 0 as we don't use LabVIEW's Thread Local Storage, and CodeParams is our parameters struct. When called, ExtFuncCBWrapper runs that CodeChunk, passing CodeParams to it, so we can freely use it to do whatever we want.

Nuff said, here are the samples. This one increments a Numeric: ExtFuncCBWrapper-Increment.vi As you can see, I'm passing a Numeric value as the CodeParams pointer into ExtFuncCBWrapper, and in the assembly I have to pull that pointer out to deal with my parameters. I'm not that fluent in assembly, so I used one of the many online x86 assemblers/disassemblers out there. It's even simpler in the 64-bit IDE, as the first parameter already arrives in RCX.

Okay, here goes a more advanced example - it calculates the sum of two input Numerics: ExtFuncCBWrapper-SumOfTwo.vi Here I'm passing a cluster of three parameters as the CodeParams pointer (two Numerics and the resulting Sum), and in the asm I'm grabbing the first parameter, adding it to the second one and writing the result into the third one. Pretty simple operations.

Now let's do some really wild asm on the diagram! 😉 The last example calls the MessageBox function from user32.dll: ExtFuncCBWrapper-MsgBox.vi This is what Rolf calls diagram voodoo. 😃 I have to provide 4 parameters to MessageBox, 2 of which are string pointers. So I'm getting those pointers and writing them into my params cluster (along with the panel handle and the dialog type). When ExtFuncCBWrapper is called, the asm code needs prologue and epilogue pieces to prevent stack corruption, as I'm pushing 4 parameters onto the stack to pass them to MessageBox. After the call I'm writing the result into the function's return parameter. In the 64-bit IDE the prologue/epilogue is somewhat simpler.

Maybe you have already noticed that I'm calling VirtualProtect before calling ExtFuncCBWrapper. This is done to get past Windows Data Execution Prevention (DEP): I'm setting execute and read/write access for the memory page holding my code, otherwise LabVIEW refuses to run it and throws an exception. Surprisingly, the exception is thrown only in the 64-bit IDE; in 32-bit LV I can just wire the U8 array to ExtFuncCBWrapper without going through that DSNewPtr-MoveBlock-VirtualProtect-DSDisposePtr chain. I didn't try to figure out why.

Well, to be honest, I doubt anyone will find these samples really useful for their tasks, because these are very low-level operations and it's much easier to use a common CLFN or a helper DLL. They are here just to show that the things described are definitely doable from an ordinary diagram, without writing any libraries. With good assembly skills it's even possible to implement callback functions or call some exotic functions (like C++ class methods). Some things might be improved as well, e.g. embedding a generic assembler to make it possible to write mnemonics instead of raw bytes on the diagram. Ideally one could even implement an Inline Assembly Node. Unfortunately, I have neither the time nor much desire to do it myself.
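Since the attached VIs don't survive as text, here's a rough Python/ctypes sketch of the same calling pattern for the "sum of two" sample. Two assumptions on my side: that the ExtFuncCBWrapper symbol is exported by the runtime (lvrt.dll) just like by labview.exe, and that the code chunk is invoked as chunk(CodeParams) with the default C convention (on 64-bit Windows there's only one convention anyway). Untested - treat it as pseudo-documentation of the prototype rather than a finished program:

import ctypes

# Prototype quoted above:
#   int32_t ExtFuncCBWrapper(uintptr_t CodeChunk, int32_t UseTLS, void *CodeParams);
# Assumption: the symbol can be resolved by name from the runtime DLL;
# on the diagram you'd point a CLFN at the "LabVIEW" library instead.
lv = ctypes.CDLL("lvrt")
ExtFuncCBWrapper = lv.ExtFuncCBWrapper
ExtFuncCBWrapper.restype = ctypes.c_int32
ExtFuncCBWrapper.argtypes = [ctypes.c_void_p, ctypes.c_int32, ctypes.c_void_p]

# Parameters struct for the "sum of two" sample: two inputs plus the result.
class SumParams(ctypes.Structure):
    _fields_ = [("a", ctypes.c_int32),
                ("b", ctypes.c_int32),
                ("sum", ctypes.c_int32)]

# The "code chunk": assumed to be called as chunk(CodeParams).
CHUNK = ctypes.CFUNCTYPE(ctypes.c_int32, ctypes.c_void_p)

@CHUNK
def sum_chunk(params):
    p = ctypes.cast(params, ctypes.POINTER(SumParams)).contents
    p.sum = p.a + p.b          # the same work the diagram asm does
    return 0

p = SumParams(2, 3, 0)
ExtFuncCBWrapper(ctypes.cast(sum_chunk, ctypes.c_void_p), 0, ctypes.addressof(p))  # UseTLS = 0
print(p.sum)                   # expected: 5

# To run a raw byte buffer instead of a compiled callback, the page holding
# the bytes must be made executable first, or DEP kicks in - the equivalent
# of the DSNewPtr / MoveBlock / VirtualProtect chain from the post:
#   VirtualAlloc = ctypes.windll.kernel32.VirtualAlloc
#   VirtualAlloc.restype = ctypes.c_void_p
#   code = bytes.fromhex("C3")    # placeholder chunk: a bare RET
#   buf = VirtualAlloc(None, len(code), 0x3000, 0x40)  # MEM_COMMIT|MEM_RESERVE, PAGE_EXECUTE_READWRITE
#   ctypes.memmove(buf, code, len(code))
#   ExtFuncCBWrapper(buf, 0, None)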
  5. I'm still investigating, but now I'm starting to think that it's a rather complicated task. I've found no easy-to-use function in the LabVIEW internals to get that pointer. And there's another difficulty: the refnum has to be matched to the object it relates to. I don't see any refnum info in Heap Peek's object (and DCO) properties. There's a UID in the very first string, so that could potentially be used to identify the needed object. In that case the list of all VI objects would have to be retrieved (from the OH or DS heap, I guess) and each object analyzed to see whether its UID matches ours. A somewhat brute-force approach, but it's the best I could come up with. Maybe someone knows a better solution... As for refnums, there's MCGetCookieInfo and its wrapper named BaseCookieJar::GetCookieInfo, but I don't know a reliable way to find the Cookie Jar for a concrete LabVIEW instance. And even with that, I'm unsure whether the function returns the necessary data.
  6. This gives you a pointer to the VI Data Space, not to the object's own space. Well, I see what you want to obtain, so I'll recheck later whether it's possible to retrieve a pointer to the object itself.
  7. Do you need the VI Data Space pointer? If so, you could use the GetDSFromVIRef internal function - check my samples with the ReadDCOTransferData / WriteDCOTransferData calls. That function is available in LabVIEW starting from version 2009.
  8. Shouldn't the ManagementEventWatcher.Start method be called in order for the watcher to run asynchronously? Something like this: WMI_USBStorage_Event_withDeviceIDandTimeOut.viCB.vi I tested this sample with my own USB drive and it works fine.
  9. Could you elaborate on the problem you are facing? If you received "Error 1386: The Specified .NET Class is Not Available in LabVIEW", then you most likely need to unblock the downloaded DLL first, and only after that launch LabVIEW and run the example. See this article for the details.
  10. I'm not sure whether this can be accomplished with a common File Dialog, or with the underlying yellow File Dialog and its ExtFileDialog internal function. But you could switch to .NET and use one of the available third-party libraries. One of those is BetterFolderBrowser, proposed here. I have just tested it on both 32- and 64-bit LabVIEW 2019/2020 and it works great. Here's a basic example: Untitled 1.vi
  11. Did you have a look at VI Scripting? If not, check the following example: [LabVIEW]\examples\Application Control\VI Scripting\Creating Objects\Adding Objects.vi To be able to create controls or indicators, open the BD and change the VI Server class of the "Function" constant to Generic -> GObject -> GObject. Then change the "Subtract" constant to something like "Numeric Control" and run the VI. Hope this helps you move further with your task.
  12. Well, they're obviously not enough to get absolute control over SH, including memory pool management as per the SH API. Unfortunately, I don't see any other functions or private nodes exposed, except maybe the FreeSmartHeapPool function of mgcore, but that one crashes LV for some reason. I'm afraid my find about switching mgcore is almost useless, because a compiled app (i.e. an EXE) uses lvrt.dll, which already has the mgcore stuff integrated into it, so there's no way to disable SH in lvrt - that would require recompiling it from the sources. And I have never seen any variants of the LVRT except the classic one and the Full Featured one. Honestly, I don't know why LabVIEW is shipped with 4 variants of mgcore if it uses only one of them. Yeah, it doesn't help much, because it's as if you inserted a Request Deallocation (RD) block at the end of every VI. In LabVIEW before 7.x there was a "Deallocate memory as soon as possible" checkbox in the settings. This setting was stored in the INI as the anxiousMemoryDeallocation token. In 7.x they removed the checkbox and apparently renamed anxiousMemoryDeallocation to overanxiousMemoryDeallocation. LabVIEW still tries to read overanxiousMemoryDeallocation on startup, so it could be used if needed (see the snippet below) - not that there's much sense in it, though. By the way, this wiki page should be updated as well.
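If anyone wants to try the leftover token, it would go into labview.ini like any other boolean switch - the exact value format here is my assumption, by analogy with other LabVIEW INI tokens:

[LABVIEW]
overanxiousMemoryDeallocation=True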
  13. By chance I came across those private nodes too and played with them a little. They allow retaining data per LabVIEW process. That means you may get access to the data from any VI in any project. Feels like Tags that are not stored inside the VI DS. A neat feature, indeed.
  14. Thanks, Rob! Very well-done research with a lot of technical details, just as we all here like. 🙂 After reading and re-reading your post and the SH-related documents and playing with the samples, I still have one question. Can we control SH behaviour in any way, or is it completely up to the LabVIEW Memory Manager? Say, could I make SH empty its pools and free all the cached data, thus reclaiming the occupied space? Or does it never give it back entirely? Could I disable SH somehow, or is it hardcoded to be always on? I found a few private properties to control Memory Manager settings, e.g. Application.Selective Deallocation, Application.Selective Deallocation.EnableForAllVIs, Application.Selective Deallocation.LargeBufferThreshold and Application.NeverShrinkBuffers, but playing with these doesn't help much. I would say it even worsens the situation in some cases. Currently I see no way to get the occupied memory back, so LabVIEW can (and will) eat as much memory as it needs for its purposes. So we have to live with it, don't we?..

upd: I think I found something. In the [LabVIEW]\resource folder there are four variants of the Memory Manager library:
- mgcore_20_0.dll - no SH, no AT (Allocation Tracker);
- mgcore_AT_20_0.dll - with AT;
- mgcore_AT_SH_20_0.dll - with both SH and AT;
- mgcore_SH_20_0.dll - with SH.
LabVIEW uses the SH version by default. If we switch to the "no SH, no AT" version by backing up mgcore_SH_20_0.dll and renaming mgcore_20_0.dll to mgcore_SH_20_0.dll (see the snippet below), memory consumption is somewhat reduced and we get more memory back after RD is called.
With the default mgcore_SH_20_0.dll I'm getting these values:
- LabVIEW is open and the example is running - 199 056 KB;
- after the string array has been created once (RD is on) - 779 256 KB (the peak value is at ~800 000 KB);
- after the VI is stopped and closed - 491 096 KB.
With mgcore_20_0.dll I'm getting these values:
- LabVIEW is open and the example is running - 181 980 KB;
- after the string array has been created once (RD is on) - 329 764 KB (the peak value is at ~600 000 KB);
- after the VI is stopped and closed - 380 200 KB.
Of course, it all needs more extensive testing. I can see, however, that the "no SH, no AT" version uses less memory for the same operations, so it could be preferable when the system is fairly RAM-limited.
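For anyone repeating the swap: with LabVIEW closed, it boils down to something like this from an elevated command prompt (the install path below is the usual default for LV 2020 - adjust it and the version suffix to your setup):

cd "C:\Program Files\National Instruments\LabVIEW 2020\resource"
ren mgcore_SH_20_0.dll mgcore_SH_20_0.dll.bak
copy mgcore_20_0.dll mgcore_SH_20_0.dll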
  15. All the DLLs may be added to the project manually (RMB click -> Add -> File), and in the build spec, on the Source Files tab, the DLLs should be put into the Always Included category. When the build finishes, you will have the DLLs in the 'data' folder. Just tested this with a trivial project and it worked fine.
  16. From my own experience with CLFNs: if you set the "Specify path on diagram" checkbox in the CLFN's settings, LabVIEW always uses the path from the diagram and never the path from the "Library name or path" field. Once you set that checkbox everywhere, all you need is to construct the proper path for both 32 and 64 bits and pass it into your CLFN(s). Here's an article which may help: How to Configure LabVIEW to Use Relative Paths for DLLs? Another option for you might be to use an asterisk in the library name to distinguish between 32 and 64 bits. Refer to the Configuring the Call Library Function Node article and look for how to use the * wildcard.
  17. Yeah! LabVIEW 2020 64-bit - Request Deallocation doesn't work for strings as it should. Even when the VI is unloaded, LabVIEW still holds some memory allocated and never releases it. No traces of this on the NI forums. Anyone with internal access to the bug list?
  18. AFAIK i386 is for Windows 32-bit and wx64 is for Windows 64-bit. Following this logic, u should be for Linux and m for macOS, but I'm not 100% sure. I'm just relaying what I saw when browsing through a number of CINs and their PLAT sections.
  19. You might try loading the macOS LV version into a debugger, because it has more debug symbols left unstripped, unlike the Windows and Linux versions. As I recall, I was able to read out the rest of the parameters and their types just by browsing the code in IDA. This mostly concerns old LV versions, before LV 2009. Check the LVSB and PLAT resource sections (and maybe LIsb for external subroutines) if you're going to study how CINs work. There are also Rolf's articles, which could help you put all the pieces together: https://forums.ni.com/t5/LabVIEW/What-happened-to-CINs-And-how-else-can-another-language-work/m-p/2726539#M807177
  20. Try this VI to get your window on top of the world: Set Calling VI Wnd Topmost & Active.vi This is the thread where it came from. You may use it this way: wnd_test.vi Or you may invoke the subVI at strictly specified intervals - you decide.
  21. Okay then. Your software - your rules. 🙂 I'm just worried about some cases with it, so I'm going to ask. As I mostly work in modern LabVIEW versions (2018, 2019, 2020) and your tools weren't tested on anything higher than LV 2014... What is the worst thing that can happen when I try to unpack/pack such VIs? Could they end up not fully unpacked or packed? Or could something get corrupted? Is it safe to ignore the frequent warnings on VI (un)packing? There are always a few of them, e.g.:
- Block b'VICD' section 0 XML export exception: The block is only partially exported as XML.
- Block b'VICD' section 0 binary prepare exception: Re-creating binary is not implemented.
- Block b'VICD' section 0 left in original raw form, without re-building.
Sometimes I saw the message "No matching salt found by Interface scan; doing brute-force scan" when packing back a VI with some sections slightly modified. It then leaves me waiting for the process to finish (honestly, my record is 10 minutes 😄 - I always interrupted it). How could I escape that? Is there some option to force the recalculation, like in flarn's utility (fixing checksums)? That one works fine even on the recent LabVIEW 2020. Do you mean rewriting pylabview for my needs? Even if I, say, just want to rename some section? I imagined that as a few additional parameters, e.g. "section" and "new section", and that would be all it takes to do the job - no need to repack everything. OK, how could I easily rename a section, having all the VI's files already extracted? I don't worry about the processing times at all now, as this is not for my work tasks - I experiment with it mostly for my own purposes. I'm aware of tools like that and use some of them. Thanks for the advice anyway. For me this is not a question of finding the proper addresses in memory - it's all about universality. 😉 In the past I made a plug-in that relied on internal memory offsets. It was a total pain in the ass to find and code the correct offsets for each and every LV version, and it was even more pain with each new LV version, because the offsets were absolutely new. So providing versatility required a large amount of time, and I finally gave it up years ago. Since then I hate hard-coding values based on unreliable internal knowledge, such as memory locations: they may change, and change rapidly, and then all the code goes to trash. I prefer not to write such programs at all.
  22. Thanks, I will take a look when I have fun studying VI internals again. Could I request one more feature, if possible? It would be very nice to have support for in-place section modification (e.g. type, id or binary contents), without unpacking into .xml and packing back (like in flarn's utility). I assume the checksums would have to be recalculated if dependent sections are altered, as is already done for the password option. That would save time on simple binary operations. Meanwhile, I was going to make a VI to show/hide those toolbar buttons, just for fun. Reality shows that the offsets in a VI's memory vary vastly between LV versions, bitness and IDE/RTE mode, so I likely won't be posting that. I'm just putting up this little picture so you could name the ViBhBit3 and ViBhBit4 bits in the ButtonsHidden field. I suppose the rest of the bits are reserved and do nothing, but I'll recheck on older LV versions and update this post. Now closer to your code:

import enum

class VI_BTN_HIDE_FLAGS(enum.Enum):
    """ VI Tool Bar Buttons Hiding flags """
    RunButton = 1 << 0        # Whether to display the Run button on the toolbar while the VI runs, and in edit mode as well. In LV14: Customize Window Appearance -> Show Run button. When off, also hides the Run Continuously button.
    SetBPButton = 1 << 1      # Set Breakpoint button (LV 3.x and earlier).
    StepIOButton = 1 << 2     # Step Into/Over button (LV 3.x and earlier). When on, the button is shown but disabled (inactive).
    PauseButton = 1 << 3      # Whether to display the Pause button on the toolbar while the VI runs, and in edit mode as well. Not implemented in LV as a separate GUI setting.
    DebuggingButton = 1 << 4  # Whether to display the Highlight Execution, Start Single Stepping (Step Into, Step Over) and Step Out buttons on the toolbar while the VI runs, and in edit mode as well. Not implemented in LV as a separate GUI setting.
    FreeRunButton = 1 << 5    # Whether to display the Run Continuously button on the toolbar while the VI runs, and in edit mode as well. In LV14: Customize Window Appearance -> Show Run Continuously button. The button isn't shown if the Run button is hidden.
    LogAtCompButton = 1 << 6  # Log at Completion button (LV 3.x and earlier).
    AbortButton = 1 << 7      # Whether to display the Abort Execution button on the toolbar while the VI runs, and in edit mode as well. In LV14: Customize Window Appearance -> Show Abort button.
    ViBhBit8 = 1 << 8         # unknown
    PrintAtCompButton = 1 << 9   # Print at Completion button (LV 3.x and earlier).
    EditRunModeButton = 1 << 10  # Change to Edit/Run Mode button (LV 3.x and earlier).
    ViBhBit11 = 1 << 11       # unknown
    ViBhBit12 = 1 << 12       # unknown
    ViBhBit13 = 1 << 13       # unknown
    ViBhBit14 = 1 << 14       # unknown
    ViBhBit15 = 1 << 15       # unknown

Here's my "dirty"/hacky tool to show and hide all the toolbar buttons: But_Show_Hide.vi Tested it on LV 2009 to 2020 (both 32- and 64-bit). Maybe it will work on something newer than LV 2020 in the future - nobody knows yet. It won't work on versions older than LV 2009, because the difference between LV 2009 and LV 8.x is way too large. And I will not be maintaining this tool at all, as it's (almost) useless for anyone, including me. Use it as is or forget it.

Small update: In LV 3.1.1 and earlier there was no separate Tools palette - all the tool buttons were on the toolbar.
Here's how it looked for stopped VIs: [screenshot] And this was for running VIs: [screenshot] Button 2 ("Pencil with Run arrow") was for switching between Edit and Run modes, button 5 ("...") was to set a breakpoint on the VI, button 6 ("___") was to pause the running VI, and buttons 8 and 9 ("Run arrow with file/floppy") were to enable logging or printing, respectively, at the VI's completion. Surely you know the rest. When pressed, button 5 turned into a "!" button (breakpoint set). When pressed, button 6 turned into a "Square wave" button (to unpause the VI) and revealed a "Single square step" button (to step into/over the VI's nodes). Here's how those buttons looked when pressed: [screenshot] I've updated the corresponding bits in my tool to reflect those early buttons too, even though the tool won't work on anything lower than LV 2009. The source of pylabview might be updated accordingly.
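To illustrate how the flags above combine, a quick sketch of composing a ButtonsHidden value - plain bitwise OR of the member values, nothing pylabview-specific:

# Hide the Abort and Pause buttons, leave the rest visible:
mask = (VI_BTN_HIDE_FLAGS.AbortButton.value |
        VI_BTN_HIDE_FLAGS.PauseButton.value)
print(hex(mask))   # 0x88 - bits 3 and 7 set in the ButtonsHidden field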
  23. I'm unable to invoke Heap Peek with the key combination in a VM with Ubuntu. I would check on a regular installation, but I don't have one ATM. Adding various tokens to the config file (/home/<username>/natinst/.config/LabVIEW-x/labview.conf) works fine for me, though.