Everything posted by dadreamer
-
Calling arbitrary code straight from the diagram
dadreamer replied to dadreamer's topic in LabVIEW General
All I could find about this are these two internal functions: CallVIFromDll and NCGRunVirtualInstrument. For the first one I was able to find a .NET prototype only: CallVIFromDll.Invoke(Int32 epIndex, IntPtr lvClient, IntPtr entryPointDataSpace). I'm not sure how it could be used for the task at hand. It doesn't appear to accept the VI parameters, and what do these arguments mean exactly? As to the second one, it doesn't accept the VI parameters either and must be called in the UI thread only. The prototype is as follows: int32_t NCGRunVirtualInstrument(uint32_t VIRef); I did some limited testing and it appears to work: the VI is launched with whatever parameter values are on its FP and no panel is shown, so we could prepare the parameters before the call with the Control Value.Set method. Not a very flexible solution, I think. I saw your post from 2016, where you said that you had found some functions suitable for the task. Do you remember the details?
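For what it's worth, here is a rough sketch of how NCGRunVirtualInstrument could be driven from a small helper, assuming the symbol is actually exported by the hosting LabVIEW module (that export, and resolving it with GetProcAddress, are my assumptions; the prototype is the one quoted above, and the VI refnum would come from the diagram, e.g. from Open VI Reference):

```cpp
// Sketch only: assumes NCGRunVirtualInstrument is exported by the hosting
// LabVIEW module and that the caller arranges to run this in the UI thread.
#include <windows.h>
#include <stdint.h>

using NCGRunVI_t = int32_t (*)(uint32_t viRef);

int32_t RunVIByRefnum(uint32_t viRef)
{
    // The internal functions live in labview.exe in the IDE and in lvrt.dll
    // in a built application.
    HMODULE lv = GetModuleHandleA("labview.exe");
    if (lv == nullptr)
        lv = GetModuleHandleA("lvrt.dll");
    if (lv == nullptr)
        return -1;

    auto run = reinterpret_cast<NCGRunVI_t>(
        GetProcAddress(lv, "NCGRunVirtualInstrument"));
    if (run == nullptr)
        return -1;                       // symbol not found

    // Control values would have been prepared beforehand with Control Value.Set,
    // since the function itself takes nothing but the VI refnum.
    return run(viRef);
}
```

On the diagram the same call is presumably just a CLFN pointing at LabVIEW itself, configured to run in the UI thread, with the U32 refnum wired in.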
-
It took a while to code it, but here it is at last. 😉 I found a way to retrieve the object's pointer soon after the last post in this thread, but I had to debug and test everything. Refnum_to_Pointer.rar

How it works: since we don't have any public or private API to obtain the Base Cookie Jar (BCJ) pointer (which is a LabVIEW global variable), the only way is to examine some function that uses it, find the place where it references this global and save the pointer. That is exactly how we get our BCJ. To clarify, it's for Object References only, not for any other references out there. Once we have the BCJ, we call MCGetCookieInfo and get the information associated with that refnum. As far as I understand, CookieInfo appears to be a C++ class with a few methods and properties (not just an ordinary struct), and one of its methods is able to extract the object's pointer from the VI DS. We then call that unnamed method using the hard-coded offset of 0xC (for 32-bit) / 0x18 (for 64-bit) and it returns the necessary pointer. The method has to be called with the __thiscall convention, which is why I'm using the technique described here.

I decided not to write a wrapper library, so that everyone can easily browse the code and alter it when needed. Currently tested on all LabVIEW versions from 2009 to 2020 (both 32- and 64-bit, both IDE and RTE). It won't work on anything earlier than LV 2009, because ExtFuncCBWrapper is absent there. Also no Linux or macOS at the moment, sorry. Oh, and it may break in future LV releases. Or it may not; nobody knows yet. 🙂
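Just to make the call-through-an-offset trick concrete, here is roughly how it looks in plain C++ (MSVC). Illustration only: the CookieInfo layout, what exactly the 0xC / 0x18 offset is relative to (I'm assuming the vtable), and the method's argument list are assumptions on my part; the attached VIs use the diagram-level __thiscall technique linked above instead.

```cpp
// Illustration only: calling an unnamed C++ method through a hard-coded
// vtable offset, as described above. Layout and argument list are assumed.
#include <stddef.h>
#include <stdint.h>

#if defined(_WIN64)
  constexpr size_t kMethodOffset = 0x18;   // 64-bit offset from the post
  // On x64 there is only the regular Microsoft convention ('this' in RCX),
  // so a plain function pointer works.
  using GetObjPtr_t = void* (*)(void* self, void* viDataSpace);
#else
  constexpr size_t kMethodOffset = 0x0C;   // 32-bit offset from the post
  // On 32-bit Windows the method expects __thiscall ('this' in ECX);
  // MSVC allows spelling that out on a function pointer type.
  using GetObjPtr_t = void* (__thiscall*)(void* self, void* viDataSpace);
#endif

void* CallUnnamedMethod(void* cookieInfo, void* viDataSpace)
{
    // The first pointer-sized field of a polymorphic C++ object is its vtable.
    void** vtable = *static_cast<void***>(cookieInfo);

    // Fetch the slot at the hard-coded byte offset into the vtable.
    auto method = reinterpret_cast<GetObjPtr_t>(
        *reinterpret_cast<void**>(reinterpret_cast<uint8_t*>(vtable) + kMethodOffset));

    return method(cookieInfo, viDataSpace);
}
```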
-
How to enable native XNode development on Linux/Mac
dadreamer replied to Sparkette's topic in VI Scripting
No, that XNodeDevelopment_LabVIEWInternalTag token doesn't have any effect in LabVIEW for Windows. Moreover, it isn't even present in the executable. -
How to enable native XNode development on Linux/Mac
dadreamer replied to Sparkette's topic in VI Scripting
And this is for Windows 😉 I don't want to violate the rules, so I'm not going to describe how to achieve this functionality on Windows. If you really want to get it, take a closer look at those Scripting packages, find the .lc file there, then alter the PACKAGE / INCREMENT tokens to LabVIEW_XNodeDevelopment_PKG and the COMPONENTS token to LabVIEW_XNodeDevelopment in it. I'm sure you know what to do next. -
How to enable native XNode development on Linux/Mac
dadreamer replied to Sparkette's topic in VI Scripting
You are right, I managed to successfully activate the LabVIEW XNode entries with the XNodeDevelopment_LabVIEWInternalTag=True token. Here are the screenshots taken on Ubuntu with LV 2019 64-bit, and these are from Sierra with the same LV. In my case the preferences file was here:
- /home/<user name>/natinst/.config/LabVIEW-<LV version>/labview.conf (on Linux);
- /Users/<user name>/Library/Preferences/LabVIEW.app <LV version> 64-bit Preferences (on macOS). -
I want to remind once again that all this information is just for having fun playing with LabVIEW and is not intended for use in real projects. I believe you all understand that. 🙂 Not that big a discovery, and for some of you not a discovery at all, but I found it interesting enough to start a thread.

As you may already know, when a library is called through a CLF Node, LabVIEW first enters ExtFuncWrapper, which does some guard checks to protect LabVIEW from a silent crash and show the user an error message instead. I had always considered that wrapper too boring to study, so I never looked inside. But when I once again ran into a function that I couldn't call through a CLFN and had to write my own wrapper library for, I asked myself: why can't we call code by its pointer, as in almost any well-known text-based language? So I decided to find out how ExtFuncWrapper calls the function. It turns out that ExtFuncWrapper receives the function's pointer (along with a pointer to the parameters struct) and later calls that pointer as-is (i.e., without doing any manipulations with it). So we can use it to call any function and even any chunk of code directly from the diagram!

After further research I found ExtFuncWrapper not very convenient to use, because the parameters struct has to be prepared in a particular way to get past some of the checks. But there are many ExtFunc... wrappers in labview.exe, and ExtFuncCBWrapper is much easier to use. It has the following prototype: int32_t ExtFuncCBWrapper(uintptr_t CodeChunk, int32_t UseTLS, void *CodeParams); Here CodeChunk is our function / code pointer, UseTLS is 0 as we don't use LabVIEW's Thread Local Storage, and CodeParams is our parameters struct. When called, ExtFuncCBWrapper runs that CodeChunk, passing CodeParams to it, so we can use it to do whatever we want.

Nuff said, here are the samples. This one increments a Numeric: ExtFuncCBWrapper-Increment.vi As you can see, I'm passing a Numeric value as the CodeParams pointer into ExtFuncCBWrapper, and in the assembly I have to pull that pointer out to work with my parameters. I'm not that great at asm, so I used one of the many online x86 compilers/decompilers out there. It's even simpler in the 64-bit IDE, as the first parameter already arrives in RCX.

Okay, here goes a more advanced example - it calculates the sum of two input Numerics: ExtFuncCBWrapper-SumOfTwo.vi Here I'm passing a cluster of three parameters (two Numerics and the resulting Sum) as the CodeParams pointer, and in the asm I'm grabbing the first parameter, adding it to the second one and writing the result into the third one. Pretty simple operations.

Now let's do some really wild asm on the diagram! 😉 The last example calls the MessageBox function from user32.dll: ExtFuncCBWrapper-MsgBox.vi This is what Rolf calls diagram voodoo. 😃 I have to provide 4 parameters to MessageBox, 2 of which are string pointers, so I'm getting these pointers and writing them into my params cluster (along with the panel handle and the dialog type). When ExtFuncCBWrapper is called, the asm code needs prologue and epilogue pieces to prevent stack corruption, as I later push 4 parameters onto the stack to pass them to MessageBox. After the call I write the result into the function's return parameter. In the 64-bit IDE the prologue/epilogue is somewhat simpler.

Maybe you already noticed that I'm calling VirtualProtect before calling ExtFuncCBWrapper. This is done to get past Windows Data Execution Prevention (DEP). I'm setting execute and read/write access on the memory page holding my code, otherwise LabVIEW refuses to run it and throws an exception. Surprisingly, the exception is thrown only in the 64-bit IDE; in 32-bit LV I can just wire the U8 array to ExtFuncCBWrapper without going through that DSNewPtr-MoveBlock-VirtualProtect-DSDisposePtr chain. I haven't tried to figure out why.

Well, to be honest, I doubt anyone will find these samples really useful for their tasks, because these are very low-level operations and it's much easier to use a common CLFN or a helper DLL. They are here just to show that the things described are definitely doable from an ordinary diagram, without writing any libraries. With good asm skills it's even possible to implement callback functions or call some exotic functions (like C++ class methods). Some things could be improved as well, e.g. embedding a generic assembly compiler so that one could write code instead of raw bytes on the diagram. Ideally it's even possible to implement an Inline Assembly Node. Unfortunately, I have neither the time nor much desire to do it myself.
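To make the calling contract concrete, here is what the "sum of two" code chunk and its parameter block would look like if written in C++ instead of raw bytes. This is a sketch only: it assumes ExtFuncCBWrapper can be resolved from the hosting LabVIEW module (the prototype is the one quoted above), and of course a compiled helper defeats the whole "no wrapper library" point of the post; it is here purely to show what ExtFuncCBWrapper expects.

```cpp
// Sketch of the "sum of two Numerics" example in C++ rather than raw machine
// code. ExtFuncCBWrapper's prototype is the one quoted above; everything else
// (export lookup, the chunk's exact return/calling convention) is assumed.
#include <windows.h>
#include <stdint.h>

// Parameter block matching the three-element cluster wired on the diagram.
struct SumParams {
    int32_t a;
    int32_t b;
    int32_t sum;
};

// The "code chunk": ExtFuncCBWrapper passes CodeParams through to it untouched.
// (cdecl with an int32 return is an assumption for this sketch.)
static int32_t SumChunk(void* codeParams)
{
    SumParams* p = static_cast<SumParams*>(codeParams);
    p->sum = p->a + p->b;
    return 0;
}

using ExtFuncCBWrapper_t = int32_t (*)(uintptr_t codeChunk, int32_t useTLS, void* codeParams);

int32_t CallSumThroughWrapper(int32_t a, int32_t b, int32_t* out)
{
    // Internal functions live in labview.exe in the IDE and in lvrt.dll in a
    // built application.
    HMODULE lv = GetModuleHandleA("labview.exe");
    if (lv == nullptr)
        lv = GetModuleHandleA("lvrt.dll");
    if (lv == nullptr)
        return -1;

    auto wrapper = reinterpret_cast<ExtFuncCBWrapper_t>(
        GetProcAddress(lv, "ExtFuncCBWrapper"));
    if (wrapper == nullptr)
        return -1;                       // symbol not found

    SumParams params{ a, b, 0 };
    // UseTLS = 0: we don't touch LabVIEW's thread-local storage.
    int32_t ret = wrapper(reinterpret_cast<uintptr_t>(&SumChunk), 0, &params);
    *out = params.sum;
    return ret;
}
```

When the chunk is a raw U8 byte array on the diagram instead of compiled code like this, the bytes first have to sit in memory that is marked executable, which is exactly what the DSNewPtr-MoveBlock-VirtualProtect(PAGE_EXECUTE_READWRITE)-DSDisposePtr chain mentioned above takes care of.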
-
Common Error and Fix - LabVIEW caught fatal signal
dadreamer replied to Jim Kring's topic in LabVIEW Bugs
That article is available on the Web Archive. -
I'm still investigating things, but now I'm starting to think it's a rather complicated task. I've found no easy-to-use function in the LabVIEW internals to get that pointer. And there's another difficulty: the refnum has to be matched with the object it relates to. I don't see any refnum info in Heap Peek's object (and its DCO) properties. There's a UID in the very first string, so that could potentially be used to identify the needed object. In that case the list of all VI objects would have to be retrieved (from OH or DS Heap, I guess) and each object analyzed to see whether its UID matches ours. A somewhat brute-force approach, but it's the best I could come up with. Maybe someone knows a better solution... As to refnums, there's MCGetCookieInfo and its wrapper named BaseCookieJar::GetCookieInfo, but I don't know a reliable way to find the Cookie Jar for a concrete LabVIEW instance. And even with that, I'm not sure whether that function returns the necessary data.
-
Selecting multiple folder paths at the same time?
dadreamer replied to caleyjag's topic in User Interface
Could you elaborate on the problem you're facing? If you received Error 1386: The Specified .NET Class is Not Available in LabVIEW, then you most likely need to unblock the downloaded DLL first and only then launch LabVIEW and run the example. See this article for the details.
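For context, "unblocking" just removes the Zone.Identifier alternate data stream that Windows attaches to downloaded files. Normally you'd do it from the file's Properties dialog or with PowerShell's Unblock-File; in code it is a one-liner. A sketch, with the DLL path as a placeholder:

```cpp
// Sketch: unblock a downloaded DLL by deleting its Zone.Identifier stream.
// The path passed in is just a placeholder for wherever the DLL was saved.
#include <windows.h>
#include <string>

bool UnblockDll(const char* dllPath)   // e.g. "C:\\libs\\SomeDownloaded.dll"
{
    std::string adsPath = std::string(dllPath) + ":Zone.Identifier";
    // Deleting the alternate data stream clears the "blocked" flag;
    // if the stream doesn't exist, the file wasn't blocked to begin with.
    return DeleteFileA(adsPath.c_str()) != 0 ||
           GetLastError() == ERROR_FILE_NOT_FOUND;
}
```
-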
Selecting multiple folder paths at the same time?
dadreamer replied to caleyjag's topic in User Interface
I'm not sure whether this can be accomplished with the common File Dialog or the underlying yellow File Dialog and its ExtFileDialog internal function. But you could switch to .NET and use one of the third-party libraries available. One of those is BetterFolderBrowser, proposed here. I have just tested it on both 32- and 64-bit LabVIEW 2019/2020 and it works great. Here's a basic example: Untitled 1.vi -
Did you have a look at VI Scripting? If not, check the following example: [LabVIEW]\examples\Application Control\VI Scripting\Creating Objects\Adding Objects.vi To be able to create controls or indicators, open the BD and change the VI Server class of the "Function" constant to Generic -> GObject -> GObject. Then change the "Subtract" constant to something like "Numeric Control" and run the VI. Hope this helps you move further with your task.
-
Well, they're obviously not enough to have absolute control over SH, including memory pool management as per the SH API. Unfortunately I don't see any other functions or private nodes exposed, except maybe the FreeSmartHeapPool function of mgcore, but that one crashes LV for some reason. I'm afraid my finding about switching mgcore is almost useless, because a compiled app (i.e. EXE) uses lvrt.dll, which already has the mgcore stuff integrated into it, so there's no way to disable SH in lvrt, as that would require recompiling it from the sources. And I've never seen any LVRT variants other than the classic one and the Full Featured one. Honestly, I don't know why LabVIEW ships with 4 variants of mgcore if it only uses one of them. Yeah, it doesn't help much, because it's like inserting an RD (Request Deallocation) block at the end of every VI. In LabVIEW before 7.x there was a "Deallocate memory as soon as possible" checkbox in the settings. This setting was stored in the INI as the anxiousMemoryDeallocation token. In 7.x they removed the checkbox and apparently renamed anxiousMemoryDeallocation to overanxiousMemoryDeallocation. LabVIEW still tries to read overanxiousMemoryDeallocation on startup, so it could be used if needed. Not that there's much point, though. By the way, this wiki page should be updated as well.
-
Quick Drop plugin - retain data across calls
dadreamer replied to Marshall Eubanks's topic in VI Scripting
By chance I came across those private nodes too and played with them a little. They let you retain data per LabVIEW process. That means you can access the data from any VI in any project. Feels like Tags that are not stored inside the VI DS. Neat feature, indeed. -
Thanks, Rob! Very well done research with a lot of technical details, just the way we like it here. 🙂 After reading and re-reading your post and the SH-related documents and playing with the samples, I still have one question. Can we control SH behaviour in any way, or is it entirely up to the LabVIEW Memory Manager? Say, could I make SH empty its pools and free all the cached data, thus reclaiming the occupied space? Or does it never give it back entirely? Could I disable SH utilization somehow, or is it hardcoded to be always on? I found a few private properties to control Memory Manager settings, e.g. Application.Selective Deallocation, Application.Selective Deallocation.EnableForAllVIs, Application.Selective Deallocation.LargeBufferThreshold and Application.NeverShrinkBuffers, but playing around with these doesn't help much. I would say it even worsens the situation in some cases. Currently I see no way to get the occupied memory back, so LabVIEW can (and will) eat as much memory as it needs for its purposes. So, we have to live with it, don't we?..

upd: I think I found something. In the [LabVIEW]\resource folder there are four variants of the Memory Manager library:
- mgcore_20_0.dll - no SH, no AT (Allocation Tracker)
- mgcore_AT_20_0.dll - with AT
- mgcore_AT_SH_20_0.dll - with both SH and AT
- mgcore_SH_20_0.dll - with SH
LabVIEW uses the SH version by default. If we switch to the "no SH, no AT" version by backing up mgcore_SH_20_0.dll and renaming mgcore_20_0.dll to mgcore_SH_20_0.dll, memory consumption is somewhat reduced and we get more memory back after RD is called.

On the default mgcore_SH_20_0.dll I'm getting these values:
- LabVIEW opened and the example running - 199 056 KB;
- after the string array was created once (RD is on) - 779 256 KB (peak at ~800 000 KB);
- after the VI is stopped and closed - 491 096 KB.

On mgcore_20_0.dll I'm getting these values:
- LabVIEW opened and the example running - 181 980 KB;
- after the string array was created once (RD is on) - 329 764 KB (peak at ~600 000 KB);
- after the VI is stopped and closed - 380 200 KB.

Of course, it all needs more extensive testing. I see, however, that the "no SH, no AT" version uses less memory for these operations, so it could be preferable when the system is fairly RAM-limited.
-
All the DLLs may be added to the project manually (RMB click -> Add -> File), and in the build spec, on the Source Files tab, the DLLs should be put into the Always Included category. When the build finishes, you will have the DLLs in the 'data' folder. Just tested with a trivial project and it worked fine.
-
From my own experience with CLFNs, if you set the "Specify path on diagram" checkbox in the CLFN's settings, LabVIEW always uses the path from the diagram and never the path from the "Library name or path" field. When you set that checkbox everywhere, all you need is to construct the proper path for both 32 and 64 bit and pass it into your CLFN(s). Here's an article that may help: How to Configure LabVIEW to Use Relative Paths for DLLs? Another option might be to use an asterisk in the library name to distinguish between 32 and 64 bit. Refer to the Configuring the Call Library Function Node article and look for how to use the * wildcard.
-
Yeah! LabVIEW 2020 64-bit - RD doesn't work for strings as it should. Even when the VI is unloaded, LabVIEW still holds some memory allocated and never releases it. No trace of it on the NI forums. Anyone with internal access to the list?
-
You might try loading the macOS LV version into a debugger, because it has more debug symbols left unstripped, unlike the Windows and Linux versions. As I recall, I was able to read out the rest of the parameters and their types just by browsing the code in IDA. This mostly concerns old LV versions before LV 2009. Check the LVSB and PLAT resource sections (and maybe LIsb for external subroutines) if you're going to study how CINs work. There are also Rolf's articles, which could help you put all the pieces together: https://forums.ni.com/t5/LabVIEW/What-happened-to-CINs-And-how-else-can-another-language-work/m-p/2726539#M807177
-
Try this VI to get your window on top of the world: Set Calling VI Wnd Topmost & Active.vi This is the thread it came from. You may use it this way: wnd_test.vi Or you may invoke the subVI at strictly specified intervals; you decide.
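In case it helps to see what such a VI presumably does under the hood: on Windows, "topmost" boils down to a SetWindowPos call with HWND_TOPMOST on the panel's window handle. A minimal sketch (obtaining the HWND via FindWindowA by title is just an assumption here; the attached VI may do it differently):

```cpp
// Minimal Win32 sketch of pinning a window on top, assuming we locate it by
// its title. This only illustrates the underlying API; the attached VI may
// obtain the panel's HWND another way.
#include <windows.h>

bool MakeTopmostAndActive(const char* windowTitle)
{
    HWND hwnd = FindWindowA(nullptr, windowTitle);   // find the panel by its title
    if (hwnd == nullptr)
        return false;

    // HWND_TOPMOST pins the window above all non-topmost windows;
    // SWP_NOMOVE | SWP_NOSIZE keep its current position and size.
    SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_SHOWWINDOW);

    // Bring it to the foreground / make it active as well.
    SetForegroundWindow(hwnd);
    return true;
}
```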