Everything posted by Rolf Kalbermatter
-
Well I'm not sure about multithreading issues with respect to .Net, but .Net is certainly more flexible than ActiveX, which was built on OLE with all its pre-Win32 legacy issues. So it does seem reasonable that the .Net component is by default instantiated and run in a separate thread and therefore satisfies the apartment threading requirements of the ActiveX component, without being locked into the LabVIEW UI thread. How much the .Net component would have to do to actually make sure it works like that, I would not know. It is entirely possible that the .Net component has to manage the entire threading for the ActiveX component explicitly. Since the ActiveX component doesn't really count as a fully managed service in terms of .Net, there will also have to be some explicit managed-unmanaged translation code in the .Net component. Fortunately, if you use tlbimp/aximp to create the ActiveX wrapper you should be safe from having to bother with these issues.
-
I'm almost 100% sure it is not documented. And from the looks of it, it won't really help you here. The function linked to by the xnode is the only useful exported function in there. If the xnode doesn't know how to deal with variants to make the xnode interface work with that library function, the library function itself most likely doesn't know either. But can you explain what you are trying to do? Do you really need the runtime variant type feature of a Variant, or do you just want adaptable code that gets the right datatype at edit time? If the second is true, some polymorphic VIs and possibly the ILVDataInterface in combination with Adapt To Type as the Call Library Node parameter type might be enough. Just be aware that ILVDataInterface is only available since LabVIEW 2009.
-
Well, that made me think! It might not be the LvVariant that is refcounted but the data inside it. Take a look at ILVDataInterface.h. I think all the diagram data is probably wrapped in that and refcounted, and the LvVariant is then a thin wrapper around it to manage polymorphic data. The ILVData interfaces in there inherit from IUnknown, most likely the same as the COM IUnknown interface, even though NI seems to have gone the path of creating an nidl tool that replaces the midl compiler from MS in order to make it work on non-MS platforms. Quite an undertaking, but I would certainly start with the source code from Wine for that, who made a widl tool! While Wine is LGPL, as long as it is only for in-house use, and I doubt NI ever planned to release that toolchain, this would basically still be fine.
-
We need a new password cracker :-(
Rolf Kalbermatter replied to Sparkette's topic in LabVIEW General
showing off, eh? -
Well, refcounting is for ease of resource management. You simply follow some well defined rules about when to call AddRef() and Release(), and then the last Release() automatically deallocates the object. It controls the lifetime of the object, not of the data in the object. Flattening a Variant basically AddRefs() the object, retrieves the data and stores it in a flattened stream, and then calls Release() on the object. If nobody else has still AddRefed() the object, it will now be deallocated. Unflattening does a new() for a Variant object, which creates the object with one refcount, then AddRef(), then unflattens and stores the unflattened data in the object, then Release() again. Works perfectly for COM, so why not for LabVIEW Variants? The master pointer is an old concept taken from the Mac OS 7 memory manager. Basically, the memory manager maintained an array of master pointers from which handles got allocated. The entry in the master pointer array was the first pointer in the handle, which pointed to the actual memory area. This allowed the memory manager to maintain a preallocated pool of master pointers and swap allocated memory areas between an "allocated" master pointer pool and a "previously allocated" pool when a handle was freed, in order to potentially save some memory allocations. But it also meant that that pool could be exhausted, and then you had to call a special function MoreMasterPointers() or similar to extend the pool. The current handle implementation in LabVIEW still inherits some characteristics from this, but as far as I know a handle is no longer allocated from a master pointer array but instead is just a pointer to a pointer to a memory area. The memory area does actually have a hidden header in front of the handle that stores information about the size of the handle, some flags, and, at least in some builds of LabVIEW, a back pointer to the pointer that points to the memory area.
But most of this information is no longer actively used (except possibly in special internal debug builds to track down memory corruptions more easily).
-
I don't think so. Handles do not really maintain a refcount themselves. And I doubt an LvVariant is a real handle, although it seems to be a pointer to a pointer (or more precisely, a pointer to a C++ class). I would suspect the LvVariant implements some COM-like refcounting, but I would not know how to get the refcount properly increased and decreased.
-
That is most likely not a real fix but just the avoidance of a symptom. It looks like LabVIEW is actually maintaining some form of reference count on Variants. Copying the pointer alone is likely not enough to keep the variant alive; somehow the reference count has to be incremented too (and properly decremented again afterwards, to avoid lingering Variants).
-
And why do you think a variant is 8 bytes long? I don't know how long it is, but it is either a pointer or a more complex structure whose size you cannot easily determine. In the pointer case, which I would tend to believe it is, it would be either 4 bytes or 8 bytes long depending on whether you run this on 32-bit or 64-bit. The pointer theory is further reinforced by the lonely definition of the LvVariant typedef in the extcode.h file. It is certainly an object, so it will hold a virtual dispatch table (a pointer, yes) and the actual data itself in whatever format LabVIEW would like. Most likely they chose an approach similar to the Windows VARIANT layout, with a variable telling the type of the data and the actual data either as a pointer for non-scalar data or directly embedded for scalars. If you run your VI on LabVIEW for 32-bit, the second MoveBlock will overwrite data in memory beyond the Variant pointer and therefore destroy something in memory. Please note also that LabVIEW in fact knows two variants. One is the native LvVariant and the other is the Windows VARIANT. They look the same on the diagram and LabVIEW will happily coerce from one to the other, but in memory they are different. And while you can configure a CLN parameter to be a Windows VARIANT, this is obviously only supported on Windows. I still wish they would document the LvVariant API that is exported by LabVIEW.exe.
-
And that is not necessarily LabVIEW's fault. Most ActiveX controls/servers are not designed to be multithreading safe, declaring themselves to be apartment threaded, meaning any specific caller has to call them always from the same thread. LabVIEW, being itself highly dynamically threaded, can only guarantee this by executing them from the only special thread it knows about, and that is the UI thread. Theoretically LabVIEW could of course extend its already very complex threading model by providing a dynamic number of fixed threads (nice contradiction in there), one for each apartment-threaded ActiveX control the user wants to run, but that would make the whole threading in LabVIEW even more complex and more likely to fail under heavy conditions, so it would buy little in the end. The best solution would be to require the ActiveX component developer to provide a free-threading-compatible component and register it as such in Windows. Then LabVIEW could call it from any of its threads and would not have to push it into the UI thread. But as already pointed out, free-threading-capable ActiveX components are a very rare species, since they are not exactly trivial to develop and apartment threading works well enough in the majority of use cases.
-
calling Labview DLL from VB.NET
Rolf Kalbermatter replied to Alex723's topic in Calling External Code
Attachment to post on the NI site, as the attachment upload seems borked there. VB VISA.zip -
labpython Problem with labpython and numpy
Rolf Kalbermatter replied to Gombo's topic in OpenG General Discussions
The NIPrivatePtr() macro is definitely a LabVIEW 2012 invention; it didn't exist before. From LabVIEW 8.5 through 2011 it was called LV_PRIVATE_POINTER(), and before that PrivateP(). So as you can see, the bad guys here are the NI developers, devilishly changing macro definitions over the course of decades. The change from PrivateP() to LV_PRIVATE_POINTER() is most likely due to supporting a development environment (like maybe a new Mac OS X SDK) that introduced exactly this macro for its own use, so NI had to change its own macros to prevent a name collision. Forcing parties like Apple or Microsoft to change their macro definitions because NI had longer rights to the name is not an option. The last change looks a bit bogus. Most likely the macro in 8.5 was added by a Microsoft-inspired, Hungarian-notation-loving guy or gal in the LabVIEW team, while someone in 2012 decided that this notation is evil and changed it to the more Apple-like upper/lowercase notation that is used throughout the LabVIEW source in most places (since LabVIEW was originally written on the Mac and ported to other platforms later). Changing LabPython such that it can compile with all the different LabVIEW cintools header versions is a nice exercise in preprocessor macro magic, and one I honestly do not feel any inclination to do at this moment. If (and the emphasis is on IF) I ever need to recompile LabPython for any reason, I will still need the old definitions, since I generally use cintools headers that are based mostly on the LabVIEW 7.1 definitions, with a few minor edits to support 64-bit compilation. -
labpython Problem with labpython and numpy
Rolf Kalbermatter replied to Gombo's topic in OpenG General Discussions
I can assure you that I have not put any deliberate typo in the source code, in either the C or the LabVIEW part. I can't guarantee that something might be messed up somehow because of some settings on my machine at that time that would not work out of the box on a standard installation, but there was no intention to make LabPython not compile for others. That would also be pretty useless, given that the ENTIRE source code is openly accessible. As to why LabPython is the way it is now, that is a long story. The first idea was simply to have a LabVIEW Python script server so that you could execute Python scripts in the LabVIEW formula script node. While this was a fun idea, the practical usefulness of this solution turned out to be rather limited, since the script text was in fact compiled into the VI and therefore not possible to change at runtime. Looking at it a bit more, it seemed quite trivial to simply export some more of the DLL functions and write LabVIEW VIs to access them, to get truly dynamic script execution. And the dynamic linking to the Python server was a further afterthought to the whole story, since Python had regular updates and each version used a different DLL name. But working with the Python community turned out a bit complicated at that time. While Python supports both embedding other environments into it and embedding Python into other systems (like here in LabVIEW), support for the latter was considered highly unimportant, and various discussions about improving that interface were shut down as undesirable. So once I got it working in LabVIEW I sort of lost all drive to do anything more with it. The fact that you seem to be the first to notice a typo (after more than 10 years) only points out that apparently nobody has bothered in all that time to even take a serious look beyond the Python VIs, despite the entire source code being openly available.
Also, that typo might be something that was not a typo 12 years ago, with Visual C 6.0 and LabVIEW 6.0 cintools. -
Tinkering with someone else's secrets can really cause paranoia. Up until now it surely wasn't done, but maybe you have now given some legal guy an idea. I doubt that developers like AQ would ever even consider such a feature, but if suddenly the powers that be, after reading your post, decide that this is a good idea, he can't really tell them no!
-
It's been a long time since I looked at ASIO. It appeared to be a somewhat old-fashioned API that couldn't be interfaced to LabVIEW without a good intermediate shared library layer. Whether it would support low latencies is beyond my knowledge, and also whether that would be naturally achievable or the intermediate layer would have to jump through several hoops to support it. MME, or more correctly DirectX, as used at least by the second version of the Sound APIs, can be used for much less than 300 ms latency too, but you have to make a serious effort for that, something the NI developers didn't feel like doing, which is understandable as NI is in the business of selling high quality data acquisition hardware for that purpose. Anyhow, that project did finally not go anywhere, as the effort for the ASIO interface was considered too expensive in relation to the rest of the project, and buying NI hardware was simply cheaper than the combined costs of a high performance ASIO-based audio interface hardware and the development cost of the LabVIEW ASIO interface. Also, you need to understand that ASIO is a sort of pseudo standard, proposed by one party in the field, with several others adopting it more or less accurately, with the less being in the vast majority.
-
It's not the lack of description in the VIs or such that feels wrong, but the kind of post you make. Posting a ZIP file with a single line of text is really not going to make anyone understand what it might be about, and we are all way too busy to have enough time on our hands to download every single file on the net to play around with it. So the natural reaction is: what the heck? When you go back to those posts about LabVIEW scripting, you will find that they usually weren't just a ZIP file thrown out to the masses; some time was invested to write about what it was, why, and how. That helps tremendously. I can understand that you are excited about having dug up something and want everybody on earth to know what a great achievement that is. But that is not helped by just posting it as is, without some extra explanation of what it is. And FPGA is specific enough that the target audience is rather limited anyhow.
-
-
Polling text file for changes
Rolf Kalbermatter replied to Mark Zacharoni Hamilton's topic in LabVIEW General
Most file systems, including the usual ones under Windows, know two sets of basic access rights for files. One is the access rights, which determine whether the application has read, write or both access, and are usually defined when opening the file. The other is deny rights, which specify what the application wants to allow other applications to do with the file while it has it open. When your application tries to open a file, the OS verifies whether the requested access rights conflict with any deny rights defined by other applications that have the same file open at the moment. If there is no conflict the open operation is granted; otherwise you get an access denied error. So the first important part is what deny rights the other application defines for the file when it opens it. Since it is writing to the file, it by default denies any write rights to the file for other applications, but it can also choose to specifically deny any and all rights for other applications. There is nothing LabVIEW (or any other application) could do about it if the other application decides to request exclusive access when opening the file. But if it doesn't request denial of read rights to other applications, then you can open that file with read rights (but usually not for write access) while it is open in a different process/application. The access rights to request are defined in LabVIEW when opening the file with Open/Create/Replace File. This will implicitly deny write access for other applications when the write access right is requested (and may make the Open fail if write access is requested and another application already has the file open for write or has explicitly denied write access to the file). The Advanced File->Deny Access function can be used to change the default deny access rights for a file after opening it.
Here you can deny both read and/or write access for other processes, independent of the access right chosen when opening the file refnum. -
Gopel boundary scan DLL
Rolf Kalbermatter replied to Bjarne Joergensen's topic in Calling External Code
That shouldn't make any difference. A buffer of four bytes with bCnt = 4 is absolutely equivalent to an int32 passed by reference with bCnt = 4; the DLL function has absolutely no way to see a difference there. The only question that remains is what the aVarId value needs to be. The description is totally cryptic and is a very good candidate for having been misunderstood. -
Gopel boundary scan DLL
Rolf Kalbermatter replied to Bjarne Joergensen's topic in Calling External Code
Well, the information is not really enough to make conclusive statements here, but two or three things make me wonder a bit. The description says "Pointer to the handle of the tCascon structure ....", so I'm wondering if this parameter should be passed by reference. But the C++ prototype doesn't look like it should. Are you sure the function is stdcall? The CASRUNFUNC macro might specify the calling convention, but without seeing the headers it is impossible to say. If it doesn't specify one, it will depend on the compiler switch used to compile the DLL, and if nothing is specified there either, it is normally always cdecl. Last but not least: what does not work? What do you get? A crash, an error return code, no info? -
luaview LuaVIEW, accessing pre-processor keywords
Rolf Kalbermatter replied to TimVargo's topic in Calling External Code
Well, it's not exactly trivial to do that, and it also will only work if you use tasks to execute the scripts, but the included VI should give you an idea of how to go about it. I didn't code the actual passing out of the different data elements in detail, but I'm sure you can adapt it to whatever needs you might have. Get Task Information.vi -
There are at least three spam posts in the Lava blog but normal users don't seem to have the right to report them. So I'm doing it here.
-
Then you have to consider whether you can live with downtime while you replace components that have failed at some point. If that is possible, you can just make sure to keep some spare parts around and replace them as they fail. If uninterrupted 24/7 operation is mandatory, then even the PXI solution isn't a safe bet, but it is definitely more likely to work like that once the system is deployed and not modified anymore.
-
Application Task Kill on Exit?
Rolf Kalbermatter replied to hooovahh's topic in Application Design & Architecture
Well, in principle when you kill an application the OS will take care of deallocating all the memory and handles that application has opened. However, in practice it is possible that the OS is not able to track down every single resource that got allocated by the process. As far as memory is concerned I would not fret too much, since that is fairly easy for the OS to determine. Where it could get hairy is when your application used device drivers to open resources and one of them does not get closed properly. Since the actual allocation was in fact done by the device driver, the OS is not always able to determine on whose behalf that was done, and such resources can easily remain open and lock up certain parts of the system until you restart the computer. It's theoretically also possible that such locked resources could do dangerous things to the integrity of the OS, to the point that it gets unstable even after a restart, although that's not very likely. Since you say that you have carefully made sure that all allocated resources like files, IO resources and handles have been properly closed, it is most likely not going to damage your computer in any way that could not be solved by a full restart after a complete shutdown. What would concern me with such a solution, however, is that you might end up making a tiny change to your application, and unless you carefully test it to release all resources properly, by disabling the kill option and making sure the application closes properly no matter how long that takes, this small change could suddenly prevent a resource from being properly released. Since your application gets killed, you may not notice this until your system gets unstable because of corrupted system files. -
fundamental question about order of execution
Rolf Kalbermatter replied to Jordan Kuehn's topic in Real-Time
Thanks, that makes sense! And I'm probably mostly safe from that issue because I tend to make my FGVs quite intelligent, so that they are not really polled in high performance loops but rather manage them instead. It does show a potential problem in the arbitration of VI access, though, if that arbitration eats up that many resources. -
fundamental question about order of execution
Rolf Kalbermatter replied to Jordan Kuehn's topic in Real-Time
I'm aware that it is. However, in my experience they very quickly evolve because of additional requirements as the project grows. And I prefer to have the related code centralized in the FGV rather than have it sprinkled around in several subVIs throughout the project or, as often happens when quickly adding a new feature, even just attached to the global variable itself in the various GUI VIs. Now if I could add some logic into the NSV itself and maintain it with it, then who knows :-). As it stands now, the even cleaner approach would be to write a LabVIEW library or LVOOP class that manages all aspects of such a "global variable" logic and use it as such. But that is quite a bit more initial effort than creating an FGV, and I also like the fact that I can easily do a "Find all Instances" and quickly visit all places where my FGV is used when reviewing modifications to its internal logic. I'll have to check out the performance test VIs you posted. The parallel access numbers you posted look very much like you somehow forcefully serialized access to those VIs in order to create out-of-sequence access collisions. Otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long. So basically the NSV + RT FIFO is more or less doing what the FGV solution would do: maintaining a local copy that gets written to the network when it changes, while normally only the internal copy is polled? -
I haven't really looked at the current implementation of plugging in controls through DLLs, but a quick glance at the IMAQ control made me believe that the DLL part is really just some form of template handler and the actual implementation is still compiled into the LabVIEW kernel. And changing that kernel is definitely not something that could be done by anyone outside the LabVIEW development team, even with access to the source code. In the old days (v3.x) LabVIEW had a different interface for external custom controls based on a special kind of CIN. It was based directly on installing a virtual method table for the control in question, and this virtual method table was responsible for reacting to all kinds of events like mouse clicks, drawing, etc. However, since this virtual method table changed with every new LabVIEW version, such controls would have been very difficult to move up to a new LabVIEW version without a recompile. Also, the registration process for such controls was limited in that LabVIEW only reserved a limited number of slots in its global tables for such external plugin controls. It most likely was a proof of concept that wasn't maintained when LabVIEW extended that virtual method table to allow for new features like undo and whatnot in 4.0, and it was completely axed in 5.0. It required more or less the entire set of LabVIEW header files, including the private headers, to create such a control from C code. Also, the only real documentation was supposedly the LabVIEW source code itself. I do believe that the Picture control started its initial life as such a control but quickly got incorporated wholesale into the LabVIEW source code itself, as it was much easier to maintain the code changes that each new LabVIEW version caused to the virtual table interface of all LabVIEW controls.
In short, writing custom controls for LabVIEW through a C(++) interface, while technically possible, would require such a deep understanding of the LabVIEW internals that it seems highly unlikely that NI will ever consider it outside of some very closely supervised NDA with some very special parties. It would also require access to many LabVIEW internal manager APIs that are often not exported in any way on the C side of LabVIEW and only partly on the VI server interface. LabVIEW 3 and 4 did export quite a lot of low level C manager calls, such as the text manager, graphic drawing, and window handling, which for a large part got completely removed in versions 5 and 6 in favor of exporting more and more functionality through the newly added VI server interface at the diagram level.