Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. And why do you think a variant is 8 bytes long? I don't know how long it is, but it is either a pointer or a more complex structure whose size you cannot easily determine. In the pointer case, which I tend to believe it is, it would be either 4 or 8 bytes long depending on whether you run this on 32-bit or 64-bit LabVIEW. The pointer theory is further reinforced by the lonely definition of the LvVariant typedef in the extcode.h file. It is certainly an object, so it will hold a virtual dispatch table (a pointer, yes) and the actual data itself in whatever format LabVIEW would like. Most likely they chose an approach similar to the Windows VARIANT layout, with a field indicating the type of the data and the actual data either as a pointer for non-scalar data or embedded directly for scalars. If you run your VI on LabVIEW for 32-bit, the second MoveBlock will overwrite data in memory beyond the variant pointer and therefore destroy something in memory. Please note also that LabVIEW in fact knows two variants. One is the native LvVariant and the other is the Windows VARIANT. They look the same on the diagram and LabVIEW will happily coerce from one to the other, but in memory they are different. And while you can configure a CLN parameter to be a Windows VARIANT, this is obviously only supported on Windows. I still wish they would document the LvVariant API that is exported by LabVIEW.exe.
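To illustrate just the size issue, here is a minimal C sketch, assuming the variant parameter really is a pointer-sized value; LvVariant itself is opaque and nothing below claims to describe its real layout.

    #include <string.h>

    /* For this sketch the variant is just a pointer-sized value:
       4 bytes on 32-bit LabVIEW, 8 bytes on 64-bit LabVIEW. */
    typedef struct LvVariantOpaque *LvVariantPtr;

    void CopyVariantRef(const void *src, LvVariantPtr *dst)
    {
        /* Correct: copy exactly one pointer worth of data. */
        memcpy(dst, src, sizeof(LvVariantPtr));

        /* Hard-coding 8 instead would, on 32-bit LabVIEW, write 4 bytes
           past the destination and corrupt whatever happens to live there. */
    }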
  2. And that is not necessarily LabVIEW's fault. Most ActiveX controls/servers are not designed to be multithreading safe; they declare themselves to be apartment threaded, meaning any specific caller has to call them always from the same thread. LabVIEW, with its highly dynamic threading, can only guarantee this by executing them from the only special thread it knows about, and that is the UI thread. Theoretically LabVIEW could of course extend its already very complex threading model by providing a dynamic number of fixed threads (a nice contradiction, I know), one for each apartment-threaded ActiveX control the user wants to run, but that would mainly make the whole threading in LabVIEW even more complex and more likely to fail under heavy conditions, so it would buy little in the end. The best solution would be to require the ActiveX component developer to provide a free-threading-compatible component and register it as such in Windows. Then LabVIEW could call it from any of its threads and would not have to push it into the UI thread. But as already pointed out, free-threading-capable ActiveX components are a very rare species, since they are not exactly trivial to develop and apartment threading works well enough for the majority of use cases.
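For reference, this is roughly what the two threading models look like from C, using the standard COM initialization call; the component itself and its usage are left out, this only shows the STA/MTA distinction.

    #include <windows.h>
    #include <objbase.h>

    /* An apartment-threaded (STA) component must always be called from the
       thread that created it, which is why LabVIEW pushes such calls into
       its UI thread. A free-threaded component living in the MTA can be
       called from any thread. */
    void RunComWork(int freeThreaded)
    {
        CoInitializeEx(NULL, freeThreaded ? COINIT_MULTITHREADED
                                          : COINIT_APARTMENTTHREADED);
        /* ... create and call the ActiveX/COM object here ... */
        CoUninitialize();
    }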
  3. Attachment to post on the NI site, as the attachment upload seems borked there. VB VISA.zip
  4. The NIPrivatePtr() macro definitely is a LabVIEW 2012 invention. That didn't exist before. From LabVIEW 8.5 until 2011 it was called LV_PRIVATE_POINTER() and before that PrivateP(). So as you can see, the bad guys here are the NI developers, devilishly changing macro definitions over the course of decades. The change from PrivateP() to LV_PRIVATE_POINTER() is most likely due to supporting a development environment (like maybe a new Mac OS X SDK) that introduced exactly this macro for its own use, so NI had to change its own macro to prevent a name collision. Forcing parties like Apple or Microsoft to change their macro definitions because NI had used the name longer is not an option. The last change looks a bit bogus. Most likely the macro in 8.5 was added by a Microsoft-inspired, Hungarian-notation-loving guy or gal in the LabVIEW team, while someone in 2012 decided that this notation is evil and changed it to the more Apple-like upper/lowercase notation that is used throughout the LabVIEW source in most places (since LabVIEW was originally written on the Mac and only later ported to other platforms). Changing LabPython such that it can compile with all the different LabVIEW cintools header versions is a nice exercise in preprocessor macro magic, and one I honestly do not feel any inclination to do at this moment. If (and the emphasis is here on IF) I ever needed to recompile LabPython for any reason, I would still need the old definitions, since I generally use cintools headers that are based mostly on the LabVIEW 7.1 definitions, with a few minor edits to support 64-bit compilation.
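That preprocessor exercise would boil down to something like the sketch below. I'm not reproducing the real macro bodies, only the detection pattern, and LV_OPAQUE() is a made-up name for the unified local macro.

    /* Map whatever private-pointer macro the installed cintools headers
       define onto one local name. The real expansions are NI's business;
       this only shows the compatibility pattern. */
    #if defined(NIPrivatePtr)            /* LabVIEW 2012 and later */
     #define LV_OPAQUE(T) NIPrivatePtr(T)
    #elif defined(LV_PRIVATE_POINTER)    /* LabVIEW 8.5 through 2011 */
     #define LV_OPAQUE(T) LV_PRIVATE_POINTER(T)
    #elif defined(PrivateP)              /* older cintools headers */
     #define LV_OPAQUE(T) PrivateP(T)
    #else
     #error "No known private pointer macro found in these cintools headers"
    #endif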
  5. I can assure you that I have not put any deliberate typo in the source code, neither in the C nor in the LabVIEW part. I can't guarantee that something might have been messed up because of some settings on my machine at that time that would not work out of the box on a standard installation, but there was no intention to make LabPython not compile for others. That would also be pretty pointless, given that the ENTIRE source code is openly accessible. As to why LabPython is the way it is now, that has a long story. The first idea was simply to have a LabVIEW Python script server so that you could execute Python scripts in the LabVIEW formula script node. While this was a fun idea, the practical usefulness of this solution turned out to be rather limited, since the script text was in fact compiled into the VI and therefore not possible to change at runtime. Looking at it a bit more, it seemed quite trivial to simply export some more of the DLL functions and write LabVIEW VIs to access them, to get truly dynamic script execution. And the dynamic linking to the Python server was a further afterthought to the whole story, since Python had regular updates and each version used a different DLL name. But working with the Python community turned out to be a bit complicated at that time. While Python supports both embedding other environments into it and embedding Python into other systems (as is done here with LabVIEW), support for the latter was considered highly unimportant and various discussions about improving that interface were shut down as undesirable. So once I got it working in LabVIEW I sort of lost all drive to do anything more with it. The fact that you seem to be the first to notice a typo (after more than 10 years) only shows that apparently nobody has bothered in all that time to take a serious look beyond the Python VIs, despite the entire source code being openly available. Also, that typo might well have been something that was not a typo 12 years ago, with Visual C 6.0 and LabVIEW 6.0 cintools.
  6. Tinkering with someone else's secrets can really cause paranoia. Up until now it surely wasn't done, but maybe you just gave some legal guy an idea. I doubt that developers like AQ would ever even consider such a feature, but if suddenly the powers that be, after reading your post, decide that this is a good idea, he can't really tell them no!
  7. It's been a long time since I looked at ASIO. It appeared to be a somewhat old-fashioned API that couldn't be interfaced to LabVIEW without a good intermediate shared library layer. Whether it would support low latencies is beyond my knowledge, as is whether that would come naturally or the intermediate layer would have to jump through several hoops to support it. MME, or more correctly DirectX, as used at least by the second version of the Sound APIs, can be used for much less than 300 ms latency too, but you have to make some serious effort for that, something the NI developers didn't feel like doing, which is understandable as NI is in the business of selling high-quality data acquisition hardware for that purpose. Anyhow, that project ultimately did not go anywhere, as the effort for the ASIO interface was considered too expensive in relation to the rest of the project, and buying NI hardware was simply cheaper than the combined cost of a high-performance ASIO-based audio interface and the development of the LabVIEW ASIO interface. Also, you need to understand that ASIO is some sort of pseudo standard, proposed by one party in the field, with several others adopting it more or less accurately, the less being in the vast majority.
  8. It's not the lack of description in the VIs or such that feels wrong, but the kind of post you make. Posting a ZIP file with a single line of text is really not going to make anyone understand what it might be about, and we are all way too busy to have enough time on our hands to download every single file on the net to play around with it. So the natural reaction is: what the heck? When you go back to those posts about LabVIEW scripting you will find that they usually weren't just a ZIP file thrown out to the masses; some time was invested to write about what it was, why, and how. That helps tremendously. I can understand that you are excited about having dug up something and want everybody on earth to know what a great achievement that is. But that is not helped by just posting it as is, without some extra explanation about what it is. And FPGA is specific enough that the target audience is rather limited anyhow.
  9. Most file systems, including the usual ones under Windows, know two sets of basic access rights for files. One is the access rights, which determine whether the application has read, write or both access, and these are usually defined when opening the file. The other is the deny rights, which specify what the application wants to allow other applications to do with the file while it has it open. When your application tries to open a file, the OS verifies whether the requested access rights conflict with any deny rights defined by other applications that have the same file open at that moment. If there is no conflict the open operation is granted, otherwise you get an access denied error. So the first important part is what deny rights the other application defines for the file when it opens it. Since it is writing to the file, it by default denies write access to the file by other applications, but it can also choose to specifically deny any and all rights for other applications. There is nothing LabVIEW (or any other application) could do about it if the other application decides to request exclusive access when opening the file. But if it doesn't request denial of read rights to other applications, then you can open that file with read rights (but usually not for write access) while it is open in a different process/application. The access rights to request are defined in LabVIEW when opening the file with Open/Create/Replace File. This will implicitly deny write access to other applications when the write access right is requested (and may make the Open fail if write access is requested and another application already has the file open for write or has explicitly denied write access for the file). The Advanced File->Deny Access function can be used to change the default deny access rights for a file after opening it. Here you can deny read and/or write access for other processes independently of the access rights chosen when opening the file refnum.
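On Windows the same mechanism is visible directly in the CreateFile call, where the share mode is exactly that deny mask seen from the other side; a minimal sketch (the path is just an example):

    #include <windows.h>

    /* Open a file that another process keeps open for writing: request read
       access only and allow the writer to keep both read and write access.
       If the writer had opened the file with exclusive access (share mode 0),
       this call would fail with a sharing violation. */
    HANDLE OpenLogForReading(void)
    {
        return CreateFileA("C:\\temp\\example.log",            /* example path        */
                           GENERIC_READ,                        /* our access rights   */
                           FILE_SHARE_READ | FILE_SHARE_WRITE,  /* what we allow others */
                           NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    }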
  10. That shouldn't make any difference. A buffer of four bytes with bCnt = 4 is absolutely equivalent to an int32 passed by reference with bCnt = 4. The DLL function has absolutely no way to see a difference there. The only thing that remains would be what the aVarId value needs to be. The description is totally cryptic and a very good candidate for having been understood wrong.
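A tiny C sketch of why the two configurations are indistinguishable to the DLL; ReadRegister is only a made-up stand-in for the actual function:

    #include <stdint.h>
    #include <string.h>

    /* Made-up stand-in for the DLL function: it only ever sees an address
       and a byte count. */
    static int32_t ReadRegister(uint8_t *buf, int32_t bCnt)
    {
        memset(buf, 0, (size_t)bCnt);
        return 0;
    }

    void Caller(void)
    {
        uint8_t raw[4];
        int32_t value;
        ReadRegister(raw, 4);               /* "array of 4 bytes" configuration */
        ReadRegister((uint8_t *)&value, 4); /* "int32 passed by reference"      */
        /* Both calls hand the function exactly the same thing. */
    }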
  11. Well, the information is not really enough to make conclusive statements here, but two or three things make me wonder a bit. In the description it says "Pointer to the handle of the tCascon structure ....", so I'm wondering whether this parameter should be passed by reference. But the C++ prototype doesn't look like it should. Are you sure the function is stdcall? The CASRUNFUNC macro might specify the calling convention, but without seeing the headers it is impossible to say. If it doesn't, the calling convention depends on the compiler switch used to compile the DLL, and if nothing is specified there either, it is normally cdecl. Last but not least, what does not work? What do you get? A crash, an error return code, no info?
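As an illustration of what such a macro usually hides, here is a hypothetical reconstruction; the real vendor header may of course look different:

    /* Hypothetical CASRUNFUNC definition: if the vendor header contains
       something like this, the function is stdcall; if the macro expands to
       nothing, the calling convention falls back to the compiler default,
       which is normally cdecl. */
    #ifdef _WIN32
     #define CASRUNFUNC __stdcall
    #else
     #define CASRUNFUNC
    #endif

    typedef struct tCascon tCascon;               /* opaque, layout unknown here */

    int CASRUNFUNC CasRun(tCascon *casconHandle); /* hypothetical prototype */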
  12. Well, it's not exactly trivial to do that, and it will also only work if you use tasks to execute the scripts, but the included VI should give you an idea of how to go about it. I didn't code the actual passing out of the different data elements in detail, but I'm sure you can adapt it to whatever needs you might have. Get Task Information.vi
  13. There are at least three spam posts in the Lava blog but normal users don't seem to have the right to report them. So I'm doing it here.
  14. Then you have to consider whether you can live with the system being down for some time while you replace components that have failed at some point. If that is possible you can just make sure to keep some spare parts around and replace them as they fail. If uninterrupted 24/7 operation is mandatory, then even the PXI solution isn't a safe bet, but it is definitely more likely to work like that once the system is deployed and not modified anymore.
  15. Well, in principle when you kill an application the OS will take care of deallocating all the memory and handles that application has opened. However, in practice it is possible that the OS is not able to track down every single resource that got allocated by the process. As far as memory is concerned I would not fret too much, since that is fairly easy for the OS to determine. Where it could get hairy is when your application used device drivers to open resources and one of them does not get closed properly. Since the actual allocation was in fact done by the device driver, the OS is not always able to determine on whose behalf that was done, and such resources can easily remain open and lock up certain parts of the system until you restart the computer. It's theoretically also possible that such locked resources could do dangerous things to the integrity of the OS, to the point that it gets unstable even after a restart, although that's not very likely. Since you say that you have carefully made sure that all allocated resources like files, IO resources, handles and whatever else have been properly closed, it is most likely not going to damage your computer in any way that could not be solved by a full restart after a complete shutdown. What would concern me with such a solution, however, is that you might end up making a tiny change to your application, and unless you carefully test that it still releases all resources properly (by disabling the kill option and making sure the application closes properly, no matter how long that takes), this small change could suddenly prevent a resource from being properly released. Since your application gets killed you may not notice this until your system gets unstable because of corrupted system files.
  16. Thanks, that makes sense! And I'm probably mostly safe from that issue because I tend to make my FGVs quite intelligent, so that they are not really polled in high-performance loops but rather manage them instead. It does show a potential problem in the arbitration of VI access though, if that arbitration eats up that many resources.
  17. I'm aware that it is. However, in my experience they very quickly evolve because of additional requirements as the project grows. And I prefer to have the related code centralized in the FGV rather than have it sprinkled around in several subVIs throughout the project or, as often happens when quickly adding a new feature, even just attached to the global variable itself in the various GUI VIs. Now if I could add some logic into the NSV itself and maintain it with it, then who knows :-). As it stands now, the even cleaner approach would be to write a LabVIEW library or LVOOP class that manages all aspects of such a "global variable" logic and use it as such. But that is quite a bit more initial effort than creating an FGV, and I also like the fact that I can easily do a "Find all Instances" and quickly visit all places where my FGV is used when reviewing modifications to its internal logic. I will have to check out the performance test VIs you posted. The parallel access numbers you posted look very much like you somehow forcefully serialized access to those VIs in order to create out-of-sequence access collisions. Otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long. So basically the NSV + RT FIFO is more or less doing what the FGV solution would be doing: maintaining a local copy that gets written to the network when it changes, while normally only the internal copy gets polled?
  18. I haven't really looked at the current implementation of plug-in controls through DLLs, but a quick glance at the IMAQ control made me believe that the DLL part is really just some form of template handler and the actual implementation is still compiled into the LabVIEW kernel. And changing that kernel is definitely not something that could be done by anyone outside the LabVIEW development team, even if you had access to the source code. In the old days (version 3.x) LabVIEW had a different interface for external custom controls, based on a special kind of CIN. It was based directly on installing a virtual method table for the control in question, and this virtual method table was responsible for reacting to all kinds of events like mouse clicks, drawing, etc. However, since this virtual method table changed with every new LabVIEW version, such controls would have been very difficult to move up to a new LabVIEW version without a recompile. Also, the registration process for such controls was limited in that LabVIEW only reserved a limited number of slots in its global tables for such external plug-in controls. It most likely was a proof of concept that wasn't maintained when LabVIEW extended that virtual method table to allow for new features like undo and whatever else in 4.0, and it was completely axed in 5.0. It required more or less the entire set of LabVIEW header files, including the private headers, to create such a control from C code. Also, the only real documentation was supposedly the LabVIEW source code itself. I do believe that the Picture control started its initial life as such a control but quickly got incorporated as a whole into the LabVIEW source code itself, as it was much easier that way to maintain the code changes that each new LabVIEW version caused to the virtual table interface of all LabVIEW controls. In short, writing custom controls based on a C(++) interface in LabVIEW, while technically possible, would require such a deep understanding of the LabVIEW internals that it seems highly unlikely that NI will ever consider it outside of some very closely supervised NDA with some very special parties. It would also require access to many LabVIEW internal manager APIs that are often not exported in any way on the C side of LabVIEW and only partly on the VI Server interface. LabVIEW 3 and 4 did export quite a lot of low-level C manager calls such as text manager, graphic drawing, and window handling, which for a large part were completely removed in versions 5 and 6 in favor of exporting more and more functionality through the newly added VI Server interface on the diagram level.
  19. There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall. It works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs and, as a well-behaved citizen of the OS it is running on, attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers. Running through all those tables and freeing every single memory block does cost time. In addition, if you run in the IDE there is a considerable number of framework providers that hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory block deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload. Now, on most modern OSes the OS will actually do cleanup on exit of an application, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means on request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows won't maintain a list of every memory block allocated by an application but manages memory in pages that get allocated to the process. So releasing that memory is not like having to walk a list of thousands of pointers and deallocating them one by one; it simply changes a few bytes in its page allocation manager and memory is freed per 4K page or even bigger chunks. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in a background process, so the application seems to be terminated even though its resources aren't fully reclaimed right away. That is, for instance, why a network socket usually isn't immediately available for reopening when it was closed implicitly. The problem is that relying on the OS to clean up everything is a very insecure way of going about the matter. There are differences between OS versions as to which resources get properly reclaimed after process termination, and even bigger differences between different OS platforms. Most modern desktop OSes do a pretty good job at that; the RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most secure way of operation. If they decided to "optimize" the application shutdown by only deallocating the resources that are known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test builds to verify that the assumptions about what to deallocate and what not are still valid on all supported OSes and versions.
It might also be a very strong reason to immediately scrap support for any OS version older than 2 years in order to keep the possible permutations for the unit tests manageable. And that trimming the working set has a negative impact on the process termination time is quite logical in most cases. It really only helps if there are a lot of memory blocks (not necessarily MBs) that were allocated previously and freed later on. The trimming will release any memory pages that are no longer used by the application back to the OS and page out all the others but the most frequently accessed ones to the page file. Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they are located in and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those blocks back into memory. So trimming the working set has potentially returned some huge memory blocks to the OS that had been used for the analysis part of the application but were then freed by LabVIEW, and which would simply be reclaimed by LabVIEW when needed again. But it has also paged out all the memory blocks where the VI structures for the large VI hierarchy are stored, and when LabVIEW then goes and unloads that VI hierarchy it triggers the virtual memory manager many times while freeing all the memory associated with it. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation in order not to corrupt the memory management tables of the OS.
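For reference, trimming the working set on Windows is typically done with a call like the one below; whether LabVIEW uses exactly this is an assumption on my part, but the paging behavior described above follows from it either way.

    #include <windows.h>

    /* Ask Windows to shrink the process working set as far as possible.
       Pages holding still-allocated data (such as loaded VI structures) are
       not freed, only written out to the page file, so touching them later,
       e.g. while unloading a large VI hierarchy, forces them to be paged
       back in first. */
    BOOL TrimWorkingSet(void)
    {
        return SetProcessWorkingSetSize(GetCurrentProcess(),
                                        (SIZE_T)-1, (SIZE_T)-1);
    }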
  20. I think the argument that one has an advantage over the other in terms of the current situation is valid for both cases :-). Future modifications to the application could render the decision to go for one or the other invalid in both cases: the NSV-only case if that variable is suddenly also polled repeatedly throughout the application rather than only at initialization, the FGV case if someone modifies the application without understanding FGVs and in the process botches its functionality. For me the choice is clear, as I use FGVs all the time, understand them quite well and can dream up an FGV much more quickly than I can get an overview of an architecture where global variables are sprinkled throughout the code. And an NSV is very much a global variable, just with a potentially rather resource-hungry network access engine chained to its hands and legs.
  21. It's not strictly necessary, since LabVIEW does an implicit open on a VISA resource when it finds that the resource hasn't been opened yet. LabVIEW stores the internal VISA handle that belongs to a VISA resource together with the resource itself in a global list of VISA resources. However, suppose you didn't use the VISA Open in your executable: the implicit open would have failed too, but possibly without a good way to report that error. So I really prefer to always explicitly open VISA resources anyway. It costs nothing when writing the code, but makes it much clearer what is happening and possibly improves error detection.
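The same principle shown in the VISA C API, just as an illustration of where the error surfaces with an explicit open; the resource name is only an example:

    #include <visa.h>

    /* Explicitly open the resource so a failure is reported here, where the
       resource name is known, instead of surfacing later inside some read
       or write. */
    ViStatus OpenInstrument(ViPSession instr)
    {
        ViSession rm;
        ViStatus  st = viOpenDefaultRM(&rm);
        if (st < VI_SUCCESS)
            return st;
        st = viOpen(rm, (ViRsrc)"GPIB0::5::INSTR", VI_NULL, VI_NULL, instr);
        if (st < VI_SUCCESS)
            viClose(rm);
        return st;
    }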
  22. I would dispute the "more" in "more robust" with respect to an FGV/action engine. It's possibly equally robust, at the cost of querying an NSV repeatedly, which is certainly a more resource-intensive operation than querying an FGV with a shift register, even if the NSV is deployed and hosted on the cRIO. It would be unavoidable if someone else on the network could also write to the NSV, but in the case where it is clearly published by the cRIO only, there is no advantage at all in using an NSV alone other than not having to write a small VI, and that is a one-time cost.
  23. It might be more helpful if you posted both the ZIP file you want to extract and the code you created. Debugging from screenshots feels so awkward that I simply refuse to spend any time on it. Also make sure to post any VIs in 2011 or earlier; at the moment I don't always have access to a machine with 2012 installed. One thing I do see, however, is that you pass the application directory to the target path. This should be the file path of the file you want to create! And if the filename is the same as the one in the ZIP archive (but watch out here, as paths in an archive can be relative paths defining several directory levels), then you do not need to connect the internal name at all, as it will be extracted from the passed-in target path. If you had posted the VI and ZIP file in the beginning I could have run it and seen the problem immediately. Deducing such things from a screenshot is more difficult, since there is no context help and all that available.
  24. What are the contents of ZIP_File.zip? The higher-level VIs that extract an entire archive to a directory would be a good place to see how these VIs should be called.
  25. Yes, that is what I was thinking. On "read" just read the local FGV shift register, and on "write" update both the NSV and the shift register. As long as you can make sure that writes always happen through this FGV on the RT system and everyone else only reads the NSV, this should be perfectly race free. Most likely you can even perform an optimization in the FGV to only write to the NSV when the new value is different from the previous one.
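A C analogy of that FGV, with a hypothetical PublishToNetwork() standing in for the NSV write; a real LabVIEW FGV gets the serialization for free from its non-reentrant execution:

    extern void PublishToNetwork(double value);   /* stand-in for the NSV write */

    static double cachedValue;                    /* the "shift register" */

    double FGV_Read(void)
    {
        return cachedValue;                       /* never touches the network */
    }

    void FGV_Write(double value)
    {
        if (value != cachedValue) {               /* only publish on change */
            cachedValue = value;
            PublishToNetwork(value);
        }
    }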