Everything posted by Rolf Kalbermatter

  1. In principle, when you kill an application the OS will take care of deallocating all the memory and handles that application has opened. In practice, however, the OS is not always able to track down every single resource the process allocated. As far as memory is concerned I would not fret too much, since that is fairly easy for the OS to determine. Where it can get hairy is when your application used device drivers to open resources and one of them does not get closed properly. Since the actual allocation was done by the device driver, the OS cannot always determine on whose behalf it was made, and such resources can easily remain open and lock up certain parts of the system until you restart the computer. It is theoretically also possible that such locked resources could damage the integrity of the OS to the point that it stays unstable even after a restart, although that is not very likely. Since you say that you have carefully made sure that all allocated resources such as files, IO resources and handles have been properly closed, it is most likely not going to harm your computer in any way that could not be solved by a full restart after a complete shutdown.
     What would concern me with such a solution is that you might end up making a tiny change to your application, and unless you carefully test that it still releases all resources properly (by disabling the kill option and making sure the application closes normally, no matter how long that takes), this small change could suddenly prevent a resource from being released. And because your application gets killed, you may not notice this until your system becomes unstable because of corrupted system files.
  2. Thanks, that makes sense! And I'm probably mostly safe from that issue because I tend to make my FGVs fairly intelligent, so they are not polled in high-performance loops but rather manage those loops themselves. It does show a potential problem in the arbitration of VI access though, if that arbitration eats up that many resources.
  3. I'm aware that it is. However, in my experience they very quickly evolve because of additional requirements as the project grows. And I prefer to have the related code centralized in the FGV rather than sprinkled around in several subVIs throughout the project or, as often happens when quickly adding a new feature, attached to the global variable itself in the various GUI VIs. Now if I could add some logic into the NSV itself and maintain it there, then who knows :-). As it stands, the even cleaner approach would be to write a LabVIEW library or LVOOP class that manages all aspects of such a "global variable" logic and use that instead. But that is quite a bit more initial effort than creating an FGV, and I also like the fact that I can easily do a "Find All Instances" and quickly visit all places where my FGV is used when reviewing modifications to its internal logic (a rough sketch of the FGV idea follows below). I will have to check out the performance test VIs you posted. The parallel access numbers you posted look very much like you somehow forcefully serialized access to those VIs in order to create out-of-sequence access collisions; otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long. So basically the NSV + RT FIFO is more or less doing what the FGV solution would be doing: maintaining a local copy that gets written to the network when it changes, while normally only polling the internal copy?
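     Since a LabVIEW FGV cannot be shown as text here, the following is a rough C analogue of the idea, purely for illustration (all names and the use of a pthread mutex are my own assumptions; in LabVIEW the non-reentrant VI provides that serialization by itself): one access point owns the state and exposes a small set of actions, so all related logic stays in one place and every caller is found with one search.

         /* Hypothetical C analogue of an FGV / action engine (illustration only). */
         #include <pthread.h>

         typedef enum { FGV_INIT, FGV_GET, FGV_SET } fgv_action_t;

         double fgv_setpoint(fgv_action_t action, double value)
         {
             /* The static variable plays the role of the shift register,
                the mutex the role of the VI's non-reentrant execution. */
             static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
             static double state = 0.0;
             double result;

             pthread_mutex_lock(&lock);
             switch (action) {
                 case FGV_INIT: state = 0.0;   break;
                 case FGV_SET:  state = value; break;
                 case FGV_GET:  /* just return the current state */ break;
             }
             result = state;
             pthread_mutex_unlock(&lock);
             return result;
         }
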
  4. I haven't really looked at the current implementation of plugin controls through DLLs, but a quick glance at the IMAQ control made me believe that the DLL part is really just some form of template handler and the actual implementation is still compiled into the LabVIEW kernel. And changing that kernel is definitely not something that could be done by anyone outside the LabVIEW development team, even if you had access to the source code. In the old days (version 3.x) LabVIEW had a different interface for external custom controls based on a special kind of CIN. It was based directly on installing a virtual method table for the control in question, and this virtual method table was responsible for reacting to all kinds of events like mouse clicks, drawing, etc. However, since this virtual method table changed with every new LabVIEW version, such controls would have been very difficult to move to a new LabVIEW version without a recompile. Also, the registration process of such controls was limited in that LabVIEW only reserved a limited number of slots in its global tables for such external plugin controls. It most likely was a proof of concept that wasn't maintained when LabVIEW extended that virtual method table to allow for new features like undo in 4.0, and it was completely axed in 5.0. It required more or less the entire set of LabVIEW header files, including the private headers, in order to create such a control from C code, and the only real documentation was supposedly the LabVIEW source code itself. I do believe that the Picture control started its initial life as such a control but quickly got incorporated wholesale into the LabVIEW source code itself, as it was much easier to maintain the code changes that each new LabVIEW version caused to the virtual table interface of all LabVIEW controls. In short, writing custom controls based on a C(++) interface in LabVIEW, while technically possible, would require such a deep understanding of the LabVIEW internals that it seems highly unlikely that NI will ever consider it outside of some very closely supervised NDA with some very special parties. It would also require access to many LabVIEW internal manager APIs that are often not exported in any way on the C side of LabVIEW and only partly on the VI Server interface. LabVIEW 3 and 4 did export quite a lot of low-level C manager calls such as the text manager, graphic drawing, and window handling, which for a large part were removed in versions 5 and 6 in favor of exporting more and more functionality through the newly added VI Server interface on the diagram level.
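     To give an idea of what such a virtual method table interface roughly looks like from C, here is a made-up sketch (all names are invented; the real LabVIEW tables were private and changed with every version, which is exactly why external controls kept breaking):

         /* Hypothetical sketch of a control "virtual method table" in C.
            All names are invented for illustration only. */
         typedef struct ControlVTable {
             void (*draw)(void *ctrl, void *drawContext);
             int  (*mouseDown)(void *ctrl, int x, int y, unsigned modifiers);
             void (*resize)(void *ctrl, int width, int height);
             void (*destroy)(void *ctrl);
         } ControlVTable;

         /* A plugin control would fill in such a table and register it with
            LabVIEW; any change to the table layout (say, adding an entry for
            undo support) then breaks every previously compiled control. */
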
  5. There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall: it works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs and, as a well-behaved citizen of the OS it is running on, attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers, and running through all those tables and freeing every single memory block costs time. In addition, if you run in the IDE there is a considerable number of framework providers that hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory block deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload.
     Now on most modern OSes the OS will actually do cleanup on exit of an application, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means on request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows won't maintain a list of every memory block allocated by an application but manages memory in pages that get allocated to the process. So releasing that memory is not like having to walk a list of thousands of pointers and deallocate them one by one; it simply changes a few bytes in the page allocation manager and the memory is freed in 4 KB or even bigger chunks. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in a background process, so the application seems to be terminated even though its resources aren't fully reclaimed right away. That is, for instance, why a network socket usually isn't immediately available for reopening when it was closed implicitly.
     The problem is that relying on the OS to clean up everything is a very insecure way of going about the matter. There are differences between OS versions in which resources get properly reclaimed after process termination, and even bigger differences between OS platforms. Most modern desktop OSes do a pretty good job of it; the RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most secure way of operating. If they decided to "optimize" the application shutdown by only deallocating the resources that are known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test builds to verify that the assumptions about what to deallocate and what not are still valid on all supported OSes and versions.
     It might also be a very strong reason to immediately scrap support for any OS version older than 2 years in order to keep the possible permutations for the unit tests manageable. And it is quite logical in most cases that trimming the working set has a negative impact on the process termination time. Trimming really only helps if there are a lot of memory blocks (not necessarily MBs) that were allocated previously and freed later on. It will return any memory pages that the application no longer uses to the OS and page out all the others, except the most frequently accessed ones, to the page file. Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they are located in and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those blocks back into memory. So trimming the working set has potentially returned some huge memory blocks to the OS that had been used for the analysis part of the application but were then freed by LabVIEW and would simply be reclaimed by LabVIEW when needed again. But it also paged out all the memory blocks where the VI structures for the large VI hierarchy are stored, and when LabVIEW then goes and unloads that hierarchy it triggers the virtual memory manager many times while freeing all the associated memory. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation in order not to corrupt the OS memory management tables.
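     For reference, trimming the working set on Windows boils down to a single call like the one below (a minimal sketch; whether the application in question uses exactly this call is my assumption, but the effect described above is the same):

         /* Minimal Windows sketch: ask the OS to trim this process's working set.
            Pages that are still allocated (such as all loaded VI structures) are
            not freed, only paged out, so touching them later faults them back in. */
         #include <windows.h>

         void trim_working_set(void)
         {
             SetProcessWorkingSetSize(GetCurrentProcess(),
                                      (SIZE_T)-1,   /* minimum: let the OS decide */
                                      (SIZE_T)-1);  /* maximum: let the OS decide */
         }
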
  6. I think the argument that one has an advantage over the other in terms of the current situation is valid for both cases :-). Future modifications to the application could render the decision to go for one or the other invalid either way: in the NSV-only case, when that variable is suddenly also polled repeatedly throughout the application rather than only at initialization; in the FGV case, when someone modifies the application without understanding FGVs and botches its functionality in the process. For me the choice is clear, as I use FGVs all the time, understand them quite well, and can dream up an FGV much quicker than I can get an overview of an architecture where global variables are sprinkled throughout the code. And an NSV is very much a global variable, just with a potentially rather resource-hungry network access engine chained to its hands and legs.
  7. It's not strictly necessary, since LabVIEW does an implicit open on a VISA resource when it finds that the resource hasn't been opened yet. LabVIEW stores the internal VISA handle that belongs to a VISA resource together with the resource itself in a global list of VISA resources. However, suppose you hadn't used the VISA Open in your executable: the implicit open would have failed too, but possibly without a good way to report that error. So I really prefer to always explicitly open VISA resources anyway. It costs nothing when writing the code, but it makes much clearer what is happening and possibly improves error detection.
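     The same principle shown with the VISA C API (a sketch only; the resource name is just an example): an explicit open gives you one defined place where a failure is reported, instead of it surfacing later inside some read or write.

         /* Sketch: explicit VISA open so errors show up at a defined place. */
         #include <visa.h>

         ViStatus open_instrument(ViPSession out)
         {
             ViSession rm;
             ViStatus  st = viOpenDefaultRM(&rm);
             if (st < VI_SUCCESS) return st;          /* report, don't hide */

             /* "ASRL1::INSTR" is only an example resource name */
             st = viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, out);
             if (st < VI_SUCCESS) viClose(rm);
             return st;
         }
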
  8. I would dispute the "more" in "more robust" with respect to an FGV/action engine. It's possibly equally robust, at the cost of querying an NSV repeatedly, which is certainly a more resource-intensive operation than querying an FGV with a shift register, even if the NSV is deployed and hosted on the cRIO. It would be unavoidable if someone else on the network could also write to the NSV, but in the case where it is clearly published by the cRIO only, there is no advantage at all in using an NSV alone other than not having to write a small VI, and that is a one-time cost.
  9. It might be more helpful if you post both the ZIP file you want to extract and the code you created. Debugging from screenshots feels so awkward that I simply refuse to spend any time on it. Also make sure to post any VIs in 2011 or earlier; at the moment I don't always have access to a machine with 2012 installed. One thing I do see, however, is that you pass the application directory to the target path. This should be the file path of the file you want to create! And if the filename is the same as the one in the ZIP archive (but watch out here, as paths in an archive can be relative paths spanning several directory levels), then you do not need to connect the internal name at all, as it will be extracted from the passed-in target path. If you had posted the VI and ZIP file in the beginning I could have run it and seen the problem immediately. Deducing such things from a screenshot is more difficult, since there is no context help and the like available.
  10. What are the contents of ZIP_File.zip? The higher-level VIs that extract an entire archive to a directory would be a good place to see how these VIs should be called.
  11. Yes, that is what I was thinking. On "read" just read the local FGV shift register, and on "write" update both the NSV and the shift register. As long as you can make sure that writes always happen through this FGV on the RT system and everyone else only reads the NSV, this should be perfectly race free. Most likely you can even perform an optimization in the FGV and only write to the NSV when the new value is different from the previous one, roughly as sketched below.
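     In rough C-style pseudocode the logic of that FGV would look like this (a sketch only; publish_to_nsv() is a made-up stand-in for the Shared Variable write node, and the non-reentrant VI takes care of serializing the callers):

         /* Sketch of the cached write-through logic described above. */
         typedef enum { CACHE_READ, CACHE_WRITE } cache_action_t;

         extern void publish_to_nsv(double value);   /* hypothetical stand-in */

         double nsv_cache(cache_action_t action, double value)
         {
             static double cached = 0.0;             /* the "shift register" */

             if (action == CACHE_WRITE && value != cached) {
                 cached = value;
                 publish_to_nsv(value);              /* only write when it changed */
             }
             return cached;                          /* reads never touch the network */
         }
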
  12. Well, but the 9501 is in the cRIO too! So do you mean that the SV is written once in the RT application during initialization and once in the FPGA code, or something like that? Because if both writes are done in the RT code, I still think you have basically only one source of the data and can encapsulate it in a non-reentrant buffer VI that makes sure to synchronize access to the local copy and the SV.
  13. Actually it's not misleading at all. If you specify a service name rather than a port number, the LabVIEW node will query the "service locator service" on the target machine to find out which port number it should use. This "service locator service" is part of the NI web server. Your target application, when specifying a service name to Open TCP Listener, registers itself with the allocated port in the local "service locator service". So you have two options here: 1) document this behaviour in your application manual and sell it as a feature, or 2) change your server application to use an explicit port when registering and your client to connect to that port. Note to others coming across this: in order for the service name registration to work in the LabVIEW TCP/IP nodes, you need to make sure the NI System WebServer component is installed on the server machine. If you build an application installer, don't forget to select that component in the Additional Installers section (part of the corresponding LabVIEW runtime engine).
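     For option 2 nothing NI-specific is involved on the client side anymore; it simply connects to the agreed-upon port. A plain BSD sockets sketch of that idea (the port number is only an example):

         /* Sketch: connect to an explicitly agreed-upon port (option 2),
            so no service locator lookup is needed on either side. */
         #include <string.h>
         #include <unistd.h>
         #include <arpa/inet.h>
         #include <netinet/in.h>
         #include <sys/socket.h>

         int connect_fixed_port(const char *ip, unsigned short port /* e.g. 6342 */)
         {
             int fd = socket(AF_INET, SOCK_STREAM, 0);
             if (fd < 0) return -1;

             struct sockaddr_in addr;
             memset(&addr, 0, sizeof addr);
             addr.sin_family = AF_INET;
             addr.sin_port   = htons(port);
             inet_pton(AF_INET, ip, &addr.sin_addr);

             if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                 close(fd);
                 return -1;
             }
             return fd;
         }
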
  14. I think you just circumvented the race condition rather than really solving it. Suppose your network has a hiccup and the update gets delayed for longer than your small delay! A possible solution would be to keep a local copy of the shared variable that is the real value, and whenever you update it, also update the shared variable. Of course this only works if you can limit writing of that SV to one location.
  15. You are being unreasonable and you know it. Moving to C++ to make sure you don't run into problems or bugs sounds like a plan, but a bad one. Which C++ compiler do you plan to use: GCC, Visual C, Intel CC, Borland C, Watcom C? They are all great, but none of them is bug free, and once you have managed a few more complex C++ projects you will know that, as you are sure to run into code constructs that do not always get compiled right by some of them. Nothing that can't be worked around, but still. And that is just the compiler; let's not talk about the IDEs, which all have their obnoxious behaviors and bugs too. So the question that remains at the end of the day is: what are the people at your company more familiar with, how much code can be produced, and how bug free can you and your colleagues make your own code? Changing programming languages because of an (acquired) antipathy is a sure way to disaster.
  16. I would have to echo Jordan's comments. NI isn't perfect, but it is certainly one of the better suppliers in software land. And as a software developer yourself you should know that fixing a bug without a reproducible error procedure is very hard and often impossible. So far all that has been mentioned in this thread are symptoms and possible contributing factors to what you saw happening. Without more information about how to reproduce this error it will most likely be almost impossible to come up with a fix.
  17. In addition to what Ned says, Telnet is a protocol in itself that sits on top of TCP. So just sending the string that you would normally enter at the Telnet prompt definitely will not work! You have to implement the Telnet protocol (which is quite simple, BTW) on top of the TCP primitives as well. The Internet Toolkit, however, contains a full Telnet client library.
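     To give an idea of what "implementing the Telnet protocol" means at a minimum: the client has to recognize the IAC command sequences embedded in the TCP stream and answer the option negotiations, for example by refusing everything. A bare-bones C sketch of that, with send_bytes() as a made-up stand-in for the TCP Write (real code would also handle sequences split across reads and the IAC IAC escape):

         /* Minimal Telnet negotiation sketch: refuse every option the server
            proposes and strip the IAC sequences from the data stream. */
         #include <stddef.h>

         enum { TN_IAC = 255, TN_DONT = 254, TN_DO = 253, TN_WONT = 252, TN_WILL = 251 };

         extern void send_bytes(const unsigned char *buf, size_t len);

         /* Returns the number of payload bytes left in buf after removing
            the negotiation sequences and answering them. */
         size_t telnet_filter(unsigned char *buf, size_t len)
         {
             size_t out = 0;
             for (size_t i = 0; i < len; i++) {
                 if (buf[i] == TN_IAC && i + 2 < len &&
                     (buf[i + 1] == TN_WILL || buf[i + 1] == TN_DO)) {
                     unsigned char reply[3] = { TN_IAC,
                         buf[i + 1] == TN_WILL ? TN_DONT : TN_WONT, buf[i + 2] };
                     send_bytes(reply, 3);
                     i += 2;                     /* skip verb and option byte */
                 } else {
                     buf[out++] = buf[i];        /* ordinary data byte */
                 }
             }
             return out;
         }
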
  18. Well, there might be some sort of bug in LabVIEW here, but it seems that LabVIEW was for some reason made to believe at that point that the lvlib and the corresponding VIs did not come from the same volume (drive). That is, AFAIK, the only reason for LabVIEW to use absolute paths when referring from one resource file to another one it depends on. When loading that lvlib, your colleague probably got a warning to the effect that VIs were loaded from a different path than where they were expected (maybe he still had them in memory, loaded from the old path). This dialog is, however, IMHO rather unhelpful in many cases, as it does not always give a good overview of why the warning was created and offers even fewer possibilities to fix it.
  19. Shaun has basically said it all. Your .sys driver is a Windows kernel driver (really a more or less unavoidable thing if you want to access register addresses and physical memory, which is what PCI cards require). It will definitely not be possible to load this kernel driver into Pharlap, as the Pharlap kernel works quite a bit differently from the Windows kernel. For one, it's a lot leaner and optimized for RT tasks, while the Windows kernel is a huge thing that tries to do just about everything. The DLL is simply the user-mode access library for the kernel driver, to make it easier to use. Even if that DLL were Pharlap compatible, which is actually highly unlikely if they used a modern Visual C compiler to create it, it would not help, since the real driver logic is located in the kernel driver and can't be used under Pharlap anyway.
     Writing a kernel driver is, just as Shaun says, very time-consuming and specialized work. It's definitely one of the more advanced C programming tasks and requires expert knowledge. Also, debugging it is a pain in the ass: every time you encounter an error you usually have to restart the system, make the changes, compile and link the driver, install it again and then start debugging again. This is because if your kernel driver causes a bad memory access, your entire system is potentially borked for good, and continuing to run from there could have catastrophic consequences for your system integrity. Writing a Pharlap kernel driver is even more special, since there is very little information available about how to do it, and it requires buying the Pharlap ETS development license, which is also quite an expense.
     That all said, I've got a crazy idea that I'm not sure has any merit. VISA allows you to access hardware resources at register level by creating an INF file on Windows with the VISA Driver Wizard. I'm not sure if this is an option under LabVIEW RT; this document seems vague about whether only NI-VISA itself is available under RT or also the Driver Wizard (or, more precisely, the corresponding VISA low-level register driver, since you could probably do the development under normal Windows and then copy the entire VI hierarchy and INF file over to the RT system, if the API is supported).
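     Just to sketch what the VISA register-level route would look like from the user side, assuming the board were bound to VISA through a Driver Wizard generated INF file (the resource name and register offset below are only examples):

         /* Sketch: register-level access through VISA, assuming the PCI board
            was bound to VISA via a Driver Wizard generated INF file. */
         #include <visa.h>

         ViStatus read_board_register(ViUInt32 *value)
         {
             ViSession rm, dev;
             ViStatus  st = viOpenDefaultRM(&rm);
             if (st < VI_SUCCESS) return st;

             st = viOpen(rm, "PXI0::13::INSTR", VI_NULL, VI_NULL, &dev);
             if (st < VI_SUCCESS) { viClose(rm); return st; }

             /* read a 32-bit register at offset 0x10 in BAR0 */
             st = viIn32(dev, VI_PXI_BAR0_SPACE, 0x10, value);

             viClose(dev);
             viClose(rm);
             return st;
         }
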
  20. LabVIEW uses URL format for its XML-based paths, which happens to always be Unix style. Symbolic paths are rather something like "<instrlib>/aardvark/aardvark.llb/Aardvark GPIO Set.vi", although the HTML expansion makes that a little less obvious in Dan's post. To my knowledge, LabVIEW should only use absolute paths as references if they happen to refer to a different volume. On real Unix systems this is of course not an issue, as there you have one unique filesystem root, but I have a hunch your colleague may have accessed the VIs through a mounted drive letter. I could see that causing problems if the VI library was loaded through a different drive letter than the actual VI. That shouldn't normally be possible, but it's not entirely impossible. The actual path in the XML file may not show the difference, because the path handling that determines whether paths are on the same volume likely works on a different level, and when the paths are finally converted to the URL-style format they are most likely normalized, which means reducing the path to its minimal form, and that could resolve drive aliases.
  21. Of course there are different ways an image could be invalid. However considering he was looking for a simple LabVIEW VI "to check if the image data [of an input terminal] is valid or not" it seemed like a logical consideration that he might be looking for something along the lines of the Not a Number/Path/Refnum node. And since IMAQ images are in fact simply a special form of refnum too, which I think isn't obvious to most, I wanted to point out that this might be the solution. He probably wants an easy way to detect if the input terminal received a valid image refnum. Anything else will require implementation specific use of one or more IMAQ Vision VIs to check if the specific properties or contents of the valid image reference meet the desired specifications.
  22. It requires a little out-of-the-box thinking, but try the Not a Number/Path/Refnum primitive.
  23. Which isn't a bad thing if you intend to distribute the VIs to other platforms than Windows!
  24. That function does not do the same as what this VI does. For one, the string side in the lvzip library is always in Unix form, while the other side should be in the native path format. Try converting a path like /test/test/test.txt on Windows with that function. Of course you can replace all occurrences of the backslash with / on Windows in the resulting string, but that just complicates the whole thing. I will probably end up putting that entire function into the actual shared library, since it also needs to do character encoding conversion to work with filenames containing extended ASCII (or UTF8) characters.
     And to make everything interesting, the encoding is VERY different on each platform. The strings in the ZIP file are normally stored under Windows with the OEM charset, while LabVIEW, as a true GUI application, uses the ANSI codepage everywhere. Both are locale (country) specific and contain more or less the same characters, but of course in different places! That is the reason that filenames containing extended characters currently look wrong when extracted with the lvzip library. On other platforms there isn't even a true standard among the various ZIP tools for how to encode the filenames in an archive; they usually just use whatever the current locale on the system is, which at least on modern Unixes is usually UTF8. The ZIP format allows for UTF8 filenames too, but since most ZIP tools on Unix are programmed to use the current locale, they store UTF8 names without setting the flag that says so! And of course there are also many ZIP tools that still don't really know about UTF8, so extracting an archive created with UTF8 names with them causes serious trouble.
     Basically there is no absolutely correct way to make lvzip deal properly with all of this. My plan is to make packing use plain ASCII when the filenames don't contain any extended characters and otherwise always use UTF8. For unpacking it will have to honor the UTF8 flag and otherwise assume whatever the current locale is, which can and will go wrong if the archive wasn't created with the same locale as the one it is extracted with. On Unix there is no good way to support extraction of files with extended ASCII characters at all, unless I pull in iconv or ICU as a dependency.
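     On Windows the conversion mentioned above essentially comes down to the following (a minimal sketch; the real lvzip code will additionally have to honor the UTF8 flag and deal with the per-platform differences described above):

         /* Sketch: convert a ZIP entry name from the OEM codepage (as stored by
            most Windows ZIP tools) to the ANSI codepage that LabVIEW uses.
            Entries flagged as UTF8 would need CP_UTF8 -> CP_ACP instead. */
         #include <windows.h>

         void zipname_oem_to_ansi(const char *oemName, char *ansiName, int size)
         {
             wchar_t wide[MAX_PATH];

             /* OEM charset -> UTF16 -> ANSI codepage */
             MultiByteToWideChar(CP_OEMCP, 0, oemName, -1, wide, MAX_PATH);
             WideCharToMultiByte(CP_ACP, 0, wide, -1, ansiName, size, NULL, NULL);
         }
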