Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Most file systems, including the usual ones under Windows, know two sets of basic access rights for files. One is the access rights, which determine whether the application has read, write, or read/write access and are usually defined when opening the file. The other is the deny rights, which specify what the application wants to allow other applications to do with the file while it has it open. When your application tries to open a file, the OS checks whether the requested access rights conflict with any deny rights defined by other applications that currently have the same file open. If there is no conflict the open operation is granted, otherwise you get an access denied error. So the first important part is what deny rights the other application defines for the file when it opens it. Since it is writing to the file, it by default denies write access to other applications, but it can also choose to explicitly deny any and all rights for other applications. There is nothing LabVIEW (or any other application) can do about it if the other application decides to request exclusive access when opening the file. But if it doesn't request denial of read rights for other applications, then you can open that file with read access (but usually not with write access) while it is open in a different process/application. The access rights to request are defined in LabVIEW when opening the file with Open/Create/Replace File. This implicitly denies write access for other applications when write access is requested (and may make the Open fail if write access is requested and another application already has the file open for writing or has explicitly denied write access to the file). The Advanced File->Deny Access function can be used to change the default deny access rights for a file after opening it. There you can deny read and/or write access for other processes independently of the access rights chosen when opening the file refnum (see the sketch below for how these rights map to the underlying OS call).
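On Windows the deny rights described above correspond to the share mode of the Win32 CreateFile call; the share flags are the inverse of the deny rights. A minimal C sketch, with a hypothetical file path, of opening a file for reading while denying nothing to other processes:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Request read access; the share mode allows other processes to keep
       the file open for reading AND writing, i.e. we deny nothing.       */
    HANDLE h = CreateFileA(
        "C:\\temp\\logfile.txt",            /* hypothetical path             */
        GENERIC_READ,                       /* requested access rights       */
        FILE_SHARE_READ | FILE_SHARE_WRITE, /* share mode (inverse of deny)  */
        NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    if (h == INVALID_HANDLE_VALUE)
    {
        /* ERROR_SHARING_VIOLATION (32) means another process has the file
           open with deny rights that conflict with our request.           */
        printf("CreateFile failed, error %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}
```

If the other application opened the file with FILE_SHARE_READ only, the call above succeeds; asking for GENERIC_WRITE instead would fail with a sharing violation.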
  2. That shouldn't make any difference. A buffer of four bytes with bCnt = 4 is absolutely equivalent to an int32 passed by reference with bCnt = 4; the DLL function has no way to see a difference there. The only thing that remains is what the aVarId value needs to be. Its description is totally cryptic and might be a very good candidate for having been misunderstood.
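To make that equivalence concrete, here is a small C sketch; the function pointer type, its name, and the aVarId value are placeholders, not the vendor's actual API:

```c
#include <stdint.h>

/* Hypothetical function pointer type standing in for the DLL export. */
typedef int32_t (*ReadVarFn)(uint32_t aVarId, void *aBuf, int32_t bCnt);

int32_t call_with_byte_buffer(ReadVarFn fn)
{
    uint8_t buf[4] = {0};               /* U8 array with 4 elements */
    return fn(1 /* placeholder id */, buf, 4);
}

int32_t call_with_int32_by_ref(ReadVarFn fn)
{
    int32_t value = 0;                  /* I32 passed by pointer    */
    return fn(1 /* placeholder id */, &value, 4);
}

/* Both helpers pass exactly the same thing: a 32-bit id, a pointer to
   4 bytes of writable memory, and the count 4. The callee cannot tell
   which declaration the caller used.                                  */
```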
  3. Well, the information is not really enough to make conclusive statements here, but two or three things make me wonder a bit. The description says "Pointer to the handle of the tCascon structure ....", so I'm wondering if this parameter should be passed by reference, although the C++ prototype doesn't look like it should be. Are you sure the function is stdcall? The CASRUNFUNC macro might be specifying the calling convention, but without seeing the headers it is impossible to say. If nothing is specified and CASRUNFUNC doesn't define an explicit calling convention, the convention depends on the compiler switch used to compile the DLL and is normally cdecl. Last but not least, what does not work? What do you get? A crash, an error return code, no info?
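For illustration, this is what such a macro often boils down to; the expansion below is a guess, so check the actual vendor header before setting the calling convention in the Call Library Node:

```c
/* Hypothetical reconstruction of a vendor header -- the real CASRUNFUNC
   definition decides which calling convention the CLN must be set to.   */
#ifdef _WIN32
  #define CASRUNFUNC __stdcall   /* expands to __stdcall -> select "stdcall"  */
#else
  #define CASRUNFUNC             /* empty -> compiler default, normally cdecl */
#endif

/* Illustrative prototype using the macro; the name and parameter are made up. */
typedef struct tCascon tCascon;
int CASRUNFUNC CasConnect(tCascon *handle);
```

If the header really expands to __stdcall but the Call Library Node is configured as cdecl (or vice versa), the stack gets corrupted, which typically shows up as a crash either immediately or when the calling VI finishes.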
  4. Well, it's not exactly trivial to do that, and it will also only work if you use tasks to execute the scripts, but the included VI should give you an idea about how to go about it. I didn't code the actual passing out of the different data elements in detail, but I'm sure you can adapt it to whatever needs you might have. Get Task Information.vi
  5. There are at least three spam posts in the Lava blog but normal users don't seem to have the right to report them. So I'm doing it here.
  6. Then you have to consider whether you can live with the system being down while you replace components that have failed at some point. If that is acceptable you can just make sure to keep some spare parts around and replace them as they fail. If uninterrupted 24/7 operation is mandatory, then even the PXI solution isn't a safe bet, but it is definitely more likely to work like that once the system is deployed and not modified anymore.
  7. Well, in principle when you kill an application the OS will take care of deallocating all the memory and handles that application has opened. In practice, however, it is possible that the OS is not able to track down every single resource that got allocated by the process. As far as memory is concerned I would not fret too much, since that is fairly easy for the OS to determine. Where it could get hairy is when your application used device drivers to open resources and one of them does not get closed properly. Since the actual allocation was done by the device driver, the OS is not always able to determine on whose behalf that was done, and such resources can easily remain open and lock up certain parts of the system until you restart the computer. It's theoretically also possible that such locked resources could do dangerous things to the integrity of the OS, to the point that it gets unstable even after a restart, although that's not very likely. Since you say that you have carefully made sure that all allocated resources like files, IO resources, and handles have been properly closed, it is most likely not going to damage your computer in any way that could not be solved by fully restarting it after a complete shutdown. What would concern me with such a solution, however, is that you might end up making a tiny change to your application, and unless you carefully test it to release all resources properly (by disabling the kill option and making sure the application closes properly, no matter how long that may take), this small change could suddenly prevent a resource from being released. Since your application gets killed, you may not notice this until your system gets unstable because of corrupted system files.
  8. Thanks, that makes sense! And I'm probably mostly safe from that issue because I tend to make my FGVs quite intelligent, so that they are not really polled in high-performance loops but rather manage those loops instead. It does show a potential problem in the arbitration of VI access, though, if that arbitration eats up that many resources.
  9. I'm aware that it is. However, in my experience they very quickly evolve because of additional requirements as the project grows. And I prefer to have the related code centralized in the FGV rather than have it sprinkled around in several subVIs throughout the project or, as often happens when quickly adding a new feature, just attached to the global variable itself in the various GUI VIs. Now if I could add some logic into the NSV itself and maintain it with it, then who knows :-). As it stands, the even cleaner approach would be to write a LabVIEW library or LVOOP class that manages all aspects of such a "global variable" logic and use it as such. But that is quite a bit more initial effort than creating an FGV, and I also like the fact that I can easily do a "Find all Instances" and quickly visit all places where my FGV is used when reviewing modifications to its internal logic. I will have to check out the performance test VIs you posted. The parallel access numbers you posted look very much like you somehow forcefully serialized access to those VIs in order to create out-of-sequence access collisions; otherwise I can't see why accessing the FGV in 4 places should suddenly take about 15 times as long. So basically the NSV + RT FIFO is more or less doing what the FGV solution would be doing: maintaining a local copy that gets written to the network when it changes while normally only polling the internal copy?
  10. I haven't really looked at the current implementation of plugin controls through DLLs, but a quick glance at the IMAQ control made me believe that the DLL part is really just some form of template handler and the actual implementation is still compiled into the LabVIEW kernel. And changing that kernel is definitely not something that could be done by anyone outside the LabVIEW development team, even if you had access to the source code. In the old days (version 3.x) LabVIEW had a different interface for external custom controls based on a special kind of CIN. It was based directly on installing a virtual method table for the control in question, and this virtual method table was responsible for reacting to all kinds of events like mouse clicks, drawing, etc. However, since this virtual method table changed with every new LabVIEW version, such controls would have been very difficult to move up to a new LabVIEW version without a recompile. Also, the registration process for such controls was limited in that LabVIEW only reserved a limited number of slots in its global tables for such external plugin controls. It most likely was a proof of concept that wasn't maintained when LabVIEW extended that virtual method table to allow for new features like undo in 4.0, and it was completely axed in 5.0. It required more or less the entire set of LabVIEW header files, including the private headers, in order to create such a control from C code. Also, the only real documentation was supposedly the LabVIEW source code itself. I do believe that the Picture control started its initial life as such a control but quickly got incorporated as a whole into the LabVIEW source code itself, as it was much easier to maintain the code changes that each new LabVIEW version caused to the virtual table interface of all LabVIEW controls. In short, writing custom controls for LabVIEW based on a C(++) interface, while technically possible, would require such a deep understanding of the LabVIEW internals that it seems highly unlikely that NI will ever consider it outside of some very closely supervised NDA with some very special parties. It would also require access to many LabVIEW internal manager APIs that are often not exported in any way on the C side of LabVIEW and only partly on the VI Server interface. LabVIEW 3 and 4 did export quite a lot of low-level C manager calls such as text manager, graphic drawing, and window handling, which for a large part got completely removed in versions 5 and 6 in favor of exporting more and more functionality through the newly added VI Server interface on the diagram level.
  11. There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall: it works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs and, as a well-behaved citizen of the OS it is running on, attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers, and running through all those tables and freeing every single memory block costs time. In addition, if you run in the IDE there is a considerable number of framework providers that hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory block deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload. Now, on most modern OSes the OS will actually do cleanup on exit of an application, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means on request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows won't maintain a list of every memory block allocated by an application but manages memory in pages that get allocated to the process. So releasing that memory is not like having to walk a list of thousands of pointers and deallocating them one by one; it simply changes a few bytes in its page allocation manager and the memory is freed in 4 KB or even bigger chunks. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in a background process, so the application seems to be terminated even though its resources aren't fully reclaimed right away. That is, for instance, why a network socket usually isn't immediately available for reopening when it was closed implicitly. The problem is that relying on the OS to clean up everything is a rather unsafe way of going about the matter. There are differences between OS versions as to which resources get properly reclaimed after process termination, and even bigger differences between OS platforms. Most modern desktop OSes do a pretty good job at that; the RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most secure way of operating. If they decided to "optimize" the application shutdown by only deallocating the resources that are known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test build runs to verify that the assumptions about what to deallocate and what not are still valid on all supported OSes and versions.
It might also be a very strong reason to immediately scrap support for any OS version older than 2 years in order to keep the possible permutations for the unit tests manageable. And that trimming the working set has a negative impact on the process termination time is quite logical in most cases. It really only helps if there are a lot of memory blocks (not necessarily MBs) that were allocated previously and freed later on. The trimming will release any memory pages that are no longer used by the application back to the OS and page out all the others except the most frequently accessed ones to the page file (see the snippet below for what such a trim amounts to on Windows). Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they are located in and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those blocks back into memory. So trimming the working set has potentially returned some huge memory blocks to the OS that had been used for the analysis part of the application but were then freed by LabVIEW, and which would simply have been reclaimed by LabVIEW when needed again. But it also paged out all the memory blocks where the VI structures for the large VI hierarchy are stored, and when LabVIEW then goes and unloads that VI hierarchy it triggers the virtual memory manager many times while freeing all the memory associated with it. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation in order not to corrupt the memory management tables of the OS.
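For reference, on Windows "trimming the working set" essentially comes down to this single call; the sketch is only meant to show what the operation is, not how LabVIEW invokes it internally:

```c
#include <windows.h>

int main(void)
{
    /* Passing (SIZE_T)-1 for both limits asks Windows to remove as many
       pages as possible from the process working set. Freed pages go back
       to the OS; pages that still back live allocations (such as a loaded
       VI hierarchy) are written to the page file and must be paged back in
       before they can be touched again, e.g. when those blocks are freed.  */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
    return 0;
}
```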
  12. I think the argument that one has an advantage over the other in terms of the current situation is valid for both cases :-). Future modifications to the application could render the decision to go for one or the other invalid in both cases: the NSV-only case when that variable is suddenly also polled repeatedly throughout the application rather than only at initialization, the FGV case when someone modifies the application without understanding FGVs and botches its functionality in the process. For me the choice is clear, as I use FGVs all the time, understand them quite well, and can dream up an FGV much quicker than I can get an overview of an architecture where global variables are sprinkled throughout the code. And an NSV is very much a global variable, just with a potentially rather resource-hungry network access engine chained to its hands and legs.
  13. It's not strictly necessary, since LabVIEW does an implicit open on a VISA resource when it finds that the resource hasn't been opened yet. LabVIEW stores the internal VISA handle that belongs to a VISA resource together with the resource itself in a global list of VISA resources. However, suppose you hadn't used the VISA Open in your executable: the implicit open would have failed too, but possibly without a good way to report that error. So I really prefer to always explicitly open VISA resources anyway. It costs nothing when writing the code, but makes it much clearer what is happening and possibly improves error detection.
  14. I would dispute the "more" in "more robust" with respect to an FGV/Action Engine. It's possibly equally robust, at the cost of querying an NSV repeatedly, which is certainly a more resource-intensive operation than querying an FGV with a shift register, even if the NSV is deployed and hosted on the cRIO. That would be unavoidable if someone else on the network could also write to the NSV, but in the case where it is clearly published by the cRIO only, there is no advantage at all in using an NSV alone other than not having to write a small VI, and that is a one-time cost.
  15. It might be more helpful if you post both the ZIP file you want to extract and the code you created. Debugging from screenshots feels so awkward that I simply refuse to spend any time on it. Also make sure to post any VIs in 2011 or earlier; at the moment I don't always have access to a machine with 2012 installed. One thing I do see, however, is that you pass the application directory to the target path. This should be the file path of the file you want to create! And if the filename is the same as the one in the ZIP archive (but watch out here, as paths in an archive can be relative paths defining several directory levels), then you do not need to connect the internal name at all, as it will be extracted from the passed-in target path. If you had posted the VI and ZIP file in the beginning I could have run it and seen the problem immediately. Deducing such things from a screenshot is more difficult, since there is no context help and all that available.
  16. What are the contents of ZIP_File.zip? The higher-level VIs that extract an entire archive to a directory would be a good place to see how these VIs should be called.
  17. Yes, that is what I was thinking. On "read" just read the local FGV shift register, and on "write" update both the NSV and the shift register. As long as you can make sure that writes always happen through this FGV on the RT system and everyone else only reads the NSV, this should be perfectly race free. Most likely you can even perform an optimization in the FGV to only write to the NSV when the new value is different from the previous one (a rough sketch of that logic follows below).
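The FGV itself is of course a LabVIEW VI, but the logic it implements is roughly the following; this is a minimal C sketch with a hypothetical publish_to_nsv() standing in for the Shared Variable write, purely to illustrate the pattern:

```c
/* Stand-in for the network-published Shared Variable write (hypothetical). */
static void publish_to_nsv(double value) { (void)value; }

/* The cached copy plays the role of the FGV's shift register. */
static double cached_value = 0.0;

/* "Read" case: cheap local access, no network involved. */
double fgv_read(void)
{
    return cached_value;
}

/* "Write" case: update the cache and push to the network only on change.
   In LabVIEW the FGV's non-reentrancy serializes these calls; a C version
   used from multiple threads would additionally need a mutex.             */
void fgv_write(double new_value)
{
    if (new_value != cached_value)
    {
        cached_value = new_value;
        publish_to_nsv(new_value);
    }
}
```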
  18. Well, but the 9501 is in the cRIO too! So do you mean that the SV is written once in the RT application during initialization and once in the FPGA code, or something like that? Because if both writes are done in the RT code, I still think you basically have only one source for this data and can encapsulate it in a non-reentrant buffer VI that makes sure to synchronize access to the local variable and the SV.
  19. Actually it's not misleading at all. If you specify a service name rather than a port number, the LabVIEW node will ask the "service locator service" on the target machine for the port number it should use. This "service locator service" is part of the NI web server. Your target application, when specifying a service name to Open TCP Listener, registers itself with the allocated port in the local "service locator service". So you have two options here: 1) document this behaviour in your application manual and sell it as a feature, or 2) change your server application to use an explicit port when registering and your client to connect to that port. Note to others coming across this: in order for the service name registration to work in the LabVIEW TCP/IP nodes, one needs to make sure the NI System WebServer component is installed on the server machine. If you build an application installer, don't forget to select that component in the Additional Installers section (part of the corresponding LabVIEW runtime engine).
  20. I think you just circumvented the race condition but didn't really solve it. Suppose your network has a hiccup and the update gets delayed for longer than your small delay! A possible solution would be to keep a local copy of the shared variable that is the real value and, whenever you update it, also update the shared variable. Of course this will only work if you can limit writing of that SV to one location.
  21. You are being unreasonable and you know it. Moving to C++ to make sure not to run into problems or bugs sounds like a plan, but a bad one. Which compiler do you plan to use: GCC, Visual C, Intel CC, Borland C, Watcom C? They are all great, but none of them is bug free, and when you have managed a few more complex C++ projects you will know that, as you are sure to run into some code constructs that do not always get compiled right by some of them. Nothing that can't be worked around, but still. And that is just the compiler; let's not talk about the IDEs, which all have their obnoxious behaviors and bugs too. So the question that remains at the end of the day is what the people at your company are more familiar with, how much code can be produced, and how bug free you and your colleagues can make your own code. Changing programming languages because of an (acquired) antipathy is a sure way to disaster.
  22. I would have to echo Jordan's comments. NI isn't perfect, but it is certainly one of the better suppliers in software land. And as a software developer yourself you should know that fixing a bug without a reproducible error procedure is very hard and often impossible. So far, all that has been mentioned in this thread are symptoms and possible reasons that could have been contributing factors to what you saw happening. Without more information about how to reproduce this error, it will most likely be almost impossible to come up with a fix.
  23. In addition to what Ned says, Telnet is a protocol in itself that sits on top of TCP. So just sending the string that you normally enter at the Telnet prompt definitely will not work! You have to implement the Telnet protocol (which is quite simple, BTW) on top of the TCP primitives too. The Internet Toolkit, however, contains a full Telnet client library.
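For a feel of what "quite simple" means: besides plain data, a Telnet server sends 3-byte option negotiation sequences (IAC, then WILL/WONT/DO/DONT, then the option code). A minimal client can refuse every option and strip those sequences from the data stream. A rough C sketch of that filtering step, ignoring subnegotiation and escaped 0xFF data bytes:

```c
#include <stdint.h>
#include <stddef.h>

enum { IAC = 255, DONT = 254, DO = 253, WONT = 252, WILL = 251 };

/* Removes Telnet negotiation sequences from 'in', leaving plain text in
   'out' (returned length) and appending refusals to 'reply' (*reply_len).
   Both output buffers must be at least in_len bytes long.                 */
size_t telnet_filter(const uint8_t *in, size_t in_len,
                     uint8_t *out, uint8_t *reply, size_t *reply_len)
{
    size_t o = 0, r = 0;
    for (size_t i = 0; i < in_len; i++)
    {
        if (in[i] == IAC && i + 2 < in_len &&
            in[i + 1] >= WILL && in[i + 1] <= DONT)
        {
            uint8_t cmd = in[i + 1], opt = in[i + 2];
            if (cmd == WILL)    { reply[r++] = IAC; reply[r++] = DONT; reply[r++] = opt; }
            else if (cmd == DO) { reply[r++] = IAC; reply[r++] = WONT; reply[r++] = opt; }
            /* WONT/DONT from the server need no answer; just drop them. */
            i += 2;
        }
        else
        {
            out[o++] = in[i];   /* ordinary data byte */
        }
    }
    *reply_len = r;
    return o;
}
```

The 'reply' bytes would be written back over the TCP connection after each read; the Internet Toolkit's Telnet VIs take care of this (and the rest of RFC 854) for you.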
  24. Well, there might be some sort of bug in LabVIEW here, but it seems that LabVIEW for some reason was made to believe at that point that the lvlib and the according VIs did not come from the same volume (drive). That is, AFAIK, the only reason for LabVIEW to use absolute paths when referring from one resource file to another one it depends on. When loading that lvlib, your colleague probably got a warning to the effect that VIs were loaded from a different path than where they were expected (maybe he still had them in memory, loaded from the old path). This dialog is, however, IMHO rather unhelpful in many cases, as it does not always give a good overview of why the warning was created and offers even fewer possibilities to fix it.