Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall: it works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs, and as a well-behaved citizen of the OS it is running on, it attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers. Running through all those tables and freeing every single memory block costs time. In addition, if you run in the IDE, there is a considerable number of framework providers that hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload.
Now, on most modern OSes the OS will actually do cleanup when an application exits, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means on request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows doesn't maintain a list of every memory block allocated by an application but manages memory in pages that get allocated to the process. So releasing that memory is not like walking a list of thousands of pointers and deallocating them one by one; it simply changes a few bytes in the page allocation manager, and memory is freed in 4 KB or even bigger chunks at a time. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in the background, so the application seems to have terminated even though its resources aren't fully reclaimed right away. That is for instance why a network socket usually isn't immediately available for reopening after it was closed implicitly.
The problem is that relying on the OS to clean up everything is a rather unreliable way of going about the matter. There are differences between OS versions as to which resources get properly reclaimed after process termination, and even bigger differences between OS platforms. Most modern desktop OSes do a pretty good job at it; the RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most reliable way of operating. If NI decided to "optimize" the application shutdown by only deallocating the resources that are known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test builds to verify that the assumptions about what to deallocate and what not are still valid on all supported OSes and versions.
It might also be a very strong reason to immediately scrap support for any OS version older than two years, in order to keep the possible permutations for the unit tests manageable.
And that trimming the working set has a negative impact on process termination time is quite logical in most cases. It really only helps if there are a lot of memory blocks (not necessarily MBs) that were allocated previously and freed later on. The trimming will release any memory pages the application no longer uses back to the OS and page out all the others, except the most frequently accessed ones, to the page file. Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they are located in and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those blocks back into memory. So trimming the working set has potentially returned to the OS some huge memory blocks that had been used for the analysis part of the application and were already freed by LabVIEW, and which LabVIEW would simply reclaim again when needed. But it also paged out all the memory blocks where the VI structures of the large VI hierarchy are stored, and when LabVIEW then goes and unloads that VI hierarchy, it triggers the virtual memory manager many times while freeing all the memory associated with it. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation in order not to corrupt the OS memory management tables. (A small sketch of what such working-set trimming amounts to on Windows follows below.)
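For reference, the "trimming of the working set" discussed above typically boils down to a single Win32 call. This is only a minimal sketch of what such a trim does, not a claim about what the application in question actually calls:

```c
#include <windows.h>

/* Ask Windows to remove as many pages as possible from this process'
   working set. Freed pages go back to the OS; everything else becomes a
   candidate for the page file and must be paged back in when touched
   again, e.g. while LabVIEW later walks its VI data to free it. */
void trim_working_set(void)
{
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
}
```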
  2. I think the argument that one has an advantage over the other in terms of the current situation is valid for both cases :-). Future modifications to the application could render the decision to go for one or the other invalid in both cases: the NSV-only approach breaks if that variable is suddenly also polled repeatedly throughout the application rather than only at initialization, and the FGV approach breaks if someone modifies the application without understanding FGVs and botches its functionality in the process. For me the choice is clear, as I use FGVs all the time, understand them quite well, and can dream up an FGV much quicker than I can get an overview of an architecture where global variables are sprinkled throughout the code. And an NSV is very much a global variable, just with a potentially rather resource-hungry network access engine chained to its hands and legs.
  3. It's not strictly necessary, since LabVIEW does an implicit open on a VISA resource when it finds that the resource hasn't been opened yet. LabVIEW stores the internal VISA handle that belongs to a VISA resource together with the resource itself in a global list of VISA resources. However, suppose you didn't use the VISA Open in your executable: the implicit open would have failed too, but possibly without a good way to report that error. So I really prefer to always explicitly open VISA resources anyway. It costs nothing when writing the code, but makes it much clearer what is happening and possibly improves error detection. (A small C sketch of the same idea follows below.)
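To illustrate the point for readers coming from the VISA C API: the explicit open gives you a well-defined place to catch and report the error. This is only a minimal sketch; the resource name is a placeholder:

```c
#include <stdio.h>
#include <visa.h>

int main(void)
{
    ViSession rm, instr;
    ViStatus  err = viOpenDefaultRM(&rm);
    if (err < VI_SUCCESS) {
        printf("Could not open the VISA resource manager (0x%08X)\n", (unsigned)err);
        return 1;
    }
    /* Explicit open: any problem with the resource is reported right here,
       not somewhere inside the first implicit open when the session is used. */
    err = viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &instr);
    if (err < VI_SUCCESS) {
        printf("viOpen failed: 0x%08X\n", (unsigned)err);
    } else {
        /* ... viWrite()/viRead() as needed ... */
        viClose(instr);
    }
    viClose(rm);
    return 0;
}
```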
  4. I would dispute the "more" in "more robust" with respect to an FGV/Action Engine. It's possibly equally robust, at the cost of querying an NSV repeatedly, which is certainly a more resource-intensive operation than querying an FGV with a shift register, even if the NSV is deployed and hosted on the cRIO. It would be unavoidable if someone else on the network could also write to the NSV, but in the case where it is clearly published by the cRIO only, there is no advantage at all in using an NSV alone, other than not having to write a small VI, and that is a one-time cost.
  5. It might be more helpful if you post both the zip file you want to extract and the code you created. Debugging from screenshots feels so awkward that I simply refuse to spend any time on it. Also make sure to post any VIs in 2011 or earlier; at the moment I don't always have access to a machine with 2012 installed. One thing I do see, however, is that you pass the application directory to the target path. This should be the file path of the file you want to create! And if the filename is the same as the one in the ZIP archive (but watch out here, as paths in an archive can be relative paths spanning several directory levels), then you do not need to connect the internal name at all, as it will be extracted from the passed-in target path. If you had posted the VI and ZIP file in the beginning, I could have run it and seen the problem immediately. Deducing such things from a screenshot is more difficult, since there is no context help and all that available.
  6. What are the contents of ZIP_File.zip? The higher-level VIs that extract an entire archive to a directory would be a good place to see how these VIs should be called.
  7. Yes, that is what I was thinking. On "read" just read the local FGV shift register, and on "write" update both the NSV and the shift register. As long as you can make sure that writes always happen through this FGV on the RT system and everyone else only reads the NSV, this should be perfectly race free. Most likely you can even add an optimization to the FGV to only write to the NSV when the new value is different from the previous one. (A small sketch of this caching pattern, translated to C, follows below.)
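Since a LabVIEW diagram doesn't translate directly to text, here is the same caching pattern sketched in C. The mutex plays the role of the FGV's non-reentrant execution, the static variable plays the role of the shift register, and publish_network_value() is a hypothetical stand-in for the NSV write:

```c
#include <pthread.h>

/* Hypothetical stand-in for writing the network-published value (the NSV). */
extern void publish_network_value(double value);

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static double cached;   /* plays the role of the FGV shift register */

/* "Read" case: return the local cache, no network access involved. */
double fgv_read(void)
{
    pthread_mutex_lock(&lock);
    double v = cached;
    pthread_mutex_unlock(&lock);
    return v;
}

/* "Write" case: update the cache and publish, but only when the value changed. */
void fgv_write(double value)
{
    pthread_mutex_lock(&lock);
    if (value != cached) {
        cached = value;
        publish_network_value(value);
    }
    pthread_mutex_unlock(&lock);
}
```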
  8. Well, but the 9501 is in the cRIO too! So do you mean that the SV is written once in the RT application during initialization and once in the FPGA code, or something like that? Because if both writes happen in the RT code, I still think you basically have only one source for this data and can encapsulate it in a non-reentrant buffer VI that makes sure to synchronize access to the local value and the SV.
  9. Actually it's not misleading at all. If you specify a service name rather than a port number, the LabVIEW node will ask the "service locator service" on the target machine for the port number it should use. This "service locator service" is part of the NI webserver. Your target application, when specifying a service name to the Open TCP Listener, will register itself with the allocated port in the local "service locator service". So you have two options here: 1) document this behaviour in your application manual and sell it as a feature, or 2) change your server application to use an explicit port when registering and your client to connect to that port. Note to others coming across this: in order for the service name registration to work with the LabVIEW TCP/IP nodes, make sure the NI System WebServer component is installed on the server machine. If you build an application installer, don't forget to select that component in the Additional Installers section (part of the according LabVIEW runtime engine).
  10. I think you just circumvented the race condition but didn't really solve it. Suppose your network has a hiccup and the update gets delayed for longer than your small delay! A possible solution would be to keep a local copy of the shared variable that acts as the real value, and whenever you update it, you also update the shared variable. Of course this will only work if you can limit writing of that SV to one location.
  11. You are being unreasonable and you know it. Moving to C++ to make sure you don't run into problems or bugs sounds like a plan, but a bad one. Which compiler do you plan to use: GCC, Visual C, Intel CC, Borland C, Watcom C? They are all great, but none of them is bug free, and once you have managed a few more complex C++ projects you will know that, as you are sure to run into code constructs that do not always get compiled right by some of them. Nothing that can't be worked around, but still. And that is just the compiler; let's not talk about the IDEs, which all have their obnoxious behaviors and bugs too. So the question that remains at the end of the day is what the people at your company are more familiar with, how much code can be produced, and how bug free you and your colleagues can make your own code. Changing programming languages because of an (acquired) antipathy is a sure way to disaster.
  12. I would have to echo Jordan's comments. NI isn't perfect, but it is certainly one of the better suppliers in software land. And as a software developer yourself you should know that fixing a bug without a reproducible error procedure is very hard and often impossible. So far, all that has been mentioned in this thread are symptoms and possible reasons for what could have been a contributing factor to what you saw happening. Without more info about how to reproduce this error, it will most likely be almost impossible to come up with a fix.
  13. In addition to what Ned says, Telnet is a protocol in itself that sits on top of TCP. So just sending the string that you normally enter at the Telnet prompt definitely will not work! You have to implement the Telnet protocol (which is quite simple, BTW) on top of the TCP primitives too. The Internet Toolkit, however, contains a full Telnet client library. (A tiny sketch of the option negotiation part follows below.)
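To give an idea of what "quite simple" means here: the core of the Telnet protocol (RFC 854) is that the byte 255 (IAC) introduces commands such as WILL/WONT/DO/DONT option negotiations, which a minimal client can simply refuse. The TCP read/write wrappers below are hypothetical placeholders for whatever your TCP layer provides:

```c
/* Telnet command bytes from RFC 854/855. */
enum { IAC = 255, WILL = 251, WONT = 252, DO = 253, DONT = 254 };

/* Hypothetical wrappers around the underlying TCP primitives. */
extern int  tcp_read_byte(unsigned char *b);
extern void tcp_write(const unsigned char *buf, int len);

/* Called after an IAC byte was read from the stream: refuse every option
   the server proposes, so the session falls back to a plain character stream. */
void telnet_handle_iac(void)
{
    unsigned char cmd, opt;
    if (!tcp_read_byte(&cmd) || !tcp_read_byte(&opt))
        return;
    unsigned char reply[3] = { IAC, 0, opt };
    if (cmd == DO)        reply[1] = WONT;  /* we won't enable the option        */
    else if (cmd == WILL) reply[1] = DONT;  /* and don't let the server enable it */
    else                  return;           /* other commands are ignored here    */
    tcp_write(reply, 3);
}
```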
  14. Well, there might be some sort of bug in LabVIEW here, but it seems that LabVIEW, for some reason, was made to believe at that point that the lvlib and the according VIs did not come from the same volume (drive). That is, AFAIK, the only reason for LabVIEW to use absolute paths when referring from one resource file to another one it depends on. When loading that lvlib, your colleague probably got a warning to the effect that VIs were loaded from a different path than where they were expected (maybe he still had them in memory, loaded from the old path). This dialog is however, IMHO, rather unhelpful in many cases, as it does not always give a good overview of why the warning was created and offers even fewer possibilities to fix it.
  15. Shaun has basically said it all. Your .sys driver is a Windows kernel driver (really a more or less unavoidable thing if you want to access register addresses and physical memory, which is what PCI cards require). This kernel driver definitely cannot be loaded into Pharlap, as the Pharlap kernel works quite a bit differently from the Windows kernel. For one thing it's a lot leaner and optimized for RT tasks, while the Windows kernel is a huge thing that tries to do just about everything. The DLL is simply the user-mode access library for the kernel driver, there to make it easier to use. Even if that DLL were Pharlap compatible, which is actually highly unlikely if they used a modern Visual C compiler to create it, it would not help, since the real driver logic sits in the kernel driver and can't be used under Pharlap anyway. Writing a kernel driver is, just as Shaun says, very time-consuming and specialized work. It's definitely one of the more advanced C programming tasks and requires expert knowledge. Debugging it is also a pain in the ass: every time you encounter an error you usually have to restart the system, make the changes, compile and link the driver, install it again and then start debugging again. This is because if your kernel driver causes a bad memory access, your entire system is potentially borked for good, and continuing to run from there could have catastrophic consequences for your entire system integrity. Writing a Pharlap kernel driver is even more special, since there is very little information available about how to do it, and it requires buying the Pharlap ETS development license, which is also quite an expense. That all said, I got a crazy idea that I'm not sure has any merit: VISA allows register-level access to hardware resources by creating an INF file on Windows with the VISA Driver Wizard. I'm not sure if this is an option under LabVIEW RT; this document seems vague about whether only NI-VISA itself is available under RT or also the Driver Wizard (or more precisely the according VISA low-level register driver), as you could probably do the development under normal Windows and then copy the entire VI hierarchy and INF file over to the RT system, if the API is supported. (A small sketch of what register-level access through VISA looks like in C follows below.)
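For reference, this is roughly what register-level access through VISA looks like, shown with the VISA C API since that is easy to quote in text. The resource name, BAR space and offset are placeholders, and whether this works on a given RT target depends on the VISA support question raised above:

```c
#include <visa.h>

/* Read a 32-bit register from a PXI/PCI board described by an INF file
   created with the VISA Driver Wizard. Resource name and offset are
   illustrative only. */
ViStatus read_board_register(ViUInt32 *value)
{
    ViSession rm, dev;
    ViStatus  err = viOpenDefaultRM(&rm);
    if (err < VI_SUCCESS)
        return err;

    err = viOpen(rm, "PXI0::1::INSTR", VI_NULL, VI_NULL, &dev);
    if (err >= VI_SUCCESS) {
        /* Register read at offset 0x10 in BAR0 of the device */
        err = viIn32(dev, VI_PXI_BAR0_SPACE, 0x10, value);
        viClose(dev);
    }
    viClose(rm);
    return err;
}
```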
  16. LabVIEW uses URL format for its XML-based paths, which happens to always be Unix style. Symbolic paths are rather something like "<instrlib>/aardvark/aardvark.llb/Aardvark GPIO Set.vi"; however, the HTML expansion makes that a little less obvious in Dan's post. To my knowledge, LabVIEW should only use absolute paths as references if they happen to refer to a different volume. On real Unix systems this is of course not an issue, as there you have one unique filesystem root, but I have a hunch your colleague may have accessed the VIs through a mounted drive letter. I could see that causing problems if the VI library was loaded through a different drive letter than the actual VI. It shouldn't usually be possible, but it's not entirely impossible. The actual path in the XML file may not show up as different because the path handling that determines whether paths are on the same volume likely works on a different level, and when the paths are finally converted to the URL-style format they are most likely normalized, which means reducing the path to its minimal form, and that could resolve drive aliases.
  17. Of course there are different ways an image could be invalid. However, considering he was looking for a simple LabVIEW VI "to check if the image data [of an input terminal] is valid or not", it seemed like a logical consideration that he might be looking for something along the lines of the Not a Number/Path/Refnum node. And since IMAQ images are in fact simply a special form of refnum too, which I think isn't obvious to most, I wanted to point out that this might be the solution. He probably wants an easy way to detect whether the input terminal received a valid image refnum. Anything else will require implementation-specific use of one or more IMAQ Vision VIs to check whether the specific properties or contents of the valid image reference meet the desired specifications.
  18. It requires a little out-of-the-box thinking, but try the Not a Number/Path/Refnum primitive.
  19. Which isn't a bad thing if you intend to distribute the VIs to other platforms than Windows!
  20. That function does not do the same as what this VI does. For one, the string side in the lvzip library is always in Unix form, while the other side should be in the native path format. Try to convert a path like /test/test/test.txt on Windows with this function. Of course you can replace all occurrences of \ with / in the resulting string on Windows, but that just complicates the whole thing. I'll probably end up putting that entire function into the actual shared library, since it also needs to do character encoding conversion to work with filenames containing extended ASCII (or UTF-8) characters. And to make everything interesting, the whole encoding business is VERY different on each platform. The strings in a ZIP file are normally stored on Windows with the OEM charset, while LabVIEW, as a true GUI application, uses the ANSI codepage everywhere. Basically they are both locale (country) specific and contain more or less the same characters, but of course in different places! That is the reason filenames containing extended characters currently look wrong when extracted with the lvzip library. (A sketch of that OEM-to-ANSI conversion follows below.)
On other platforms there isn't even a true standard among the various ZIP tools as to how to encode the filenames in an archive. They usually just use whatever the current locale on the system is, which at least on modern Unixes is usually UTF-8. The ZIP format allows for UTF-8 filenames too, but since most ZIP tools on Unix are programmed to use the current locale, they do store UTF-8 names but do not set the flag that says so! Of course there are also many ZIP tools that still don't really know about UTF-8, so extracting an archive created with UTF-8 names with them causes serious trouble. Basically there is no absolutely correct way to make lvzip deal properly with all these things. My plan is to make it work such that for packing it uses only standard ASCII when the filenames don't contain any extended characters and otherwise always uses UTF-8. For unpacking it will have to honor the UTF-8 flag and otherwise assume whatever the current locale is, which can and will go wrong if the archive wasn't created with the same locale it is extracted with. On Unix there is no good way to support extraction of files with extended ASCII characters at all, unless I pull in iconv or icu as a dependency.
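This is the kind of conversion involved on Windows for a filename that comes out of a ZIP archive in the OEM codepage and needs to become an ANSI string as used by a GUI application. A minimal sketch using the Win32 codepage conversion functions; buffer handling and naming are simplified:

```c
#include <windows.h>

/* Convert a ZIP entry name stored in the OEM codepage to the ANSI codepage.
   Going through UTF-16 ensures characters that sit at different positions
   in the two codepages end up as the right character. */
void zip_name_to_ansi(const char *oemName, char *ansiName, int ansiLen)
{
    WCHAR wide[MAX_PATH];
    MultiByteToWideChar(CP_OEMCP, 0, oemName, -1, wide, MAX_PATH);
    WideCharToMultiByte(CP_ACP, 0, wide, -1, ansiName, ansiLen, NULL, NULL);
}
```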
  21. I'm trying to look at this. I assume you work on OS X 10.8? Basically all Carbon-type file IO functions seem to have been deprecated in 10.8, and one of them probably has a hiccup now. The translation of Mac OS errors to LabVIEW errors is always a tricky thing, and I know I could probably have put more effort into that routine in the C code, yet it's mostly useless information anyhow, other than that it went wrong somehow. My current difficulty is that I do not have a modern Mac available that can run 10.8 in any way, so I have to work on an old (and terriiiiiibly sloooooooow) PPC machine for the moment. I should still be able to compile and test the code to get it at least running on 10.5 and will then have to get you to run some tests. I just want you to know that I'm working on this, but I can't make any tight promises as to when the new Mac OS X shared library will be ready for you to test. Having a more modern Mac available would help, but I have to work with what I have here.
  22. That change was made to the VI in Subversion on April 10, 2011. I'm not sure when JGCode created the latest release of the ZLIB library; it might have been just before that. On April 11, 2011 an additional change was made to also support Pharlap and VxWorks targets, and on July 17, 2012 another change was made to support Windows 64-bit (which is still irrelevant, as the DLL has not yet been released for 64-bit). I have a lot of code changes on my local system, mostly to support 64-bit but also some fixes for the string encoding problem, but it is all in an unfinished state and I hesitate to commit it to the repository, as anyone trying to create a new distribution library from it would currently likely get a somewhat broken library. I'm also not exactly sure about the current procedure for creating a new library release, nor about the icon palette changes mentioned in the last release made by JGCode; I didn't see any commits of those changes to the repository. Otherwise I might have created a new release myself with the current code.
  23. While this is simple, it is a brute-force approach. Users likely will not appreciate that their user interface suddenly uses the "wrong" decimal point, since this setting changes the decimal sign for everything in the LabVIEW app. The better approach is to think about localization and make the code use explicit formats where necessary, while leaving the default where things are presented to users. For the legacy string functions you have the mentioned boolean input; for Scan From String and Format Into String you have the %.; or %,; prefix to the format string, which tells the function to use an explicit decimal sign. Basically, anything that goes to the UI would use the system default (no prefix in the format string), while anything that communicates with a device or similar would likely use the %.; prefix. This way the user gets to see decimal numbers in whatever way he is used to, and the communication with GPIB, TCP/IP and whatever devices will work irrespective of the local country settings. (The C sketch below shows the same idea for readers outside LabVIEW.)
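The same idea expressed in C, for comparison: UI output follows the user's locale, while strings sent to an instrument are formatted with an explicit '.' decimal sign. This is only an illustrative sketch (the command string and helper name are made up), not how LabVIEW implements the %.; prefix internally:

```c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format an instrument command with a '.' decimal point regardless of the
   user's country settings; UI formatting elsewhere keeps the user locale. */
static void format_for_instrument(double value, char *buf, size_t len)
{
    char *saved = strdup(setlocale(LC_NUMERIC, NULL)); /* remember current locale   */
    setlocale(LC_NUMERIC, "C");                        /* force '.' as decimal sign */
    snprintf(buf, len, "VOLT %.3f", value);            /* hypothetical SCPI command */
    setlocale(LC_NUMERIC, saved);                      /* restore locale for the UI */
    free(saved);
}
```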
  24. Well, imaqCloseToolWindow() definitely is an IMAQ (NI Vision) function and as such could never be located in avcodec.dll. It seems like the linker is messing up the import tables somehow when told to optimize the import and/or export tables. It could be because of some wizardry in the NI Vision DLL, but it certainly looks like a bug in the link-stage optimizer of the compiler. Is this Visual C or LabWindows/CVI?