hooovahh Posted April 15, 2013 Report Share Posted April 15, 2013 (edited) So let's say I have a large application; let's pretend it is a 50MB EXE (it's not, we're exaggerating). I run this EXE for thousands of hours, then I close it. When I close it, all of the "Actors" run their own cleanup, then the UI is hidden, then I call the Quit LabVIEW primitive. If I look at Task Manager, the program still runs for some time after calling the Quit function but eventually leaves the process list. I have seen rare occasions (where I was messing with memory in ways I shouldn't) where the Quit function took up to a minute to actually exit. What does this Quit primitive do? Why does quitting take longer than it should? As a test I replaced the Quit LabVIEW primitive with a task kill operation on the EXE name. Now when I hit the close button my application does the cleanup on each Actor as it should, and then kills the program. This operation now feels instant. So is there something wrong with killing my program my way, after all logs have been closed and hardware sessions closed? EDIT: Okay, so a search for "Killing LabVIEW" brought up this workaround for killing an Actor-based EXE. So does this mean there is nothing wrong with killing other LabVIEW EXEs using this method? Edited April 15, 2013 by hooovahh Quote Link to comment
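For anyone wanting to see the hard-kill path outside LabVIEW: here is a minimal Python sketch of the same idea (an assumption on my part, since the post uses a task kill on the EXE name rather than Python). The point is that `kill()` is the hard-termination route, `TerminateProcess` on Windows (which is what `taskkill /F` ultimately invokes) and `SIGKILL` on POSIX, so the target never runs any of its own cleanup.

```python
import subprocess
import sys

# Stand-in for the built EXE: a child process that would otherwise run forever.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])

# Hard termination: TerminateProcess on Windows (what `taskkill /F` uses),
# SIGKILL on POSIX. No cleanup code in the child gets a chance to run.
child.kill()
child.wait(timeout=5)

print("terminated:", child.returncode is not None)  # → terminated: True
```

This is exactly why the thread stresses closing logs and hardware sessions first: nothing downstream of the kill gets a chance to run.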
Tim_S Posted April 16, 2013 Report Share Posted April 16, 2013 What initially comes to mind (because these are what have bitten me) is: - Do you still have any code running at all that could take that long to terminate? - How much memory are you using? How much of the memory is swapped out? Quote Link to comment
bmoyer Posted April 16, 2013 Report Share Posted April 16, 2013 I've run into this issue too (I don't have a fix for you). I even take it a step further and close the front panel immediately, even without waiting for the loops to close. Quote Link to comment
ShaunR Posted April 16, 2013 Report Share Posted April 16, 2013 Well. Ever since LV2010 I've noticed that even the IDE takes forever to shut down. I have assumed (rightly or wrongly) that it is something to do with making sure the run-time exits elegantly, so the implications of a task-kill may be subtle and/or horrendous in some unknown scenario (DAQ config?). Suffice to say. NI have done something during the shut-down process that has increased the duration significantly. I would compare your software with one that has been compiled in 2009. I would expect it to exit far more promptly if it is down to the NI environment. Quote Link to comment
hooovahh Posted April 16, 2013 Author Report Share Posted April 16, 2013 What initially (because these are what have bit me) comes to mind is:- Do you still have any code running at all that could take that long to terminate? - How much memory are you using? How much of the memory is swapped out? There is nothing running, after all code has stopped executing, and Actors have have performed their cleanup and then I call the Quit. As for memory swapping out. This is what can be used to show the biggest time in shut down. If I force LabVIEW to release the working set of memory during normal execution, then Quit will take much longer to stop the EXE. Now this is something I don't do regularly I just noticed it made it worst. Even so why should this matter all my VIs are done running, when I say Quit why should it take a long time even if I did screwy things with memory. I've run into this issue too (I don't have a fix for you). I even take it a step further and close the front panel immediately, even without waiting for the loops to close. That's fine and all but the EXE is still running. I have the INI key of allowmultipleexecutions (it might be slightly different) set to FALSE, so if I exit (which just hides the UI) I can't restart the application until the last run is really done, which again may take a while. Well. Ever since LV2010 I've noticed that even the IDE takes forever to shut down. I have assumed (rightly or wrongly) that it is something to do with making sure the run-time exits elegantly, so the implications of a task-kill may be subtle and/or horrendous in some unknown scenario (DAQ config?). Suffice to say. NI have done something during the shut-down process that has increased the duration significantly. I would compare your software with one that has been compiled in 2009. I would expect it to exit far more promptly if it is down to the NI environment. I also noticed longer shutdown time in newer versions. 
But what could the run-time engine be doing that is so important, such that killing it would be bad if all the things I care about are closed out properly? The problem with this question is that no one but NI has a real answer, or could suggest why I shouldn't just kill it. Quote Link to comment
Tim_S Posted April 16, 2013 Report Share Posted April 16, 2013 There is nothing running; all code has stopped executing and the Actors have performed their cleanup before I call the Quit. As for memory swapping out: this is what can produce the biggest shutdown time. If I force LabVIEW to release the working set of memory during normal execution, then Quit will take much longer to stop the EXE. This isn't something I do regularly; I just noticed it made things worse. Even so, why should this matter? All my VIs are done running; when I say Quit, why should it take a long time even if I did screwy things with memory? What has bitten me before with "running code" are fire-and-forget VIs that I've forgotten about (particularly clones of VIs): all the VIs have stopped executing except the dynamically called ones. If you're using a lot of memory that has swapped out, then Windows (I'm assuming you're using Windows) has to swap it back into physical memory to release it. This can take a Very Long Time. You can see it happening in Task Manager/Resource Monitor. Quote Link to comment
Ton Plomp Posted April 17, 2013 Report Share Posted April 17, 2013 I don't think I have used 'Kill LabVIEW' in any executable in the last 5 years. Why did you have to resort to it? Ton Quote Link to comment
mje Posted April 17, 2013 Report Share Posted April 17, 2013 I can't say I use the quit/exit LabVIEW primitive anymore, but I do have 50 MB executables which can take a long time to shut down. This application can be called upon to manage data sets hundreds of GB in size, resulting in a memory footprint of a few GB for tracking things like indices and caches. I've observed this application take a minute to unload. I can watch Task Manager tick down the memory footprint at a rate of about 100 MB/s as things get cleaned up. This is on my workstation-grade system with 20 GB of RAM, for an application that takes perhaps 1-4 GB of memory depending on data load. Large analyses on resource-starved systems can take several minutes to unwind if page files get involved. To some extent I also think it's related to the size of the VI hierarchy (size as in number of VIs, not number of bytes). I've written very simple "quick and dirty" applications with perhaps 10 VIs which can chew up pretty impressive memory footprints if you point them at sufficiently sized data sets, and when terminating, these applications are pretty snappy. By contrast, the 50 MB application with a sizable hierarchy can still take 20 seconds to unload even if it has no real memory load beyond what is required to run. I expect that just as there's overhead involved with opening each VI when starting the application, there's something going on with each VI when terminating. Quote Link to comment
hooovahh Posted April 17, 2013 Author Report Share Posted April 17, 2013 I don't think I have used 'Kill LabVIEW' in any executable in the last 5 years. Why did you have to resort to it? Ton I don't have to resort to this; I just don't know why it isn't immediate. When I call Quit I want the application out of memory within humanly noticeable speed (say, less than 500 ms). I can't say I use the quit/exit LabVIEW primitive anymore, I'll add a case to my standard "Quit if in RTE" VI that has kill as an option, set to false by default. If I see any issues this will be the first thing I change back. Quote Link to comment
Daklu Posted April 22, 2013 Report Share Posted April 22, 2013 If I force LabVIEW to release the working set of memory during normal execution... I have to ask... how did you do this? Quote Link to comment
hooovahh Posted April 22, 2013 Author Report Share Posted April 22, 2013 I have to ask... how did you do this? Using a modified version of this. That VI specifically flushes out the working set memory for the application running named "SeqEdit.exe", it can be changed to any application name. Using this on any application will cause the working set memory to drop very low but as soon as you need memory again it climbs back up as expected. Using this code I've never actually seen any problems get fixed, just prolong the inevitable. Quote Link to comment
Popular Post Rolf Kalbermatter Posted April 23, 2013 Popular Post Report Share Posted April 23, 2013 There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall: it works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs, and as a well-behaved citizen of the OS it is running on, it attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers. Running through all those tables and freeing every single memory block costs time. In addition, if you run in the IDE there is a considerable number of framework providers that hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory block deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload. Now, on most modern OSes the OS will actually do cleanup when an application exits, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means on request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows won't maintain a list of every memory block allocated by an application; it manages memory in pages that get allocated to the process.
So releasing that memory is not like having to walk a list of thousands of pointers and deallocate them one by one; it simply changes a few bytes in the page allocation manager, and memory is freed in 4K or even bigger chunks per page. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in a background process, so the application seems to be terminated even though its resources aren't fully reclaimed right away. That is, for instance, why a network socket usually isn't immediately available for reopening after it was closed implicitly. The problem is that relying on the OS to clean up everything is a very insecure way of going about the matter. There are differences between OS versions in which resources get properly reclaimed after process termination, and even bigger differences between OS platforms. Most modern desktop OSes do a pretty good job at this; the RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most secure way of operating. If NI decided to "optimize" application shutdown by only deallocating the resources known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test builds to verify that the assumptions about what to deallocate and what not are still valid on all supported OSes and versions. It might also be a very strong reason to immediately drop support for any OS version older than 2 years in order to keep the possible permutations for the unit tests manageable. And that trimming the working set has a negative impact on process termination time is quite logical in most cases.
It really only helps if there are a lot of memory blocks (not necessarily MBs) that were allocated previously and freed later on. The trimming will release any memory pages no longer used by the application back to the OS, and page out all the others but the most frequently accessed ones to the page file. Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they occupy and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those pages back into memory. So trimming the working set has potentially returned some huge memory blocks to the OS that had been used for the analysis part of the application but were then freed by LabVIEW, and would simply be reclaimed by LabVIEW when needed again. But it also paged out all the memory blocks where the VI structures are stored for the large VI hierarchy, and when LabVIEW then goes and unloads the VI hierarchy, it triggers the virtual memory manager many times while freeing all the memory associated with that hierarchy. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation in order not to corrupt the OS's memory management tables. 4 Quote Link to comment
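Rolf's point about walking thousands of small blocks versus page-granular release can be illustrated very loosely (in Python, far above the allocator level, so this is only an analogy) by comparing the teardown cost of many small objects against one equally sized contiguous block:

```python
import time

N = 100_000

# ~100 MB as 100k separate 1 KB blocks, loosely analogous to per-VI allocations.
small_blocks = [bytearray(1024) for _ in range(N)]
# The same ~100 MB as one contiguous block.
big_block = bytearray(N * 1024)

t0 = time.perf_counter()
del small_blocks          # frees 100k blocks one by one
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
del big_block             # frees a single block
t_big = time.perf_counter() - t0

print(f"many small frees: {t_small * 1000:.2f} ms, one big free: {t_big * 1000:.2f} ms")
```

The many-small-frees path is consistently the slower one, which mirrors why a large VI hierarchy unloads slowly even when total memory use is modest, and why the cost balloons further once the pages holding those blocks have been trimmed out to the pagefile.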
hooovahh Posted April 23, 2013 Author Report Share Posted April 23, 2013 @rolfk Regarding the LabVIEW stop / car-into-wall situation: I only use the LabVIEW stop in development, when I know that all hardware and references I've opened have been closed. Similarly, I am only talking about using kill as an option after all resources have been closed (File/DAQ/Queue/Events etc.). There are tons of details on how memory management works, and I know very little. I just don't know what kind of effect I will see (if any). I've already released everything I care to release, and then I tell Windows to remove the application from memory, terminating it and all of its threads. (I'm guessing it executes this Windows function.) What real issue could this cause? Can anyone say doing this will cause an application to be corrupted? Or memory to not be released? I'm not advocating this method; I just think it is a pretty simple way to make an application more Windows-like (responsive close), and I have not seen any side effects yet. Quote Link to comment
Rolf Kalbermatter Posted April 23, 2013 Report Share Posted April 23, 2013 @rolfk Regarding the LabVIEW stop / car-into-wall situation: I only use the LabVIEW stop in development, when I know that all hardware and references I've opened have been closed. Similarly, I am only talking about using kill as an option after all resources have been closed (File/DAQ/Queue/Events etc.). There are tons of details on how memory management works, and I know very little. I just don't know what kind of effect I will see (if any). I've already released everything I care to release, and then I tell Windows to remove the application from memory, terminating it and all of its threads. (I'm guessing it executes this Windows function.) What real issue could this cause? Can anyone say doing this will cause an application to be corrupted? Or memory to not be released? I'm not advocating this method; I just think it is a pretty simple way to make an application more Windows-like (responsive close), and I have not seen any side effects yet. Well, in principle, when you kill an application the OS will take care of deallocating all the memory and handles the application has opened. However, in practice it is possible that the OS cannot track down every single resource allocated by the process. As far as memory is concerned I would not fret too much, since that is fairly easy for the OS to determine. Where it could get hairy is when your application used device drivers to open resources and one of them does not get closed properly. Since the actual allocation was in fact done by the device driver, the OS is not always able to determine on whose behalf it was done, and such resources can easily remain open and lock up certain parts of the system until you restart the computer.
It's theoretically also possible that such locked resources could do dangerous things to the integrity of the OS, to the point that it stays unstable even after a restart, although that's not very likely. Since you say you have carefully made sure that all allocated resources (files, IO resources, handles, and whatever else) have been properly closed, it is most likely not going to damage your computer in any way that could not be solved by a full restart after a complete shutdown. What would concern me with such a solution, however, is this: you might make a tiny change to your application that suddenly prevents a resource from being properly released, and unless you carefully test it (by disabling the kill option and making sure the application closes properly on its own, no matter how long that takes), you may not notice until your system gets unstable because of corrupted system files. 1 Quote Link to comment
Morten Mo Posted February 5, 2014 Report Share Posted February 5, 2014 i have made 2 exe with labview and running about a week (win7), logging temp and posting it to google spreadsheet and now one exe is freezed so bad that even kill task dont kill it... what seems to be the problem? how can i kill it? Quote Link to comment
GregFreeman Posted February 6, 2014 (edited) I have made 2 EXEs with LabVIEW that have been running for about a week (Win7), logging temperature and posting it to a Google spreadsheet. Now one EXE is frozen so badly that even kill task doesn't kill it... What could the problem be? How can I kill it? Normally, something like this would imply a memory leak. Use Task Manager to monitor your memory usage over a couple of hours; is it gradually increasing? Are you opening a reference repeatedly and not closing it? This can easily happen with "named queues" if you're not careful. Edited February 6, 2014 by GregFreeman 1 Quote Link to comment
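The leak pattern Greg describes, obtaining a named reference inside a loop without ever releasing it, can be sketched with a toy registry (a loose Python analogy, not LabVIEW's actual implementation; the relevant LabVIEW behavior is that Obtain Queue returns a new refnum on every call, even for an existing name, and each refnum must be released):

```python
class QueueRegistry:
    """Toy stand-in for a named-refnum table (illustrative only)."""
    def __init__(self):
        self.refnums = []

    def obtain(self, name):
        # Each obtain hands out a NEW refnum, even for an existing name.
        ref = (name, len(self.refnums))
        self.refnums.append(ref)
        return ref

    def release(self, ref):
        self.refnums.remove(ref)

# Leaky pattern: obtain inside the loop, never release.
registry = QueueRegistry()
for _ in range(1000):
    registry.obtain("status")
leaked = len(registry.refnums)        # 1000 refnums still held

# Correct pattern: obtain once, release when done.
registry = QueueRegistry()
ref = registry.obtain("status")
for _ in range(1000):
    pass                              # ... use the queue here ...
registry.release(ref)
held = len(registry.refnums)          # 0

print(leaked, held)  # → 1000 0
```

In a real application the leaked refnums show up as exactly the slow, steady memory climb Greg suggests watching for in Task Manager.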
Morten Mo Posted February 6, 2014 Report Share Posted February 6, 2014 No reference , and memory seems OK .. How can i kill that freezing window ? Only hard reset helps for now.. Quote Link to comment
hooovahh Posted February 6, 2014 Author Report Share Posted February 6, 2014 i have made 2 exe with labview and running about a week (win7), logging temp and posting it to google spreadsheet and now one exe is freezed so bad that even kill task dont kill it. If taskkill doesn't work you have bigger problems. I'm guessing you aren't performing a taskkill but instead are politely asking the application to exit. And if it isn't responding it won't exit. In task manager go to the Processes tab, find your application and click End Process. This is a taskkill and will remove the application from memory. If you are on the Applications tab and try End Task this does not perform a taskkill but instead tries to close the application more gracefully and may take longer or not work at all. Again as discussed in this thread task killing an application is not something to be taken lightly, and under the right circumstances may have no ill effects. Generally this is only used when an application isn't responding and you don't want to reboot your computer to get back to a usable state. Quote Link to comment
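The End Task / End Process distinction hooovahh draws has a direct analogue in process APIs: a polite close request can be ignored by a hung program, while a hard kill cannot. A sketch of that difference using POSIX signal semantics (an analogy only; on Windows, End Task sends a close request to the window, which is believed to be WM_CLOSE, while End Process calls TerminateProcess, and Python's `terminate()`/`kill()` both map to TerminateProcess there):

```python
import subprocess
import sys
import time

# Child that ignores the polite request (SIGTERM), like a hung application
# that no longer services close messages.
code = ("import signal, time; "
        "signal.signal(signal.SIGTERM, signal.SIG_IGN); "
        "time.sleep(3600)")
child = subprocess.Popen([sys.executable, "-c", code])
time.sleep(0.5)  # let the child install its signal handler first

child.terminate()  # polite: SIGTERM -- this child ignores it (like End Task failing)
try:
    child.wait(timeout=1)
    polite_worked = True
except subprocess.TimeoutExpired:
    polite_worked = False

child.kill()       # hard: SIGKILL -- cannot be ignored (like End Process)
child.wait(timeout=5)

print("polite worked:", polite_worked, "| process gone:", child.returncode is not None)
```

This mirrors the advice in the post: when the application no longer responds to the polite path, only the hard-kill path removes it from memory.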
Morten Mo Posted February 6, 2014 Report Share Posted February 6, 2014 "end process" didn`t do anything that is what i am talking about Quote Link to comment
GregFreeman Posted February 6, 2014 Report Share Posted February 6, 2014 How large is the application. Would it be possible to post? Quote Link to comment
Rolf Kalbermatter Posted February 6, 2014 Report Share Posted February 6, 2014 "End Process" didn't do anything; that is what I am talking about. If "End Process" doesn't work, your application is definitely doing something VERY low level. Do you call any drivers that use a kernel device driver somewhere? Except DAQmx of course, which definitely does have such drivers but hasn't behaved like this on me so far. Other 3rd-party vendor drivers, however, have done such things to me in the past. Unless your application is stuck in kernel space, "End Process" really ought to be able to kill the process cleanly. Quote Link to comment
Morten Mo Posted February 6, 2014 Report Share Posted February 6, 2014 vi file tc08_google_1loger_kompost1.vi Quote Link to comment
OlivierL Posted March 26, 2014 Report Share Posted March 26, 2014 I know this is an old thread, but I recently faced the exact same problem, where killing the LabVIEW task simply wouldn't work. It was caused by a driver issue (USB-RS232 Prolific 2303). In the development environment, the application would hang forever in a VISA Read, and in executable form the application would just appear to freeze. The only thing that would free the execution was to disconnect the USB device, at which point VISA Read would return an error. Everything went away once we selected a different RS232 adapter (both a MOXA UPort and a StarTech PCIe adapter solved the issue). We saw a few Blue Screens of Death over that development period with the Prolific IC. Since your application seems to be calling a USB instrument, it is possible that its driver is also the root cause of your issue. If the instrument works properly until you close your application, make sure you call the proper functions to close the driver so it can stop executing. If you also see strange behaviors during execution, consider getting a better device/driver. Quote Link to comment