Leaderboard

Popular Content

Showing content with the highest reputation on 03/03/2014 in all areas

  1. If you ever think of taking a dip into .NET drawing, here is some example code (LV2013) to get you started... Forgot to mention that the example drawing VI is in the Test folder. AntiAliased Drawing.zip (A textual sketch of the equivalent System.Drawing calls follows below.)
    2 points
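For readers without the zip handy, here is a minimal sketch of the kind of System.Drawing calls such a LabVIEW diagram makes, written in Python via pythonnet purely for illustration. The output file name and the shape drawn are my own assumptions, not part of the original example, and this requires Windows with the .NET Framework.

```python
# Illustrative only: the same GDI+ anti-aliased drawing a LabVIEW .NET
# diagram performs, driven from Python via pythonnet (pip install pythonnet).
# The file name "circle.png" is made up for the sketch.
import clr
clr.AddReference("System.Drawing")
from System.Drawing import Bitmap, Graphics, Pen, Color
from System.Drawing.Drawing2D import SmoothingMode

bmp = Bitmap(200, 200)                     # off-screen drawing surface
g = Graphics.FromImage(bmp)
g.SmoothingMode = SmoothingMode.AntiAlias  # the anti-aliasing switch
pen = Pen(Color.Red, 2.0)
g.DrawEllipse(pen, 10, 10, 180, 180)       # smooth circle instead of jaggies
pen.Dispose()
g.Dispose()
bmp.Save("circle.png")                     # new in-memory bitmaps save as PNG
```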
  2. Just a tiny bit more involved, really! I just received the result of my own CLA certification and passed, although barely. The comments on the sheet about everything that would still need to be implemented sound ridiculous to finish in anything near four hours. So prepare with the sample exam, and make sure you get your basic framework done in the shortest possible time. I would say that if you can get your basic framework with skeleton VIs for all the sub-units like GUI, error handler, etc. done in less than an hour, then you are more or less ready to take the exam. After that hour you can start to stamp all the requirement tags into the different VIs and fill in the raw diagram structures according to the requirements. One piece of advice: rather than trying to implement real code, describe what needs to be done in text inside the various structures as much as possible. That will save you some time. And yes, if you are able to power-create your basic framework without much thought, you can recreate it for the real certification almost blind, with some minor variations.
    1 point
  3. If you use Windows, there are examples in the LabVIEW help on how to use the Excel ActiveX control. You can use different worksheets without much effort. These examples also show how to fill a range of cells with a variant, which provides an elegant solution to your problem. (A textual sketch of the same Automation calls follows this post.)
    1 point
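As a textual illustration of those Automation calls, here is a minimal sketch using Python's pywin32 in place of LabVIEW's ActiveX nodes; the workbook path and the sample data are assumptions made for the sketch.

```python
# Illustrative only: the same Excel Automation calls the LabVIEW ActiveX
# examples make, via pywin32 (pip install pywin32). Requires Windows + Excel.
import win32com.client

xl = win32com.client.Dispatch("Excel.Application")
xl.Visible = True
wb = xl.Workbooks.Add()
ws = wb.Worksheets(1)              # select a worksheet by index (or by name)
data = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0]]           # 2-D array plays the role of the variant
ws.Range("A1:C2").Value = data     # one assignment fills the whole range
wb.SaveAs(r"C:\temp\demo.xlsx")    # hypothetical path
wb.Close()
xl.Quit()
```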
  4. There are several issues at hand here. First, killing an application instead of exiting it is very similar to using the abort button on a LabVIEW VI. It is a bit like stopping your car by running it into a concrete wall: it works very quickly and perfectly if your only concern is to stop as fast as possible, but the casualties "might" be significant. LabVIEW does a lot of housekeeping when loading VIs and, as a well-behaved citizen of the OS it runs on, attempts to release all the memory it has allocated during the course of running. Since a VI typically consists of quite a few memory blocks for its different parts, this quickly amounts to a lot of pointers. Running through all those tables and freeing every single memory block costs time. In addition, if you run in the IDE, a considerable number of framework providers hook the application exit event and do their own release of VI resources before they even let LabVIEW itself start working on the actual memory deallocations. The more toolkits and extensions you have installed, the longer the IDE will take to unload.

     Now, on most modern OSes the OS will actually do cleanup on exit of an application, so strictly speaking it is not really necessary to clean up before exit. But this cleanup is limited to resources that the OS has allocated through normal means at the request of the application. It includes things like memory allocations and OS handles such as files, network sockets, and synchronization objects such as events and queues. It works fairly well and seems almost instantaneous, but only because much of the work is done in the background. Windows won't maintain a list of every memory block allocated by an application; it manages memory in pages that get allocated to the process. So releasing that memory is not like walking a list of thousands of pointers and deallocating them one by one: it simply changes a few bytes in the page allocation manager, and the memory is freed in chunks of 4 KB or even bigger. Collecting all the handles that the OS has created on behalf of the application is a more involved process and takes time, but it can be done in a background process, so the application appears terminated even though its resources aren't fully reclaimed right away. That is, for instance, why a network socket usually isn't immediately available for reopening after it was closed implicitly.

     The problem is that relying on the OS to clean up everything is a very unreliable way of going about the matter. There are differences between OS versions in which resources get properly reclaimed after process termination, and even bigger differences between OS platforms. Most modern desktop OSes do a pretty good job of it; RT systems do very little in that respect. On the other hand, it is not common to start and stop RT control tasks frequently (except during development), so that might not be too bad a situation either. Simply deallocating everything properly before exiting is the most secure mode of operation. If they decided to "optimize" application shutdown by deallocating only the resources known to cause problems, I'm sure a handful of developers would get tied up writing test cases for the different OSes and adding unit tests to the daily test builds to verify that the assumptions about what to deallocate, and what not, still hold on all supported OSes and versions.

     It might also be a very strong reason to immediately scrap support for any OS version older than two years, in order to keep the possible permutations for the unit tests manageable.

     And that trimming the working set has a negative impact on process termination time is quite logical in most cases. It really only helps if a lot of memory blocks (not necessarily MBs) were allocated previously and freed later on. The trimming will release any memory pages the application no longer uses back to the OS and page out all but the most frequently accessed of the rest to the page file. Since the memory blocks allocated for all the VIs are still valid, trimming cannot free the pages they are located in and will therefore page them out. Only when the VIs are released (unloaded) are those blocks freed, but in order for the OS to free them it has to access them, which triggers the paging handler to map those blocks back into memory. So trimming the working set has potentially returned some huge memory blocks to the OS that had been used for the analysis part of the application but were then freed by LabVIEW, and that LabVIEW would simply have reclaimed when needed again. But it also paged out all the memory blocks where the VI structures for the large VI hierarchy are stored, and when LabVIEW then goes and unloads that VI hierarchy, it triggers the virtual memory manager many times while freeing all the associated memory. And the virtual memory manager is a VERY slow beast in comparison to most other things on the computer, since it needs to interrupt the entire OS for the duration of its operation so as not to corrupt the OS's memory management tables. (A sketch of how such a working-set trim is invoked on Windows follows below.)
    1 point
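For reference, the working-set trimming discussed above is, on Windows, typically a single API call. Here is a minimal Python/ctypes sketch of the usual SetProcessWorkingSetSize idiom; this is my illustration of the documented Win32 call, not LabVIEW's internal code.

```python
# Illustrative only: trimming the current process working set on Windows.
# Passing (SIZE_T)-1 for both sizes asks the OS to page out everything it can.
import ctypes

kernel32 = ctypes.windll.kernel32
handle = kernel32.GetCurrentProcess()   # pseudo-handle, no CloseHandle needed
ok = kernel32.SetProcessWorkingSetSize(
    handle,
    ctypes.c_size_t(-1),                # minimum size = -1
    ctypes.c_size_t(-1))                # maximum size = -1  -> trim the set
if not ok:
    raise ctypes.WinError()             # surface GetLastError() on failure
```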
  5. This is by design. In LabVIEW, child classes *never* extend the private data of the parent class; they have their own private data. Parents can make their private data available to children via accessors, but there is no way for a child class to change the set of data types the parent class contains in its private cluster. If you want the child classes to be by-ref also, the child class should have its own DVR refnum in its private cluster to maintain the child class's data. FWIW, I almost never build classes with built-in by-ref behavior. I've found I have much more flexibility if the classes are by-value; then, if I need to interact with one in a by-ref way, I let the application code put the object in a DVR and pass that around. (A loose textual transliteration of that by-value-plus-DVR pattern follows below.)
    1 point
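Purely as an analogy for that last pattern (by-value class, reference added by the application), here is a loose Python transliteration. The class names, the lock-based "Ref" stand-in for a DVR, and the modify/read API are all invented for the sketch and have no LabVIEW counterpart beyond the analogy.

```python
# Loose analogy only: a "by-value" object wrapped in a reference by the
# application, mimicking putting a LabVIEW object into a DVR.
import threading
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Motor:                     # by-value: immutable, changes produce copies
    speed: float = 0.0

class Ref:                       # stand-in for a Data Value Reference
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def modify(self, fn):        # like an In Place Element structure on a DVR
        with self._lock:
            self._value = fn(self._value)

    def read(self):
        with self._lock:
            return self._value

ref = Ref(Motor())                             # application adds the reference
ref.modify(lambda m: replace(m, speed=10.0))   # atomic read-modify-write
print(ref.read().speed)                        # -> 10.0
```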
  6. I know you didn't want to say that, but it sounds as if cars didn't produce CO2 back when they were still heavily polluting the environment with sulfur and many other things. And I agree that any alternative will also come with its own problems. For me the real problem is not the particular form of energy we use to do what we do, but the sheer amount we use. No matter how you obtain that energy, it will sooner or later have some impact on our environment. There can be variations in how dangerous or polluting an energy source is, but I do not believe there is any energy source that will have no negative impact on us if consumed in the amounts we currently do. And then consider that almost half of the human population lives in so-called emerging economies that seem to be striving for the same energy consumption that we have!
    1 point
