
Leaderboard

Popular Content

Showing content with the highest reputation on 08/14/2013 in all areas

  1. Basically this whole discussion of the perceived differences between LV2-style globals, FGVs, Action Engines, and IGVs (Intelligent Global Variables) is a bit academic. The LV2-style global was the first incarnation of this pattern, and in the beginning it indeed mostly had just get/set accessor methods. However, smart minds soon found the possibility to encapsulate additional methods into the LV2-style global, without even bothering to find a new name for it. Over the more than 25 years of LabVIEW use, new terms have arisen, often more to just have a new term than to describe a fundamentally different design pattern. As such, these names are in practice quite interchangeable, and different people will tend to use different terms for exactly the same thing.

Especially the distinction between FGV/IGV and AE feels a bit artificial to me. The claimed advantage of AEs having no race conditions rests simply on the discipline of the programmer, both the implementer and the user. There is no official document stating "AEs shall not have any possibility to create race conditions", and such a rule would be impractical, as it would for instance mean disallowing any set- and get-like methods altogether; otherwise race conditions can still be produced by a lazy user who prefers to implement his data-modifying algorithm around the AE rather than move it into a new method inside it. I would agree that "LV2-style global" is a bit of an old name and usually means the get/set variant, but it does not and never has excluded the possibility of adding additional methods to make it smarter. For the rest, FGV, IGV, AE and whatever else has come up are often used interchangeably by different people, and I see no good cause to force an artificial difference between them.

Daklu wrote: Well, it is true there is a limit to the connector pane, and one rule of thumb I use is that if the FGV/AE requires more than the 12-terminal connector pane (that includes the obligatory error clusters and the method selector), it has become too unwieldy and the design needs to be reviewed. I realize that many will say "oh, that's additional work to refactor such an FGV/AE when this happens", and yes, it is work, sometimes quite a bit in fact, but it will also inevitably result in refactoring parts of the project that have themselves become unwieldy. With OOP you can keep adding more and more methods and data to an object until even its creator can't really comprehend it logically anymore, and it still "works". The FGV has a natural limit, which I don't tend to hit anymore nowadays, even though my overall applications haven't gotten simpler.

Michael Aivaliotis wrote: You bet I do! I haven't dug into LVOOP yet, despite knowing some C++ and quite a bit of Java/C#.

Daklu wrote: I think it has a lot to do with how your brain is wired. AEs and LVOOP are trying to do similar things in completely contrary ways. I would agree that AEs are not a good solution if you know LVOOP well, but I started with FGVs/AEs loooooooong before LVOOP was even a topic anyone would have thought about. In that process I went down several paths that turned out to be dead ends, refining my process of creating AEs, including defining self-imposed rules to keep it all manageable for my limited brain capacity. They work amazingly well for me and have often allowed me to redefine the functionality of existing applications by simply extending some AEs.

This allowed me to keep the modifications localized to a single component and its support functions, rather than having to sprinkle changes throughout the application. The relatively small adaptations to the interface were easily taken care of, since LabVIEW's strict datatype paradigm normally pointed out the problematic spots right away. And yes, I'm a proponent of making sure that the LabVIEW VIs that make use of a modified component break in some way, so one is forced to review those places at least once to see whether there is a potential problem with the new addition. A proper OOP design would of course not need that, since the object interface is well designed from the start and will never introduce incompatibilities with existing code when it gets extended. But while that is the theory, I found that in OOP I sometimes extend things only to discover that certain code using the object suddenly breaks in very subtle and sometimes hard-to-find ways, whereas if I had been forced to review all callers at the time I added the extension, I would have been much more likely to identify the potential problem.

Programming AEs is a fundamentally different (and I certainly won't claim superior) paradigm to LVOOP. I'm aware that it is much less formalized and requires quite some self-discipline to use properly, but many of my applications over the years could not have been implemented in a performant way without them. And as mentioned, a lot of them date from before the time when LVOOP would even have been an option. Should I change to LVOOP? Maybe, but that would require quite a learning curve and, perhaps more importantly, relearning quite a few things that work very well with AEs but would be quite a problem with LVOOP.

I tend to see it like this: just as with graphical versus textual programming, some brains have a tendency towards one or the other, partly because of previous experience, partly because of training. I trained my brain over about 20 years of programming AEs. Being able to program in LVOOP the same functionality I implement nowadays in an AE would take me quite a bit more than a few weeks, and I would still have to do a lot of LVOOP before finding out what to do and what to avoid. Maybe one of the problems is that my first look at LVOOP turned out to be a very frustrating experience. For some reason I can fairly easily accept that LabVIEW crashes on me because of errors I made in an external C component, but I get very upset if it crashes because I did some seemingly normal edit operation in the project window or such.
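For readers coming from text languages, here is a rough analogy of the Action Engine idea, sketched in Python since LabVIEW code is graphical. Nothing below is from the post itself; the names (Action, action_engine) are invented for illustration, and the lock merely stands in for the non-reentrancy of the LabVIEW VI. It tries to show the race-condition point made above: a bare Get/Set pair invites read-modify-write races around the engine, while moving the whole operation inside the engine as its own action keeps it atomic.

    # Rough Python stand-in for an Action Engine: one entry point, private
    # state, and an action selector. (In LabVIEW the state lives in an
    # uninitialized shift register and the "lock" is the VI's non-reentrancy.)
    import threading
    from enum import Enum

    class Action(Enum):
        GET = 0
        SET = 1
        INCREMENT = 2      # a "smarter" method that lives inside the engine

    _state = 0
    _lock = threading.Lock()

    def action_engine(action, value=None):
        """Single entry point, analogous to the AE's one connector pane."""
        global _state
        with _lock:
            if action is Action.SET:
                _state = value
            elif action is Action.INCREMENT:
                _state += 1    # atomic: no read-modify-write outside the engine
            return _state      # GET (and the others) return the current state

    # Racy pattern a plain Get/Set global invites:
    #   x = action_engine(Action.GET); action_engine(Action.SET, x + 1)
    # Race-free pattern the AE encourages:
    #   action_engine(Action.INCREMENT)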
    2 points
  2. This is something I experienced some time ago on a project that really put me through the wringer. We worked with NI R&D for 6 weeks at an elevated support level to finally get to the bottom of the issue. The symptoms were that my front panels (both in code and in the exe) would become 'detached' from the block diagram. In other words, the GUI would appear locked while the code behind it was actually executing just fine. CPU was very low (3-5%) on all cores. For a long-running monitoring application it was especially problematic, as the user wouldn't interact with the GUI for long periods of time; the information on the display would simply quit updating, and it would be difficult to detect the condition. The solution ended up being something out of left field: the Windows Aero theme, which is the default for Win 7. R&D finally stumbled across this and was able to turn the issue on and off by changing the theme away from Aero. I gave a presentation at NIWeek just recently, and it seems there were 3 people in the audience experiencing this same or a similar issue, so I thought I would post this and maybe help some others. It seems all LabVIEW versions including 2012 are affected. I was developing in 2010 at the time, but R&D tried many versions to see if it was version-specific.
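As a hedged side note (not part of the original post, and not the fix itself): if you want to detect whether a machine is running with Aero composition active, the documented DwmIsCompositionEnabled call in dwmapi.dll can tell you, here invoked from Python via ctypes. The actual workaround described above remains switching the Windows theme away from Aero.

    # Query whether Aero/DWM desktop composition is enabled (Windows only).
    import ctypes
    from ctypes import wintypes

    dwmapi = ctypes.WinDLL("dwmapi")
    enabled = wintypes.BOOL()
    # HRESULT DwmIsCompositionEnabled(BOOL *pfEnabled);
    hr = dwmapi.DwmIsCompositionEnabled(ctypes.byref(enabled))
    if hr == 0:   # S_OK
        print("Aero/DWM composition enabled:", bool(enabled.value))
    else:
        print("DwmIsCompositionEnabled failed, HRESULT 0x%08X" % (hr & 0xFFFFFFFF))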
    1 point
  3. What you're talking about is called "chunk size". Notice that when you enable loop parallelism, you get a radio button that defaults to "Automatically Partition Iterations". You can however change it to "Specify partitioning with chunk size (C) terminal". If you specify the partitioning, you get to specify how many iterations are in each chunk. The compiler will then break the iterations up into chunks. Each "thread" (sometimes that means a processor, sometimes not) will execute its chunk, then grab a new chunk from the pile waiting to be processed, and repeat.

Obviously there is some overhead to this. That means you may not want to set the chunk size to 1, which would result in ultra-parallel execution but would get hit far more by the overhead. You also don't want to set the chunk size to the total number of iterations, because then one thread gets assigned all the work and the others just sit there waiting! By default LabVIEW uses large chunks at first, then smaller chunks at the end. This tries to minimize overhead at the beginning of the process and to minimize idle processors at the end. Honestly, 90% of the time it is probably best to leave it like this.

Here's an example to illustrate why chunk size matters. Let's say you have 2 processors (meaning P=2) and 10 iterations to perform, and that the default chunking algorithm gives each processor a first chunk of 3 iterations, with the remaining 4 iterations split into two chunks of 2.

First, assume each iteration takes exactly 100ms to execute. Each processor first gets 300ms of work to do, then goes back and picks up 200ms more, so the total time to execute is 500ms. Easy.

Now let's say each iteration takes exactly 100ms to execute, except for one randomly determined iteration, which takes 5000ms. Assume that long iteration lands in one of the first chunks (the ones that contain 3 iterations each). When the processors execute their chunks, processor 1 has 300ms of work and processor 2 has 5200ms of work to do. While processor 2 is chugging through its work, processor 1 finishes its first chunk and asks for another; it gets 200ms more work. After that 200ms, processor 2 is still chugging away (we're only 500ms from when we started), so processor 1 is assigned the last chunk, does its 200ms of work, and then sits idle. Finally, after 5200ms, processor 2 finishes and our loop stops executing.

So what if we specify a chunk size of 1? Best-case scenario: processor 1 or 2 gets the long iteration in the first chunk it receives. That processor works on it while the other processor chugs through all the other chunks, so one processor does 5000ms of work while the other does 900ms. They execute at the same time, and the loop takes a total of 5000ms. If we get unlucky and the long iteration ends up in one of the last two chunks, then processors 1 and 2 each do 400ms of work before one of them hits the long iteration, so the total execution time is 5400ms. Obviously there are a few other ways this can play out, depending on which iteration is the long one and which chunk it falls into. Just some food for thought.
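To make the arithmetic above easy to replay, here is a small greedy-scheduler simulation in Python. It is not LabVIEW's actual partitioning algorithm, just the "grab the next chunk when you go idle" idea, with the chunk sizes and iteration times assumed from the walkthrough.

    import heapq

    def run_chunks(chunk_durations_ms, n_workers=2):
        """Wall time when idle workers greedily grab the next chunk in order."""
        free_at = [0] * n_workers            # time each worker becomes free
        heapq.heapify(free_at)
        for duration in chunk_durations_ms:
            start = heapq.heappop(free_at)   # earliest-free worker takes it
            heapq.heappush(free_at, start + duration)
        return max(free_at)

    iters = [5000] + [100] * 9               # one long iteration, nine short ones

    # Default-style chunking: two chunks of 3, then two chunks of 2
    default_chunks = [sum(iters[0:3]), sum(iters[3:6]), sum(iters[6:8]), sum(iters[8:10])]
    print(run_chunks(default_chunks))        # 5200 ms, as in the walkthrough

    # Chunk size 1, best case: the long iteration is handed out first
    print(run_chunks(iters))                 # 5000 ms

    # Chunk size 1, worst case: the long iteration is handed out ninth
    print(run_chunks([100] * 8 + [5000, 100]))   # 5400 ms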
    1 point
  4. Good luck with it. Xmodem_VISA.zip
    1 point
  5. OK, so I worked out what I was doing wrong and no longer need the intermediate DLL. So the C source code isn't needed any more (in fact I had lost it)... not that the debate it triggered wasn't interesting. Turns out I had the Call Library Function Node set to use the 'C' calling convention when it should have been set to 'stdcall'. Still the same link, but the content there is now updated.
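To illustrate why that setting matters (a hedged aside, not related to the poster's specific DLL): the same distinction shows up when calling DLLs from Python's ctypes, where CDLL assumes the C (cdecl) convention and WinDLL assumes stdcall, much like the Call Library Function Node's calling-convention dropdown.

    import ctypes

    # kernel32 exports stdcall functions, so WinDLL (stdcall) is the right loader.
    kernel32 = ctypes.WinDLL("kernel32")
    print(kernel32.GetTickCount())     # milliseconds since Windows booted

    # msvcrt exports cdecl functions, so CDLL (cdecl) is the right loader.
    msvcrt = ctypes.CDLL("msvcrt")
    print(msvcrt.abs(-42))             # 42

    # On 32-bit Windows, mixing these up corrupts the stack -- the same class
    # of failure as picking the wrong convention in a Call Library Function
    # Node. (64-bit Windows has a single calling convention, so the setting
    # is moot there.)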
    1 point
  6. Each class in the hierarchy is free to reinterpret the flag sets differently, and each class may (and frequently does) shift the meanings around between LabVIEW versions as we add new features. The flag set is really convenient, so an older, not-used-so-much feature may get bumped out of the flag set and moved to other forms of tracking if a new feature comes along that wants to use the setting (especially since much of the flag set is saved with the VI, and a feature that is a temporary setting in memory may have used the flags simply because they were easy and available on that particular class of the hierarchy). Tracking what any given flag does is tricky. Even the three flags that haven't changed in any class for the last five versions of LV have very different meanings depending upon the class using the flag. For example, flag 1 on a DDO specifies control vs. indicator, but on a cosmetic it turns off mouse tracking. On structure nodes it indicates "worth analyzing for constant folding", and on the self-reference node it indicates that dynamic dispatching isn't propagated from the dyn disp input to the dyn disp output. It's pretty much pot luck. You can use Heap Peek to get some translation.
    1 point
