
Leaderboard

Popular Content

Showing content with the highest reputation on 08/28/2009 in all areas

  1. Name: CaseSelect
     Submitter: jcarmody
     Submitted: 08 Aug 2009
     File Updated: 03 Jan 2011
     Category: JKI Right-Click Framework Plugins
     LabVIEW Version: 8.2
     License Type: BSD (Most common)
     Copyright © 2010, Jim Carmody. All rights reserved.
     Author: Jim Carmody (jim@jamescarmody.com)

     CaseSelect is a plugin for the JKI Right-Click Framework for LabVIEW and is distributed as a VIPM package.

     Introduction
     I make State Machines with many, many states; perhaps you do, too. I don't like scrolling through long lists of states and have wanted a scroll bar on the drop-down list. This plug-in launches a new panel with a Tree control (one that has a scroll bar) containing each of the Case names.

     Features
     - Select a case in the CaseSelect window and it comes to the front in your Block Diagram
     - JKI State Machine states are indented in the Tree control
     - The CaseSelect window floats and can be resized
     - Open multiple CaseSelect windows at the same time to work with more than one Case Structure
     - Select a case with the mouse or navigate the Tree using arrow keys
     - Insert and delete states with the Insert/Delete keys or the context menu (Insert suggests a new name based on the section header)
     - Reorder cases with drag and drop
     - Ctrl+drag/drop to duplicate a case (suggests a new name based on the section header)
     - Collapse/expand all tree elements with the context menu
     - Sort cases alphabetically (preserving the section headers of a JKI State Machine)

     New in 2.0.1
     I'm pretty bad at keeping track, but here are a few...
     - Renaming a case triggers a search-and-replace for all instances of the old name in all String Constants
     - Select a String Constant in your VI and double-click a case in CaseSelect; that case name will be appended to the String Constant - build macros quickly

     New in 3.0.0.4 (currently only for LabVIEW 2010)
     I'm still pretty bad at keeping track, but I made a new package.
     - Added navigation buttons to move back and forth through the states you've visited (history) ~ still buggy

     The original discussion of this can be found here. Click here to download this file.
    1 point
  2. That is a very good point. I will endure until the kind, wonderful admins of this great forum find the problem.
    1 point
  3. The key word there is "annoyed": your bosses are paid to put up with you annoying them - the admins and moderators here at LAVA are not. The old adage of "the squeaky wheel gets the most grease" does not apply here - in fact, it may be the opposite.
    1 point
  4. If you're using typedefs for your enums then the rules are pretty much the same: they work well within a project (we can update the uses of that typedef automatically), but they can cause problems for VIs that aren't in memory when the change is made. I still think you're ok if you just add to the end, though. This is tricky in any language, really. An enum is basically a symbolic representation of a numeric value. If you change the definition of that enum such that the symbols map to different values than they did before, then you may break existing code. It's just one of the things you have to keep in mind when writing a reusable API.

     More magic. When debugging is enabled we emit a little extra code between nodes that handles things like probes and breakpoints. We could do what C debuggers do (use the processor's breakpoint mechanism), but that's difficult to do in a cross-platform way. Our current mechanism allows us to do the same thing on all platforms, and even allows us to do remote debugging and probes with the same mechanism. It's just more flexible.

     We try to optimize the callers based on what we can get away with to avoid copies of data, but that means that when things change in the subVI it sometimes affects the callers. For instance, if you have an input and output terminal of the same type and they weren't "inplace" before (meaning the compiler couldn't use the same spot in memory for both) but they are now, then the caller may need to change. Or it could be the opposite (they were inplace, but now they're not). It could also be that an input was modified inside the subVI before but now it's not (or the other way around). If you use dynamic dispatch or call by reference then you're somewhat shielded from these concerns (we can adapt at runtime), but you lose some of those optimizations. You may end up with copies of data that aren't strictly necessary.
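     To make the enum point above concrete, here is a minimal C++ sketch (LabVIEW enums are edited graphically, so this is only an analogy, and the Command enumerators are invented for illustration): inserting a value in the middle silently renumbers everything after it, while appending at the end leaves existing values alone.

```cpp
#include <cstdio>

// Version 1 of the hypothetical API's enum had: Idle = 0, Run = 1, Stop = 2.
// enum class Command { Idle, Run, Stop };

// Version 2 inserts Pause in the middle, so Stop silently becomes 3.
enum class Command { Idle, Run, Pause, Stop };

int main() {
    // A caller compiled against version 1 may still carry the raw value 2,
    // which now maps to Pause instead of Stop.
    Command fromOldCaller = static_cast<Command>(2);
    std::printf("raw value 2 now means enumerator #%d\n",
                static_cast<int>(fromOldCaller));

    // Appending new items at the end keeps existing values stable, which is
    // why "just add to the end" is the safer kind of edit.
    return 0;
}
```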
    1 point
  5. Magic. Actually, the .lvclass file also contains a history of edits you've made so that it can automatically mutate any old data on load. It's mostly transparent to the developer, and it makes mixing versions of classes much easier than in other languages. The only time you'll run into problems is if you have an older version of the class in memory and then later load some VIs that have data from a later version. I'm not sure exactly what happens in that case, but I think it would fail to load the VI.

     There are few if any changes you can make to a typedef that won't break your clients. With typedefs you pretty much have to have all the VIs that use that typedef in memory when you make the change, and then you have to save those VIs after making the change. Classes are far superior for this use case. In fact, that's the main advantage to using classes. It's called encapsulation: hiding the details of the inside of the class so that the clients won't notice when those details change. With enums I think you can add to the end of the enum, but removing or renaming an existing item will break the clients. If you really need to do something that would break a client, then you could introduce a new type and a new VI which takes that type, and then deprecate the old type and the old VI. Rewrite the old VI to convert the old type into the new type and then forward the call to the new VI.

     Aristos explained this decently. We do compile directly to machine code, but if you're making edits then we have to recompile the next time you run. Once we've done that, though, there won't be any delay the next time you run it (unless you make more edits). If you then save those VIs or build an executable, then the next time they're loaded or when the .exe runs there won't be any compiling going on. They'll just run. We do compile and store in memory, but we also save the compiled code in your VI so that we don't have to recompile again the next time you load. The one caveat is that sometimes changing a subVI causes the callers to need to recompile, so you might get a prompt to save a VI you never changed directly. That's because we recompiled that VI to adapt to the changes in its subVI(s), and we want to save that new code so you don't have to recompile again the next time you load.

     As I mentioned before, dynamic dispatch VIs (and call by reference) do extra work at runtime in case the VI you're calling changed inplaceness, so that's a case where you don't need to worry as much about breaking callers. You just have to keep the possible performance impact in mind. Also, we compile directly to machine code and calls to the runtime engine. For simple functions we just compile machine code, but sometimes it's easier and more efficient to compile machine code that calls a function we wrote in C++. That function will be in the runtime engine. Almost all compilers do that, including MSVC and GCC. That's why they also need runtime libraries.
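     The deprecate-and-forward idea described above applies in text-based languages too. Below is a hedged C++ sketch under assumed names (OldConfig, NewConfig, Process, and ProcessV2 are all invented for illustration): the old entry point converts the old type into the new one and forwards the call, so existing callers keep working while new code targets the new API.

```cpp
#include <string>

struct OldConfig { int timeoutSeconds; };                          // original public type
struct NewConfig { int timeoutMilliseconds; std::string label; };  // its replacement

// New entry point that clients should migrate to.
int ProcessV2(const NewConfig& cfg) {
    return cfg.timeoutMilliseconds;   // stand-in for the real work
}

// Old entry point, kept alive so existing callers don't break:
// convert the old type to the new one, then forward.
[[deprecated("use ProcessV2 with NewConfig")]]
int Process(const OldConfig& cfg) {
    NewConfig upgraded{cfg.timeoutSeconds * 1000, "migrated"};
    return ProcessV2(upgraded);
}

int main() {
    OldConfig legacy{5};
    return Process(legacy) == 5000 ? 0 : 1;   // old callers still compile and run
}
```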
    1 point
  6. When you write a C++ program, you write it in some editor (MSVC++, XCode, emacs, Notepad, TextEdit, etc.). Then you compile it. If your tools are really disjoint, you use gcc to compile directly on the command line. If you have an Integrated Development Environment (XCode, MSVC++), you hit a key to explicitly compile. Now, in MSVC++, you can hit F7 to compile and then hit F5 to run, OR you can hit F5 to run, in which case MSVC++ will compile first and then, if the compile is successful, it will run. All of this is apparent to the programmer because the compilation takes a lot of time.

     There's a bit of hand-waving in the following, but I've tried to be accurate... The compilation process can be broken down as:
     - Parsing (analyzing the text of each .cpp file to create a tree of commands from the flat string)
     - Compiling (translating each parse tree into assembly instructions and saving that as a .o file)
     - Linking (taking the assembly instructions from several individual .o files and combining them into a single .exe file, with jump instructions patched with addresses for the various subroutine calls)
     - Optimizing (looking over the entire .exe file and removing parts that were duplicated among the various .o files, among many, many more optimizations)

     LabVIEW is a compiled language, but our programmers never sit and wait 30 minutes between fixing their last wire and seeing their code run. Why do you not see this time sink in LabVIEW?
     - Parsing time = 0. LabVIEW has no text to parse. The tree of graphics is our initial command tree. We keep this tree up to date whenever you modify any aspect of the block diagram. We have to... otherwise you wouldn't have an error list window that was continuously updated; like C++, you'd only get error feedback when you actually tried to run. C# and MSVC# do much the same "always parsed" work that LV does, but they still pay a big parse penalty at load time.
     - Compile time = same as C++, but this is a really fast step in any language, believe it or not. LabVIEW translates the initial command tree into a more optimized tree, iteratively, applying different transforms, until we arrive at assembly instructions.
     - Linking time = not sure how ours compares to C++.
     - Optimizing time = 0 in the development environment. We compile each VI to stand on its own, to be called by any caller VI. We don't optimize across the entire VI Hierarchy in the dev environment. Big optimizations are only done when you build an EXE, because that's when we know the finite set of conditions under which your VIs will be called.
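     For readers less familiar with the C++ side of the comparison, here is a rough, toolchain-dependent illustration of the parse/compile/link/optimize split described above; the file names and g++ commands are just an example, not part of the original post.

```cpp
// add.cpp -- one translation unit, compiled on its own into an object file:
//   g++ -c add.cpp -o add.o          (parse + compile; no linking yet)
int add(int a, int b) { return a + b; }

// main.cpp -- a second translation unit that only *declares* add():
//   g++ -c main.cpp -o main.o
int add(int a, int b);               // declaration; the body lives in add.o
int main() { return add(2, 3); }

// Link step: combine the object files and patch the call to add() with its
// real address:
//   g++ add.o main.o -o app
// Cross-module optimization (e.g. link-time optimization) happens over the
// combined result, which is roughly the "big optimizations at EXE build time"
// step the post describes for LabVIEW.
```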
    1 point
  7. The dimming is showing that the class's private data control has changes which have not been applied. As you are editing the control we don't want to constantly be recompiling and messing with other VIs, because you could be doing multiple things at once, changing your mind, etc. Instead we let you make your changes, and then when you save the class, close the private data control window, or choose File->Apply Changes in the private data control window, we will update all the other VIs that need to be updated. While the class is in this intermediate state it's considered broken, because we don't yet know whether your changes will break other VIs, so it makes no sense to allow you to run them yet. Once you apply the changes we can do a real check to see if anything broke, and if not, those constants stop being dimmed. None of this is JIT, though. This is all compile-time stuff.

     The child class is only broken because the parent class is broken, and the parent is only broken because it is in this intermediate state. As soon as you apply the changes the parent class becomes unbroken, and thus the child class becomes unbroken as well. As long as you end up with a good (non-broken) parent class, nothing you change in the parent's private data will cause a recompile of the child class VIs. They don't even have to be in memory when you make the change, and they won't notice if they come into memory after the fact. The only thing you really need to worry about when editing the parent class is making sure that the child classes still meet all the requirements (i.e., dynamic dispatch VIs have the same connector pane). If you change the connector pane of a dynamic dispatch VI then you definitely have to modify your child classes.
    1 point
  8. JIT refers to compiling right before execution, starting from a partially compiled binary. For instance, .Net code compiles to a bytecode format (not directly to machine code), but instead of using a bytecode interpreter, the runtime compiles the bytecode into machine code optimized for your processor right before you run it. LabVIEW doesn't actually do this kind of JIT compiling. The LabVIEW runtime engine doesn't do any compiling whatsoever, so if you have an interface parent class and a bunch of implementation child classes, and all of those are compiled and saved, then you don't have to worry about the runtime engine trying to recompile them.

     The only thing you might need to worry about is what we call "inplaceness". This is an optimization our compiler uses to allow wires to share the same place in memory, even if they pass through a node or subVI. For instance, if you have an array wire that you connect to a VI that just adds 1 to every array element, then it may be possible (depending on how you wrote it) for that subVI to use the exact same array as its caller without any copy being made. Dynamic dispatching (and call by reference) complicates this a bit, because it could turn out that the specific implementation you end up calling at runtime has different inplaceness requirements than the one you compiled with. We do handle this at runtime if we find a mismatch, so it can add some overhead. I think some people solve this by always using an inplace element structure (even for the empty parent class implementation) for dynamic dispatch methods where you really want a certain input/output to always be inplace. This just prevents the mismatch from occurring.
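     As a loose analogy for inplaceness (this is ordinary C++, not how LabVIEW actually expresses it, and the function and variable names are made up), the difference is essentially whether the callee can reuse the caller's buffer or must work on its own copy:

```cpp
#include <vector>

// "Inplace" style: the callee modifies the caller's array directly; no copy.
void addOneInPlace(std::vector<int>& data) {
    for (int& x : data) x += 1;
}

// "Not inplace" style: the callee gets its own copy and returns it, so the
// caller pays for an extra allocation and copy.
std::vector<int> addOneCopy(std::vector<int> data) {
    for (int& x : data) x += 1;
    return data;
}

int main() {
    std::vector<int> values{1, 2, 3};
    addOneInPlace(values);                        // caller's buffer is reused
    std::vector<int> copied = addOneCopy(values); // an extra copy is made
    return (values[0] == 2 && copied[0] == 3) ? 0 : 1;
}
```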
    1 point
  9. Here is a basic SNMPv1 implementation I wrote years back. This code can be improved a bit, but for basic communications it works well. http://lavag.org/topic/9682-help-for-useing-snmp-in-labview/page__view__findpost__p__65199
    1 point