Everything posted by Aristos Queue
-
Secret features: ethics of developing them
Aristos Queue replied to Aristos Queue's topic in VI Scripting
QUOTE (BobHamburger @ Apr 5 2009, 07:12 PM) If you exclude them, the answer is still yes. It's just not as easy. But it can be done. -
QUOTE (flarn2006 @ Apr 4 2009, 03:56 PM) Ok... this should be fun. I'm going to spawn a new thread to respond to this challenge so that this thread can return to a discussion of typedefs, if there's anything further to discuss. The new thread is here: http://forums.lavag.org/Secret-features-et...hem-t13742.html
-
QUOTE (flarn2006 @ Apr 4 2009, 03:56 PM) I accept your challenge. I believe that unrevealed features serve more benefit to users than harm, and I suggest that software that does not have such features is likely not innovating sufficiently.

First, let us differentiate "unrevealed features" from both "undocumented features" and "undocumented behavior". Undocumented features are publicly exposed features, such as items in the palettes or menu items, that are undocumented. I am not defending developers who fail to communicate with their tech writers, or publishing errors that result in documentation being dropped. It is poor practice to leave a feature hanging out there with no reference for the user who might invoke it. Similarly, undocumented behavior is behavior of public features in edge and corner cases for which no help is available. Common instances of undocumented behavior include the value of an output if the function returns an error, the behavior of a menu item when the application is in a non-standard mode, or differences in behavior on non-dominant operating systems. These are sometimes hard to track, but nonetheless, they should be documented.

No, when I speak of unrevealed features, I mean things that you need either a special config token or a special license key to access, features that require an input file that can only be generated by hand-editing the file, or features that require a long series of keystrokes to activate (Up, Up, Down, Down, Left, Right, Left, Right, B, A, Select, Start). Some of these features are what we call "easter eggs" -- little surprises that have no effect on how you use the software but are fun to discover, such as "SuperFunkyPrivateSpecialSecretForumStuff=true" in LV 8.5 only. Those are fun, but are not true unrevealed features. No, what I'm talking about is truly useful functionality of software.
In LabVIEW, the first item that everyone will list is VI Scripting, the ability to make calls into the LV editor that let you make programmatic modifications to a VI. But there are smaller features: custom wires for classes that go way beyond what the dialog will let you edit; plug-ins to the Project Window; password access to certain block diagrams. These are the sorts of features that I am suggesting it is right, well and good* for software to include.

I believe, first of all, that unrevealed features can add to the usability and stability of software. To say that I have made a feature unusable by users for the sake of making the software more usable is not a contradiction in terms. There are plenty of tricks, backdoors, hacks, etc., that can cause myriad problems. For example, LV has an event structure. With this structure, you can catch events fired by the UI. There are private events that are not listed in the structure's configuration dialog. Why? Because some of those events look useful, but if you don't use them right (put too long a process in those event handlers, or use them in parallel with some specific other process), you end up hanging all of LV and having to kill it and restart. Should a regular user see such events listed in the config dialog? I argue no. They are system-specific events that are only needed on the most rare of occasions. By exposing them privately, LV is able to write more of its functionality in G code, which means that higher-level functions can be easily exposed as subVIs for users' general consumption. If customers are having problems and come to us absolutely unable to solve a problem without a particular event, we may give that particular customer a customized handler for that event. But there is no point in having regular users think they are using something useful, having it hang on them, and then having them believe that LV is somehow inherently unstable. Even burying such things in an "Advanced" pull-right is no help.
19 years of programming has taught me that users will open every menu, will click on every button, and then blame the developer when things go awry. Telling a user "Sorry, you're not advanced" seems to just upset them. It is far better for the collective user experience to have those users who actually are advanced go hacking through files, trolling through forums, and wheedling insiders for secrets than to put generally useless junk in the interface, even if that junk is extremely useful in the .01% of cases.

Unrevealed features also foster innovation. When a feature is not complete, should LV not ship it until it is finished? Or is it better to ship it behind a config token and then call a few key users up and say, "We think this works, would you give it a try and let us know?" We've gotten a lot of feedback over the years by doing this. Some of you who are alliance members or LV Champions may be reading this and thinking, "What? I've never been asked to test secret features, and that's part of my role as alliance/champion!" Don't get mad -- most of the time it is an individual developer who drops such half-finished features in so they can be used completely within National Instruments. You're not being left out of any secret beta. And, to be perfectly honest, most of the time you guys find such features anyway.

It is a hard balance to strike -- users want stable versions of LabVIEW, but they also want development of new features, and they want some way of knowing which features are safe and which are not. On the flip side, NI doesn't want to be maintaining multiple versions of LV simultaneously if we can possibly avoid it -- that's very expensive. So for certain features, this "hide it and take the feedback of those who find it" approach is extremely valuable.
If you had to enter a config token by hand, or if a certain node only worked when you dropped it at least 16,000 pixels away from the origin, or if you had to adjust your contrast way, way down to even see something in the palettes**, you can be pretty sure that it is not considered to be on the "safe and trusted" feature list.

Finally, unrevealed features show good use of developer time. There are certain operations in LabVIEW that you can trigger from the menus that are complex sequences of operations, and there may not be any way to trigger one of those substeps by itself, or to stop the process midway through. If we took the time to put a nice clean UI on every bit of LV functionality, we'd never ship anything. The rendering system of LabVIEW may use the OpenGL library under the hood, but that doesn't mean we take the time to wrap every OpenGL feature and expose it as a LV VI. The LV Project may be able to register with the OS to get updates when a file changes on disk, but that doesn't mean we take the time to provide a way for a user to register interest in files of theirs. It might be done eventually, or maybe not. Developers spend time on the features they think will be most valuable to users, and along the way they may include any number of smaller features that users have no access to. They may put all sorts of notes in the save format of files to support those features, bits that are not intended for users to twiddle. Providing twiddling access means supporting error conditions, means documentation, means tech support, means time lost to other projects. Sometimes it is better to just leave the feature hidden.

In summary, I believe that unrevealed features support the values of usability, stability, innovation, and accelerated development of more valuable features.
I would be very interested to hear flarn2006's position on why such features are some combination of wrong, bad or evil*, and what value proposition he/she feels is gained by deliberately minimizing the existence of such features.

* Right/wrong: is this thing acceptable by some observer's subjective standard; well/bad: is this thing acceptable by its own internal standards; good/evil: is this thing acceptable by some objective standard agreed to by the debating parties.

** To the best of my knowledge, there are no easter eggs hidden at 16,000 pixels from the origin or in nearly the same color as the palette background. These are just ways I've thought of over the years to hide easter eggs.
-
QUOTE (shoneill @ Apr 4 2009, 01:32 PM) To the best of my knowledge, no.
-
who can resolve this?
Aristos Queue replied to weng2008's topic in Application Design & Architecture
Here you go: Functional Global. When in doubt, check the wiki... it's probably in there. :-) -
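As background for readers unfamiliar with the pattern linked above: a LabVIEW functional global stores state in an uninitialized shift register of a non-reentrant VI, and callers pass an "action" input to get or set it. Here is a rough cross-language analogy (not LabVIEW code; all names here are hypothetical) -- a Python closure over shared state, serialized through a lock the way a non-reentrant VI serializes its callers:

```python
# Hypothetical sketch of the functional-global pattern in Python.
# One shared copy of the state; access goes through a single function
# that dispatches on an "action" argument, mirroring the get/set cases
# of a LabVIEW functional global variable.

import threading

def make_functional_global(initial=0):
    state = {"value": initial}
    lock = threading.Lock()  # a non-reentrant VI serializes callers; the lock emulates that

    def fgv(action, value=None):
        with lock:
            if action == "set":
                state["value"] = value
            elif action != "get":
                raise ValueError(f"unknown action: {action}")
            return state["value"]

    return fgv

counter = make_functional_global()
counter("set", 42)
print(counter("get"))  # 42
```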
A) No, the wire pattern is not stored with the typedef.
B) It is hardcoded into LV that if a typedef has the particular name "Matrix.ctl", then draw an alternate pattern for the wire. I'm pretty sure you can do it by naming any typedef "Matrix.ctl".
C) There is no feature in LV for establishing a custom wire pattern for control VIs, so don't bother hunting for it. If you want custom wire patterns, LV classes are your answer.
-
shift register in sequence structure
Aristos Queue replied to psychomanu's topic in Development Environment (IDE)
This suggestion has been explicitly rejected by LV R&D multiple times over the years because it would contribute to the ease-of-use of stacked sequence structures, thereby encouraging their use. QUOTE No -- the reason stacked sequence structures exist is that we didn't have flat sequence structures at the dawn of LV and taking them away now would upset people. -
QUOTE (shoneill @ Apr 1 2009, 08:33 AM) It would not be unreasonable to assume that we might be working on something like that.
-
QUOTE (Aristos Queue @ Mar 29 2009, 04:42 PM) I would just like to say at this point that I could not resist the temptation to set this up as an April Fool's joke. My apologies to those who spent an evening right-clicking. Since the original poster had a hex editor, I just assumed he'd set the count to 9875 and see what happens. For the record, I don't know why LV would be counting mouse clicks, but my guess is that it is a common strategy for marking objects that have been visited -- instead of using a flag that you have to clear after every traversal, use an integer and just increment it when you visit for a given traversal, so you can tell whether you've visited on this traversal or not. Just a guess.
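The visit-marking strategy guessed at above is a well-known graph-traversal trick, and it can be sketched briefly (this is an illustrative Python sketch, not LabVIEW's actual implementation): store an integer stamp per node plus a per-graph traversal counter. A node counts as "visited on this traversal" only if its stamp equals the current counter, so starting a fresh traversal is a single increment instead of a pass that clears every flag.

```python
# Sketch of visit-marking with a generation counter instead of boolean
# flags. No clearing pass is needed between traversals.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.visit_stamp = 0  # 0 means "never visited"

class Graph:
    def __init__(self, nodes):
        self.nodes = nodes
        self.traversal_count = 0  # incremented once per traversal

    def traverse(self, start):
        self.traversal_count += 1  # replaces clearing every node's flag
        order, stack = [], [start]
        while stack:
            node = stack.pop()
            if node.visit_stamp == self.traversal_count:
                continue  # already seen on *this* traversal
            node.visit_stamp = self.traversal_count
            order.append(node.name)
            stack.extend(reversed(node.neighbors))
        return order

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors = [b, c]
b.neighbors = [c]
g = Graph([a, b, c])
print(g.traverse(a))  # ['a', 'b', 'c']
print(g.traverse(a))  # runs again immediately, no reset step needed
```

Note the trade-off: the stamp costs an integer per node instead of a bit, but repeated traversals become O(1) to restart, which fits the guess above about why a click count might keep climbing.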
-
Disconnect all typedefs in a CTL
Aristos Queue replied to Vladimir Drzik's topic in Database and File IO
QUOTE (Vladimir Drzik @ Apr 1 2009, 12:53 AM) Lots of documentation... it's one of our "six major reasons why you should use LV classes" that I presented at NI Week 2008. I linked to a bunch of stuff in this forum post: http://forums.lavag.org/LabVIEW-Class-Data-Preservation-Details-t11259.html -
It would be reasonable to assume that LV R&D might be working on something like that.
-
View cluster FP controls & BD constants as icons
Aristos Queue replied to mje's topic in LabVIEW General
QUOTE (neBulus @ Apr 1 2009, 05:28 AM) But if you change that constant into a hidden control, it is not a constant any more. And LV can't assume that it is a constant even when it isn't on the conpane because LV has no way of knowing that you won't use VI server to set its value or to unhide it or something like that. -
Using a Shared Library (VxWorks compiled C code)
Aristos Queue replied to fillalph's topic in Calling External Code
QUOTE You say "they are set as follows." Does this mean that LV filled in those values for you or you configured them this way? I ask because that configuration is obviously wrong. The data type for your return type and both args should be double, not integer. -
QUOTE (MJE @ Mar 31 2009, 10:00 AM) Actually, what's unclear is the meaning of "typedef". :-( Yes, this is intended behavior. Any default value you set in a typedef control applies *only* to the control. The type definition is not defining a type of data; it is defining a type of a control. The data type -- for the purposes of determining default value at runtime, as, for example, the value of an output tunnel of a For Loop that executes zero times -- is the type without the typedef. The typedef is really only meaningful when talking about how it displays on the front panel or, if the data type underlying the typedef changes, for the purposes of block diagram constants.
-
QUOTE (rgodwin @ Mar 30 2009, 03:38 PM) Thank you. I looked up 112627. It is on someone's list to repair -- just not my list. ;-) Usually I know about bugs with LVClass features, but I (according to the logs) handed it off to someone else to fix and (apparently) forgot about it. It is on the priority list to be fixed.
-
QUOTE (flarn2006 @ Mar 29 2009, 04:23 PM) You might be surprised at the easter eggs you discover somewhere around the 9,876th click (in a single session of LabVIEW).
-
QUOTE (Matthew Zaleski @ Mar 27 2009, 02:43 PM) Good idea. We agree, and that's why NI has an entire department dedicated to nothing but this. :-)
-
QUOTE (Matthew Zaleski @ Mar 24 2009, 09:30 AM) Partially, yes. Enough so that when you hit the run button, we can do the last bits right then and then run it. Try this... have a big hierarchy of VIs, 100 or more. Then hit ctrl+shift+left click on the Run arrow. This is a backdoor trick to force LV to recompile every user VI in memory. You'll spend a noticeable amount of time. That gives you some idea of just how much compilation is going on behind the scenes while you're working.

From a CS standpoint, LabVIEW has two cool aspects: 1) dataflow, 2) graphical. The dataflow we talk about a lot -- magic parallelism and automatic memory allocation without a garbage collector. But it's the graphical that gives us a real leg up in compilation time. We have no parser. Our graphics tree is our parse tree. That's 1/3 of the compile time of C++ sliced off right there. The other two parts are code gen and linking. Codegen we do pretty quickly from our parse tree. Linking is taken care of when we load the subVIs. When you recompile a VI after an edit (something that is usually only done when you Save or when you hit the Run arrow), it is compiled by itself, and the only linking that needs to be done is to copy the call proc addresses of the subVI into the caller VI. Optimizations across VIs are only applied when going through AppBuilder or to one of the non-RT targets, such as FPGA or PDA.

QUOTE The debugger's visualizations of wire flow and retaining values is not something I'd expect from fully compiled code.

If you go to the VI Properties dialog, in the Execution tab, there's a checkbox for "Allow Debugging". A VI will run faster if you turn that off because we will actually compile in less code. Our debug hooks are compiled into the VI.
Notice that the "after probes" feature is an option that you have to turn on on a block diagram, not something that is available by default, since we have to recompile the VI to remove all of our memory optimizations in order to be able to preserve the value of every wire.

QUOTE Out of curiosity, are you treating the VI akin to a .c file with a pile of functions (1 function per chunk) or are these chunks handled in a more raw form (pointer arrays to code snippets)?

(I'm going to gloss a few details here, but the structure is generally correct... at least through LV 8.6.) The block diagram is divided into clumps of nodes. The clumping algorithm considers any structure node or any node that can "go to sleep" as a reason to start a new clump (Wait for Notifier, Dequeue Element, Wait Milliseconds). It may also break large functional blocks into separate clumps, though I don't know all the rules for those decisions. Each clump is a chunk that can run completely in parallel with every other clump, so if a node Alpha has two outputs that go to two parallel branches and then come back together at another downstream node Beta, you'll end up with at least four clumps -- the code before and including Alpha, the top branch after Alpha, the bottom branch after Alpha, and Beta and downstream. In your head you can think of these clumps as meganodes. A node runs when its inputs are all available. Same for a clump. The number of inputs to a clump is called the "fire count." Each clump is optimized as tightly as it can be (register spilling, loop unrolling, etc.). The call address for any clump with a fire count of zero is put into an execution queue. When you hit the Run arrow, a bunch of threads each dequeue from that execution queue and start running their clump. When they're finished, the clump has instructions that say "decrement the fire count for these N clumps." Any of those clumps that hits a fire count of zero is put into the execution queue.
The thread then grabs the next clump at the front of the queue. That clump may not be a part of the same VI -- it may not even be in the same VI hierarchy. Whatever clump comes out next gets executed. When the execution queue is empty, the threads go to sleep waiting for more clumps to be enqueued. Clearly, since clumps never know how long it will be between when they finish and when the next clump in line will start running, each clump writes its registers back to memory when it finishes running, at memory addresses where the next clump will know to pick them up. Thus LV tries to build large clumps when possible, and we take advantage of every cache trick for modern CPUs so that the hardware takes care of optimizing the cases of "write to mem then read right back into the same registers" that can occur when successive clumps actually do run back-to-back.

The actual execution engine of LV is pretty small -- small enough that it was reproduced for the LEGO Mindstorms NXT brick. Most of the size of lvrt.dll is not the execution engine but the library of functions for all the hardware, graphics and toolkits that NI supports. All of the above is true through LV 8.6. In the next version of LV this will all be mostly true, but we're making some architecture changes... they will be mildly beneficial to performance in the next LV version... and they open sooooo many intriguing doors...

QUOTE I still feel that the main points of my argument stand (since you can compile Python, Matlab and Java to native code). The value proposition from LabVIEW (since it isn't "free") is enhancing my productivity and allowing me to ignore the gritty details of C++/assembly most of the time.

Oh, most definitely. I just know that many programmers have a nose-in-the-air opinion that "scripting is for kiddies." Those are the folks who say, "Well, it might work for some people, but I write real code, so there's no way it could work for me."
Thus I prefer everyone to be aware that LV really is a compiler, not an interpreter. :ninja:

QUOTE (jdunham @ Mar 26 2009, 05:31 PM) If you get a slow enough machine running, you can see the recompiles as you work. The Run Arrow flashes to a glyph of 1's and 0's while compiling. You can also force recompiling by holding down the ctrl key while pressing run, but it's still too fast on my unexceptional laptop to see the glyph (or maybe they got rid of it). You can also do ctrl-shift-Run to recompile the entire hierarchy, but I still don't see the glyph, even though my mouse turns into an hourglass for a short while.

You only see the glyph while that particular VI is recompiling. These days you'd need either a reaallllly old machine or a HUGE block diagram.
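The clump/fire-count scheduling described in this post can be illustrated with a toy sketch. This is not LabVIEW's implementation -- it is a deliberately simplified Python model with hypothetical names, standing in for compiled code chunks -- but it shows the mechanics: each clump's fire count is the number of upstream clumps it waits on, zero-count clumps go on an execution queue, and worker threads pull whatever clump is next regardless of where it came from.

```python
# Toy model of fire-count dataflow scheduling. Each Clump waits on
# `fire_count` upstream clumps; when a clump finishes, it decrements
# the fire count of its downstream clumps, enqueueing any that hit zero.

import queue
import threading

class Clump:
    def __init__(self, name, work, downstream=()):
        self.name = name
        self.work = work              # stands in for the clump's compiled code
        self.downstream = list(downstream)
        self.fire_count = 0           # filled in from upstream edges below

def run_diagram(clumps, num_threads=4):
    for c in clumps:
        for d in c.downstream:
            d.fire_count += 1         # one "input" per upstream clump
    ready = queue.Queue()
    for c in clumps:
        if c.fire_count == 0:         # zero-fire-count clumps start immediately
            ready.put(c)
    done, lock, remaining = [], threading.Lock(), [len(clumps)]

    def worker():
        while True:
            c = ready.get()
            if c is None:             # sentinel: all clumps finished
                return
            c.work()
            with lock:
                done.append(c.name)
                remaining[0] -= 1
                finished = remaining[0] == 0
                for d in c.downstream:  # "decrement the fire count for these N clumps"
                    d.fire_count -= 1
                    if d.fire_count == 0:
                        ready.put(d)
            if finished:
                for _ in range(num_threads):
                    ready.put(None)   # wake and stop every worker
    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done

# The Alpha/Beta example from the post: Alpha feeds two parallel
# branches that rejoin at Beta, giving four clumps.
noop = lambda: None
beta = Clump("Beta", noop)
top = Clump("Top", noop, [beta])
bottom = Clump("Bottom", noop, [beta])
alpha = Clump("Alpha", noop, [top, bottom])
order = run_diagram([alpha, top, bottom, beta])
print(order[0], order[-1])  # Alpha first, Beta last
```

The two branches may complete in either order between Alpha and Beta, which mirrors the point above: a clump never knows which clump a thread will run next, so results must be written somewhere the downstream clump can find them rather than left in registers.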
-
Dynamic Dispatched VIs staying in run mode
Aristos Queue replied to mje's topic in Object-Oriented Programming
What version of LabVIEW? -
QUOTE (Jim Kring @ Mar 26 2009, 01:01 AM) Jim: In situations where there are several valid choices of behavior, why not make sure that whatever you ship allows all possible behaviors, and then improve access in the future if some of the hard-to-reach behaviors turn out to be in high demand? With LabVIEW's current behavior, if you want to rename all the VIs, you can, and if you want to rename only one, you can. All behaviors accounted for. If LabVIEW renamed everything when you renamed one, you wouldn't be able to rename just a subset. Functionality loss.

Why didn't we just add both mechanisms initially? Consider the issues this feature raises. Adding "rename all" would be additional code to implement the feature, plus new menu items, which I am guessing there would be severe pressure against since it would push up the already mind-boggling complexity of the Save As options dialog (you'd now have two different kinds of SaveAs:Rename, which is, currently, the one easily comprehensible option in Save As). Suppose you chose SaveAs:Rename:RenameAll on a child implementation VI. Do we then rename the child and descendants, or do we rename the ancestor as well? Or do we make more options in the Save As dialog so you have three options for Rename instead of two? What if some portion of the inheritance tree was currently reserved or locked because it was read-only on disk... does the entire Rename operation fail, or do we rename those that can be renamed? Very quickly this turns into a significant feature requiring a significant slice of developer and tech writer time. Most editor improvements do. The path of least code provides all the functionality, albeit without the pleasant UI experience. We go with that unless there is a compelling usability reason to strive for the better UI -- which generally means either we predict the feature will be unusable without the improved UI, or after release we get customer requests for a particular improvement.
I have a list of something close to 100 purely editor UI improvements that we could make specific to LabVOOP (LabVIEW itself has thousands upon thousands). I have around 20 major functionality improvements requested. In general, my team has time to implement between 3 and 5 per release, as the rest of the time is generally given to 1 or 2 functionality improvements (functionality improvements are generally much harder than editor improvements because you have to implement the functionality and then you still have all the UI problems that editor improvements have). When you make requests, please phrase the priority of those requests accordingly. It helps us decide what to work on next.
-
First VI to open in project takes long time
Aristos Queue replied to John Lokanis's topic in Development Environment (IDE)
QUOTE (Ton @ Mar 26 2009, 03:07 AM) Some SCC VIs do load with the project, but not all. Those for status load immediately. Those for checking in/out wait until there's something open that you could edit.