Everything posted by Aristos Queue

  1. G already has "RUN_TIME_ENGINE" as a conditional disable symbol. That already exists to do most of this task, and the world has not fallen. The problem with that symbol is that it doesn't work with packed project libraries (because those have to be able to run both in the IDE and in the RTE), and it doesn't allow source libraries to be distributed with debugging disabled but available.
If that's what a user wants to use it for, there's no reason they couldn't. Why would you object to someone writing their own code that way? If you have enough access to the VI to toggle enabled to disabled, then you have full access to the block diagram. Go find any Conditional Disable structures where they put broken code, flip them to the other frame, and then use Remove Structure. Putting broken code in the debugging frame isn't even a stumbling block.
There's no impact on the subVIs. They are compiled however they are compiled; the settings of the caller do not affect how subVIs are compiled. This is both LV's blessing and its curse when it comes to optimal compilation: every VI is a silo. If you know of some impact on the subVIs from editing the caller, that's news to me. Now, there could be subVIs that are called in debug mode but not in release mode. Those subVIs would vary in whether they are loaded or not, just like any other compiled-out subVI. Maybe that's the problem you're talking about? If so, OK, but how they compile isn't changed.
G does have RUN_TIME_ENGINE, as I said, but that doesn't go far enough when working with source in the IDE. G is literally the only language I've ever worked in that DOESN'T have the option to compile user code differently for release versus debug, regardless of build target. To me, this is a massive hole to fill, not a maintenance nightmare. Indeed, the ability to optionally compile in debug code helps a lot with code maintenance. It does mean you need to do two nightly builds, one with debug enabled and one with it disabled, just to make sure the code stays useful, but it's a major godsend. I did a quick scan: in the LV C++ code, there are 808 functions that change behavior between debug and release mode. Mass compile is about 20x slower in debug mode because of all the extra checking, but it is well worth it when trying to assert the correctness of VIs or untangle some sort of memory corruption. Trying to add all that instrumentation in every time a new bug report gets raised would be impossible.
There are a lot of idealists in the world who will tell you, "Oh, you should have been able to put all of that in through mocks and other indirections, keeping with the O principle of S.O.L.I.D. programming," but that O idealism often has a serious impact on performance. ("Well, then you're doing it wrong," say the idealists. Maybe. I support writing code that works with SOLID as much as possible, but there are limits to how complex I'm willing to make code, and when everything hides behind three layers of dynamic programming, I find that the code meant to help with debugging starts needing its own debugger.)
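For anyone who hasn't worked in a textual language, here is a minimal C++ sketch of the release-versus-debug split described in the post above, using the standard NDEBUG convention. The DBG_LOG macro and scale function are illustrative inventions, not LabVIEW internals:

```cpp
#include <cassert>
#include <cstdio>

// Debug-only instrumentation compiles away entirely in release builds.
// NDEBUG is the standard macro defined by release configurations.
#ifdef NDEBUG
  #define DBG_LOG(msg) ((void)0)                           // release: no code emitted
#else
  #define DBG_LOG(msg) std::fprintf(stderr, "[debug] %s\n", (msg))
#endif

double scale(double x)
{
    DBG_LOG("scale() called");   // disappears from the release binary
    assert(x >= 0.0);            // likewise stripped when NDEBUG is set
    return x * 2.0;
}

int main()
{
    std::printf("%f\n", scale(21.0));
    return 0;
}
```

Building once with -DNDEBUG and once without is the textual equivalent of the two nightly builds mentioned above.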
  2. That is *exactly* my initial question. Is there any other option in the VI for which you commonly wish you could change out the code in the block diagram? Hooovahh has put forth a possible use case (pending more details). Are there others?
  3. That doesn't sound like you're editing the VI itself... it sounds like you're creating a test harness around the VI. Maybe what you need is a way to toggle reentrancy whenever debugging is enabled? If you really are editing the VI's code when you disable reentrancy, can you describe the kinds of code that you add to the block diagram?
  4. This is a valid concern, but I think it is already a concern. You can get a LOT of performance boost by turning off debugging today. As a LV user, I honestly don't have an answer for you other than, "When they complain, hopefully someone will tell them." As part of LV R&D, my answer is not much better: "I hope someday we change the default to turn off debugging during a build, including a source distribution made for deployment." But that's my personal call, and others on the team felt we should listen to the customers. A whole lot of people complained about the build times being long: "It doesn't take that long to mass compile VIs when I'm in the IDE, so why does it take so long to build?" Easy: the build is doing more work. There wasn't a lot of wasted work in the app build process... the only way to make build times shorter was to do less optimization. So that's now the default. It really irks me. Not my call. So, I'm not knocking your concern, and it might be a reason not to do the feature. I'll include it in the analysis if I decide to propose this to the team. But the concern already exists, so to me it shouldn't restrain improving this aspect of LabVIEW. But, again, others may disagree. :-)
  5. > and toggles and all the stuff that needs to be in place to push meaningful information to the debugger.
mje: Let me put it to you this way -- how is it any different from changing the inplaceness model or enabling constant folding or any of the thousand other things that LabVIEW does when you toggle "Enable Debugging"? To me, it's just a way for a user (me or you) to add to the set of things that should be skipped in a VI when it is ready for release. As it stands today, we have no way of adding to that set. NI has added many of those over the years. A VI doesn't have separate checkboxes for "enable constant folding" and "enable loop-invariant code motion". Why should user-defined optimizations need a separate switch from the built-in ones? To me, that doesn't make any sense.
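For context, a small C++ illustration (hypothetical code, not LabVIEW's compiler) of the two optimizations named in the post above, which a typical optimizer applies only when debugging isn't pinning the executing code to its source:

```cpp
#include <cmath>
#include <vector>

double sum_scaled(const std::vector<double>& v)
{
    double total = 0.0;
    for (double x : v) {
        // Loop-invariant code motion: std::sqrt(2.0) does not depend on x,
        // so the optimizer hoists it out of the loop entirely.
        // Constant folding: sqrt of a literal is then computed at compile
        // time, so the hoisted value becomes the constant 1.41421356...
        total += x * std::sqrt(2.0);
    }
    return total;
}
```

With debugging enabled, both transforms are typically suppressed so that stepping through the loop still matches the source line by line.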
  6. PS: There are also regions of the code base that I know and own and other regions that I don't. Getting into per-build config settings is one of those areas that goes beyond my typical domain. When the team as a whole makes a feature a priority and asks me to work on it, I generally get resources that help with changes that need to be made in far away sections of the code. When I just want to spend my own time to add to LabVIEW, I'm more limited in the options I have, and I try to scope accordingly (and I try not to let that distort the solutions that I come up with). Per-build symbols would require a much wider coordination than "just me"... that would require that feature becoming a priority for development overall. And it kind of is... LabVIEW NXG is being designed with that option in mind, but there aren't plans to do the same for LabVIEW 20xx.
  7. smithd: I frequently have code in the block diagram to assert X is true or to log information to a file while running. I don't want any of that code in the final product. Right now, it's a major pain to turn it on or off. I toggle debugging on and off all the time. A universal debugging symbol would not be useful for me most of the time. I don't want to turn on debugging for every VI I use in my own code -- at best it is a per-library setting when I need to debug into subsystem X. This crops up even in deployed code situations. For example, there's an existing conditional disable structure in the Actor Framework code. Just because I'm debugging a particular actor doesn't mean I want to debug the AF itself... most of the time, I wouldn't want all the AF stuff showing up in my logs or slowing down while it logged its own stuff. In other words... even if we get a universal setting, or a per-module setting, I think a per-VI setting will still be useful. I don't think those are mutually exclusive ideas. And I know I'm not alone in this request... it's come in to me from many different users over time, so I figured I'd look into the feasibility.
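A rough C++ analogue of the per-library granularity argued for above: each subsystem gets its own debug gate instead of one universal switch. MYACTOR_DEBUG and AF_DEBUG are invented names for illustration:

```cpp
#include <cstdio>

// Each subsystem gets its own symbol, so debugging my actor doesn't
// drag the framework's own logging along with it.
#ifdef MYACTOR_DEBUG
  #define MYACTOR_LOG(m) std::fprintf(stderr, "[my actor] %s\n", (m))
#else
  #define MYACTOR_LOG(m) ((void)0)
#endif

#ifdef AF_DEBUG
  #define AF_LOG(m) std::fprintf(stderr, "[framework] %s\n", (m))
#else
  #define AF_LOG(m) ((void)0)
#endif

void handle_message()
{
    AF_LOG("dispatching message");   // silent unless AF_DEBUG is defined
    MYACTOR_LOG("doing my work");    // gated independently
}

int main() { handle_message(); return 0; }
```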
  8. Hm... well... that's frustrating. I've tried to get as many VIs as possible out from under password lock, and I thought that one was clear. I'd suggest you just save over it, but it looks like it handles quite a bit. I'm guessing this one is still password protected because there are some bits in there that could easily become unstable if used under different conditions. Sorry to get your hopes up.
  9. The TL;DR is in bold. I want to add to the Conditional Disable structure a new built-in symbol that toggles when the VI's "Debug Enabled" setting is toggled. I've wanted this for a long time, but I finally got around to opening up the code to see what's possible. Let me take care of one question right away: I have no idea what version of LabVIEW this might or might not go into. So don't get excited. :-)
Anyway... it turns out that what I'd like to do isn't straightforward. All existing conditional disable symbols are global to the application instance. The three built-in symbols -- bitness, target, and run-time engine -- are universal to the running environment. The symbols that users can define live in the project and target. In other words, if I want to know the value of any symbol, I just need a reference to the application instance, not to a particular VI. And, guess what: all the C++ code is therefore oriented around application refnums, not VI refnums. *sigh*
Now, it turns out that I can see a way to add the debug-enabled flag in a sort of sideways manner, and I would just hardcode that symbol as a special case in some parts of the code. RED FLAG! Alert! Developer has used the phrase "special case"! One of my colleagues likes to refer to the "0, 1, Infinity Rule". That is, there are either zero cases of something, one case of something, or an infinite number of cases of something. There's never just 2. Or 3. Or any other finite number. Basically, as soon as you are prepping the second special case, your code needs to be architected to accept an infinite number of such special cases, because otherwise you're going to build up a huge number of unmaintainable hacks over time. If there really is just one special case, then the code generally remains maintainable. One special case isn't a hack... it's just the special case of the problem your code is solving. The second special case is a hack. I try to avoid hacking up the LV code base and creating more stuff that needs refactoring in the long term, so I'm hesitating about working on this feature.
With this in mind, I went to a couple of other devs and asked a question, and I want to ask the LAVA community the same question: besides "enable debugging," are there any other VI-specific settings that would cause you to want to change the actual code compiled into the block diagram? I honestly cannot think of any. Reentrancy? I don't typically change the reentrancy of a VI on a regular basis. A VI is either designed to be reentrant or not... flipping that switch usually changes how the node is used entirely. Thread affinity? I can't think of any VI where I have ever wanted to sometimes have the VI in the UI thread and sometimes in Exec System One, and where I wanted different code in those cases. No one else I've asked has been able to come up with any scenarios for that sort of thing either. Oh, the hypothetical COULD exist, but hypothetically, I could want the font on all my numeric controls to become italic whenever the VI is set to "run when opened". It ain't a real use case. :-) One person suggested that there might not be VI-property-related settings that would want to change out code on the diagram, but maybe there is a case for user-defined symbols at the VI level? Maybe. But, honestly, the Diagram Disable structure pretty much covers "I want to change out the behavior of this VI".
Yes, we all know cases where we have code on the left half of the screen and code on the right half of the screen, and we need a separate disable structure around each region... but it's just as likely that the right half of the screen gets dropped into a subVI. The setting then is not a per-VI setting; it is either a per-project setting (which we have today) or a per-library setting... and that I can easily imagine a use case for. Having a per-VI user-defined setting just seems problematic if you dropped the code into a subVI and lost the link between those two VIs. I hate adding any feature that discourages people from creating subVIs as needed! So I'm rejecting that use case. And the library use case, while a good one, is outside the scope of what I'm asking about today. Go put it on the Idea Exchange if you want to promote it. :-)
Which brings me back to the "debug enabled" setting. That setting can be programmatically toggled when doing a build (if you go into the custom per-item settings, because NI doesn't have an easy single checkbox for you), and frequently we do things while debugging that would just slow down a release build. Which means it makes sense to write code into the VI that could be either compiled in or compiled out based on the VI's debug setting, unlike any of the other settings.
So... brainstorms, anyone? And please be honest. Even if you want this specific feature, and you think, "Well, if I present this valid use case, it means he won't put in his hack, so I'm cutting my own throat," trust me... for the long haul, you'd rather LV develop as cleanly as possible, for the sake of your own trust in it!
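To make the architectural snag concrete, here is a purely hypothetical C++ sketch (invented names and shapes, not actual LabVIEW source) of an application-instance-keyed symbol lookup and the special case that a per-VI DEBUG_ENABLED symbol would force into it:

```cpp
#include <map>
#include <string>

struct AppInstance { std::map<std::string, std::string> symbols; };
struct VI { bool debugEnabled; AppInstance* app; };

// Today's shape: every symbol resolves from the application instance alone.
std::string LookupSymbol(const AppInstance& app, const std::string& name)
{
    auto it = app.symbols.find(name);
    return it != app.symbols.end() ? it->second : "";
}

// The hack: one symbol suddenly needs the VI, so a special case leaks in.
std::string LookupSymbolForVI(const VI& vi, const std::string& name)
{
    if (name == "DEBUG_ENABLED")                  // <-- the special case
        return vi.debugEnabled ? "TRUE" : "FALSE";
    return LookupSymbol(*vi.app, name);           // everything else as before
}

int main()
{
    AppInstance app{{{"TARGET_TYPE", "Windows"}}};
    VI vi{true, &app};
    return LookupSymbolForVI(vi, "DEBUG_ENABLED") == "TRUE" ? 0 : 1;
}
```

One branch like this is tolerable; by the 0, 1, Infinity Rule, the second one means the lookup should have been redesigned around a (VI, application) context from the start.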
  10. I think you could do this by modifying <LabVIEW>\resource\plugins\lv_new.vi. Fair warning: I'm telling you how to cut into LabVIEW's internal operations. If you break that VI, you could lose functionality you care about, so make a backup copy first.
  11. PS: What you lost from the XNode implementation, you gained back in performance and in not randomly crashing! :-)
  12. Convert to standard is the only mechanism. I have intentions of opening up the instance VI, but I've run into some *fascinating* historical code... malleables will get a nice power kick in 2017 SP1 and some potent assist features in 2018, but opening the instance won't be in 2018 at this rate (seeing as how I need to be feature complete on the 18th). In the meantime, convert to standard and then hit Ctrl+Z to put the malleable back. Suboptimal, but it does work.
  13. I'm unfamiliar with the saying. The big problem is what happens when someone says "This XNode will be inline safe" and then scripts non-safe code. It's so late in the compile sequence that I'm not sure what the behavior would be.
  14. The problem with auto detection is that XNodes don't generate their code during type propagation, only during compilation, so we would need a new tag/VI/something that declares up front "I am an XNode that will script only inlinable functions." Ok. Mouse has cookie. I'm going back to improving malleable VIs for LV 2018.
  15. I hardcoded that one into the compiler after multiple customers asked for it... pointing out that the common use case was to encapsulate an error in a VI so it can be returned in several parts of the code. I wanted to add a way to tag any XNode, but there was an aversion to risk given how late in the cycle I slipped that in. After that one was fixed, there wasn't any internal pressure to do a more general fix. And XNodes aren't public features, so external pressure doesn't work in feature priority meetings. NI has largely moved away from XNodes in our own work -- much to my personal frustration. The XSFP VI is the "Save For Previous" VI, used if you save back to a version older than the one in which the XNode shipped.
  16. False. That assumes that the messages can do things that the existing messages could not do. Since the only operations available are those that were already available, the only thing a new message class can do is different combos of those operations -- the operations the class already defines as being safe to do (otherwise those wouldn't be exposed as public methods).
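In textual OO terms, a hedged C++ analogy (invented class and message names) of why a new message class cannot exceed the receiver's published surface:

```cpp
class Instrument {
public:
    void Arm()  { /* safe, published operation */ }
    void Fire() { /* safe, published operation */ }
private:
    void RawRegisterWrite() { /* never reachable from a message */ }
};

// A newly defined message class can only recombine the public methods...
struct ArmAndFireMsg {
    void Do(Instrument& i) { i.Arm(); i.Fire(); }  // combos of safe ops only
    // Calling i.RawRegisterWrite() here would not compile -- it's private.
};

int main()
{
    Instrument i;
    ArmAndFireMsg{}.Do(i);
    return 0;
}
```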
  17. The Americas Certified LabVIEW Architect Summit 2017 is now open for registration and for submitting presentations.
Hosted at National Instruments HQ in Austin, TX, September 18-20, 2017 (Monday, Tuesday, Wednesday).
http://forums.ni.com/t5/Certified-LabVIEW-Architects/Americas-CLA-Summit-2017-Invitation-To-A-New-Style-Summit/m-p/3665098#M1751
Registration deadline: Monday, September 4, 2017
  18. Classes on the terminals of XNodes are somewhat unstable and crashy. The key way to avoid a crash is to put a "Request Deallocation" primitive wired with a constant TRUE on the block diagram of every VI that has a variant that could contain a LabVIEW class. But you then also have to be careful not to use your XNode in two separate user contexts at the same time unless the class is defined identically in both contexts. The two features were never designed to work with each other (they came into existence at the same time and weren't really aware of what the other's needs would be), and one or the other would have to be completely redesigned to accommodate the other. There are some safe paths, but I can't give much guidance there... I have to derive them specially for every XNode I work on... which is why I rarely use XNodes. YMMV. But, seriously... add the Request Deallocation nodes. You'll save yourself a lot of pain.
  19. ShaunR: You can still program procedurally in LabVIEW. That's what DVRs+methods allows.
  20. Most languages (Haskell, Racket, Erlang, Lisp) that are successful in taking full advantage of multicore parallel programming do not allow references. References are a holdover from serial processing days. Computer science has had a better way to program since the 1940s, but von Neumann saddled us with the IBM hardware architecture, and the direct addressing of assembly, then of C and its successors, won on performance at a time when such things were critical. Hardware has moved beyond those limitations. It's time for software to grow up. "Procedural programming is a monumental waste of human energy." -- Dr. Rex Page, Univ. of Oklahoma, 1985 and many times since then. (And in this context, "procedural" means "method and reference-to-objects oriented".)
  21. A) The compiler cannot apply most optimizations around references. To name a few examples: there's no code movement to unroll loops, constant folding cannot happen, and look-ahead is impossible. Many more.
B) Lots of overhead lost to thread safety (acquiring and releasing mutexes) to prevent parallel code from stepping on itself.
C) Time spent on error checking -- verifying that refnums are valid is a huge time sink in many ref-heavy programs, despite LV making those checks as lightweight as possible.
D) Unnecessary serialization of operations by programmers. Even programmers consciously watching themselves sometimes forget or get lazy and serialize error wires rather than allowing operations to happen in parallel and then using Merge Errors afterward.
E) A constant fight with dataflow. Making one piece of data by-ref often means other pieces become by-ref so that the programming models remain consistent, so things that could be by-value are changed to match the dominant programming paradigm. Sometimes this is because programmers only want to maintain one model, but often it is because they need data consistency and cannot get it with a mixed model.
You mentioned "identity objects" as a category of objects. Although you are correct that this is a good division between objects, the identity-objects category is one that I see more and more as an anti-pattern in dataflow. Doing synchronous method calls on a by-ref object comes with so many downsides. Functional programming languages don't allow that sort of programming at all, even when they have a class-inheritance type system... it's harder and harder for me to see any reason to have it in LabVIEW. We won't take it out, for the same reason that we don't take out global variables -- it's conceptually easy and a lot of people think they need it. But I no longer believe it is necessary to have at all and, indeed, I believe that programs are better when they never use by-reference data. Communications with processes should be through command queues, preferably asynchronous, and operations on data should be side-effect-free by-value operations.
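Point A is easy to demonstrate in a textual setting. A small C++ sketch, under the assumption that the by-ref/by-value distinction maps onto indirection (this is illustrative, not LabVIEW's actual compiler):

```cpp
// By value: nothing else can touch x, so the compiler freely folds and
// reorders -- this whole body collapses to x * 4.0 at compile time.
double by_value(double x) { return x * 2.0 + x * 2.0; }

// Through a reference that others may share: between the two reads,
// another writer could change *p, so the compiler must reload the value
// and preserve the order of operations. Folding and hoisting are off
// the table. (volatile models the "someone else may write this" case.)
double by_ref(volatile double* p) { return *p * 2.0 + *p * 2.0; }
```

The mutexes of point B and the refnum validation of point C then add run-time cost on top of the optimizations already lost here.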
  22. All I ask is that my bugs be obscure enough that I find them before my users do and few enough that I have time to fix them before my users catch up. :-) References: the last weapon in the arsenal of the G master but the first refuge of the novice. Except Mikael. I cannot explain Mikael. Karma must love him because, by rights, his reference-heavy G code should be both more buggy and less performant than I know it to be. If any among us do seek to walk The Path of the Referenced Object, you would do well to step in Sensei Mikael's footprints for he has found the safe route.
  23. I see. While I concede the method works, I'm willing to wager that splitting out the necessary parts of the class, running them in parallel, and then bringing them back together -- not forking the class and then merging!* -- will perform better and be less error prone for the *vast* majority of applications. I'd definitely keep shoneill's method in my back pocket as an option, but it wouldn't be the first tool I'd reach for. * I think we'll all agree that forking the class and then merging is messy and error prone. I've never seen that be a good strategy, as the original question at the top of this thread discovered.
  24. Why would you bring DVRs into this problem? (That question goes to shoneill... I'm surprised that's the tool he would reach for in this case. Yes, it can work, but it is an odd choice, since it forces the serialization of the parallel operations.) Consider what you would do if you had a cluster and you wanted to update two of its fields in parallel: you'd unbundle both, route them in parallel, then bundle them back together on the far side. Do the same with your class. If the operations are truly parallel, then the information inside the class can be split into two independent classes nested inside your main class. Write a data accessor VI to read both nested objects out, call their methods, then write them back in. Or use the Inplace Element Structure to read/write the fields if you're inside the class's scope. If there are additional fields that are needed by both forks, unbundle those and route them to both sides.
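The same unbundle/operate/rebundle pattern sketched in C++ terms, under the assumption that the two fields really are independent; std::async stands in for the two parallel wires, and all names are invented:

```cpp
#include <future>

struct Motion { double position; double velocity; };

double next_position(double p) { return p + 1.0; }   // stand-ins for the two
double next_velocity(double v) { return v * 0.5; }   // independent methods

Motion update(Motion m)
{
    // "Unbundle": each field goes down its own independent branch.
    auto p = std::async(std::launch::async, next_position, m.position);
    auto v = std::async(std::launch::async, next_velocity, m.velocity);

    // "Bundle": join the branches and rebuild the whole value.
    return Motion{p.get(), v.get()};
}

int main()
{
    Motion m = update({0.0, 2.0});
    return m.position > 0.0 ? 0 : 1;
}
```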
  25. Wait... I think I understand what you mean... I took "LabVIEW.NET" to mean a .NET execution engine that requires .NET programming. You just mean that eventually everyone will be using LabVIEW NXG, right? Ok. That's true. It's years away, but, yes, that is the migration path. And the current platform VIs can be migrated by users. Your suggestion about allowing snippets to drop directly is a good one.