Everything posted by Aristos Queue

  1. You'll find more delights regarding that primitive and the structure, along with some other major advances in malleable VI technology, if you join the LV 2018 beta program: http://ni.com/beta
  2. @Mads Inelastic... I like the economic implications. "High maintenance" is the right vector, but it still implies a particular threshold... and "higher maintenance" sounds weird. @shoneill I've used "viscous" in the past to describe practices "that resist [data]flow". I'm tempted by it. "Inelegant" has a good balance of subjective/objective judgement. "Brittle" does seem like the idea many people keep coming back to. I like its factual aspect (sure, you can use it, you can make it work, but it is still brittle) and the fact that it is not a negation of some other term, which keeps us out of the "which way do you mean for that negation to work?" problem (which is the heart of "unmaintainable").
  3. I'm afraid that "unscalable" runs into the same problem, doesn't it? "Well, I'm at scale X and I keep ours working fine." :-( I think I need a term that means "harder than it needs to be." That's kind of why I like "paintainable." It implies, "Yes, you can do it this way. It works. You may not even realize how much energy you're exerting keeping it working compared to what you could be spending." In terms of real words, I'm leaning toward "inefficient". Usually that word is reserved for actual run-time performance of the code, and I hate to muddy it, but it is the term that describes the developer's effort as well.
  4. I kind of expect NI marketing to do that in a few years. Maybe it won't ever happen for the current LabVIEW platform, but LabVIEW NXG is being designed for this brave new software world. It's not my call to make, and it is really hard to push back against the idea when the whole software industry, including our own vendors and many downstream dependents, is moving in that direction. Like ShaunR, I have reservations about the whole idea, but I am observing it as a successful pattern in many other companies, both vendors and users, so I'm keeping an open (if doubtful) mind.
  5. ShaunR: I agree. But within context, the term "unmaintainable" is used (by many presenters and book authors and professors) to describe any pattern of action that contributes to the overall unmaintainability of the system as a whole because that pattern is known to break down at scale. The problem is with the term itself -- it implies an impossibility that is refutable with a single counterexample. Therefore, I'm looking for a different term of art.
  6. SO TEMPTING. Last I checked, I was a member in good standing with the Illuminati Unseen Templar Knights Of The Oxford English Dictionary, licensed to coin new English terms.
  7. I'm writing a shipping example for next LV. I have a series of VIs: one shows an incorrect solution, one shows an inefficient solution, one shows an unmaintainable solution, and one shows a good solution. I'm naming the VIs accordingly. The problem is with "unmaintainable." At several NIWeek presentations, I have described some X as "unmaintainable" and gotten arguments from the audience that, "Hey, that's my architecture and I maintain it just fine." And they're right -- it is their architecture, and they are successful with it. When I (or most of the "Room G" presenters) use the term "unmaintainable" we mean "when part A of the code changes, the process to update part B is error prone, either because it is hard work or it is easy work that is easily forgotten." Is there a better shorthand term that I should/could be using other than "unmaintainable"? In many cases, I can explain what I mean by the term, but in slide titles or -- as today -- in VI names, it would help a lot to have a singular term, without triggering the pushback reaction. Something like "Uneasilymaintainable"? :-)
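
     A minimal sketch of that failure mode in a text language -- hypothetical code, not part of the shipping example -- where part A (the enum) can change without anything forcing the matching update to part B (the name table):

         /* Hypothetical illustration: two pieces of code that must be updated
          * in lockstep. Adding CMD_RESUME to the enum compiles cleanly even if
          * the name table is forgotten -- easy work that is easily forgotten. */
         #include <stdio.h>

         typedef enum { CMD_START, CMD_STOP, CMD_PAUSE } Command;  /* part A */

         static const char *command_names[] = {                    /* part B */
             "start", "stop", "pause"
         };

         int main(void) {
             Command c = CMD_PAUSE;
             printf("%s\n", command_names[c]);
             return 0;
         }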
  8. It's a factual statement: the trend of the last several years is the main release having fewer new bugs and the SP1 release having more new features. I expect these trends to continue, to the point where the probability of hitting a new bug is eventually about equal between the two releases. NI is being pushed by industry to shorten the overall release cycle considerably. Most software is moving in that direction. Even systems as complex as MS Windows have moved to continuous delivery. I am not arguing that you should do any particular thing. I am saying that your stated argument for waiting for the SP1 is becoming less and less valid each year. You might have other reasons for waiting, but the bug level isn't as solid a dividing line as it used to be. I am personally somewhat uncomfortable with software having a permanent "level of instability", but that does appear to be the trend across the industry.
  9. CEIP data shows that, for a wide swath of users, it isn't just "buy"... it's also "use."
  10. Frequently, and with increasing frequency. There really is little difference between a full release and an SP1 release at this point -- the cadence is about 6 months apart, and each has bug fixes you might need/want. With the binary backward compatibility of 2017 and later, more barriers to upgrading disappear. It is more likely that the main release will have new features, but the various drivers/modules might add support in the SP1, which, from the customer's perspective, means new features. So, while I understand the hesitancy to upgrade until the SP1, the reasons for that hesitancy have decreased dramatically over the last five years, dropping to zero for some segments of the LV user base.
  11. I'd buy that argument except that a) a lot of you do buy the next version, and b) I take the time to report bugs against Visual Studio, MS Windows, and lots of other software so that eventually those issues get out of my way. If nothing else, filing a sufficiently outraged bug report, complete with a rub-your-face-in-it reproduction case (so the bastards at Microsoft can't claim I'm making it up), is really cathartic. Substitute "NI" for "Microsoft"... I bet it can help your morale. :-)
  12. Details of the CEIP program are here: https://www.ni.com/pdf/legal/us/CEIP.pdf The answer to your question is here: “…navigate from the Windows Start Menu to All Programs » National Instruments » Customer Experience Improvement Program.” You should also be prompted the first time you run a fresh install of LabVIEW (assuming you don't copy in an existing config file).
  13. planet581g: Have you contacted NI tech support? That's exactly the kind of thing that we'd like to hear about. I'm not sure why so few people escalate those sorts of issues to us, but I've noticed over the years that if users can restart and keep working, they never report the issue. Please, for your own sanity, take a moment to report those things. It's the only way we'll know about them and possibly fix them. I know it is a pain to pack up your code and get it into a service request, but posting about it on LAVA doesn't help! :-) (Ok, I admit... it does help sometimes when you happen to catch attention from someone at NI and it is something concrete to report, but a hang caused by "something" in a large hierarchy isn't one of those!) Are you using malleable VIs? There's one infinite compile in 2017 that we found recently that I've fixed in the upcoming 2017 SP1 release, but that's the only one I know of like what you're describing.
  14. Please start now. Only squeaky wheels get grease. Not Variant To Data... that has full checking on it for coercion legality. It isn't a raw Type Cast. I suppose you could get a backdoor Type Cast using the Unflatten From ____ nodes... on refnums. Casting refnums is the heart of the problem. I should have phrased it more broadly -- the two things you can do to crash LV that aren't R&D's fault are moving refnums onto wires that don't match their real types and calling outside of LabVIEW (to DLLs, Sys Exec, etc.) in ways that inject back into LabVIEW. If a user writes a piece of LabVIEW and gets behavior that contradicts the intended design, including failing to get an error when they should, that's a bug. Some are small enough relative to the effort of fixing them that we let them go unfixed, but they're still bugs. And changing the one line of code from "FileName()" to "QualifiedName()" is a pretty low-hanging fix.
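
     A C analogy for why a bad refnum cast is crash bait -- the struct layouts here are invented for illustration, and LabVIEW's internals are not literally this:

         /* Reinterpreting a handle as the wrong type compiles without complaint;
          * any use of the mis-typed fields is undefined behavior at run time --
          * the same class of error as moving a refnum onto a mismatched wire. */
         #include <stdio.h>

         typedef struct { FILE *stream; } FileRef;
         typedef struct { int socket_fd; char *peer; } NetRef;

         int main(void) {
             FileRef f = { .stream = stdout };
             NetRef *n = (NetRef *)&f;   /* the unchecked "Type Cast" */
             /* reading n->peer here yields garbage; writing through it can crash */
             printf("mis-typed handle at %p\n", (void *)n);
             return 0;
         }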
  15. There should be exactly two of those things in released features: the Type Cast primitive and the Call Library node. And maybe System Exec.vi. But otherwise, no, we treat anything that can crash LabVIEW as something we messed up on. Thank you for reporting that bug. Found and fixed in LV 2018.
  16. Surely that isn't unique to 2017... the breakpoint manager hasn't changed (to the best of my knowledge) at all in several versions. Are you sure that is a new issue? (Not that I'm saying that the Breakpoint Manager *should* be eating CPU at all, but if it is the cause, I would expect it to go back at least to 2015 and probably to 2013.)
  17. This, by the way, is why NI does not generally advertise version X as being more stable than version Y. I can tell from bug reports coming in that 2017 is way more stable than 2015, especially when I filter for CARs filed specifically against new features, but all it takes is one bug that happens to be in a user's personal common code path for them to see it as massively unstable.
  18. As a matter of fact, I fixed a crash yesterday as a result of accumulated NIER reports. If we get enough stack crawls, sometimes we can triangulate these impossible-to-reproduce crashes. Sometimes even a single stack crawl is enough, as it lets us desk-check that code and imagine, "what if X happened? what if Y happened?" It's tedious, but, yes, it does help. If nothing else, it lets us flag a given code path and we can add more debugging logic along the route so that the *next* time someone crashes, it includes more information in the crash log. Eventually, we can trap these things. @MarkCG Please do send in the NIER reports. And -- I know this is more controversial -- please enable the Customer Experience feedback. If you're in a high-security industry, I understand, but for everyone else, the usage data about what features you're using really does guide priorities, both for new features and for fixing CARs. It does help.
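
     A rough sketch, in C, of the "add more debugging logic along the route" idea -- the mechanism and names are assumptions, not NI's actual crash-reporting code:

         /* Breadcrumb logging on a suspect code path (hypothetical): once stack
          * crawls implicate a route, record progress markers so the next crash
          * report carries more context than a bare stack. */
         #include <stdio.h>

         static char last_breadcrumb[128];

         #define BREADCRUMB(msg) snprintf(last_breadcrumb, sizeof last_breadcrumb, \
                                          "%s:%d %s", __FILE__, __LINE__, (msg))

         static void suspicious_path(int mode) {
             BREADCRUMB("entered suspicious_path");
             if (mode > 1)
                 BREADCRUMB("rare mode branch taken");
             /* a crash handler would append last_breadcrumb to the report */
         }

         int main(void) {
             suspicious_path(2);
             printf("last breadcrumb: %s\n", last_breadcrumb);
             return 0;
         }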
  19. No problem. It helps me to know how customers see the world... sometimes we can restate things in more intuitive ways. :-)
  20. You have it backwards. In order to enable debugging, we have to turn OFF all that stuff. Doing away with all the optimizations of the code is precisely what allows debugging. The optimized code looks nothing like the block diagram and isn't amenable to exec highlighting, probing, etc. These aren't orthogonal things... enabling debugging MEANS turning off all the stuff that prevents debugging. The documented reduction in memory and performance improvement? Turning off debugging allows all those things to kick in. The only error in the documentation, to my eyes, is the word "slightly".
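
     A C compiler analogy -- assuming an ordinary optimizing compiler, not LabVIEW's actual pipeline:

         /* Under -O2 a compiler folds this loop to "return 499500;", so `sum`
          * and `i` never exist at run time and there is no intermediate value
          * to probe or highlight. A debug build (-O0 -g) keeps every step --
          * the same trade described above. */
         #include <stdio.h>

         static int sum_to_1000(void) {
             int sum = 0;
             for (int i = 0; i < 1000; i++)
                 sum += i;
             return sum;
         }

         int main(void) {
             printf("%d\n", sum_to_1000());
             return 0;
         }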
  21. Right, but I'm asking for things that you could change at *compile* time. Anything that toggles at run-time is irrelevant to my question in this thread.
  22. That's explicitly the question... "is there any use case for anything other than Enable Debugging?" My hope -- which this conversation is convincing me is true -- is that the answer is "no." There are people questioning whether "Enable Debugging" is itself a use case, but so far nothing beyond that. You're leaving a compile-time decision to run time? And you're compiling in a first-call dependency on a requires-UI-thread property node? Ewww. Gross! That's exactly the kind of code smell the Cond Dis struct is there to prevent! Exactly what code would you be guarding with that property node check? That's the use case I'm asking about.
  23. I don't think I understand the question. "with i.e. without" is confusing me. If you for some reason wanted to compile in your own code based on your own criteria, you would do exactly what you do today: define your own symbol. There would be no change to that. This is only for code changes that you want to enable/disable with the master debugging switch.
  24. A bug fix is a change you make to the VI AFTER you've debugged it. I'm talking about code that you would include in the shipping VI that says, "Whenever debugging is turned off, compile this code in". Something like...
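
     In C-preprocessor terms -- an analogy supplied here, with an invented symbol name, since the original LabVIEW snippet isn't reproduced:

         /* The decision is resolved at compile time: the disabled branch is not
          * merely skipped, it is never compiled into the shipping code at all. */
         #include <stdio.h>

         void process_sample(double x) {
         #ifdef DEBUGGING_ENABLED
             /* present only when the master debugging switch is on */
             fprintf(stderr, "probe: x = %g\n", x);
         #else
             /* "whenever debugging is turned off, compile this code in":
                the lean shipping path, with the checks absent from the binary */
         #endif
             (void)x;    /* ... normal processing ... */
         }

         int main(void) {
             process_sample(3.14);
             return 0;
         }

     A run-time if (debugging_enabled()) check, by contrast, compiles both the test and the guarded code into the executable -- exactly the smell called out in post 22.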
  25. This comment is really bugging me. To me, toggling debugging is the same as moving from Desktop to FPGA. The entire code base is compiled differently for the new target. I cannot see anything about the existing Enable Debugging option that is a hack. That checkbox is the toggle used to change between two entirely different execution environments. The fact that that target is a bit amorphous (it doesn't have dedicated hardware/any platform might become a debugging target) doesn't change the fact that it is an entirely different environment. Your objection hits my ears like saying, "Dragging a VI from Desktop to FPGA is too easy. Users should have a whole list of checkboxes to enable VHDL compilation on a function-by-function basis." Can you expand your thoughts for me here?