Everything posted by shoneill

  1. Yes, I see this "Running but broken". I also see cases where a VI LV thinks is deployed is a different version from the one actually deployed, or VIs being broken because of errors in VIs which aren't even used on the target. I've had RT VIs claim they were broken due to an error in a VI which is only used on our FPGA targets. Something somewhere is very weird with the whole handling of these things. I've been harping on about it for years and I'm glad (please don't take that the wrong way) that others are starting to see the same problems. Maybe eventually it'll get fixed. Of course, the problem is that in making a small example, the problem tends to go away nicely. NI really should have a mobile customer liaison (a crack LV debugger) who travels around visiting customers with such non-reproducible problems in order to get dirty with debug tools and find out what is going on. There are so many issues which will never be fixed because the act of paring it down for NI "fixes" the problem.
  2. I like the term viscosity, but it's not a generally applied term in this area. Increasing viscosity (resistance to flow) is not a problem until you need to adapt to new surroundings. Then I searched for some synonyms of viscous and found "malleable". Hmm. Sounds familiar. Or how about a currently totally underused word in software development: "Agile"?
  3. Well, what I meant was that fixing a bug or implementing a new feature changes the environment for other code, so either may or may not have an effect on the prevalence of any other given bug raising its head at any given time. The larger the change to the code, the more disruption and the more disconnect between tests before and after the change. Intent is irrelevant. Features and bugs may overlap significantly.
  4. BTW, I just spent the entire day yesterday trying to find out why my project crashes when loading. No conflicts, no missing files, no broken files. Loading the project without a certain file is fine, adding the file back when the project is loaded is fine, but trying to load with this file included from the beginning - boom, crash with some "uncontinuable exception" argument. I have NO idea what caused it but eventually, at 7pm, I managed to get the project loaded in one go. Yay LabVIEW. I don't care how often LV is released as long as bugs in any given version are fixed. To find the really awkward bugs we need a stable baseline. Adding new features constantly (and yes, I realise fixing a bug is theoretically the same thing, but typically of much smaller scope, no?) raises the background noise so that long-term observations are nigh impossible. I for one can say that the time I spend dealing with idiosyncrasies of the IDE is rapidly approaching the time I spend actually creating code. Crashes, lockups, faulty deploys to RT... While LV may be getting "fewer new bugs" with time, it's the old bugs which really need dealing with before the whole house of cards implodes. Defining an LTS version every 5 years or so (with active attempts at bug fixes for more time than that to create a truly mature platform) would be a major victory - no new features, only bug fixes. Parallel to this, LV can have a few "feature-rich" releases for the rest. Bug fixes found in the LTS can flow back into the feature-rich version. I've proposed it again and again but to no avail because, hey, programming optional. I'm annoyed because I want to be productive and just seem to be battling with weirdness so much each day that a "permanent level of instability" just sounds like a joke. At the least, my exposure to this instability has been increasing rapidly over the last few years, or so it feels. And yes, I'm always sending crash reports in.
  5. Any correlation between the segment of the LV user base that adopts non-SP1 versions and the ones targeted with the "Programming Optional" marketing?
  6. mje, to make things worse regarding the "Enable Debuggery" flag, what it ACTUALLY does is not only secret but also varies depending on the actual code complexity (real complexity, not what LV pretends it is) and the setting for "Limit Compiler Optimisations" in the Environment settings....
  7. AQ, what if I want (or need) my debugging code to run with optimisations enabled, i.e. without the current "enable debugging"? Think of unbundle-modify-bundle and possible memory allocation issues on RT without compiler optimisations... This would require a separate switch for "enable debugging" and my own debug code, no?
  8. AQ, I'm not arguing against the idea of needing debug code per se (I put copious amounts of debug paths into my code), only on the coupling of the activation to the already nebulous "enable debugging". I see a huge usability difference between RUN_TIME_ENGINE and toggling debugging on a VI. It's my 2/100ths of whichever currency you prefer.
  9. OK, it's been mentioned a few times: "user optimisations". Are we really naive enough to believe that's all this will be used for? How about making non-debuggable VIs? Broken wires as soon as debugging is enabled? How does this tie in with code being marked as changed for source control? I have BAD experiences with conditional disables (bad meaning that project-wide changes can actually lead to many VIs being marked as changed just because a project-wide value has changed which will be re-evaluated every time the VI is re-loaded anyway). I presume since this will be a per-VI setting it will have at least a smaller scope. But what about VIs called from within the conditional structure? They are then at the mercy of the debugging enable setting of the caller VI... I mention this because we do quite a lot of code reuse across platforms and this problem rears its ugly head again and again. I have no problem introducing a VI-specific conditional disable structure, but linking it to a completely different setting seems just wrong. Sure, "enable debugging" is a Pandora's box anyway, but at least for a given LV version it's not a moving target. Imagine "enable debugging" doing something different for each and every VI you enable it on. That sounds like a maintenance nightmare.
  10. Yes, the really crappy propagation of registration refnums is perhaps the major obstacle to the 1:1 command:event organisation. That and the Event Structure mixing up the order of events whenever I add another event to the API......
  11. While I see the practicality of this, it makes me feel all icky doing it...
  12. +1 for multiple User Events (why do away with the strict typing of LV)
  13. That's exactly what I was referring to, but apparently there are additions coming which will make things even easier...
  14. I believe a LV version coming soon (but not very soon) will not address this directly but may give us tools to allow this kind of operation to be performed. I'm hopeful.
  15. There's another way to achieve (almost) memory-mapped files. https://msdn.microsoft.com/en-us/library/windows/desktop/aa364218(v=vs.85).aspx If you read from (or write to) a file, Windows automatically memory-maps that portion of the file as long as RAM is available for it. Although this mapping is not guaranteed (it can be negated by other processes requesting RAM, in which case reads and writes go to disk and are a lot slower), it can still be of great use. If you need very fast write or read/write speed, pre-write (or pre-read) the file before your actually important work. Chances are that Windows will then serve it from memory, with the added benefit of eventually persisting it to disk. If you want to purposefully AVOID persisting to disk, then just ignore my entire post.
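The pre-write trick above leans on the OS page cache implicitly; a true memory-mapped file makes the same idea explicit. LabVIEW has no native primitive for this, but as a minimal cross-platform sketch in Python using the standard `mmap` module (the file name is invented for illustration):

```python
import mmap
import os

path = "demo.bin"  # hypothetical scratch file

# Pre-size the file so the OS has pages to map -- the same idea as
# "pre-write the file before your actually important work" above.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:5] = b"hello"  # writes hit RAM-backed pages, not the disk directly
        mm.flush()          # explicitly persist to disk when you need to

# The bytes are now on disk as well as (very likely) still in the page cache.
with open(path, "rb") as f:
    data = f.read(5)
os.remove(path)

print(data)  # b'hello'
```

On Windows this sits on top of exactly the `CreateFileMapping`/`MapViewOfFile` machinery the MSDN link describes; `mm.flush()` is the explicit counterpart of the lazy write-back mentioned above.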
  16. I've found adding, subtracting, multiplication and division to be quite useful... But on a more serious note: I'm sure there are areas of mathematics I don't even know exist which may or may not be helpful. But if you go into pure mathematics as opposed to applied mathematics, your chances of being able to apply it in productive work are very different. I studied statistics, calculus and so on at university. I've rarely needed to understand more than the basics. Trigonometry helped a bit because we use a lot of modulators and demodulators, but the effect was minimal. Also the numerical theory behind filters (Kalman, Butterworth and so on) can be very useful. Beyond this, I'm simply not qualified to answer.
  17. The version with a loop won't work in a SCTL on FPGA. The version with Feedback node will AFAIK.
  18. This is true. One would think that with proficiency, this problem-trading (old ones replaced with new ones) would shift in our favour. My experience is that the number of problems stays approximately constant but the newer ones become more and more obscure and hard to find. This is a bit of a pessimist (realist?) view I will admit. Truth is that we just keep looking until we find problems. And if we can't find any problems, then we convince ourselves we've missed something.
  19. How the DVR is structured, whether the DVR is encapsulated or not, is a design choice based on the requirements (one of which could be the parallel operation AQ points out). The DVR is simply a method to remove the awkward requirement of "branch and merge" mentioned in the OP. I've done some similar UI - Model things in the past and I've found using by-ref objects simply much more elegant than by-val objects. DVRs are the normal way to get this. Whether we use a DVR of the entire class or the class holds DVRs of its contents is irrelevant to the point I was trying to make: instead of branching, modifying and merging, just make sure all instances are operating in the same shared space.
  20. Logman, don't forget that immediately after writing a file, Windows will most likely have a complete copy of that file in RAM so your read speed will definitely be affected by that unless you're somehow RAM-limited or are explicitly flushing the cache. Always approach read speed tests with care. Often the first read will take longer than the second and subsequent reads due to OS file caching. Just for completeness.
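The cold-vs-warm read effect is easy to demonstrate outside LabVIEW. A rough Python sketch (file name and payload size are invented; the absolute timings depend entirely on OS, hardware and cache state, so only the pattern matters, and the second read is merely *likely* to be faster):

```python
import os
import time

path = "bench.tmp"                      # hypothetical test file
payload = os.urandom(8 * 1024 * 1024)   # 8 MB of incompressible data

with open(path, "wb") as f:
    f.write(payload)

def timed_read(p):
    """Read the whole file and return (data, elapsed seconds)."""
    t0 = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - t0

first, t_first = timed_read(path)       # may hit disk (cold-ish read)
second, t_second = timed_read(path)     # usually served from the page cache
os.remove(path)

# A single-read benchmark conflates disk speed with cache behaviour --
# which is exactly the pitfall described above.
print(f"first read:  {t_first * 1000:.1f} ms")
print(f"second read: {t_second * 1000:.1f} ms")
```

Note that even the "first" read here is probably warm, since the file was just written; a genuinely cold measurement needs a cache flush or a reboot between write and read.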
  21. For simple atomic accessor access, splitting up actual objects and merging MAY work, but once objects start doing consistency checks (perhaps changing Parameter X in Child due to setting of Parameter Y in Parent) then you can end up with unclear inter-dependencies between your actual objects. When merging, the serialisation of setting the parameters may lead to inconsistent results as the order of operations is no longer known. When working with a DVR, you will always be operating on the same data and the operations are properly serialised. Of course it's of benefit to have some way of at least letting the UI know that the data in the DVR has changed in order to update the UI appropriately... but that's a different topic (almost).
  22. Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying). That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.
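The by-ref vs by-val distinction behind the DVR exists in any reference-based language. A hypothetical Python analogy (class and field names are invented) contrasting "branch, modify, merge" with sharing one reference, loosely standing in for branching the DVR wire:

```python
import copy

class Model:
    """Hypothetical model object; stands in for the class held by the DVR."""
    def __init__(self):
        self.params = {"gain": 1.0}

model = Model()

# "Branch and merge": the UI works on a deep copy, so the caller's object
# goes stale and must later be reconciled -- the awkward step a DVR removes.
ui_copy = copy.deepcopy(model)
ui_copy.params["gain"] = 2.0
assert model.params["gain"] == 1.0    # caller still sees the old value

# "Shared reference" (the DVR approach): both sides hold the same object,
# so a change made via the UI handle IS the caller's data. Discarding the
# extra handle discards a pointer, not the data.
ui_ref = model
ui_ref.params["gain"] = 2.0
assert model.params["gain"] == 2.0    # same data, no merge step needed
```

Unlike an IPE on a DVR, plain Python references give no serialised access, so concurrent use would additionally need a lock; the sketch only illustrates the shared-space idea.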
  23. right-click the Tab when it's on the correct tab and select "Set Current Value to Default".
  24. So fine control of the buffer (setting it to one or two messages) would force synchronous messaging at the TCP driver level? That's rather useful.