Everything posted by ShaunR

  1. Let's assume I have cheated to save space and complexity by using an event refnum and have inadvertently exploited a bug - which has been "fixed". Using your "technically correct method", should I not be able to register and unregister events at will? evnt4.vi
  2. So you can confirm the change in behaviour between all previous versions and 2017?
  3. There are several workarounds, one of which I highlighted in the example. The issue is that it is a change in behaviour from previous LabVIEW versions, which breaks compatibility.
  4. That is static registration. Yes, it is about the constant refnum, which is required for dynamic registration. A real-world example of dynamic registration might be registering for the mouse_move event only after the left mouse button has been clicked (register mouse_move during mouse_down) and then de-registering on the mouse_up (e.g. moving a window while the left mouse button is held down). To achieve this, one has to supply a prototype of the event (the refnum constant) to the left-hand terminal as I have shown, otherwise the VI will be broken.
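Since a LabVIEW snippet won't paste cleanly here, a rough textual analogy of the register-on-mouse-down / de-register-on-mouse-up pattern. This is purely illustrative - the tiny event bus and all the names below are made up, not LabVIEW's API:

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical minimal event bus standing in for dynamic event registration.
struct EventBus {
    std::map<std::string, std::function<void(int, int)>> handlers;

    void subscribe(const std::string& evt, std::function<void(int, int)> h) {
        handlers[evt] = std::move(h);               // "Register For Events"
    }
    void unsubscribe(const std::string& evt) {      // "Unregister For Events"
        handlers.erase(evt);
    }
    void fire(const std::string& evt, int x, int y) {
        auto it = handlers.find(evt);
        if (it != handlers.end()) it->second(x, y);
    }
};

int main() {
    EventBus panel;

    // Statically registered for the life of the panel.
    panel.subscribe("mouse_down", [&panel](int, int) {
        // Dynamically register mouse_move only while the button is held.
        panel.subscribe("mouse_move", [](int /*x*/, int /*y*/) {
            // move the window to follow the cursor
        });
    });
    panel.subscribe("mouse_up", [&panel](int, int) {
        panel.unsubscribe("mouse_move");            // de-register when the drag ends
    });

    // Simulated input: a short drag.
    panel.fire("mouse_down", 10, 10);
    panel.fire("mouse_move", 15, 12);   // handled, because a drag is in progress
    panel.fire("mouse_up", 15, 12);
    panel.fire("mouse_move", 20, 20);   // ignored: no handler registered any more
}
```

The point is simply that the mouse_move handler only exists between mouse_down and mouse_up, so moves outside a drag cost nothing.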
  5. So. You can confirm the behaviour has changed between versions?
  6. Maybe others have seen this, but I only became aware of it recently, so I apologise if there is already a CAR for it. The following demonstrates a difference in behaviour between LabVIEW 2017 and previous versions (back as far as 2009, maybe further). The VI registers an event when it starts (in the timeout case) and generates a user event when the Increment button is pressed. The expected behaviour is that the counter will increment by one every time the button is pressed. This is the case for LabVIEW versions prior to 2017. In 2017, the user event is never fired, nor is there an error emitted by the Generate User Event. To get the VI to operate as expected in 2017, change the event refnum tunnel to a shift register. This seems to indicate that the refnum prototype is stomping on the dynamically allocated reference, whereas in previous LabVIEW versions it would not. Note also that, when using the shift register, the cases do not need to be "wired through" as would be expected with similar functionality in a normal case statement. evnt.vi
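For anyone who can't open the VI, here's the gist of the difference in text form. This is only an analogy of the dataflow (none of it is LabVIEW code; the names are invented):

```cpp
#include <iostream>

// Stand-in for an event registration refnum: 0 means "just the prototype".
struct Registration { int id = 0; };

Registration register_events() { return {42}; }    // the dynamic registration

int main() {
    const Registration prototype{};    // the refnum constant wired to the left terminal

    // Tunnel-like behaviour (what 2017 appears to do): every iteration starts
    // again from the constant, so the registration made in the timeout case
    // never reaches later iterations and the user event never fires.
    Registration seen_by_event_structure{};
    for (int i = 0; i < 3; ++i) {
        Registration reg = prototype;              // re-read from the constant
        if (i == 0) reg = register_events();       // registered once, in the timeout case
        seen_by_event_structure = reg;             // lost again on the next pass
    }
    std::cout << "tunnel-style: id = " << seen_by_event_structure.id << "\n";   // 0

    // Shift-register-like behaviour (pre-2017, and the workaround): the value
    // is carried from one iteration to the next, so the registration survives.
    Registration carried = prototype;
    for (int i = 0; i < 3; ++i) {
        if (i == 0) carried = register_events();
        // nothing overwrites it between iterations
    }
    std::cout << "shift-register-style: id = " << carried.id << "\n";           // 42
}
```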
  7. The MDI toolkit uses a "container" to place the panels. In the example which is shipped with the toolkit there is also a limited area (below the controls); however, it is transparent. The container area is defined by the "Create Container" VI. You wanted a more visible container, so one simply creates a VI with the required appearance and supplies the "Container" input of the "Create Container" VI with the VI reference. I used the attached VI to override the default container, which is just a blank VI with a decoration. Untitled 2.vi Don't forget to also change the offset rect for the resize event, otherwise when you resize the panel it won't scale along with the panel.
  8. Well. An area to place them is trivial. As for "we displace indicators and orders", I'm not sure what you mean. You can rearrange the window order, move them around, tile and cascade them, dock and undock. But you'd have to give more detail on exactly what you mean.
  9. Hardly "rusty". Any software that is still at 1.0.1 after years of use is made of stainless steel. People just don't know how to release fully featured and tested software any more. Users have been conditioned to think that software that is continuously updated and fixed can only be good when, in reality, it is a testament to how poor the design, implementation and testing are.
  10. Many will not like the Handbrake licence.
  11. I've been doing C++ recently after a very long time. I'd almost forgotten how much pain is circumvented with LabVIEW case structures. No multi-line literals (you can't just paste a key into a page). No "switch" [case statement] with strings... period. Case sensitive whether you like it or not. Thank god it's almost over and then I can control it with LabVIEW.
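For what it's worth, the usual workaround I fall back on for the missing switch-on-strings is a lookup table of handlers, something like this (names are illustrative only):

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

// Emulate a string "case structure" with a map of handlers.
void dispatch(const std::string& cmd) {
    static const std::unordered_map<std::string, std::function<void()>> cases = {
        {"start", [] { std::cout << "starting\n"; }},
        {"stop",  [] { std::cout << "stopping\n"; }},
        {"reset", [] { std::cout << "resetting\n"; }},
    };
    auto it = cases.find(cmd);
    if (it != cases.end()) it->second();              // matched "case"
    else std::cout << "unknown: " << cmd << "\n";     // the "default" case
}

int main() {
    dispatch("start");
    dispatch("Stop");   // no match: lookups are case sensitive, like the language
}
```

Not as tidy as dropping a string straight onto a case selector, but it gets the job done.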
  12. Do you remember the "Callbacks" demo that I once posted? You can hook the application's or VIs' events (e.g. VI Activation) by injecting your own event into the VI externally. I don't recall there being an event for front panel open, but the VI Activation should fire and you can use the callback to filter for specific criteria (panel visible, etc.).
  13. Indeed. In fact, I've worked at places where whoever breaks the trunk buys pizza for everyone. The "wisdom" you speak of can be stated thus: "You can break a branch and the tree will still grow, but if you break the trunk the whole tree dies". This is still true for distributed SCC, where the staging is effectively an enforced branch. There's a lovely description using the nature/tree analogy for SVN which applies, IMO, to all SCC.
  14. I've written 8 drivers that supported slaves for various companies. The last 6 supported master as well. (Wow. Was the first one really in 1999?) You are able. You are just finally forced to do what smithd and I have already suggested.
  15. +1. You scan the bytes as they come in and start parsing when a byte equals the address. If the CRC doesn't check out, you discard. If it does, then pass it up. This method is very robust.
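A rough sketch of that receive loop in C++, assuming a Modbus-RTU-style CRC-16 and, for simplicity, a fixed frame length (real frames vary in length, and the class/field names here are just illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Standard Modbus CRC-16 (poly 0xA001, reflected), appended low byte first.
uint16_t crc16(const uint8_t* data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : (crc >> 1);
    }
    return crc;
}

// Feed bytes in as they arrive; returns complete, CRC-valid frames addressed to us.
class FrameScanner {
public:
    FrameScanner(uint8_t myAddress, size_t frameLen)
        : addr_(myAddress), frameLen_(frameLen) {}

    std::vector<std::vector<uint8_t>> push(uint8_t byte) {
        buf_.push_back(byte);
        std::vector<std::vector<uint8_t>> frames;
        while (buf_.size() >= frameLen_) {
            if (buf_.front() != addr_) {           // not our address: slide forward
                buf_.pop_front();
                continue;
            }
            std::vector<uint8_t> frame(buf_.begin(), buf_.begin() + frameLen_);
            uint16_t rx = frame[frameLen_ - 2] | (frame[frameLen_ - 1] << 8);
            if (crc16(frame.data(), frameLen_ - 2) == rx) {
                frames.push_back(frame);           // CRC checks out: pass it up
                buf_.erase(buf_.begin(), buf_.begin() + frameLen_);
            } else {
                buf_.pop_front();                  // CRC failed: discard and rescan
            }
        }
        return frames;
    }

private:
    uint8_t addr_;
    size_t frameLen_;
    std::deque<uint8_t> buf_;
};
```

The key property is that a corrupted or mid-stream start never locks the parser up: anything that doesn't frame and checksum correctly just gets shifted out one byte at a time.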
  16. I agree. There have only been two times that I have thought about upgrading my dev environment from 2009 - when the native JSON primitive came out and when I was given Linux versions of 2012. UX doesn't factor into any of this, for me. I moved to Websockets years ago and haven't looked back, being freed from that particular straitjacket.
That's a fair comment. But the rule of thumb is "never update until SP1", the premise being to let everyone else find the bugs and work-arounds, as they are not costed or planned for. Experience shows that SP1 is generally more stable than the initial release (evidenced from the known issues and bug fix documents). NI may be changing that, but that wasn't the case for 2009, and I still argue that that version is more robust, less buggy and more performant than any of the later versions.
Indeed. But here is the rub. I now use other languages for a UI (yields TCP/IP segregation). I'm using dynamic libraries for many of the drivers (both in-house and external) and LabVIEW is no longer doing much of the heavy lifting. It has become more of a swappable middleware development choice, albeit with many advantages. My only real requirement is that LabVIEW doesn't break the library interface or TCP/IP stack, and the prevalence of open source alternatives to NI-specific ones is getting better every year. For prototyping it's still, at the moment, the first tool I reach for. My toolkit built over many years means that I already have most "components" I need to develop industry-standard stuff, so most of the little tweaks and new features just make me ask myself "do I want to refactor that to take advantage?" Most of the time the answer is "no", since there is no real change (e.g. conditional tunnel) and the current stuff is optimised, proven and robust.
On this front I get the distinct impression that at NI the left hand has no idea what the right hand is doing. AQ recently asked me to converse with some people over there about issues installing LabVIEW.NET (yeah, I know they prefer to call it NXG). Now, I'm not particularly interested in that platform (for obvious reasons) but I can certainly outline the problems I had, so I said "Sure. Get them to talk to me through Basecamp". For those that don't know, Basecamp is a 3rd party project management platform that partners and toolkit developers are forced to use when communicating with NI. Several tumbleweeds go by, then I get an email from someone at NI asking about it, so I send them a Basecamp invite (to join the myriad of people at NI already signed up) in case they don't know how it works either (AQ had used Basecamp for a personal project, but never used it for NI communications). I never heard from them again. I also had an issue in the past where I was advised to contact NI Support by my NI Basecamp handler. Support wouldn't talk to me because I didn't have an SSP. It was only after further intervention by the handler that they got onto it. We are lucky to have AQ and friends peruse this forum and interact, but customer-facing NI systems are sometimes set up in ways that stop them being effective. I've probably said this before, or something similar: if you find an excellent applications/sales engineer, get their direct number and hold on to it like a limpet to a rock, because you need them to circumvent the systems' barriers.
Considering there are known bugs going back to 8.x that still aren't fixed (last time I looked), this is my view too. Even so, it's not good enough to "fix bugs". I need my bug fixed, even if I'm the only one reporting it. That is why I'm leaning more and more to open source solutions, because if they won't fix my bug, I will - and I don't care what language it is written in.
  17. You need to use gacutil.exe to get it into the GAC (e.g. gacutil /i YourAssembly.dll). It's like the old ActiveX days where you used to call regsvr32 (but different).
  18. The probability of side effects with new features is orders of magnitude greater than with fixing a bug that has been localised to a single function. The amount of effort is also disproportionate, as a bug will already have regression tests and only require a test (or maybe a couple, after error guessing) for that particular case. A new feature will also have more bugs in its test cases because that's also new code.
  19. No. Because that is where it "becomes a factor". Think of it like trying to go faster. Sure, you can increase the speed by increasing the power, but at some point you get diminishing returns. At 20 km/h you can quadruple your speed with double the power (say). At 200 km/h, if you double the power you get just a couple of km/h increase. It is not a finite tipping point but a scale of diminishing returns above a certain criterion (when the wind resistance becomes a significant factor, in the analogy). Add to that crap tires, out-of-spec parts and no service history, and maintaining top speed is impossible. This is why it's difficult to get across in a single VI: once that is added to the other factors, the whole system becomes a big black hole of effort, when in isolation any one aspect may seem fine. So OK. How about "Diminishing Returns.vi"?
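To put some rough numbers on the analogy (illustrative only, assuming a regime where aerodynamic drag dominates):

```latex
% Drag-dominated regime: required power grows with the cube of speed.
P \propto v^{3} \quad\Rightarrow\quad v \propto P^{1/3}
% So doubling the power buys only about 26\% more speed:
\frac{v_2}{v_1} = \left(\frac{2P}{P}\right)^{1/3} = 2^{1/3} \approx 1.26
```

The exact figures don't matter; the point is that the curve flattens out the harder you push.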
  20. Philosophically speaking, the whole software industry is going to get a big kick in the gonads in the not too distant future for abusing what was actually a fairly good idea. NI's history of lagging the trends by about 10 years means it will probably roll it out when the first strike hits home. The first red flag is the "customer experience" feedback, which immediately saw me reaching for process manager, regedit and firewall rules. M$ have abused it so much that there are now calls within the EU to legislate against it, and it has already become synonymous with spyware amongst users. It won't be too long until the software industry is viewed with the same contempt as banking.
  21. You have answered your own question, then. Your VI is an example of unscalable code, but it may well be maintainable for small or even medium-sized projects. You just need to quantify the "size" where it becomes a significant factor.
  22. "Unmaintainable" is an amalgam of factors including coupling, architecture, readability and compromises. I fail to see how you would demonstrate that in a succinct way with one VI. Any one on its own is maintainable with effort, but as the factors compound, the effort required increases exponentially until it is "unmaintainable". That is why many people struggle after the first couple of new feature requests when they grow their software organically.
  23. Indeed. That is one [of a couple] of reasons why I still develop in 2009, which is rock solid and arguably more performant than most, if not all, of the later versions. I only recompile and deploy in later versions as the customer requests (with one notable exception due to event refnums not being backwards compatible). Those later versions will always be SP1, for the stated reasons. If I hit a project-stopping bug during recompilation and deployment in the later versions, I always have the option to go back to a point before the bug was introduced. So far that has happened 5 times, which isn't much, but I would have been screwed if I didn't work this way. If you are saying that there will be no difference between SP1 versions and the release proper, then you should just drop the pretence that SP1 is a "service pack" and rename it to a "feature pack". Suppliers want to sell new stuff, bugs and all (software always has bugs, right? We just have to accept it as a fact of life!). Developers want both new and existing stuff to just work (new stuff has bugs, but they will be identified and addressed in time until there are none).
My development Windows boxes have automatic updates turned off. In fact, they have all the services associated with updates disabled - that's how much I distrust them. I now have to re-enable the services, find out which updates are being offered, then read the M$ website and manually select the updates to apply. If there is a "rollup" (aka SP) then it will be installed. The SP never went away; you were just able to get it piecemeal as it became available. But by god, do I have to jump through hoops to prevent "Father M$ knows best". Just take the latest furore with "Game Mode" as a recent example.
Of course, that issue only affects those who jumped right in to get the latest and greatest version. Time has passed. M$ have admitted there is a problem. It will be fixed, and then I might (only might, not definitely) update the one and only Windows 10 box. The experienced will wait. The impetuous tempt fate.
No. A feature is an implementation of a requirement (new code). A bug is a failure of an implementation (existing code).
  24. I see you have your marketing hat on. The reason for waiting until SP1 is for others to find all the bugs and for NI to address them. It is a function of risk management, and it is far better if no new features are added so that no new bugs arise. 6 months is actually a good time-frame to see what falls to pieces in a new version, what work-arounds are found and if NI will fix it. For this reason I encourage everyone to get the latest and greatest as soon as it hits the ground. If a driver/device isn't in a new version, then there is little reason to upgrade at all if you are dependent on it - so I don't buy that. If a project-critical feature is only added in SP1, then it is usually a case of wait for SP2 (for the aforesaid reasons). I've never seen one of those from NI, but you do get bug-fix updates occasionally, so it is a wait for one of those. The "wait for SP1" isn't a LabVIEW thing, by the way; it applies to any new version of a software toolchain (and used to apply to Windows too). Your argument for SP1 is feature-driven when, in fact, the hesitancy for the arrival of SP1 is performance- and stability-driven, irrespective of features.