Everything posted by ShaunR

  1. Many will not like the Handbrake licence.
  2. I've been doing C++ recently after a very long time. I'd almost forgotten how much pain is circumvented with LabVIEW case structures. No multi-line string literals (you can't just paste a key onto a page). No "switch" [case statement] with strings.....period. Case sensitive whether you like it or not. Thank God it's almost over and then I can control it with LabVIEW.
  3. Do you remember the "Callbacks" demo that I once posted? You can hook the application's or a VI's events (e.g. VI Activation) by injecting your own event into the VI externally. I don't recall there being an event for front panel open, but VI Activation should fire and you can use the callback to filter for specific criteria (panel visible, etc.).
  4. Indeed. In fact, I've worked at places where whoever breaks the trunk buys pizza for everyone. The "wisdom" you speak of can be stated thus: "You can break a branch and the tree will still grow, but if you break the trunk the whole tree dies". This is still true for distributed SCC, where the staging is effectively an enforced branch. There's a lovely description using the nature/tree analogy for SVN which applies, IMO, to all SCC.
  5. I've written 8 drivers that supported slaves for various companies. The last 6 supported master as well. (Wow. Was the first one really in 1999?) You are able. You are just finally forced to do what smithd and I have already suggested.
  6. +1. You scan the bytes as they come in and start parsing when a byte equals the address. If the CRC doesn't check out, you discard. If it does, then pass it up. This method is very robust.
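A sketch of that scan-and-resync approach in Python. The fixed frame length, the address sitting in the first byte, and the Modbus-style CRC-16 are assumptions made for illustration, not details from the thread:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

FRAME_LEN = 8  # hypothetical fixed frame size: address + payload + 2-byte CRC

def scan_stream(stream: bytes, address: int) -> list:
    """Scan a raw byte stream: start parsing wherever a byte equals the
    address, keep the candidate frame only if its trailing CRC checks out,
    otherwise slide one byte forward and resynchronise."""
    frames = []
    i = 0
    while i + FRAME_LEN <= len(stream):
        if stream[i] != address:
            i += 1
            continue
        candidate = stream[i:i + FRAME_LEN]
        received = int.from_bytes(candidate[-2:], "little")
        if crc16_modbus(candidate[:-2]) == received:
            frames.append(candidate)   # CRC good: pass it up
            i += FRAME_LEN
        else:
            i += 1                     # CRC bad: discard and resync
    return frames
```

Because a false address byte almost never survives the CRC check, the parser recovers by itself from line noise and partial frames, which is what makes the scheme robust.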
  7. I agree. There have only been two times that I have thought about upgrading my dev environment from 2009: when the native JSON primitive came out and when I was given Linux versions of 2012. UX doesn't factor into any of this for me. I moved to Websockets years ago and haven't looked back, being freed from that particular straitjacket.

That's a fair comment. But the rule of thumb is "never update until SP1", the premise being to let everyone else find the bugs and workarounds, as they are not costed or planned for. Experience shows that SP1 is generally more stable than the initial release (evidenced by the known-issues and bug-fix documents). NI may be changing that, but that wasn't the case for 2009 and I still argue that that version is more robust, less buggy and more performant than any of the later versions.

Indeed. But here is the rub. I now use other languages for a UI (which yields TCP/IP segregation). I'm using dynamic libraries for many of the drivers (both in-house and external) and LabVIEW is no longer doing much of the heavy lifting. It has become more of a swappable middleware development choice, albeit with many advantages. My only real requirement is that LabVIEW doesn't break the library interface or TCP/IP stack, and the prevalence of open-source alternatives to NI-specific ones is getting better every year. For prototyping it's still, at the moment, the first tool I reach for. My toolkit, built over many years, means that I already have most "components" I need to develop industry-standard stuff, so most of the little tweaks and new features just make me ask myself "do I want to refactor that to take advantage?" Most of the time the answer is "no", since there is no real change (e.g. the conditional tunnel) and the current stuff is optimised, proven and robust.

On this front I get the distinct impression that at NI, the left hand has no idea what the right hand is doing.
AQ recently asked me to converse with some people over there about issues installing LabVIEW.NET (yeah, I know they prefer to call it NXG). Now, I'm not particularly interested in that platform (for obvious reasons) but I can certainly outline the problems I had, so I said "Sure. Get them to talk to me through Basecamp". For those that don't know: Basecamp is a 3rd-party project management platform that partners and toolkit developers are forced to use when communicating with NI. Several tumbleweeds go by, then I get an email from someone at NI asking about it, so I send them a Basecamp invite (to join the myriad of people at NI already signed up) in case they don't know how it works either (AQ had used Basecamp for a personal project, but never used it for NI communications). I never heard from them again.

I also had an issue in the past where I was advised to contact NI Support by my NI Basecamp handler. Support wouldn't talk to me because I didn't have an SSP. It was only after further intervention by the handler that they got onto it. We are lucky to have AQ and friends peruse this forum and interact, but customer-facing NI systems are sometimes set up in ways that work against them being effective. I've probably said this before, or something similar: if you find an excellent applications/sales engineer, get their direct number and hold on to it like a limpet to a rock, because you need them to circumvent the systems' barriers.

Considering there are known bugs going back to 8.x that still aren't fixed (last time I looked), this is my view too. Even so, it's not good enough to "fix bugs". I need my bug fixed, even if I'm the only one reporting it. That is why I'm leaning more and more towards open-source solutions, because if they won't fix my bug, I will - and I don't care what language it is written in.
  8. You need to use gacutil.exe to get it into the GAC (gacutil /i yourAssembly.dll). It's like the old ActiveX, where you used to call regsvr32 (but different).
  9. The probability of side effects with new features is orders of magnitude greater than fixing a bug that has been localised to a single function. The amount of effort is also disproportionate as a bug will already have regression tests and only require a test (or maybe a couple after error guessing) for that particular case. A new feature will also have more bugs in its test cases because that's also new code.
  10. No. Because that is where it "becomes a factor". Think of it like trying to go faster. Sure, you can increase the speed by increasing the power, but at some point you get diminishing returns. At 20 km/h you can quadruple your speed with double the power (say). At 200 km/h, if you double the power you get just a couple of km/h increase. It is not a finite tipping point but a scale of diminishing returns above a certain criterion (when the wind resistance becomes a significant factor, in the analogy). Add to that crap tyres, out-of-spec parts and no service history, and maintaining top speed is impossible. This is why it's difficult to get across in a single VI: once that is added to the other factors, the whole system becomes a big black hole of effort, when in isolation any one aspect may seem fine. So OK. How about "Diminishing Returns.vi"?
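The analogy maps onto the textbook drag model, where required power grows with the cube of speed at high speed, so each extra watt buys less and less velocity. A minimal sketch of that shape (the cube-law exponent is the standard high-speed approximation, not a figure from the post):

```python
def speed_gain(power_ratio: float, exponent: float = 3.0) -> float:
    """Speed multiplier obtained from a given power multiplier,
    assuming drag power scales as P ~ v**exponent (cube law at high speed)."""
    return power_ratio ** (1.0 / exponent)

# At low speed (drag negligible, exponent near 1) doubling power nearly
# doubles speed; in the drag-dominated regime the same doubling of power
# yields only about a 26% speed increase:
low_speed_regime  = speed_gain(2.0, exponent=1.0)  # 2.0
high_speed_regime = speed_gain(2.0)                # ~1.26
```

The point is the shape of the curve, not the exact numbers: above the threshold where drag dominates, returns diminish rapidly, and the other factors (bad tyres, out-of-spec parts) only push the achievable ceiling lower.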
  11. Philosophically speaking, the whole software industry is going to get a big kick in the gonads in the not-too-distant future for abusing what was actually a fairly good idea. NI's history of lagging the trends by about 10 years means it will probably roll it out when the first strike hits home. The first red flag is the "customer experience" feedback, which immediately saw me reaching for Process Manager, regedit and firewall rules. M$ have abused it so much that there are now calls within the EU to legislate against it and it has already become synonymous with spyware amongst users. It won't be too long until the software industry is viewed with the same contempt as banking.
  12. You have answered your own question then. Your VI is an example of unscalable code, but it may well be maintainable for small or even medium-sized projects. You just need to quantify the "size" where it becomes a significant factor.
  13. Unmaintainable is an amalgam of factors including coupling, architecture, readability and compromises. I fail to see how you would demonstrate that in a succinct way with one VI. Any one of them on its own is maintainable with effort, but as the factors compound, the effort required increases exponentially until it is "unmaintainable". That is why many people struggle after the first couple of new feature requests when they grow their software organically.
  14. Indeed. That is one [of a couple] of reasons why I still develop in 2009, which is rock solid and arguably more performant than most, if not all, of the later versions. I only recompile and deploy in later versions as the customer requests (with one notable exception due to event refnums not being backwards compatible). Those later versions will always be SP1, for the stated reasons. If I hit a project-stopping bug during recompilation and deployment in the later versions, I always have the option to go back to a point before the bug was introduced. So far that has happened 5 times, which isn't much, but I would have been screwed if I didn't work this way.

If you are saying that there will be no difference between SP1 versions and the release proper, then you should just drop the pretence that SP1 is a "service pack" and rename it to a "feature pack". Suppliers want to sell new stuff, bugs and all (software always has bugs, right? We just have to accept it as a fact of life!). Developers want both new and existing stuff to just work (new stuff has bugs, but they will be identified and addressed in time until there are none).

My development Windows boxes have automatic updates turned off. In fact, they have all the services associated with updates disabled - that's how much I distrust them. I now have to re-enable the services, find out which updates are being offered, then read the M$ website and manually select the updates to apply. If there is a "rollup" (aka SP) then it will be installed. The SP never went away; you were just able to get it piecemeal as it became available. But by God, do I have to jump through hoops to prevent "Father M$ knows best". Just take the latest furore with "Game Mode" as a recent example. Of course, that issue only affects those who jumped right in to get the latest and greatest version. Time has passed, M$ have admitted there is a problem, it will be fixed, and then I might (only might, not definitely) update the one and only Windows 10 box.
The experienced will wait. The impetuous tempt fate. No. A feature is an implementation of a requirement (new code). A bug is a failure of an implementation (existing code).
  15. I see you have your marketing hat on. The reason for waiting until SP1 is for others to find all the bugs and for NI to address them. It is a function of risk management, and it is far better if no new features are added so that no new bugs arise. 6 months is actually a good time-frame to see what falls to pieces in a new version, what workarounds are found and whether NI will fix it. For this reason I encourage everyone to get the latest and greatest as soon as it hits the ground. If a driver/device isn't in a new version, then there is little reason to upgrade at all if you are dependent on it - so I don't buy that. If a project-critical feature is only added in SP1, then it is usually a case of waiting for SP2 (for the aforesaid reasons). I've never seen one of those from NI, but you do get bug-fix updates occasionally, so it is a wait for one of those. The "wait for SP1" isn't a LabVIEW thing, by the way. It applies to any new version of a software toolchain (and used to apply to Windows too). Your argument for SP1 is feature-driven when, in fact, the hesitancy for the arrival of SP1 is performance- and stability-driven, irrespective of features.
  16. It's not a feature. It is an internal class mechanic that you were never supposed to realise was there.
  17. You have all that time? They are not working you hard enough.

"Minor Updates ship roughly every six weeks. These updates may include new features, bug fixes, and changes needed to reflect platform changes (e.g. changes in Windows). You'll be able to tell which minor update you're running by opening the Help, About and reading the second digit of the version number, for example 15.1 or 15.2. Servicing Updates are very targeted releases that typically contain bug fixes and ship more quickly. These servicing updates can ship often (e.g. weekly). You'll be able to tell which servicing update you're running by opening the Help, About and reading the third string in the version, for example 15.1.x, 15.2.y." (source)

Not even in the same ball-park, I'm afraid. Here's the thing with a): changing versions is a huge project risk. You may get your old bug fixed (not guaranteed, though) but there will be other new ones, and anyone who converts mid-project is insane. In fact, I would argue that anyone who upgrades before SP1 is out is also insane. Requiring customers to buy a new version to fix a bug is, IMO, bordering on predatory. I expect you will find that most people that you say buy the new version are on an SSP, so they get it anyway even if they are using an older version. In fact, unless you have an SSP you can't even get to talk to anyone about bugs, as you cannot get past the gate-keepers. New entrants won't be looking for old bug fixes - they will expect (rightly or wrongly) for it to be bug-free, and they will be seen in these forums when they encounter one, as they can't wait 6 months.
  18. Because the release time-frame for bug fixes and LabVIEW itself doesn't help with the project at hand, and the fix may quite possibly be rolled up into a new LabVIEW release requiring a new purchase. So people ask some questions on the forums, find a workaround and move on. We are working on weekly timescales and NI is on 6-monthly ones. We need to resolve bugs within days, not months. A bug fix in 6 months' time might as well be never, especially if we have to buy a new LabVIEW version to get it.
  19. There seem to be two aspects to this discussion. 1. An inbuilt conditional case, activated for enabling/disabling user debugging code, predicated on existing VI settings - and finding other VI-specific settings to bolster this one case. 2. An argument for adding one (or more) additional built-in conditional disable structure defines, in the face of resistance to changing anything in the (probably fragile) codebase. If you win #2, you can do #1, but #1 is a very specific case that I certainly have no need for. However, I have another, non-VI-specific one that I would like to see the conditional disable structure support (ammunition for #2). I would like to see a case implemented for the "LabVIEW version", since I have several pieces of code that have to determine this at run-time and use a normal case structure to switch the code. The run-time gymnastics of this is disproportionate to the simplicity of a conditional disable for this purpose. In a couple of extreme cases it has required a post-process VI to be run on the target system, and scripting, to make modifications. This has meant that the VIs cannot be password protected and therefore a licence cannot be attached to them (I have had to ask for an NDA).
  20. I guess there is no interest in this as there were no respondents, so I have decided to not publicly release it and the HackRF one.
  21. In synchronous mode the function will return once the data has been received (or a timeout occurs), by sitting in the polling state until the data has been received (the function is blocked). In asynchronous mode it will return almost immediately and not wait in the polling state (non-blocking). The threading is just the mechanism to achieve this, so I don't know what you are trying to say with that statement.
  22. Choosing between Synchronous and Asynchronous. So synchronous is a blocking operation.
  23. Delete the "Bytes at Port" and rewire. Right click on the "Serial Read" and select "Synchronous I/O Mode>Synchronous" (the little clock in the corner of the Read icon will disappear). It will then block until the termination character is received and return all the bytes or timeout. This is the simplest way of getting up and running and you can look at asynchronous reading later if required.
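The blocking semantics described above can be mimicked outside LabVIEW too. A minimal Python sketch of a synchronous read that blocks until the termination character, a byte limit, or a timeout; the helper name and limits are invented for the example, and `port` is anything with a read(n) method, e.g. a pyserial Serial opened with a timeout:

```python
import io

def read_until(port, terminator: bytes = b"\n", max_bytes: int = 4096) -> bytes:
    """Synchronous (blocking) read: accumulate bytes until the termination
    character arrives, the byte limit is hit, or the underlying read times
    out. An empty read() result stands in for the timeout here, which is how
    a pyserial Serial with timeout=... behaves."""
    buf = bytearray()
    while len(buf) < max_bytes:
        b = port.read(1)
        if not b:                  # timeout/EOF: return what we have so far
            break
        buf += b
        if buf.endswith(terminator):
            break
    return bytes(buf)

# Stand-in for a serial port so the sketch runs without hardware:
print(read_until(io.BytesIO(b"OK,42\r\njunk"), terminator=b"\r\n"))
# -> b'OK,42\r\n'
```

The caller simply doesn't get control back until a complete message (or a timeout) arrives, which is exactly the simplicity that makes the synchronous mode the easiest way to get up and running.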
  24. Bullcrap. This is the real reason. Has the TFS sales rep visited recently?