Everything posted by ShaunR

  1. I think if you move to a DB, it will supply the decoupling and still give you all the same capabilities. A while ago I added a "Settings" library to the SQLite API which, I think, does exactly what you're describing - almost a direct replacement. You just place the VI on the diagram and you can load and save the FP controls to the DB. It also gives you "Restore to Default" and has a "query" function so you can return as many or as few parameters as you like (see the Panel Settings Example). I was also thinking about adding an "Update" event broadcast similar to your system, since for data-aware XControls it would be a very nice feature to have them all update on a change, and it would only take a couple of minutes to add.
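
    As a rough, hedged illustration of the idea only (plain Python/sqlite3 with a made-up "settings" key/value table, not the actual SQLite API "Settings" library), persisting and restoring panel values through a DB might look something like this:

        import sqlite3, json

        def open_settings(path):
            # One key/value table holds the current values and the defaults.
            db = sqlite3.connect(path)
            db.execute("CREATE TABLE IF NOT EXISTS settings "
                       "(name TEXT PRIMARY KEY, value TEXT, default_value TEXT)")
            return db

        def save(db, name, value):
            # Store each control value as JSON so any type round-trips.
            db.execute("INSERT INTO settings(name, value, default_value) VALUES (?,?,?) "
                       "ON CONFLICT(name) DO UPDATE SET value = excluded.value",
                       (name, json.dumps(value), json.dumps(value)))
            db.commit()

        def load(db, *names):
            # "Query" behaviour: return as many or as few parameters as you like.
            marks = ",".join("?" * len(names))
            rows = db.execute("SELECT name, value FROM settings "
                              "WHERE name IN (%s)" % marks, names)
            return {n: json.loads(v) for n, v in rows}

        def restore_defaults(db):
            db.execute("UPDATE settings SET value = default_value")
            db.commit()

    The real library works on the front panel controls directly, of course; the point is just that one table plus a handful of queries already gives you save, load, query and restore-to-default.
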
  2. The issue I have with most unit test frameworks is that they effectively double, triple or even quadruple your code base with non-reusable, disposable code - increasing effort and cost substantially, often for minimal gain (more code = more bugs). This means you are debugging a lot of code that isn't required for delivery, for a perceived peace of mind about regression testing. I'm skeptical that this is a good trade-off. I couldn't install the VIP since it is packaged with VIPM 2014+ (I'm still livid at JKI for that). I did, however, extract the goodness and look at it piecemeal, but couldn't actually run it because of all the missing dependencies. So if I've missed some aspects, I'll apologise ahead of time. It looks like early days as you find your way through. Some things you might want to look at:
     1. Instead of creating a VI for every test with a pass/fail indicator that you then scour for, look at "hooking" the actual VI's front panel controls (there is an example on here I posted a long, long time ago) and comparing against expected values and limits. This moves the pass/fail into a single test harness VI, much like a plug-in loader, and for non-iterative tests will give you results without using a template or writing test-specific code (see the sketch after this post).
     2. Consider making "Run Unit Tests" a stand-alone module. This will enable you to dynamically call it from multiple instances to run tests in parallel, if you so desire.
     3. Think about adding a TCP/IP interface later on so that you can interact with other programs like TestStand (forget all that crappy special TestStand Variable malarkey). This means that your test interface is just one of a number of interfaces that you can bolt on as the front end, and it doesn't even limit you to LabVIEW (how about a web browser?). With a little bit of thought, this can be extended later into a messaging API to cater for iteration, waiting and command/response behaviour. This, along with #2, is also the gateway to test scripting (think about how I may have tested the HAL Demo both as a system and as individual services).
     Edited: 'cos I can
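
    To make point 1 concrete (a hedged sketch in Python, not how the framework under discussion actually works): once expected values and limits are data, a single harness can run every case and produce pass/fail without a per-test VI or template:

        # Sketch of a single, data-driven test harness: each case is just data
        # (inputs, expected value, tolerance) and one loop produces pass/fail.

        def within_limits(actual, expected, tolerance):
            return abs(actual - expected) <= tolerance

        def run_harness(unit_under_test, cases):
            results = []
            for name, (args, expected, tolerance) in cases.items():
                actual = unit_under_test(*args)
                results.append((name, actual, within_limits(actual, expected, tolerance)))
            return results

        def scale(raw, gain):        # hypothetical stand-in for the VI under test
            return raw * gain

        cases = {
            "unity gain": ((1.0, 1.0), 1.0, 0.001),
            "x10 gain":   ((0.5, 10.0), 5.0, 0.001),
        }

        for name, actual, passed in run_harness(scale, cases):
            print(name, actual, "PASS" if passed else "FAIL")

    The same case data could then be fed to a stand-alone runner (point 2) or driven over a TCP/IP front end (point 3).
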
  3. OK, so what am I missing?
  4. (Re: Smash Call) I merely meant that scientists and engineers sat in laboratories really don't care about this sort of stuff. They just want their experiment/machine to work. They choose LabVIEW because they don't have to worry about pointers, memory, stacks and registers - all the crap other languages have to deal with - so the mindset of "how do I break/exploit this?" isn't there. With the introduction of the academic licences - not that long ago - LabVIEW has been exposed to more of those with exactly that mindset, and many are polyglots - not only on Windows but on Linux. I'm not saying you are in academia, just that because of the academic exposure we should see more black, white and grey hats targeting LabVIEW where previously there were next to zero.
  5. (Re: Smash Call) d) Nothing to do with LabVIEW. I think this is very valuable work but useless from a "LabVIEW for engineers" point of view. This is really just showcasing a (zero-day?) exploit that will hopefully be addressed in the next update, and the "feature" will disappear. I would have preferred a responsible disclosure of the exploit, and I expect this will get very little love from the community as I would guess that outside NI R&D the number of people who even understand it, let alone could leverage it, could probably fit in a small family car. It is, however, a splendid example worthy of DEFCON and CeBIT for demonstrating attacks on the niche language called LabVIEW. Hopefully we will see DEFCON videos of LabVIEW RT and PXI boxes being pwnd and machines taking people's arms and legs off. Maybe then the malaise and complacency around security in the LabVIEW community might be eroded somewhat. To be honest, I expected this sort of thing a while ago, but I guess we are only now reaping the benefits of the academic licences. Keep exploiting while I go for some more popcorn. I'm sure there are a couple of eyes on this work just itching to marry it with self-expanding VIs.
  6. Oooooh. Memory-mapped files as a DVR?
  7. You are not really talking about "Style", rather "State". There are two basic schools of thought: try to keep state locally in a model, or let the device keep the state. The former is fast but more error-prone (desynchronisation between your model and the actual device) while the latter is slower but more robust and easier to recover from. There are also shades in between the two which tend to depend on the device itself. If a device has multiple programmable config memories then a single command to switch recipes and a continuous value read may be all that is needed (e.g. motors with profiles). If you have 1,000 UI controls to turn an LED on, then you may have to selectively see what the user has changed, send the command appropriately and hope they didn't press the reset button on the device. My preference is device 1st, local model 2nd (when performance becomes an issue). This is mainly because you usually end up re-inventing the wheel, that is, recreating the logic of the device firmware in a local model, and some of them can be very complicated indeed. Some devices even have their own controllers, so it becomes more of a distributed control system. Then I like to chop the hands off operators and leverage advanced features of the device for configuration (recipes) if it has any.
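
    A hedged sketch of the "device 1st, local model 2nd" preference (Python, with a hypothetical command/query syntax and transport - illustrative only, not any particular driver):

        class Device:
            """Device-first: the instrument is the source of truth for its own state."""

            def __init__(self, link):
                self.link = link      # VISA/TCP/serial transport, assumed to exist
                self._cache = {}      # optional local model, used only for speed

            def set(self, name, value):
                self.link.write("%s %s" % (name, value))   # hypothetical command
                self._cache.pop(name, None)                # invalidate; don't assume success

            def get(self, name, allow_stale=False):
                # Ask the device rather than trusting a local copy; only fall back
                # to the cache when the caller explicitly accepts staleness.
                if allow_stale and name in self._cache:
                    return self._cache[name]
                value = self.link.query("%s?" % name)      # hypothetical query
                self._cache[name] = value
                return value

    The cache is the "local model 2nd" part: it only appears once reads become a performance problem, and it is always treated as potentially out of date (the operator may have pressed that reset button).
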
  8. The dialogues use the "Root Loop", and the message pump is probably halting before sending the message once the first dialogue is shown. Use your own dialogue, or set the string control to "Update Value While Typing", to separate the events. This should give you the result you are expecting.
  9. There are probably a few improvements it could benefit from. The Hunspell library only checks individual words, so the regex I used to split text may need tweaking, for example. As long as there are no show-stoppers, I'll leave it a month for people to play and make suggestions like yours, then I'll revisit.
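
    For what it's worth, the splitting problem looks roughly like this (a hedged Python sketch; the regex and the check_word callable are illustrative, not the actual code in the package):

        import re

        # Hunspell-style checkers only see single words, so free text has to be
        # split first.  This pattern keeps internal apostrophes ("don't") and
        # hyphens ("re-use") but drops surrounding punctuation; it would still
        # need tweaking for numbers, paths and CamelCase identifiers.
        WORD_RE = re.compile(r"[A-Za-z]+(?:['-][A-Za-z]+)*")

        def misspelled(text, check_word):
            # check_word is whatever word-level checker gets wrapped.
            return [w for w in WORD_RE.findall(text) if not check_word(w)]

        # Toy usage with a tiny vocabulary standing in for the real checker:
        vocab = {"this", "text", "may", "need", "tweaking"}
        print(misspelled("This text mai need tweaking.", lambda w: w.lower() in vocab))
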
  10. 1.0.1 has been released with the dependency removed. You should be good to go!
  11. Not really. There isn't much to it. The build environment was imported from a VC++ workspace into Code::Blocks, so I'll go through it so that external dependencies aren't required - I thought that was too easy...lol. In the meantime you can get a 32-bit binary from here to play with (rename it to libhunspellx32.dll), or with a bit more google-fu you can find others. They are all 32-bit, though. If there were 64-bit builds available, I wouldn't have bothered distributing them with the API.
  12. Mission accomplished!
  13. No. It's my bad. The thread is about IDE spell checking through a right-click plugin, so my use case is a superset. It was just the only Lavag thread that was about spell checking. There is an external spell checking library (scroll up for the link). It works very well. The problem is it's LGPL, GPL or MPL (you have a choice), but they all burden the distribution of binaries with providing source (for a couple of years). So I'm umming and ahing about not supplying the binaries and just supplying the LabVIEW code as BSD-3. Then whoever wants to can build their own binaries or find pre-built ones elsewhere. The problem there is that it will make it almost unusable for the many LabVIEW people who are using 64-bit if I don't supply them.
  14. I have four use cases:
     1. Add spell checking to "Project Probe" and integrate the "VI Documenter" with auto-correct.
     2. Spell check readme and changelog files.
     3. Create an XControl (string control) with "as you type" spell checking built in.
     4. Add spell checking to a report generator someone has asked me to create.
     So far, looking good.
  15. No. I want a spell checker that I can use in applications on any text - not just in the LabVIEW IDE. That's very short-sighted. Time to write a wrapper for Hunspell, I suppose.
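
    A minimal sketch of what such a wrapper might look like outside LabVIEW (Python ctypes against the standard C entry points exported by libhunspell; the library filename and dictionary paths are assumptions):

        import ctypes

        lib = ctypes.CDLL("libhunspell.so")            # or the Windows .dll
        lib.Hunspell_create.restype = ctypes.c_void_p
        lib.Hunspell_create.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
        lib.Hunspell_spell.restype = ctypes.c_int
        lib.Hunspell_spell.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
        lib.Hunspell_destroy.argtypes = [ctypes.c_void_p]

        handle = lib.Hunspell_create(b"en_US.aff", b"en_US.dic")
        try:
            print(lib.Hunspell_spell(handle, b"labratory"))   # 0 = misspelled
            print(lib.Hunspell_spell(handle, b"laboratory"))  # non-zero = OK
        finally:
            lib.Hunspell_destroy(handle)

    In LabVIEW the same few entry points map naturally onto Call Library Function Nodes, which is presumably what the wrapper ends up being.
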
  16. Not a version issue. Maybe I put my surprise in the wrong thread, since the only references my foul-fu could find were for VI Analyzer and this was the only one I could find on Lavag.org for spell checking. What I meant was "do we still not have a spell checking function we can use in LabVIEW?". Shipping VI Analyzer with the application just for spell checking is a bit silly, IMO.
  17. Hmmm. Do we really not have a spell checking solution apart from VI Analyzer (which isn't really a solution)?
  18. Nope. I just said all SCCs are crap for LabVIEW, including Git. Rolf was comparing Git and SVN features. That comment was a friendly dig at odoylerules, who loves Git and thinks my SCC system is copying zip files. This is the same problem that came up with people wanting to know how to attribute the OpenG software for licensing. Various people suggested similar things to you, but I'm not sure if an authoritative answer was ever proffered.
  19. The DFIR was introduced in 2009 and LLVM in 2010, I think. Anyway, enough off-topic shenanigans. Back to the discussion of "my SCC could take your SCC in a fight". I think the answer as to whether things should move to Git probably comes down not to features, but to preferences. If 90% of people prefer Git because all their other projects are there, then why not? If no one cares apart from 1 or 2 zealots, then there is little point. Maybe a straw poll? There is one input that has not been considered, though: the amount of effort required of the OpenG team to actually move it, and whether it would have a negative impact on the existing contributors, who may be reluctant to learn a new platform.
  20. I don't see why all VI terminals can't have an option to adapt to type. We have it for the CLFN, where it is very useful. It'd be great if it were just a checkbox on the connector pane like "Required" or "Dynamic Dispatch". Then it would be easy to fix the type of controls/indicators and mix-and-match adaptive and static types, and we wouldn't even need a special type of VI/XNode.
  21. Well, a bit off topic, but it deserves a response. Two points here. First, they have already figured out how to identify and merge differences - in fact they figured it out a long time ago, then stopped. That's 80% of the problem already solved, IMO. Second, I've said it before and I'll say it again: I don't care that the features I want are hard. It's closed-source, paid-for software, so it's up to NI to figure out how to make it happen; not for you or me to be technical apologists for them. Anything is possible with enough resources, time and the will to do it. The only thing lacking is the last one. As far as I can tell, NI stopped doing hard or interesting stuff in LabVIEW around 2009 (maybe even 8.x) and have been serving us IDE placebos ever since - just look at what's been implemented from the Idea Exchange!
  22. This is a bit different in that LabVIEW generates an exact representation of the LabVIEW panel by scanning the VI FP and essentially converting controls and indicators to HTML and JavaScript - the panel is automagically created for any application. A pure HTML/JS UI to a LabVIEW back end is much easier, but you have to create the page for the application.
  23. I suggest you count to 10 then re-read what I wrote.
  24. That's just regurgitating the spiel. The power of Git isn't in forking or cloning - you can do that by copying a zip file. It is in having a local repository in which you can make incremental commits and checkouts that can also be synchronised with the main trunk - offline AND online commits, or "staging" as it is called. Other SCCs all have to be online to make commits. The point I (and, I think, Rolf) am making is that all SCCs, including Git, are not fit for purpose when it comes to LabVIEW, so why change from one inadequate source control to another inadequate source control? It's not as if we have never used Git. It's just that it is as crap as all the others for LabVIEW. I certainly end up using them purely for backup and restore due to LabVIEW peculiarities, and Git is a very complicated backup and restore. What's really needed is for NI to get their finger out and give us a proper LabVIEW SCC system instead of faffing around with IDE cosmetics.