Everything posted by ShaunR
-
Best Practices in LabVIEW
ShaunR replied to John Lokanis's topic in Application Design & Architecture
Possibly. Perhaps it was worded ambiguously, since I did not mean to imply that the developer should never write any code to verify his software, but that it should not be used as the formal testing process. Most developers want to develop "bug-free" software and it's useful for them to automate common checks. But I am arguing that this is for the developer to have confidence in his code before proffering it for formal acceptance. The formal acceptance (testing) should be instigated by a third party that designs the tests from the documentation; relying on the developer's test harness for formal acceptance is erroneous for the previously stated reasons. I think this is probably where we diverge. My view is that "that" set of tests is irrelevant. It is always the "customer" that designs the test (by customer I mean the next person in the deliverables chain - in your case, I think, production). The tests are derived from the documentation, and the principle is that you have two separate and independent thought processes checking the software: one at the development level and - after RFA (release for acceptance) - one at the acceptance level. I should point out that when I'm talking about acceptance in this context, I just mean that a module or identifiable piece of code is marked as completed and ready to proceed past the next gate. If the test harness that the developer produced is absorbed into the next level after the gate, then you lose the independence and the cross-check. If the code didn't pass the developer's checks (whether he employs a test harness, visual inspection or whatever) then it wouldn't have been proffered for acceptance - the developer already knows it passes his checks. -
That's probably why we disagree so often. Complex requirements can be broken down into many simple solutions that together solve the complex one.
-
Or just pass the refnum since they are all the same (only have to be similar).
-
VISA read raw data from USB board Analog Devices ADISUSBZ
ShaunR replied to eric_lmi's topic in LabVIEW General
Raw USB in LabVIEW is very "tricky". There is no real defined standard and the process of actually getting something usable is fraught with problems. If you need USB it's much better to go for a device that supports a virtual serial interface; raw USB in LV is (IMO) to be avoided at all costs. However, I think from what you're saying that you have read the NI tutorial (you talk about creating a driver in the wizard), so I will add the caveat that, generally, a USB driver cannot exist side-by-side with VISA (i.e. you must uninstall and completely remove the vendor's driver). Your problem has been discussed here before. I'm not sure that a resolution was ever found, but here is the link in the hope it provides something useful. That's about all I can offer, I'm afraid. -
Filling a cluster with strings and arrays dynamically
ShaunR replied to jbone's topic in Calling External Code
I find it far too slow. It's a shame it's an XNode. It's password protected and I wanted to find out how it determines the length of a string before it dereferences it (does it iterate one char at a time and check for null?). They don't mention how to do that (dereference a variable-length string) in the MoveBlock documentation. -
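The one-char-at-a-time scan speculated about above can be sketched in Python with ctypes (the XNode's real mechanism is unknown - this is just the strlen-style approach, assuming a NUL-terminated string at a known address):

```python
import ctypes

def c_string_length(address, max_len=4096):
    """Scan memory one byte at a time until a NUL terminator is found,
    mimicking how strlen dereferences a C string pointer."""
    length = 0
    while length < max_len:
        if ctypes.c_ubyte.from_address(address + length).value == 0:
            return length
        length += 1
    return max_len

# Demonstration against a buffer we own (never scan arbitrary addresses):
buf = ctypes.create_string_buffer(b"hello")
print(c_string_length(ctypes.addressof(buf)))  # 5
```

The `max_len` guard is there because dereferencing past the allocation is exactly the crash risk the post is worried about.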
Naaaah. Dispatcher 1.0 does 90% of that. Dispatcher 1.2 does 100% (including to and from browsers, after a very nice little thread a couple of weeks back).
-
Best Practices in LabVIEW
ShaunR replied to John Lokanis's topic in Application Design & Architecture
Indeed. It is more risk management than a no-bugs solution. The mere fact that you are writing more code (for the purpose of testing) means that even your test code will have bugs, so software that tests software actually introduces the risk that you will expend effort finding a solution to a non-existent bug in the main code. Unit testing (white-box and black-box) has its place, but it is only one of a number of methods that should be employed, each to a greater or lesser extent. We mustn't forget systems testing, which tests the interaction between modules and the fitness for purpose, rather than whether an individual module actually does what it is designed to do. The main issue for any testing, though, is that the programmer who created the code under test "should" never be the person who tests it, or writes any code that tests it. The programmer will always design a test with an emphasis on what the module is supposed to achieve, to prove that it meets the design criteria - that's his/her remit. Therefore the testing becomes weighted towards proving the positive rather than the negative (relying on error guessing alone), whether it's a software testing solution or not. It's the negative (unanticipated) scenarios where the vast proportion of bugs lie, and to expect the programmer to reliably anticipate the exceptions when he/she is inherently focused on the operational aspects is unrealistic and (probably) the biggest mistake most companies make. -
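The positive-vs-negative weighting can be shown with a deliberately contrived toy (everything here is hypothetical - the function, the "documentation", the bug):

```python
def scale_reading(counts):
    """Hypothetical module under test. The (imaginary) documentation says:
    every reading is scaled by 10, positive or negative."""
    # Bug hiding in the unanticipated path: negatives are never scaled.
    return counts * 10 if counts >= 0 else counts

# The developer's test proves the positive - it meets the design criteria:
assert scale_reading(4) == 40

# An independent tester, working only from the documentation, probes the
# case the developer never anticipated:
print(scale_reading(-4))  # -4, not the documented -40
```

The developer's harness passes and the module gets proffered for acceptance; only the second, independently derived test exposes the bug.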
Wouldn't bother me. Although it's a bit worse than that, because it's an extinction-level event. In that case it just means I'll die a few months after those in the US, and those in Australia a little after that. But if things work out right it might counter Global Cooling, when they decide that's the next money maker and companies are paid to pump CO2 into the atmosphere.
-
And here's the events version (stealing JCarmody's boolean logic). The advantage of JCarmody's is that it works anywhere on the screen, whereas the events version only works on the FP of the VI that has the code. The events version's only advantage is that it is a little more efficient in terms of CPU.
-
The really old way (before events, queues etc.) might be easier to visualise. It used two global variables (data pools). The UI would write to one of the globals to configure and control the acquisition, and all the acquisition stuff would write to the other to update the data in the UI. (Completely asynchronous, non-blocking and damned fast - not to mention a built-in system-wide probe...lol.) So the UI was completely decoupled from the acquisition, spending most of its time just polling its global to update the screen. Basically, all it means is removing execution dependency between the UI and other parts of the code, usually via an intermediary interface. The inverse, I would imagine, would be something like a sequence structure with the acquisition in the first frame and the indicators in the last frame.
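A minimal sketch of the two-data-pool pattern, using Python threads and dicts in place of LabVIEW globals (the names `control`/`data` and the fake 3.3 V reading are all invented for illustration):

```python
import threading
import time

# Two "globals" acting as data pools, as in the pre-events pattern:
control = {"run": True, "rate_hz": 10}   # UI -> acquisition
data = {"latest": None, "count": 0}      # acquisition -> UI

def acquisition():
    # Acquisition loop: writes results into the data pool, reads its
    # configuration from the control pool. No direct link to the UI.
    while control["run"]:
        data["latest"] = 3.3             # pretend we read a voltage
        data["count"] += 1
        time.sleep(1.0 / control["rate_hz"])

t = threading.Thread(target=acquisition)
t.start()

# The "UI" just polls the data pool - fully decoupled, non-blocking:
for _ in range(3):
    time.sleep(0.05)
    print(data["count"], data["latest"])

control["run"] = False   # UI writes to the control pool to stop it
t.join()
```

Neither side ever calls the other; removing that execution dependency is the whole point of the pattern (at the cost of polling and of no flow control).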
-
Best Practices in LabVIEW
ShaunR replied to John Lokanis's topic in Application Design & Architecture
This is going to be painful. Not so much because you are re-factoring code (many of us do that all the time), but because you are switching paradigms, so it's going to be a complete rewrite and you won't be able to keep perfectly good, working and tested code modules (even the worst programs have some). But the good news is: there will still only be one person that understands the code, only it won't be the other guy. I usually find one of the hardest initial steps is deciding where to start. I strongly recommend you don't do it all in one go, but rather use an iterative approach. Identify encapsulated functionality (e.g. a driver) and rewrite that, but maintain the same interface to the rest of the code (to begin with). This way you will be able to leverage existing test harnesses and, once that module is complete, still be able to run the program for systems tests. Then move to the next. At some point you will eventually run out of single nodal points and find that you need to modify the way the modules interact to enable you to realise your new architecture. But by that point you will have gotten over the initial learning curve and will be confident enough to make much riskier changes whilst still having a functioning application. The big bonus of approaching it this way is that you can stop at virtually any point if you run into project constraints (run out of time/budget, another project gets higher priority, you contract a serious girlfriend etc.) and still have a functioning piece of software that meets the original requirements. You can put it on the shelf to complete later, but still sell it or move it into production or whatever you do with your software. -
I'm still waiting for the 2009 SP2. Go Paul.
-
That's what the MCL should be as a control (your picture), without us having to jump through hoops and use hacks to emulate proper controls. It's about time NI stopped faffing with blue-sky stuff and put more effort into the core features that everybody uses and that have needed development for the last 5 years (controls, events, the installer, more integrated source control support (SVN, Mercurial)... et al.).
-
I don't want to hijack Daklu's thread (too important). I have to do acquisition stuff all the time. I used to do the classic acquire-process-display or producer-consumer. Now I just run acquisition daemons direct to the DB and completely decouple the acquisition from the processing and reporting. It's much more flexible and means you can swap reporting modules in and out (even while live acquisition is taking place) without having to manage the transitions or worry about filling queues, memory or synchronising. However, direct streaming is only feasible up to about 500 samples per second, reliably, without (as you rightly say) some intermediary processing and buffering. But that is part of the daemon, so you don't lose any of the benefits. I think your proposal would work extremely well, since you can decide how much processing you want to put on either side of the divide (it could be staged and split).
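A toy version of the daemon-to-DB idea, sketched with Python's sqlite3 and threading (the table layout, channel name and 3.3 V sample are invented; a real daemon would buffer and batch its inserts):

```python
import sqlite3
import threading
import time

# One shared in-memory DB; a lock serialises access across threads.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE samples (t REAL, channel TEXT, value REAL)")
lock = threading.Lock()
stop = threading.Event()

def daemon():
    # Acquisition daemon: streams straight into the DB. No queues between
    # acquisition and reporting to size, fill or synchronise.
    while not stop.is_set():
        with lock:
            db.execute("INSERT INTO samples VALUES (?, ?, ?)",
                       (time.time(), "ai0", 3.3))
            db.commit()
        time.sleep(0.01)

t = threading.Thread(target=daemon)
t.start()
time.sleep(0.1)

# A reporting module reads in parallel, even while acquisition is live:
with lock:
    n = db.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
print(n > 0)

stop.set()
t.join()
```

Swapping the reporting module means nothing more than pointing a different reader at the same table - the daemon never knows or cares.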
-
That's fairly painless with a database. Log all the raw data to the DB, make your filter re-entrant (notch/bandpass filter?) and just operate it on SELECTs from the database as many times as you like - in parallel, with different settings. Any channel, any number of data points, any frequency - one re-entrant filter (simple case, of course). Compare channels, cross-correlate... you name it. You also don't need to keep state information or bung up your memory with a huge history. Just a thought.
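The same-filter-many-settings idea in miniature, with sqlite3 and a moving average standing in for the notch/bandpass filter (table, channel name and data are all made up):

```python
import sqlite3

# Log the "raw" data once.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw (idx INTEGER, channel TEXT, value REAL)")
db.executemany("INSERT INTO raw VALUES (?, ?, ?)",
               [(i, "ai0", float(i % 5)) for i in range(100)])

def moving_average(samples, window):
    """Stand-in for the re-entrant filter: stateless, so the same function
    can be applied to any SELECT result with any settings."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

rows = [v for (v,) in db.execute(
    "SELECT value FROM raw WHERE channel = 'ai0' ORDER BY idx")]

# Same data, same filter, different settings - rerun as often as you like:
smooth3 = moving_average(rows, 3)
smooth10 = moving_average(rows, 10)
print(len(smooth3), len(smooth10))  # 98 91
```

Because the raw data stays in the DB and the filter holds no state, there is no history buffer to manage; every analysis is just another SELECT plus a filter setting.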
-
Just curious... In terms of code metrics, how much is framework and how much actually does something in the physical world (e.g. reads/writes from a device, updates a display, plays Mozart on a xylophone etc.)? Any program needs its infrastructure (of course). I'm just wondering what the trade-off is (you don't want to write 100 VIs just to turn an LED on/off, for example).
-
Why does it take so long for LV to respond after dropping a VI?
ShaunR replied to george seifert's topic in LabVIEW General
Text Output Object -
You could attach a callback to the status event and invoke your dialogue on one of the status changes. Here's an example of hooking the status event. You might have to browse for the WMM ActiveX control again if you are using LV32 (I'm using x64).
-
Why does it take so long for LV to respond after dropping a VI?
ShaunR replied to george seifert's topic in LabVIEW General
Ditto -
Well, if "idle" has a plethora of others (which I strongly suspect - how many?), I'd call it a mess. But from the list I'd go for "SM with event structure", in the same sense that I'd go for an SM with an emergency stop, an SM with a foot switch and, indeed, any input stimulus to the actual state machine. I know, 'twas a bit below the belt. Especially because there is a very good example of it in the SQLite examples.
-
Why does it take so long for LV to respond after dropping a VI?
ShaunR replied to george seifert's topic in LabVIEW General
I found a mass compile of the LabVIEW directory sorted out most of this problem in 2009 and, generally, the 32-bit to be a complete slug compared to the 64-bit - on a 64-bit machine. Mass compile doesn't do anything for 2010. It's still a slug on valium. -
An event structure isn't a state machine. Oops, I forgot - you abuse it as one by firing off ValSig (been known to do that myself on occasion...lol). But linked tunnels work on it too.
-
Not really off topic - the OP did ask about state machines and what other people use. I'm with you on the STD (sexually transmitted disease). I was generally referring to those that save a file using 20 case frames instead of just having "save" (because a certain tool makes it easy to do). I've never found a need for more than 10 in control systems (well, I think there was a 12 once, but I eventually got it to less than 10), simply because I do as you do: the state machine goes across the diagram and INTO the diagram (I think that's what you are saying). OK, some state transitions aren't "kosher" (to get back to the next level it might rattle through a few states, basically bypassing them and doing a NOP), but that's dataflow for ya. I actually find LabVIEW pretty good for realising multi-planar machines, where I equate each plane with a level in the VI hierarchy - seems intuitive to me. Sure, there are a few implementation problems, but it's much easier to debug a single branch in isolation than a fat, wide one with 100 states.
-
TCP/IP parameters
ShaunR replied to Bobillier's topic in Remote Control, Monitoring and the Internet
Simple? Define simple. The IP address you can get using the function in the palette. The others (including the IP) you will find in the registry under "SOFTWARE\MICROSOFT\WINDOWS NT\CURRENTVERSION\NetworkCards". However, the "proper" way is to call WSAIoctl with the SIO_GET_INTERFACE_LIST flag, or its .NET equivalent (there's probably a WMI equivalent too).
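For comparison, here is a rough cross-platform approximation in Python's standard library - a sketch only, not the registry or WSAIoctl route described above, and it reports whatever the resolver knows rather than a true per-adapter interface list:

```python
import socket

# Ask the resolver for every address registered to this host name.
hostname = socket.gethostname()
try:
    _, aliases, addresses = socket.gethostbyname_ex(hostname)
except OSError:
    addresses = []  # resolver unavailable (e.g. an offline machine)
print(addresses)

# On POSIX systems the interface names themselves are also available:
if hasattr(socket, "if_nameindex"):
    print([name for _, name in socket.if_nameindex()])
```

Anything beyond addresses and names (MAC, mask, gateway, DNS servers) still needs the registry, WSAIoctl or WMI as described above.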