Everything posted by ShaunR
-
Ditto. In fact, my implementations of state machines have only one piece of data passed from case to case: the next state to execute (a single enum). Anything else is either gleaned from functional globals or from files. No clusters whatsoever are used to transfer info from one state to another. If a particular state is reliant on previous state information, then it is highly probable that the two can be serially linked the good old-fashioned way.
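For illustration only (LabVIEW itself is graphical, so this is just a textual analogue with invented state names), a minimal C sketch of that topology: the only value carried around the loop is the next state to execute.

```c
#include <stdio.h>

/* Hypothetical states - the enum is the ONLY data carried around the loop. */
typedef enum { ST_INIT, ST_ACQUIRE, ST_PROCESS, ST_SHUTDOWN, ST_DONE } state_t;

int main(void)
{
    state_t next = ST_INIT;     /* equivalent of the single shift register */

    while (next != ST_DONE) {
        switch (next) {
        case ST_INIT:     /* read settings from file / functional global */
            next = ST_ACQUIRE;  break;
        case ST_ACQUIRE:  /* take a measurement, stash it in a functional global */
            next = ST_PROCESS;  break;
        case ST_PROCESS:  /* crunch and log the stashed data */
            next = ST_SHUTDOWN; break;
        case ST_SHUTDOWN: /* tidy up */
            next = ST_DONE;     break;
        default:
            next = ST_DONE;     break;
        }
    }
    printf("done\n");
    return 0;
}
```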
-
Let me get this right in my limited stack; correct me where I've got the wrong end of the stick. You have a Master Test System (written by a 3rd party, not yourself) that manages a whole raft of tests including yours. This system communicates its desires (test No./name, number of times to execute, pass/fail criteria) that your "Executive" should execute via some sort of "translation" interface. Your executive goes away, tests the sub-system and then returns the results back to the Master System: a discrete test that the Master System must wait for, where the Master System knows all and your sub-test just takes parameters and returns results.

This is how it appears after reading, and it is fairly straightforward except for what you mean by "Executive". Many people use "Executive" and "Sequencer" synonymously. I tend to see the difference as an "Executive" manages "sequences", so in my mind your Master Test System would be an "Executive" and your sub-test(s) would be sequence(s), and by that definition you only ever have one "Executive".

But the above seems a little oversimplified (I only get a sense of this), since anyone going to the lengths of incorporating multiple languages and defining XML interfaces to sub-tests probably has a more flexible system in mind. Certainly in similar systems I have worked on, the "sub-test" defined in the Master Test System is an entry point into a number of sub-tests. So in your example the entry point would be "RF Tests" and there would be "sub-sub-tests" like Output Power, Carrier Drift, Power Density etc.

The question here is where the partitioning is and how simple you want the configuration of the "Executive/Master Test System" to be. Do you still want all the parameters defined in the Master System (a huge amount of configuration information for EVERY test), or a simplified alias system where parameters are handled locally by each test? The latter is the preferred topology in production environments where different tests are maintained/designed by different teams, keeping the "Executive" simple and distributing the maintenance workload across teams.
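To make the alias idea concrete, here is a hedged C sketch (all names, aliases and limits are invented for illustration): the Master only hands over an alias string, and the test code owns its own locally maintained parameter table.

```c
#include <stdio.h>
#include <string.h>

/* Locally maintained parameters; the Master Test System never sees these. */
typedef struct {
    const char *alias;      /* the only name the Master knows */
    double      limit_low;  /* pass/fail limits owned by the test team */
    double      limit_high;
    int         repeats;
} test_params_t;

static const test_params_t local_table[] = {
    { "RF Output Power",  9.5, 10.5, 3 },
    { "RF Carrier Drift", -1.0,  1.0, 1 },
};

/* Entry point the executive calls: alias in, result and pass/fail out. */
static int run_test(const char *alias, double *result, int *passed)
{
    for (size_t i = 0; i < sizeof local_table / sizeof local_table[0]; i++) {
        const test_params_t *p = &local_table[i];
        if (strcmp(p->alias, alias) == 0) {
            *result = 10.1;       /* placeholder measurement */
            *passed = (*result >= p->limit_low && *result <= p->limit_high);
            return 0;
        }
    }
    return -1;                    /* unknown alias */
}

int main(void)
{
    double r;
    int ok;
    if (run_test("RF Output Power", &r, &ok) == 0)
        printf("RF Output Power: %.2f (%s)\n", r, ok ? "PASS" : "FAIL");
    return 0;
}
```

The point is only the partitioning: everything below the alias lives with the team that owns the test, so the Master System stays simple.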
-
Sounds like a good excuse to get an SSD.
-
Seems a bit arse about face to me. Call me old-fashioned, but..... I start with a Word document (detailed design spec) and define flow/transition diagrams for each state machine, then code from that with plenty of comments and references back to the original spec. If you want, you can copy and paste the transition diagrams into the VI, but I don't bother since anyone modifying it should at least be able to read a spec.
-
We are using the Labview OCR on a machine at the moment. As long as you have high contrast and train each character/number at several lighting levels (or flood the area with strong lighting to counter the ambient light), it is quite robust.
-
Change the rotate to a logical shift.
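In textual terms (a C sketch on an 8-bit value; 0xC1 is just an example input), the difference is whether the bit that falls off the end is discarded or wrapped back in:

```c
#include <stdint.h>
#include <stdio.h>

/* Logical shift left: the top bit is lost, a zero comes in at the bottom. */
static uint8_t lshift1(uint8_t x) { return (uint8_t)(x << 1); }

/* Rotate left: the top bit re-enters at the bottom. */
static uint8_t rol1(uint8_t x)    { return (uint8_t)((x << 1) | (x >> 7)); }

int main(void)
{
    uint8_t v = 0xC1;                        /* 1100 0001 */
    printf("shift : 0x%02X\n", lshift1(v));  /* 0x82 - top bit discarded */
    printf("rotate: 0x%02X\n", rol1(v));     /* 0x83 - top bit wrapped */
    return 0;
}
```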
-
QUOTE (crelf @ Jun 5 2009, 06:07 PM) Semantics. If we want to be technically correct, I think I could have said "Binary Distributions". But the sense was correct. I would (tentatively) suggest that your "distributions" come under the heading "Tools for programmers" anyway. That is a small market in comparison to "compiled and configured" executable/binary distributions, especially in an environment which has historically been very open with source contributions.
-
how to change specific color of an image
ShaunR replied to horverno's topic in Machine Vision and Imaging
I'd use a different colour than turquoise!
-
QUOTE (hooovahh @ Jun 2 2009, 10:46 PM) It means there are too few smileys on this forum
-
Post the DLL/source and test harness.
-
QUOTE (PaulG. @ Jun 2 2009, 09:28 PM) I know what you mean. I guess it depends on what is meant by verbose. add r2, r3, r4 is less verbose than (say) MyEnormouslyLongResultName := MyEnormouslyLongVariableName + MySecondEnormouslyLongVariableName. That's why pictures are better :beer: + :camera: =
-
QUOTE (jlokanis @ Jun 2 2009, 06:35 PM) Like this one

QUOTE (JCFC @ Jun 2 2009, 05:36 AM) Hi to all. I read this in Slashdot: Comparing the Size, Speed, and Dependability of Programming Languages (http://developers.slashdot.org/article.pl?sid=09/05/31/1423203). I have a question: can Labview beat those programming languages? How does Labview perform doing those tasks?

Interesting. Most of the languages I've never heard of, and quite a few common ones are missing. I would like to have seen Assembly in that mix since it's one of the least verbose and fastest, so I guess it would be close to the "ideal". I think the title is a bit misleading though, since really it is a test of compiler optimization rather than of the language.
-
QUOTE (PaulG. @ Jun 1 2009, 05:16 PM) This also works with DAQ tasks.
-
QUOTE (jdunham @ Jun 1 2009, 07:27 PM) Thanks for identifying the exact location (the PC I wrote the reply on didn't have Labview, so I couldn't check).

QUOTE Currently I am using 7.0 (so that may be an issue), but I will head over to my other computer and check out 8.0.

I've been using it since LV version 2.x, so it will be in 7.0.....somewhere (follow JD's path). It really is all you need (unless you are going to go to parallel comms) and works on all Windows versions.
-
QUOTE (Val Brown @ May 30 2009, 10:21 PM) Because it is the programmers that argue in budget meetings to maintain the SSPs. Programmers that buy the latest versions to take advantage of new technologies (do you really think non-programmers would write OOP Labview?). And it is programmers that non-programmers rely on to help them understand Labview (like this board). It is also programmers that NI rely on to beta test, so they can take advantage of a "Free" resource, and to be considered a second priority I (quite frankly) find insulting. This is typical "Microsoft Mentality". Most non-programmers only use Labview if it is already there and rarely get past modifying an example to achieve a specific result. That sentiment belongs 10 years in the past, and QA should know better than to state it even if he thinks it.

QUOTE (PeterB @ May 31 2009, 06:54 AM) I can't see the light .... I'm not as excited as everyone else seems to be about the possibilities of scripting yet, and I've been an enthusiastic LabVIEW user for 15 years now. How many text based programmers write code that writes its own code (let alone salivate over the thought of being able to do so)? Perhaps they take the idea for granted and have never put their imagination to work. Certainly LabVIEW is now one step closer to being able to do what C++ can, but I'm not yet sold on the idea. What are folks really intending to do now with scripting that has such value? You can talk about what you plan to do, but if you never allocate time to implement the idea then scripting isn't really all that valuable to you after all. Enough talk, how about you show me your really useful creations! (I'll accept auto-wiring scripted tools only if they produce a similar wow factor to the BD cleanup feature!) I bet C++ could write a faster Hello World program than LabVIEW could. It's OK if I don't get too excited over scripting. regards Peter

I kind of agree. Scripting only has value within the tool chain. You can't use it in your distributions, so there is no commercial value added other than tools for programmers (and I've been using Labview so long now, I don't really need anything that Labview doesn't provide). I don't see much benefit for anything I do that a VI in the palette set to "Place VI Contents" can't do more easily and in less time. Admittedly there are "cool" things like the voice control etc., but I view those as "finding a purpose to fit the feature". If we could create our own controls at run time using scripting, then this would be a boon. But as it stands: no commercial value, no interest.
-
QUOTE (Aristos Queue @ May 30 2009, 05:26 PM) This is an horrific statement.
-
This would be a good starting point. LabVIEW 8.6\examples\apps\tankmntr.llb
-
QUOTE (Warren @ May 29 2009, 09:41 PM) OK. A couple of points.
1. Most parallel ports nowadays have internal pull-up resistors (4K7). Whilst this makes input a lot safer/easier, it means that drive capability is severely restricted. This might be limiting your current to about 1 mA (a DVM will tell you whether this is the case or not).
2. To use the parallel port as digital IO in Labview, you need to make sure it is set to "Standard" in the BIOS (not SPP, ECP etc).
3. There is a parallel port example of using it as digital IO in the examples directory (Port IO) and I would suggest using this (as it is a direct port write) rather than VISA to get it going - a rough sketch of what a direct port write looks like is below.
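For what a "direct port write" amounts to under the hood, a minimal, hedged C sketch for x86 Linux (root required; 0x378 is the traditional LPT1 base address and may differ on your machine). On Windows the Port IO example VIs do the equivalent low-level access for you.

```c
#include <stdio.h>
#include <sys/io.h>   /* ioperm(), outb(), inb() - x86 Linux, glibc */

#define LPT1_DATA 0x378   /* traditional LPT1 base address; check your BIOS */

int main(void)
{
    /* Request access to the data, status and control registers. */
    if (ioperm(LPT1_DATA, 3, 1) != 0) {
        perror("ioperm");
        return 1;
    }
    outb(0xFF, LPT1_DATA);                          /* drive all eight data lines high */
    printf("status register: 0x%02X\n", inb(LPT1_DATA + 1));
    return 0;
}
```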
-
QUOTE (Jim Kring @ May 29 2009, 06:27 PM) The fact that you need a license unlike any other language maybe??
-
I don't really know what problem it's trying to overcome, but all the methods stated work for me. You might also look at the "Many To One" queue example if you are looking to combine multiple single plots from other VIs.
-
QUOTE (Vladimir Drzik @ May 25 2009, 07:54 AM) I've just been looking again at the VB. In fact, you seem to be right (I haven't used the VB for years so thought I would refresh my memory). The "Calculator" does seem to have a Labview editor built in with an extremely reduced palette. I've no idea what they are doing since you don't have to have Labview installed to use it (but you do to Export). I'm guessing that they've cut out the LV editor core and implanted it in the exe. Let us know the feedback from NI.
-
QUOTE (Black Pearl @ May 25 2009, 08:40 AM) That's probably because everyone here uses the same topology (centralised error handling). I use local error handling, since a lot of different stuff has to happen if there is an error (not just tell the user) and that would make a centralised error handler a bit of a pig. The only common denominator is that I have to put a dialogue on screen and halt other processes' execution ("Launch Error Dialogue.vi") while the operator decides what to do. In the meantime, the process that threw the error tries to recover to a safe/stable state.

The "Launch Error Dialogue" loads and runs (yup, you guessed it) the "Error Dialogue.vi", which logs to a file and filters the error to provide different options to the user (if required). It can be called from anywhere in the code and can remain on-screen, not show at all (i.e. just log) or time out after n seconds (depending on the error level). It also does other things like set off a siren, change traffic light indicators etc. Nice and simple: just plonk it in the error case of your state machine.

One thing that hasn't been discussed so far is error levels. In my system(s), I have severity/priority levels for errors (Information, System, Critical, Recoverable, Process and Maintenance). What do other people do to prioritise errors (if anything)?
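As a sketch only (the level names mirror the ones above, but their ordering, the record layout and the actions per level are invented for illustration), this is roughly what a local handler with severity levels looks like in C:

```c
#include <stdio.h>

/* Severity levels as named in the post; the ordering here is an assumption. */
typedef enum {
    ERR_INFORMATION, ERR_MAINTENANCE, ERR_PROCESS,
    ERR_RECOVERABLE, ERR_SYSTEM, ERR_CRITICAL
} err_level_t;

typedef struct {
    err_level_t level;
    int         code;
    const char *source;     /* which process/VI threw it */
    const char *message;
} error_rec_t;

/* Local handler: always log, then decide how loudly to shout. */
static void handle_error(const error_rec_t *e)
{
    fprintf(stderr, "level %d | %s | code %d | %s\n",
            e->level, e->source, e->code, e->message);   /* log (a file in real use) */

    switch (e->level) {
    case ERR_INFORMATION:
    case ERR_MAINTENANCE:
        break;                      /* log only - no dialogue */
    case ERR_PROCESS:
    case ERR_RECOVERABLE:
        /* non-modal dialogue with a timeout; caller recovers to a safe state */
        break;
    case ERR_SYSTEM:
    case ERR_CRITICAL:
        /* modal dialogue, halt other processes, siren / traffic lights */
        break;
    }
}

int main(void)
{
    error_rec_t e = { ERR_RECOVERABLE, 5003, "Oven Controller", "Setpoint not reached" };
    handle_error(&e);
    return 0;
}
```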
-
How to use IMAQ Extract ColorPlanes?
ShaunR replied to lovemachinez's topic in Machine Vision and Imaging