Posts posted by ShaunR
-
<snip>
Hmmm. It seems you have picked a rather "special" action engine to demonstrate. I'd even go so far as to say it's not one at all. Perhaps if you could put it in context with something simple (I like simple) that I'm more familiar with (e.g. a list), I might be able to see the benefit.
A list will have things like Add, Remove, Insert, Get Value etc. At its heart will be an array of something. It will basically wrap the array functions so that you have all the operations exposed from a single VI. There are two inputs (a value to do the operation on and an index, if required for the operation) and one output. A rough sketch of the idea follows.
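Since a VI won't paste into a forum post, here's the gist in rough Python terms (names invented for the example): one entry point, an action selector, and state that persists between calls like a shift register.
[CODE]
from enum import Enum

class Action(Enum):
    ADD = 0
    REMOVE = 1
    INSERT = 2
    GET = 3

_store = []  # persists between calls, like the AE's shift register

def list_engine(action, value=None, index=None):
    """Single entry point wrapping all the list operations."""
    if action is Action.ADD:
        _store.append(value)
    elif action is Action.REMOVE:
        del _store[index]
    elif action is Action.INSERT:
        _store.insert(index, value)
    elif action is Action.GET:
        return _store[index]
    return None
[/CODE]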
With this AE, how is the DVR method more robust, simpler, less coding et al.?
Here she is.
-
<snip>
Nope. Still not with you here....
An action engine "by default" is a singleton; however, you are using a class cluster, which is not. So if your action engine is identical to your 2009 version, then the cluster at the heart of it is the same... but you don't have to use a DVR to make an AE a singleton, because it already is.
Now. To make it usable for other developers you still need to expose the different parts of the internal clustersaurus (which is what all the type-defs are presumably for in the AE, and what the poly is for, so that you don't have one huge in/out cluster), but in the second example you also have to de-reference the DVR too. So are you saying that in the 2009 version you de-reference and expose one huge cluster to the external software (1 x State Class/Cluster TypeDef), or lots of VIs to de-reference and output the cluster parts (class accessors)?
What I'm not getting is that you want to break the singleton feature of an action engine (by using a class cluster?) then re-code it back again (by using a DVR), and somehow that means fewer typedefs for identically functioning code.
What am I missing here?
-
but there is a lot less code.
I don't think you are comparing like-with-like there.
The "lot less code" piccy would equate to a single frame in the AE (or as I usually do, a case statement before the main frames) and would look identical to the class version except it would have the cluster instead of the class blob (without the for loop of course).
I also suspect that your first piccy is equivalent to a shed-load of VIs.
The only difference (in reality) between a class and an AE is that an AE uses case frames to select the operation and a class uses voodoo VIs. An AE will be 1 VI with a load of frames and a typedef to select, whereas a class will be a load of VIs with more VIs to read/write the info, and selection will be what object you put on the wire. (Just because you have a wizard to generate them doesn't mean less code.) In this respect, by wrapping the AE in a poly you are merely replicating the accessor behaviour (figuratively speaking; in reality you are filtering) of a class and (should) incur the same amount of code writing as writing class accessors. But you will end up with 1 VI with n accessors (for an AE) rather than m VIs with n accessors (for a class). Of course, you don't HAVE to have the accessors for an AE; it's just icing on the cake.
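To make the 1-VI-vs-many-VIs point concrete, here is the class counterpart of the list sketch above (again Python, purely illustrative): the same four operations become four methods on private data, rather than four case frames.
[CODE]
class ListClass:
    """One method per operation instead of one case frame per action."""

    def __init__(self):
        self._store = []  # the private data cluster equivalent

    def add(self, value):
        self._store.append(value)

    def remove(self, index):
        del self._store[index]

    def insert(self, index, value):
        self._store.insert(index, value)

    def get(self, index):
        return self._store[index]
[/CODE]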
Everything else that is different is just anal computer science.
-
This is specific to AEs:
the "typical" usage of an action engine is to use it as a SINGLETON and that the AE itself IS the API used (have fun scoping that without another abstraction layer.) Using it as a CBR node call destroys any ease of maintainability associated with AEs (typdefing the connector pane, re assigning etc.) the only alternative at this point is to make the input/output a variant to get around the octopus connector that results, performing to/from variant. I think if you are good enough to do that, you might as well just step up to classes and type-protect your inputs/outputs, and reap all the run time performance benefit of dispatching instead of case-to-variant steps.
I think singletons are inherently not extensible and lead to "god functions." Using a by-ref pattern with SEQs or DVRs is a better way to go. If you really want to go completely nuts with software engineering and maintainability, use the Actor Framework or some other messaging system.
FGs: I don't even consider them useful. Once again, they are a singleton. The only time I'd be happy with them is as a top-level application object for immutable objects (config data?).
Maybe this is because I'm always making something work with one unit, then down the road I end up testing 6 at a time, and all the FGs are completely useless.
Try writing a logging function without a singleton.
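To be fair, a logger really is the textbook case: every caller wants the same file and the same write ordering. A minimal sketch (Python, names invented) of why one shared instance is the natural shape:
[CODE]
import threading

_log_lock = threading.Lock()
_log_path = "app.log"  # one shared destination: the singleton part

def log(msg):
    """Serialise writers so lines from different threads don't interleave."""
    with _log_lock:
        with open(_log_path, "a") as f:
            f.write(msg + "\n")
[/CODE]
Make two independent logger instances and you immediately have two files (or two handles fighting over one file), which is exactly what nobody wants from a log.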
I will pull you up on a couple of points, if I may.
1. API? Possibly. That's the developer's decision. Not the fault of Action Engines.
2. Typical usage as a singleton. (I'm going to throw a controversy into the mix here.)
They are not typically used as the equivalent (in OOP terms) of a Singleton; it's just that by default they are. They are typically used as the equivalent of an object (with minor limitations). The enumerations are the "Methods" and the storage is the equivalent of the Private Data Cluster. The terminals are the properties, and if you want it not to be a singleton, then just select "Clone" from the properties (although I've found limited use for this).
If a designer stuffs so many "Methods" into an action engine, that is a partitioning issue, not a problem with Action Engines per se. It is the same as adding all properties and methods to one God Object.
Of course, all the "Object Happy" peeps will cry about inheritance and run-time polymorphism. But like I said: minor limitations (as far as LabVIEW is concerned, due to the constraints).
3. Variants as inputs.
I hate variants. There. I've said it.
The "variant to data" primitive is the function that never was. Variants were the one feature that could have given native LabVIEW run-time polymorphism. Instead it's no different to using flatten-to-string, since you still need to know the data type and coerce it to the right type when recovering the data.
The elegant approach instead of using variants (IMHO) is to wrap the AE in a polymorphic VI to simplify the connector pane for the end user and enable it to "Adapt To Type". Then the user doesn't need to know the data type and cast. It also exposes the "Methods" as selectable drop-downs, so, for purists, there is no leakage of the type-def.
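In text-language terms, "Adapt To Type" is roughly dispatch-by-input-type; a flavour of it in Python (illustrative only, using the stdlib functools.singledispatch):
[CODE]
from functools import singledispatch

@singledispatch
def write(value):
    raise TypeError(f"unsupported type: {type(value).__name__}")

@write.register
def _(value: int):
    print("writing integer", value)

@write.register
def _(value: str):
    print("writing string", value)

# The caller never casts: the right instance is picked from the wire type.
write(42)
write("hello")
[/CODE]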
In summary.....
I don't think any of the points you make are attributable to the Action Engine philosophy, more to your experience with those designers that use it. All the arguments can equally be applied to by-val Objects, and from the statement "Using a by-ref pattern" I'm thinking (rightly or wrongly) that the issue is with the dataflow paradigm rather than AEs.
I'm going to change my signature back to the old one...lol
-
I'd prefer that functional globals (actually any USR) died a slow death. It is probably because I see people abuse them so easily (access scope; callers are more difficult to track than with other by-ref mechanisms). As far as AEs go, if I have to look at another octo-deca-pus I might just lose it.
It's not only Action Engines that suffer from God syndrome or, indeed, scope. It's like the argument between private and protected.
-
There are several main advantages of FGs over the built-in LabVIEW global, none of which are particularly awe-inspiring enough to call it a pattern IMHO (a rough sketch follows the list).
1. They solve the read/write race condition problem.
2. They can have error terminals for sequencing.
3. They can be self initialising.
4. They are singletons (unless you set the VI to clone).
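A rough text-language equivalent of points 1 and 4 (Python, illustrative only): the state lives inside the module and the read-increment-write is made atomic, which is exactly the "inside a sub-VI" protection discussed below.
[CODE]
import threading

_count = 0
_lock = threading.Lock()

def counter_fg(action):
    """Functional-global sketch: the read-modify-write in 'increment'
    is atomic, so two callers can never interleave inside it."""
    global _count
    with _lock:
        if action == "init":
            _count = 0
        elif action == "increment":
            _count += 1
        return _count
[/CODE]
Note that two loops hammering counter_fg("increment") still race each other for ordering; the FG only guarantees each individual increment is atomic. That caveat is the subject of the next few paragraphs.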
A couple of comments about the blog mentioned.
I think the author missed the point about race conditions and FGs, or at least didn't explain why they aren't a magic bullet for variable races.
He states "
The “read, increment, write”, then, is a critical section".
And in the original example using global variables it is intended to be. But the feature that protects the variable when using an FG is that the operation is placed inside a sub-VI, not that it is an FG. The exact same protection could be obtained by selecting the globals and the increment, then selecting "Create SubVI" and putting that in the loops.
However, in all the examples it is impossible to determine consistently what the output of Running Total 1 or 2 would be, since that is dependent on the order in which LabVIEW executes the loops, which may change with different compiles or even different execution speeds of the loops. So in reality, by using an FG we no longer have a race between the read and write operations, but we still cannot reliably predict the indicator values (although we can say that they will always increase). We now have a race condition between the loops themselves.
The thing to bear in mind about FGs is that they are a programming solution to the global-variable read/write race condition only, and therefore an improvement over the built-in globals.
Many, however, would argue that global variables are evil whether an FG or not. But they are simple to understand, easy to debug and only marginally more complex than the built-in global. You can also put error terminals on them to sequence, rather than cluttering up your diagram with frames.
That said, it is quite rare to find an FG as an exact replacement for a normal global. People tend to add more functionality (read, write, initialise, add, remove etc.), effectively making them "Action Engines".
-
ShaunR,
You've hit the nail on the head. It is one of those "twiddle these until you get that" problems, only 800 times over! I'll check out the Taguchi Analysis. Sounds like it may hold some promise. Thanks.
It's worth stating that for something like a voltage (which theoretically has infinite levels) I usually use the max, mid and min values from the spec as an initial start point. Later (once you have run a few through) you will almost certainly find that there is a much smaller range that will cause the optimisation to converge much more quickly and enable you to reduce the levels to two optimum start points (these will be dictated by the component tolerances). It then just becomes a money vs time trade-off: 1 PC doing 800 in 5 days, or 800 PCs doing them in 10 mins. If you have to allow for settling times, then you can get really cheeky and do multiple devices on a single machine in parallel.
-
I'm a great fan of Taguchi Analysis for this type of optimisation. It is an empirical method whereby you design experiments (i.e. set variables) for interconnected, multi-variable systems and iteratively derive the optimum variable settings that satisfy the criteria. It is ideal for situations where full factorial optimisation is impractical due to the number of parameters and their dependence on each other.
An example used in anger is here.
I have had great success with this in the past for things like PID auto-tuning, RF amplifier setup and waveguide tuning (the sorts of places where the engineer will say "twiddle these until you get that!"). Take a look and see if it will fit your scenario.
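For a flavour of the method, here is a toy sketch (Python, not a substitute for a proper orthogonal-array reference): three factors at two levels, an L4 array instead of all eight combinations, and a main-effects pick of the best level per factor. The measure() function and the factor names are placeholders for your real test.
[CODE]
# L4 orthogonal array: 3 factors at 2 levels covered in only 4 runs
# (full factorial would need 2**3 = 8 runs).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

levels = {"voltage": (3.0, 5.0), "gain": (10, 20), "bias": (0.1, 0.2)}
names = list(levels)

def measure(voltage, gain, bias):
    # Placeholder for the real experiment ("twiddle these, read that").
    return -((voltage - 4.2) ** 2 + 0.01 * (gain - 18) ** 2 + (bias - 0.15) ** 2)

results = [measure(*(levels[n][row[i]] for i, n in enumerate(names)))
           for row in L4]

# Main effects: average the response at each level of each factor, then
# keep the level with the better average. Iterate with narrowed levels
# around the winners until the settings converge.
best = {}
for i, n in enumerate(names):
    means = [sum(r for row, r in zip(L4, results) if row[i] == lv) / 2
             for lv in (0, 1)]
    best[n] = levels[n][means.index(max(means))]
print(best)
[/CODE]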
-
The arithmetic parser doesn't handle bitwise operations, or at least it didn't when I last looked. May have to look again...
Like I said....
It can easily be modified to include boolean evaluation
-
There's an arithmetic parser in the LabVIEW examples. It can easily be modified to include Boolean evaluation (just search for "expressions" in the Example Finder).
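The modification amounts to hanging Boolean operators off the lowest-precedence level of the parser. A compact recursive-descent sketch (Python, illustrative; the LabVIEW example is structured differently, but the idea is the same):
[CODE]
import re

TOKEN = re.compile(r"\d+\.?\d*|AND|OR|>=|<=|==|[-+*/()<>]")

class Parser:
    """Precedence, lowest first: OR, AND, comparison, add/sub, mul/div."""

    def __init__(self, text):
        self.toks = TOKEN.findall(text)
        self.pos = 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):
        v = self.and_()
        while self.peek() == "OR":
            self.eat()
            rhs = self.and_()          # always parse, then combine
            v = bool(v) or bool(rhs)
        return v

    def and_(self):
        v = self.comparison()
        while self.peek() == "AND":
            self.eat()
            rhs = self.comparison()
            v = bool(v) and bool(rhs)
        return v

    def comparison(self):
        ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b,
               "<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b,
               "==": lambda a, b: a == b}
        v = self.arith()
        if self.peek() in ops:
            v = ops[self.eat()](v, self.arith())
        return v

    def arith(self):
        v = self.term()
        while self.peek() in ("+", "-"):
            op, rhs = self.eat(), self.term()
            v = v + rhs if op == "+" else v - rhs
        return v

    def term(self):
        v = self.factor()
        while self.peek() in ("*", "/"):
            op, rhs = self.eat(), self.factor()
            v = v * rhs if op == "*" else v / rhs
        return v

    def factor(self):
        if self.peek() == "(":
            self.eat()
            v = self.expr()
            self.eat()                 # the closing ")"
            return v
        return float(self.eat())

print(Parser("(1 + 2) * 3 > 8 AND 2 < 3").expr())   # True
[/CODE]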
-
I'm curious, why are you supporting older versions of the protocol? What is auto negotiation in Websockets?
Because (for example, currently) Chrome uses Hybi 10, Firefox uses Hybi 9, IE 10 (will use) Hybi 10, IE 9 (with the HTML5 Labs plugin) supports Hybi 6, and Safari (different flavours) supports Hixie 75, 76 or Hybi 00.
The specs are still fluid and browsers are having difficulty keeping up. If it's to be robust, then really I have to support all of them; besides, some of my favourite stock tickers are still on the old versions.
Auto-negotiation:
In the latest specs there is a negotiation phase whereby the client requests a connection and the server replies with the supported protocols. That's part of it (for the server). The other part is the brute-force client connection, whereby you iteratively try each protocol until one works. This is one of the reasons why I needed to move to a class, since with multiple connections all (potentially) talking different protocols, the reads and writes need to switch on each call and maintain their state (i.e. closed, connecting etc.). Rather than writing a connection manager, it made sense to bundle that info with the class data. Besides, this is TCP/IP over HTTP, so performance is secondary.
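In outline, the client side of that brute force looks something like this (Python sketch; handshake() here is a stand-in for a per-version upgrade request, not a real library call, and the version list is abbreviated):
[CODE]
# Candidate drafts, newest first. (On the wire these map to
# Sec-WebSocket-Version values; the older Hixie drafts differ entirely.)
CANDIDATES = ["hybi-17", "hybi-10", "hybi-08", "hybi-06", "hybi-00"]

def handshake(sock, host, path, version, server_supports=("hybi-10",)):
    # Stand-in: send the version-specific upgrade request and check
    # the server's response. Each draft has its own header rules.
    return version in server_supports

def connect(sock, host, path):
    for version in CANDIDATES:
        if handshake(sock, host, path, version):
            return version   # stored per-connection, with read/write state
    raise ConnectionError("no mutually supported websocket version")

print(connect(None, "example.com", "/ticker"))   # -> "hybi-10"
[/CODE]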
-
There once was a program, LabVIEW.
Out of which fast programs did spew.
Then along came objects,
to confuse and perplex.
Now compiles take a week or two.
There once was a LabVIEW coder.
Who could debug programs by their odor.
One particular smell,
he thought was nouvelle.
But was reported in 8 and older.
-
No way, say it ain't so. In other news, the sky is still blue. Well, on Earth anyways, most of the time...
I'm still pumped about this topic. Granted, I got to spend zero time on implementing something like this this year (still very disappointed about that), but 2012 will be different. Yeah, that's it, different. What can I say, there's still a bit of foolish youthful optimism in me.
Yup. It's not as clean as I usually like (can't use polymorphic VIs with dynamic terminals) but I needed to maintain state on a per-connection basis. So this is one of the rare instances where it is the better choice. It won't happen again, I promise.
(Sky's always grey over here.)
-
ShaunR, will you be posting your fixed code?
Not sure what you mean by "fixed code", but I won't be releasing it until the API is completed and tested (it's a bit of a mess at the mo'). I'm currently up to supporting Hybi 17, 10, 8, 6 and 00 (looking at Hixie and auto-negotiation now) and have a prototype library for exporting events and commands (i.e. for remote monitoring), but still need to figure out the reciprocal methods. And (this'll surprise a few people) some of it is LVOOP. Oh, and finally, you get events for errors, status, messages etc.
-
IMHO, websockets will make all of them obsolete. Firefox 7 is available on Android, and the iPad/iPhone come shipped with websocket-enabled Safari (iOS 4.2). There has already been a proof-of-concept implementation on this thread.
-
By coincidence I'm working on a similar thing right now: Message objects via TCP. Like you, I've mostly done two VIs on the same machine (except for one brief proof-of-principle test between England and California which worked fine). The one issue I can add is the rather large size of flattened objects, especially objects that contain other objects (which might contain even more objects). Sending a simple "Hello World" as one of my Message objects flattens to an embarrassing 75 bytes, while the "SendTimeString" message in my linked post (which has a complex 7-object reply address) flattens to 547 bytes! I've just started using the ZLIB string compression (OpenG ZIP Tools) and that seems to be a help with the larger objects (compresses the 547 bytes down to 199). I've also made a custom flattening of the more common objects to get the size down ("Hello World" becomes 17 bytes).
-- James
If you use the transport.lib in the CR, it will give you transparent zlib compression (as well as encryption, send timestamps and throughput) across TCP/IP, UDP and Bluetooth. Might be useful.
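The compression step itself is tiny. In Python terms (illustrative; the OpenG tools and transport.lib wrap the same zlib algorithm):
[CODE]
import zlib

flattened = b"..." * 200             # stand-in for a flattened message object
wire = zlib.compress(flattened, 9)   # what actually goes over TCP
assert zlib.decompress(wire) == flattened
print(len(flattened), "->", len(wire))

# Caveat: very small payloads can come out *larger* (zlib adds a header),
# so it's worth a one-byte flag saying whether the payload is compressed.
[/CODE]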
-
especially Model-View-Controller
You mean the "Muddled-Verbose-Confuser". Every website is being built on that model and every website built with it requires huge amounts of caching to make it barely usable ... that's why I wrote this.
-
Nice mirror.
Yup. This is what she looks like, face-on.
-
I've really got the web-socket bug now.
http://screencast.com/t/TqFOatnbfmC
(frame rate is due to Jing, not the apps)
-
2
-
-
Why does it need to be disconnected? I'm asking (this seemingly inane question) because if you have to have a wire running out to a wireless router/DAQ, that won't help if you have to disconnect at the sensor terminals. Additionally, you will still have to have a power lead to the wireless device so it just complicates and moves the problem.
If moving the problem further up the cable is OK (e.g. you have to have a cover over the hole), then perhaps just cut the cable and put a male-to-female connector in the covering, or a connector just outside the hole, enabling you to disconnect it.
The only other (non-cable) alternative is using a battery-powered device (like the Arduino mentioned by François). You could use Bluetooth or wireless (Bluetooth is better for battery life, but wireless will give you a better range).
-
I didn't say it was a problem. I said it needed to be checked. You would need a different resistor value and what's considered "high" would be different.
Parallel ports are TTL compliant, so anything between 2.7V and 5V is considered "high" (conversely, 0-0.5V is low). A diode only needs a forward voltage of about 2V, so it's not a problem. The resistor isn't there as a potential divider; it's there to limit the current so you don't fry the port and/or the LED. For this purpose, the lower the voltage, the less current -> good thing! A 4k7 resistor (470 with pull-ups if you want to be ultra safe) will give you about 1mA @ 5V with no pull-ups, or, if you like, 0.7mA @ 3.3V. If you find it's not bright enough then a 1k will give you 5/3mA, but I wouldn't go any lower without buffering.
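For anyone who does want the sums, the back-of-envelope version (Python; assumes the ~2V forward drop mentioned above, and dropping it gives the quicker V/R ballpark):
[CODE]
def led_current_ma(supply_v, resistor_ohms, vf=2.0):
    """Current through the limiting resistor: I = (Vsupply - Vf) / R."""
    return (supply_v - vf) / resistor_ohms * 1000.0

print(led_current_ma(5.0, 4700))        # ~0.64 mA (quick V/R guess: ~1 mA)
print(led_current_ma(3.3, 4700, vf=0))  # ~0.70 mA, the quick estimate
print(led_current_ma(5.0, 1000))        # ~3 mA through a 1k
[/CODE]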
But it's not hard, and you can forget the maths: a 10k pot and an ammeter will give you the perfect values for your port. Just twiddle it (the technical term) until you get the brightness you want whilst keeping an eye on the current. You can then measure it and find a preferred value for when you "productionise" it.
You'll be wanting to drive LCD displays in no time.
3.5 Using the Parallel Port of the Computer
And when your motherboard gets damaged because you connected something improperly, or did not properly ground, or did not properly account for potential overvoltages or voltage spikes, then we'll see what's cheaper: buying a new computer or buying a cheap digital I/O module.
P.S. I always use a screwdriver as a hammer.
You'll only blow the port. If you never use it, you won't miss it.
-
I saw the Hello World example and hit Back.
The scariest thing about languages like this is that someone has to invent them ...
The use of encryption is just obnoxious, though.
The scariest thing (IMHO) is that people go to the effort.
This is a fun one though.
[CODE]
HAI
CAN HAS STDIO?
PLZ OPEN FILE "LOLCATS.TXT"?
    AWSUM THX
        VISIBLE FILE
    O NOES
        INVISIBLE "ERROR!"
KTHXBYE
[/CODE]
-
This would actually need to be checked. If a computer comes with a parallel port it's likely to be 3.3V, not 5V. I know the old Dells we still have in the lab are 3.3V parallel ports.
Why is 3.3V a problem?
In the end, it's probably better to go with an off-the-shelf cheap USB-based digital I/O module. There's tons of these on the market.
Chicken.
Seriously though, this is kindergarten stuff. But if you've never used a screwdriver as a chisel, then maybe it's better to just throw money at it.
Not one of mine, by the way.
-
Can Labview control the LED's via the PC parallel port (old printer port)?
Much better than serial (8-bit bi-directional, i.e. digital inputs OR outputs). My favourite low-cost digital IO. Fantastic for foot-switches and system monitoring, and essentially free. Unfortunately, not many PCs come with them nowadays.
http://digital.ni.com/public.nsf/allkb/B937AC4D8664E37886257206000551CB
There are also a couple of examples in the "Example" finder.
You have to check whether your motherboard already has pull-up resistors (most do, some don't). Then you can connect 5V LEDs directly or just short the pins to ground (if using them as digital in). Note that the logic is reversed, since you sink to ground THROUGH the IO line to light an LED. I always stick a transistor in there too, to be on the safe side, since if you get it wrong... you blow the port. It also inverts the logic so I don't get confused (happens regularly).
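From text-language land, lighting those LEDs is one register write for all 8 data lines; a sketch assuming the third-party pyparallel package (needs the parallel-port driver accessible, and usually admin/root rights):
[CODE]
import parallel   # third-party "pyparallel" package

port = parallel.Parallel()   # opens the first parallel port (LPT1)

# One byte drives data lines D0-D7. If you sink current through the
# pin (LED from +5V, through the resistor, into the pin), the logic
# is inverted: 0 = LED on.
port.setData(0b11111110)   # light the LED on D0
port.setData(0xFF)         # all off
[/CODE]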
-
Robustness of Functional Global pattern at large scales? (in Application Design & Architecture)
OK. I see what you are getting at here (great documentation post; want to write my websocket help files?). The thing is, though, they are not a fair comparison, and this is why...
In the second example a DVR is used purely because it is the only way for you to create a singleton (maybe I'm still hung up on classes, but you wouldn't be able to unbundle so easily without it). Secondly (and more importantly), it allows you to un-type the inputs and outputs to one generic type.
In your first example you don't "un-type" the inputs and outputs, preferring instead to provide all permutations and combinations of the types for export. This has nothing to do with singletons, just the strict typing of LabVIEW.
I've attached an "equivalent" classic AE of your 2009 API based on a method I've used in the past (my apologies to John; I think I now understand what he was getting at with variants, without using a poly wrapper, that is). There is very little difference apart from the features that I have outlined previously. Arguably potato, potAto as to variants vs DVRs. But the (major) effect is to push the typing down into the AE, thereby making the accessors simpler than the equivalent DVR methods (and if those god-damned variants didn't need to be cast, you wouldn't need the conversion back at all!).
So back to the case in point. I think that the example I have provided is a fairer comparison between the super-simple 2009 API and a classic AE. Which is more robust? I don't think there is a difference, personally. Which is less coding? Again, I don't think there is much in it, except to point out that changes are concentrated into the 1 VI (AE) in the classic method. You could argue that to extend the classic AE you have to add a case and an accessor rather than just an accessor, but you don't actually need accessors in the AE (and they are trivial anyway, since they just revert to type).
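In text-language terms, "pushing the typing down into the AE" looks roughly like this (Python sketch, invented names): the engine stores untyped ("variant") values, and each trivial accessor just reverts to the expected type at the edge.
[CODE]
_state = {}   # the AE's internal store; values kept untyped, variant-style

def engine(action, key=None, value=None):
    """Single entry point; everything in and out is untyped."""
    if action == "set":
        _state[key] = value
    elif action == "get":
        return _state.get(key)
    return None

# Trivial typed accessors: each one just reverts to type, so the
# strict typing lives here instead of in every caller.
def get_count() -> int:
    return int(engine("get", "count"))

def get_name() -> str:
    return str(engine("get", "name"))

engine("set", "count", 42)
engine("set", "name", "DUT-1")
print(get_count(), get_name())
[/CODE]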