Everything posted by ShaunR

  1. I think the bit of info you are missing is that it is an event queue. By writing Val (Sig) you are simply placing the case to execute at the tail of the queue. It's not like event branching (as you would expect from, say, a PLC). So when you execute two Val (Sig) writes within a case, you are adding two instructions to the queue. It is not until the loop goes round again that the queue is re-read and the case at the head of the queue is taken and executed (see the sketch below this post).
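A minimal sketch of that queueing behaviour, in Python rather than LabVIEW (the case names and the dispatch loop are invented for illustration):

```python
# Hypothetical model of the Event Structure as a FIFO queue: firing a
# signal only enqueues the case; nothing branches or executes immediately.
from collections import deque

event_queue = deque()

def fire_value_signaling(case_name):
    """Analogue of writing Val (Sig): append the case to the tail of the queue."""
    event_queue.append(case_name)

handlers = {
    "case_a": lambda: (print("A"),
                       fire_value_signaling("case_b"),   # two signals fired in one case...
                       fire_value_signaling("case_c")),  # ...just queue two more instructions
    "case_b": lambda: print("B"),
    "case_c": lambda: print("C"),
}

fire_value_signaling("case_a")
while event_queue:                    # the loop "going round again"
    case = event_queue.popleft()      # the case at the head of the queue is taken...
    handlers[case]()                  # ...and only then executed
```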
  2. Software released in the CR obviously has its copyright status displayed. But what about the example code etc. posted in threads - not only here, but on any other boards (like NI)? There are a lot of very useful code snippets and packages around, but is software distributed without a license at all actually usable without recrimination, as we (I, at least) have assumed in the past? Are we (technically and legally) actually able to post other people's unlicensed code, or even use it without permission of the original author (who may not be available or even known)? To clarify matters, should sites state either that any "donated" code in forums (outside the CR) becomes public domain and the authors forfeit their copyright claims, or make it clear that the author's original rights are entirely preserved?
  3. +1. But we all do it. I have had mixed success with using Event Structures as state machines (the UI becoming unresponsive due to lengthy code execution, missed UI updates because it's locked in another frame somewhere, etc.). I now tend to use event structures only for select user events and FP events, and message a proper state machine via queues and/or notifiers (stop, start, pause etc.) - a rough sketch of the split follows below. Additionally, I tend to dynamically load state machines because, if all else fails, you can just crowbar one out without your app freezing.
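A minimal sketch of that split, in Python rather than LabVIEW (the thread, queue and message names are invented stand-ins for the event structure and the messaged state machine):

```python
# Keep the event loop thin: each "frame" just posts a message and returns,
# so lengthy execution in the state machine can never freeze the UI loop.
import queue
import threading
import time

commands = queue.Queue()

def state_machine():
    state = "idle"
    while True:
        msg = commands.get()
        if msg == "stop":
            break
        if msg == "start":
            state = "running"
            time.sleep(2)        # the lengthy code lives here, not in the UI loop
            state = "idle"

worker = threading.Thread(target=state_machine)
worker.start()

for ui_event in ["start", "stop"]:   # stand-in for Event Structure frames
    commands.put(ui_event)           # post the message; return immediately
worker.join()
```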
  4. Your data will (should) be split over multiple tables (whether SQLite, MySQL, Access etc.). So your DUT header info will be in one table and results in another, maybe the blobs in another. They will all reference each other via IDs. It depends how you want to set up your schema, but group properties and results would be different tables (a sketch of one possible schema follows below).
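A sketch of one way such a schema might look, using SQLite from Python (table and column names are invented for illustration):

```python
# Hypothetical schema: DUT header, results and blob data in separate
# tables, linked to each other via IDs.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dut_header (
    dut_id   INTEGER PRIMARY KEY,
    serial   TEXT,
    tested   TEXT
);
CREATE TABLE results (
    result_id INTEGER PRIMARY KEY,
    dut_id    INTEGER REFERENCES dut_header(dut_id),
    name      TEXT,
    value     REAL
);
CREATE TABLE waveforms (
    blob_id   INTEGER PRIMARY KEY,
    result_id INTEGER REFERENCES results(result_id),
    data      BLOB
);
""")

# Rows reference each other through the ID columns:
con.execute("INSERT INTO dut_header VALUES (1, 'SN-001', '2011-10-01')")
con.execute("INSERT INTO results VALUES (1, 1, 'Vout', 4.98)")
```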
  5. I tend to make a distinction here. An accessor (for me) will be of the ilk "do one thing, do it properly" (get name, get value, set name, set value etc.). But a wrapper would be a simplifier of a more complex function, or would "wrap" several "Methods" to yield a new function (see the sketch below this post).

Of course there is. It's the IPE, (un)bundles, case structure and the "get ref". That's why all the accessors look identical at first glance, and why you can use a wizard to create them (or a template VI from the palette, or your Save As (copy) gets a thorough workout).

That strikes me as a bit odd for you to say, since overrides (and overloading) are the epitome of input sharing.

I disagree. It has nothing to do with encapsulation.

"What-ifs" don't feature much in my designs nowadays. If there is an immediate "genericism" then I will most likely code it that way. Otherwise it will be to spec and no more. I exclusively use an iterative (agile) development cycle, so changes will be factored in on the next cycle and costed accordingly.

If you don't need to worry about the impact of changes on other functions, then regression testing is a thing of the past, right? The fact is, most changes affect most things to a greater or lesser extent. With linear coding (which is what this is), you've just got more to check.

Yup. Coupling is a bad thing. Oh, hang on. I have to get info from here... to here. How do I do that if I uncouple them? Coupling, as has been discussed here before, is a balancing act, in the same way as efficiency vs function.

Yup. I like that. I'm still warming... not red hot yet though. Now re-use it for integer types.

Linear programming is fine, but it tends to lead to bloat, very little re-use and, if used ad nauseam, inflexible code. If you want software to create software then it's great, because you crank the handle and get one of everything, all the same, with a slight variation. But the cost is long-term maintainability, an increased code-base, compile times and re-use. This is the reason I like polymorphic VIs, but think very carefully before using them. They, in themselves, are re-usable and give the ability to adapt to type, making it easier for users. But they are code replicators. Copy and paste managers. Hierarchy flatteners. And that doesn't sit well with me.

Back to topic.
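A minimal sketch of the accessor/wrapper distinction, in Python rather than LabVIEW (the Channel class and configure function are invented for illustration):

```python
# Accessors: "do one thing, do it properly".
# Wrapper: combine several methods to yield a new, simpler function.
class Channel:
    def __init__(self):
        self._name = ""
        self._value = 0.0

    # Accessors - one field, one job each
    def get_name(self):         return self._name
    def set_name(self, name):   self._name = name
    def get_value(self):        return self._value
    def set_value(self, value): self._value = value

def configure(channel, name, value):
    """Wrapper: wraps several "Methods" into a new function."""
    channel.set_name(name)
    channel.set_value(value)

ch = Channel()
configure(ch, "Vout", 4.98)   # one call instead of two accessor calls
```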
  6. Ok, so I'm clear on that now. That's not what I said. I said it wasn't a fair comparison (your original AE and the super slim one) and that there is little difference in effort for the more equivalent one I supplied.

It's different? Wasn't intentional. I did Save As (copy) a few times so that I didn't have to re-invent the wheel. Maybe something got messed up when it recompiled.

No. 2, with a slight variation (I know you will pick me up on moving the typedef cluster outside the AE, but in the long run it's worth the "potential" pitfall. If we are supplying accessors, then it's only a pitfall for us, not the user). So I am deliberately sacrificing a small bit of robustness for a large gain in flexibility. Accessor AE: I don't think it's any different to the boilerplate code that you have to use with a DVR. But there is a big motivation for doing this, as I hope you will see a bit further down.

Not strictly true. You still have to create the bundle and un-bundles in the accessors (and the extra controls/indicators etc.) the same as I do in the above images (if changing a current implementation). If you are adding new "Methods" then yes, it only affects the new VI, whereas I (may) have to create the new VI and add the cases to the AE. This is the point I was making about selection via frames or via VIs. This, however, is both a strength and a weakness of the DVR method.

Point of interest/view etc.: I don't consider AE = API. An API might be composed of many AEs which are self-contained sub-components. (Maybe that's just in my world though.)

Considering what I just said, the Cmd enum is not private in an AE; it is public. Why should it be private? (What was I saying earlier about anal computer science?) We want the user to be able to choose the methods, and there is nothing in the AE that he shouldn't be accessing (unless you've not partitioned correctly and have loads of methods that are internal only - a state machine is an abuse of an AE!). If it's not on the enum, then he can't access it, so why go making it harder for him to do so? You wouldn't go making a drop-down on a front panel private, would you?

I like the DVR method now I know more about it, and will certainly be looking at some of my current implementations to see if this type would be better. But it has one weakness which (as I stated earlier) is also its strength. So here's the kicker: it has one accessor (VI) for each and every method! We've covered the ground with different inputs and (I think) there is little in it (but the DVR is definitely in the lead at this point). What if we have multiple methods but the same input type? Let's say we have, in our AE example, the boolean input, but we can do AND, OR, XOR, NAND, NOR, NXOR etc. That's six more accessors (VIs) for the DVR, all looking remarkably similar except for the boolean operation. That's not promoting code re-use, and it inflates the code-base.

This is the (single) accessor for the AE with the six extra operations (1 extra typedef). I have to modify the case structure to include the extra operations, but I only have 1 VI and 1 typedef ("boolean Method") to maintain, regardless of the number of boolean operations. The code-base also doesn't change, i.e. there is zero increase in the number of VIs for increasing boolean operations (a sketch of the contrast follows below). This is why partitioning is so important. If you can partition your engines so that they are grouped "by function", then maintenance is easier and code re-use is increased.

The DVR code-base seems to increase linearly with the methods, and there also seems to be zero opportunity for re-use of the accessors (not mentioning the compile time here).
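A rough sketch of the contrast, in Python rather than LabVIEW (names invented): one dispatching "accessor" with an operation enum, versus one near-identical function per method:

```python
# AE style: a single entry point selects the operation via an enum, so
# adding an operation means one new enum value and one new case.
from enum import Enum

class BoolOp(Enum):
    AND = 1; OR = 2; XOR = 3; NAND = 4; NOR = 5; NXOR = 6

def boolean_method(op, a, b):
    if op is BoolOp.AND:  return a and b
    if op is BoolOp.OR:   return a or b
    if op is BoolOp.XOR:  return a != b
    if op is BoolOp.NAND: return not (a and b)
    if op is BoolOp.NOR:  return not (a or b)
    if op is BoolOp.NXOR: return a == b

# DVR style: one wrapper per method, all looking remarkably similar -
# the code-base grows linearly with the number of operations.
def and_(a, b):  return boolean_method(BoolOp.AND, a, b)
def nand_(a, b): return boolean_method(BoolOp.NAND, a, b)
# ...and four more near-identical wrappers for OR, XOR, NOR, NXOR.
```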
  7. OK. I see what you are getting at here (great documentation post - want to write my websocket help files?). The thing is, though, they are not a fair comparison, and this is why...

In the second example a DVR is used purely because it is the only way for you to create a singleton (maybe I'm still hung up on classes, but you wouldn't be able to unbundle so easily without it). Secondly (and more importantly), it allows you to un-type the inputs and outputs to one generic type. In your first example, you don't "un-type" the inputs and outputs, preferring instead to provide all permutations and combinations of the types for export. This has nothing to do with singletons. Just the strict typing of LabVIEW.

I've attached an "equivalent" classic AE of your 2009 API based on a method I've used in the past (my apologies to John - I think I now understand what he was getting at with variants, without using a poly wrapper, that is). There is very little difference apart from the features that I have outlined previously. Arguably potato, potAto as to variants vs DVRs. But the (major) effect is to push the typing down into the AE, thereby making the accessors simpler than equivalent DVR methods (and if those god-damned variants didn't need to be cast, you wouldn't need the conversion back at all!).

So back to the case in point. I think that the example I have provided is a fairer comparison between the super simple 2009 API and a classic AE. Which is more robust? I don't think there is a difference, personally. Which is less coding? Again, I don't think there is much in it, except to point out that changes are concentrated into the one VI (the AE) in the classic method. You could argue that to extend the classic AE you have to add a case and an accessor rather than just an accessor, but you don't actually need accessors in the AE (and they are trivial anyway, since they just revert to type).
  8. Hmmm. It seems you have picked a rather "special" action engine to demonstrate. I'd even go so far as to say it's not one at all. Perhaps if you could put it in context with something simple (I like simple) that I'm more familiar with (e.g. a list), I might be able to see the benefit. A list will have things like Add, Remove, Insert, Get Value etc. At its heart will be an array of something. It will basically wrap the array functions so that you have the operations exposed from a single VI. There are two inputs (a value to do the operation on, and an index if required for the operation) and one output. With this AE, how is the DVR method more robust, simpler, less coding et al.? Here she is (a text sketch of the same idea follows below).
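A minimal sketch of that list engine, in Python rather than LabVIEW (the enum and function names are invented; module-level state stands in for the shift register):

```python
# One entry point, an enum of operations, an array at its heart,
# value/index inputs and a single output.
from enum import Enum

class ListOp(Enum):
    ADD = 1; REMOVE = 2; INSERT = 3; GET = 4

_items = []   # persistent state inside the engine (the "shift register")

def list_engine(op, value=None, index=None):
    if op is ListOp.ADD:
        _items.append(value)
    elif op is ListOp.REMOVE:
        return _items.pop(index)
    elif op is ListOp.INSERT:
        _items.insert(index, value)
    elif op is ListOp.GET:
        return _items[index]

list_engine(ListOp.ADD, "a")
list_engine(ListOp.INSERT, "b", 0)
print(list_engine(ListOp.GET, index=1))   # -> "a"
```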
  9. Nope. Still not with you here... An action engine "by default" is a singleton; however, you are using a class cluster, which is not. So if your action engine is identical to your 2009 version, then the cluster at the heart of it is the same... but you don't have to use a DVR to make an AE a singleton, because it already is one. Now, to make it usable for other developers you still need to expose the different parts of the internal clustersaurus (which is what all the typedefs are presumably for in the AE, and what the poly is for, so that you don't have one huge in/out cluster), but in the second example you also have to de-reference the DVR too. So are you saying that in the 2009 version you de-reference and expose one huge cluster to the external software (1 x State Class/Cluster TypeDef), or lots of VIs to de-reference and output the cluster parts (class accessors)? What I'm not getting is that you want to break the singleton feature of an action engine (by using a class cluster?) and then re-code it back again (by using a DVR), and somehow that means fewer typedefs for identically functioning code. What am I missing here?
  10. I don't think you are comparing like with like there. The "lot less code" piccy would equate to a single frame in the AE (or, as I usually do, a case statement before the main frames) and would look identical to the class version, except it would have the cluster instead of the class blob (without the for loop, of course). I also suspect that your first piccy is equivalent to a shed-load of VIs. The only difference (in reality) between a class and an AE is that an AE uses case frames to select the operation and a class uses voodoo VIs. An AE will be 1 VI with a load of frames and a typedef to select, whereas a class will be a load of VIs with more VIs to read/write the info, and selection will be by what object you put on the wire. (Just because you have a wizard to generate them doesn't mean less code.) In this respect, by wrapping the AE in a poly you are merely replicating the accessor behaviour (figuratively speaking - in reality you are filtering) of a class, and (should) incur the same amount of code writing as writing class accessors. But you will end up with 1 VI with n accessors (for an AE) rather than m VIs with n accessors (for a class). Of course, you don't HAVE to have the accessors for an AE; it's just icing on the cake. Everything else that is different is just anal computer science.
  11. Try writing a logging function without a singleton (a sketch of the point follows below). I will pull you up on a couple of points, if I may.

1. API? Possibly. That's the developer's decision, not the fault of Action Engines.

2. Typical usage as a singleton. (I'm going to throw a controversy into the mix here.) They are not typically used as an equivalent (in OOP terms) of a Singleton; it's just that by default they are one. They are typically used as the equivalent of an object (with minor limitations). The enumerations are the "Methods" and the storage is the equivalent of the Private Data Cluster. The terminals are the properties, and if you want it not to be a singleton, just select "Clone" from the properties (although I've found limited use for this). If a designer stuffs too many "Methods" into an action engine, that is a partitioning issue, not a problem with Action Engines per se. It is the same as adding all properties and methods to one God Object. Of course, all the "Object Happy" peeps will cry about inheritance and run-time polymorphism. But like I said: minor limitations (as far as LabVIEW is concerned, due to the constraints).

3. Variants as inputs. I hate variants. There, I've said it. The "Variant To Data" primitive is the function that never was. Variants were the one feature that could have given native LabVIEW run-time polymorphism. Instead, it's no different to using Flatten To String, since you still need to know the data type and coerce it to the right type when recovering the data. The elegant approach, instead of using variants (IMHO), is to wrap the AE in a polymorphic VI to simplify the connector pane for the end user and enable it to "Adapt To Type". Then the user doesn't need to know the data type and cast. It also exposes the "Methods" as selectable drop-downs, so for purists there is no leakage of the typedef.

In summary... I don't think any of the points you make are attributable to the Action Engine philosophy, more to your experience with those designers that use it. All the arguments can equally be applied to by-val objects, and from the statement "Using a by-ref pattern" I'm thinking (rightly or wrongly) that the issue is with the dataflow paradigm rather than AEs. I'm going to change my signature back to the old one... lol
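A minimal sketch of why a logger wants to be a singleton, in Python rather than LabVIEW (the Logger class is invented for illustration; an Action Engine gives you this shared state for free):

```python
# One shared log behind one entry point, wherever in the code it's called from.
import threading

class Logger:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:                       # every caller gets the same instance
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance.lines = []
            return cls._instance

    def log(self, msg):
        self.lines.append(msg)

Logger().log("started")     # called from one part of the application...
Logger().log("stopped")     # ...and from another
print(Logger().lines)       # both messages end up in the one shared log
```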
  12. It's not only Action Engines that suffer from God syndrome or, indeed, scope. It's like the argument between private and protected.
  13. There are several main advantages of FGs over the built-in LabVIEW global, none of which are particularly awe-inspiring enough to call it a pattern, IMHO:

1. They solve the read/write race condition problem.
2. They can have error terminals for sequencing.
3. They can be self-initialising.
4. They are singletons (unless you set the VI to clone).

A couple of comments about the blog mentioned. I think the author missed the point about race conditions and FGs, or at least didn't explain why they aren't a magic bullet for variable races. He states: "The 'read, increment, write', then, is a critical section". And in the original example using global variables, it is intended to be. But the feature that protects the variable in an FG is that the read-increment-write is placed inside a subVI, not that it is an FG. The exact same protection could be obtained by selecting the globals and increment, then selecting "Create SubVI", and putting that in the loops (a sketch of the distinction follows below).

However, in all the examples it is impossible to determine consistently what the output of Running Total 1 or 2 would be, since that depends on the order in which LabVIEW executes the loops, which may change with different compiles or even different execution speeds of the loops. So in reality, by using an FG we no longer have a race between the read and write operations, but we still cannot predict (reliably) the indicator values (although we can say that they will always increase). We now have a race condition between the loops themselves.

The thing to bear in mind about FGs is that they are a programming solution to the global-variable read/write race condition only, and therefore an improvement over the built-in globals. Many, however, would argue that global variables are evil, whether an FG or not. But they are simple to understand, easy to debug and only marginally more complex than the built-in global. You can also put error terminals on them to sequence, rather than cluttering up your diagram with frames.

That said, it is quite rare to find an FG as an exact replacement for a normal global. People tend to add more functionality (read, write, initialise, add, remove etc.), effectively making them "Action Engines".
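A minimal sketch of the distinction, in Python rather than LabVIEW (a lock stands in for the FG's non-reentrant subVI boundary; names invented):

```python
# The protection comes from making "read, increment, write" one atomic call,
# not from the storage being "global".
import threading

class FunctionalGlobal:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()   # stand-in for the non-reentrant subVI

    def increment(self):
        with self._lock:                # read-increment-write is one critical section
            self._value += 1
            return self._value

fg = FunctionalGlobal()
loops = [threading.Thread(target=lambda: [fg.increment() for _ in range(1000)])
         for _ in range(2)]
for t in loops: t.start()
for t in loops: t.join()
print(fg.increment() - 1)   # always 2000: no read/write race remains...
# ...but WHICH loop reached any given running total first is still a race
# between the loops themselves.
```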
  14. It's worth stating that, for something like a voltage (which theoretically has infinite levels), I usually use the max, mid and min values from the spec as an initial starting point. Later (once you have run a few through) you will almost certainly find that there is a much smaller range that will cause the optimisation to converge much more quickly, and enable you to reduce the levels to two optimum start points (these will be dictated by the component tolerances). It then just becomes a money vs time trade-off: 1 PC doing 800 in 5 days, or 800 PCs doing them in 10 mins. If you have to allow for settling times, then you can get really cheeky and do multiple devices on a single machine in parallel.
  15. I'm a great fan of Taguchi analysis for this type of optimisation. It is an empirical method in which you design experiments (i.e. set variables) for interconnected, multi-variable systems and iteratively derive the optimum variable settings that satisfy the criteria. It is ideal for situations where full factorial optimisation is impractical due to the number of parameters and their dependence on each other. An example used in anger is here. I have had great success with this in the past for things like PID auto-tuning, RF amplifier setup and waveguide tuning (the sorts of places where the engineer will say "twiddle these until you get that!"). Take a look and see if it will fit your scenario; a toy sketch of the idea follows below.
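A toy sketch of the idea in Python, assuming an L4 orthogonal array for three two-level factors; the measure function is an invented stand-in for the real experiment:

```python
# Taguchi-style screening: run only the 4 trials of the L4 array (instead of
# all 8 full-factorial combinations), then pick the best level per factor
# from the mean response at each level.
L4 = [  # each row: levels (0 or 1) for factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def measure(a, b, c):
    # Placeholder response; in practice, set the variables on the rig
    # and return the measured quality metric.
    return -(a * 2 + b * 0.5 - c * 1.5) ** 2

scores = [measure(*row) for row in L4]

for factor in range(3):
    # Each level appears exactly twice per factor in L4, so /2 gives the mean.
    best = max((0, 1), key=lambda lvl: sum(
        s for row, s in zip(L4, scores) if row[factor] == lvl) / 2)
    print(f"factor {factor}: prefer level {best}")
```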
  16. There's an arithmetic parser in the LabVIEW examples. It can easily be modified to include boolean evaluation (just search for "expressions" in the Example Finder).
  17. Because (for example, currently) Chrome uses Hybi 10, Firefox uses Hybi 9, IE 10 (will use) Hybi 10, IE 9 (with the HTML5 Labs plugin) supports Hybi 6, and Safari (different flavours) supports Hixie 75, 76 or Hybi 00. The specs are still fluid and browsers are having difficulty keeping up. If it's to be robust, then really I have to support all of them, and besides, some of my favourite stock tickers are still on the old versions.

Auto-negotiation: in the latest specs there is a negotiation phase whereby the client requests a connection and the server replies with the supported protocols. That's part of it (for the server). The other part is the brute-force client connection, whereby you iteratively try each protocol until one works (sketched below). This is one of the reasons why I needed to move to a class, since with multiple connections all (potentially) talking different protocols, the reads and writes need to switch on each call and maintain their state (i.e. closed, connecting etc.). Rather than writing a connection manager, it made sense to bundle that info with the class data. Besides, this is TCP/IP over HTTP, so performance is secondary.
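A minimal sketch of the brute-force client side, in Python (the protocol strings and function names are invented for illustration):

```python
# Try each handshake flavour in turn, newest first, until the server accepts
# one; the winner becomes per-connection state that reads/writes switch on.
SUPPORTED = ["hybi-17", "hybi-10", "hybi-08", "hybi-06", "hybi-00", "hixie-76"]

def try_handshake(server_protocols, protocol):
    # Stand-in for sending the real HTTP upgrade request for `protocol`
    # and checking the server's reply.
    return protocol in server_protocols

def connect(server_protocols):
    for protocol in SUPPORTED:
        if try_handshake(server_protocols, protocol):
            return protocol               # remember it for this connection
    raise ConnectionError("no common websocket protocol")

print(connect({"hybi-00", "hixie-76"}))   # -> "hybi-00"
```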
  18. There once was a program, LabVIEW.
Out of which fast programs did spew.
Then along came objects,
to confuse and perplex.
Now compiles take a week or two.

There once was a LabVIEW coder.
Who could debug programs by their odor.
One particular smell,
he thought was nouvelle.
But was reported in 8 and older.
  19. Yup. It's not as clean as I usually like (you can't use polymorphic VIs with dynamic terminals), but I needed to maintain state on a per-connection basis. So this is one of the rare instances where it is the better choice. It won't happen again, I promise. (Sky's always grey over here.)
  20. Not sure what you mean by "fixed code". But I won't be releasing it until the API is completed and tested (it's a bit of a mess at the mo'). I'm currently up to supporting Hybi 17, 10, 8, 6 and 00 (looking at Hixie and auto-negotiation now) and have a prototype library for exporting events and commands (i.e. for remote monitoring), but I still need to figure out the reciprocal methods. And (this'll surprise a few people) some of it is LVOOP. Oh, and finally, you get events for errors, status, messages etc.
  21. IMHO, websockets will make all of them obsolete. Firefox 7 is available on Android, and the iPad/iPhone ship with websocket-enabled Safari (iOS 4.2). There has already been a proof-of-concept implementation on this thread.
  22. If you use the transport.lib in the CR, it will give you transparent zlib compression (as well as encryption, send timestamps and throughput) across TCP/IP, UDP and Bluetooth. Might be useful.
  23. You mean the "Muddled-Verbose-Confuser". Every website is being built on that model, and every website built with it requires huge amounts of caching to make it barely usable... that's why I wrote this.