Posts posted by drjdpowell

  1. This comment got me wondering... Maybe 10% of my app-specific classes use dynamic dispatching to provide the application's functionality. What's the going rate for others?

    I've only been using OOP inheritance for less than two years, and my job has never been more than 30% programming, so I don't have meaningful statistics. But I have had one program that involves look-up and display of data records from seven very different measurements from a database. This led to a complex "Record" class with seven children. I also developed my messaging design for this project; it has seven "Plot" actors to display the datasets of the seven record types in a subpanel. Making a matching change in seven subVIs (actually eight, including the parent) is a pain, regardless of what the change is. It only takes one project to learn that lesson.

    -- James

  2. Hello,

    Not entirely following the latest part of the discussion, but there may be some confusion between what Daklu was doing, extending the message identifier with "Slave1:", and the common method of adding parameters into a text command (as, for example, in the JKI state machine with its ">>" separator). Parameters have to be parsed out of the command when received, thus the need to decide on separating characters and write subVIs to do the parsing. But Daklu's "Slave1:ActionDone" never needs to be parsed, as it is an indivisible, parameterless command that triggers a specific case in his "Master" case structure (see his example diagram).

    Now, if he did want to separate the "Slave1" and "ActionDone" parts (for example, to use the same "ActionDone" code with messages from both slaves using the slave name as a parameter) then he could parse the message name on the ":" or other separator. But there is an alternative that avoids any chance of a parsing error (Shaun, stop reading, this is getting OOP-y; there's even a pattern used, "Decorator"). Instead of using a "PrefixedMessageQueue", use a variation of that idea that I might call an "OuterEnvelopeMessageQueue". The latter queue takes every message enqueued and encloses it as the data of another message, with the enclosing messages all having the name specified by the Master ("Slave1" for example). When the Master receives any "Slave1" message, it extracts the internal message and then acts on it, with the knowledge of which slave the message came from.

    You can use "OuterEnvelopes" to accomplish the same things as prefixes, though reading the message takes more work (though no more than parsing a prefixed message) and it is less human-readable (unless you make a custom probe). It may be useful, though.

    -- James

    Note: I put an OuterEnvelope ability in my messaging design, but I've never actually used it yet, so I'm speaking theoretically.

  3. Why not use non-classed VIs for create, execute and destroy and make them reentrant? Is classing "create" and "execute" just a handy way to bundle up the input queue, output queue, object, and mediator's queue?

    Not every use of LVOOP is some complicated thing that can't be done with "regular" LabVIEW. Quite often it is just a clean way to make a type-def cluster with some handy extra features and its own distinct "type" identity. Don't downplay the value of giving your cluster its own, unique, pretty coloured wire that will break if you accidentally wire it to the wrong thing.

    -- James

    • Like 1
  4. The front panel is for matching the conpane, the BD is to organize whatever makes the most sense. Sometimes that ends up matching the conpane, but usually only in simple VIs.

    Uhh... that's a block diagram. Why do you *care* where they connect?

    I like the visual connection between block diagram, front panel, and subVI connections. They are different facets of the same entity. Note that most of my LVOOP subVIs are both quite simple and have relationships with multiple other subVIs. Other methods in the same class will have matching connections for the object and error terminals; override VIs in related classes will match all connections. Mimicking these relationships visually on the block diagram is helpful (particularly as LVOOP means you have a lot of related subVIs).

    I remember posts on the dark side suggesting matching the general location of a terminal on the bd, fp, and connector pane was "good style." I tried it for a while but it got to be a pain rearranging all my wires and block diagram if I changed a conpane terminal connection. I'll try to put fp controls in the same general location as the conpane terminals, but I don't worry about matching the block diagram terminals. I'll put them wherever they need to be to make a clear block diagram.

    I'll make exceptions to the pattern if it doesn't flow naturally into the code. But I find I seldom need to do any rearranging. I try to design conpane locations for a class before doing much coding, so I seldom change a terminal connection (changing the conpane on several dynamically-dispatched versions of the same subVI is a real pain). And with small subVIs, with not that many inputs (as Paul pointed out), there is less need for careful organizing.

    Connector pane locations also carry information to me: top left and top right terminals are the "subject" of the subVI (a relationship formalized in LVOOP, but common in other LabVIEW APIs); bottom left and right is the error cluster (OK, duh); the remaining two left inputs are... uh... "consumed parameters" of the method; the right two are "results returned". Top and bottom connections are for options, or things needed to perform the action that are in some way... "independent" of it. So, to me, the "Observer Registry" input of my example naturally goes on a top input of the "Receive..." VI, not in the front.

    Hmm, OK, maybe it's just that my brain is wired funny.

    Personally I think most style debates are much ado about nothing. The goal is code clarity, not style for the sake of style. There are lots of ways to write clear code and communicate intent. We're "advanced" developers, right?

    True. I was motivated to start this conversation by reading an idea to eliminate nearly all customization of terminals; not just icon/no-icon, but the ability to place the label in different locations, in order to standardize how things are done. I don't think this is necessarily desirable, even if the standard to be chosen were my own.

    -- James

    I frequently use CAD applications too, but I always liked the way LabVIEW does not automatically snap into position. Actually it does make some things easier for me (on large VIs with many wires). Best solution might lie in between (like turning snap positions on/off with a hot-key).

    I've been thinking of posting simple CAD-lite guides as an Idea on NI.com. I think it could work well as long as it is kept simple. Like snap in position only for:

    1) eliminating a wire kink.

    2) aligning a control/indicator to another control/indicator, regardless of distance between them (possibly include constants and control references)

    3) align all BD items, but only to other items that are nearby (to reduce the number of snap-to points which would be far too large otherwise)

    Maybe even just do (1) and (2), since most subVI/primitive alignment involves eliminating wire bends anyway. The "snap-to" range should only be a few pixels (less than the snap-to range currently used in aligning labels). Keeping the feature quite limited will make it very intuitive and easy to use.

    -- James

  6. exception: I avoid wiring through the top/bottom of structures since it is not well defined if those tunnels are input or output

    I prefer that a subVI's terminal layout roughly matches its connector pane. So in the example, "ObserverRegistry in" is on the top because it connects to the top of the subVI. In contrast, with ShaunR's example I can't tell where his seven inputs connect.

    If you deselect "view subVI as icon" and drag down the nodes, it looks like a bundle-by-name or, more accurately, like an express VI. How many people knew you have the option NOT to view a VI as an icon? Does anyone do that? :lol:

    post-17905-0-96966700-1319755009.png

    I recall in the past considering displaying a VI like this, either because it was a key VI that I wished would stand out, or it just had so many inputs/outputs that it would look like a spider otherwise. I can't recall if I actually used it, but in principle it is very similar to bundle-by-name or property nodes, and like them is somewhat "auto-documenting". However, like Paul, I currently find I don't have that many inputs/outputs in any one subVI, and they are all pretty clear, due to the nature of LVOOP-style programming.

  7. You know LabVIEW can automatically align your terminals by two clicks? :P

    --> I always spend time to clean up code, since it will come back to me after some time (easier to maintain).

    Do you mean "select the terminals" and the little pull-down menu of alignment options? Or something better? I did a little CAD work back in the day and I've always found the LabVIEW alignment tools to be slow and clunky. A relatively simple "snap to guidelines" feature (aligning in both dimensions to related objects and also aligning for straight wires) would make it easy to write code clean from the start.

    I understand the 'problem' of object or reference types, but I don't understand what you could possibly miss on small terminals for standard types like string. I would say standard types are best to read on small icons. If you don't understand standard type coloring, you should probably change glasses... :ph34r:

    I can't say I care that much about the standard types either way. A string is a string. But mixing icons and smaller types means I have to go through the effort of manually changing either the standard or non-standard types. And in larger diagrams, with only a few controls/indicators, I don't see much benefit to the smaller terminals, even for standard types. But I'd be happy to meet people half way if we all want to standardize on small terminals for simple types and icons for objects, type-def clusters and enums, refnums, or anything where the small terminal doesn't convey the full type.

    If you don't remember the type of your object, just switch to FP (icons on FP are much more readable than icon terminals on BD).

    Flipping back and forth between BD and FP just to figure out what the terminals are? Can't say I like that option for code clarity! :wacko:

    -- James

    PS> I haven't been keeping up the score; it's a low two-digit number to a mere 3.5 unicorns.

  8. Ahhh.... so much better. ;)

    My experiment illuminated a few things:

    1) for small, compact diagrams the smaller terminal size is plenty large enough to be clear, and I can see why they are an advantage to making small compact diagrams.

    2) I, personally, find it less tedious to spend time making up lots of icons than carefully aligning and adjusting things into small compact diagrams. I'm perfectly happy with my previous diagram that takes up twice the space and doesn't have the terminals all lined up, yet I'll open the icon editor if I notice a few-pixel inconsistency among related icons. Your OCD may vary.

    3) I really don't like the "OBJ" issue. I don't care if I can read the label; although many objects like "Observer Register" will basically never be labeled any different, "Message" objects could easily be labeled "Command", "Response", or "Ack". And the icons sometimes express cross-class relationships, such as the "envelope" icon showing the close connection between Messenger and Message objects, or other relationships among objects, subVIs, clusters, and enums. And hey, LabVIEW's not a text language, don't you know. :D

    -- James

    No, it's not (an object). It's just a container, like a string, int etc., that contains raw refnum data. Why complicate it by adding 15 new VIs just to read/write to it?

    Raw refnum data? Does the cluster also have an enum or some such that identifies what the refnum specifically is? That you use to decide internally what bit of code to execute? You're right, that's nothing like an object; it certainly doesn't need an identifying icon. I'm feeling calmer already!

    -- James

    PS> I just noticed the 3D raised effect in your subVI icons; nice!

  10. I greatly prefer the icons (as James knows) because:

    Oh, are you "paul.dct" on NI.com?

    See everyone, another unicorn.

    Terminal icons are too similar in shape and size to sub vi icons, and to a lesser extent class constants.

    I must admit, I wouldn't mind the visual distinction of a control from a VI (the border around the icon, little arrow, etc.) being a little more obvious.

    -- James

    Yup. But my style is to have control labels to the left, indicators to the right and for them to be stacked closely to lessen wire bends. Of course. I don't suffer as much from the "obj" problem :)

    Well, I certainly can't pretend that isn't a very clear diagram. But that "refnum", that's actually a cluster, is really making my OCOOD act up! It's an Object! It's an Object! Quick, quick, open an icon editor! Aaarrrggghhhhhhhh...

  12. If I count right that's 9.5 to 1.5 (2.5 if you count AQ).

    Let's look at one of my more recent subVIs:

    post-18176-0-84452700-1319643931_thumb.p

    I chose this one because it has 8 total controls/indicators, which is more than most.

    The first thing this illustrates is that I use a lot of LVOOP objects, of more than one type, and a little thing that says "OBJ" really isn't that useful for easily identifying the object.

    The second thing is that in subVIs, the terminals are all (or mostly) the inputs and outputs of the VI, and I place them on the outside, with plenty of space for a larger icon. Even if the icons were just bigger versions of the smaller terminals, I would still consider them superior, because inputs and outputs should stand out and draw the eye.

    This example also shows what is probably the highest density of Front Panel terminals on any of my VIs. My User Interface VIs tend to use event structures and subpanels and a relatively small number of "big" controls/indicators. "Big" in the sense that they do a lot, like a Multi-column Listbox with an extensive list of Right-Click Menu options, or a Text box that holds a summary of a large amount of information, or an Xcontrol. So I seldom have a need to pack controls in densely. In fact, the terminal icons are sometimes just sitting, unwired, near the event structure, acting more as graphical documentation than actual code. This documentation is important, because most UI control/indicators are major parts of the VI, and I want their presence to stand out on the diagram.

    To be devil's advocate, if you're fighting for space to fit yet another not-very-important indicator nested somewhere deep in a tight diagram, then you're not coding as clearly as you could. :)

    -- James

  13. Oh dear, 5 to 1 against so far. Surely there are some more unicorns?

    Additional question (motivated by the ni.com discussion): do you think having both forms leads to confusion, and would you be in favor of LabVIEW being changed to allow only one form? For the purposes of this question, assume the form to be chosen is the other one from the one you like.

    -- James

  14. I was just reading some opinions on NI.com about whether it is better to show FP terminals on the block diagram as full-sized 32x32 icons or as the smaller 16x32 terminals. I wondered what people on LAVA think. Do you use one type or the other and how strongly do you feel about it? Personally I (moderately strongly) prefer icons.

    -- James

    Now from what I can tell it seems that the sampling frequency (for the 6509 card) set in MAX is ignored and the card just samples at its maximum rate, which appears to be around 1 MHz.

    Are you sure you're actually using the "task" configured in MAX, rather than creating a new task programmatically? Is "BDHSIO" the name of your MAX task, or is it the name of a channel? If it is a channel, you are creating a new task with the default sampling rate.

  16. I've only been back on these forums for the last few months, and have been looking at every VI I can find - ramping up on inheritance. Now that the terminology is better defined - at least in this thread - (Instrument Loop is the only place to communicate with a real-world instrument; API is the set of messages that can be sent to the Instrument Loop), I can be clear about the problem:

    Think about the one-panel VIs many people start with: open a VISA session, initialize an instrument, take several measurements, save data to file, close the VISA session. Now imagine a top-level VI like the one in Daklu's post mentioned above. Add a button to the front panel called "Run Test 1". The User Event structure tells the Mediator Loop to run a one-panel VI (invoke node, do not wait for completion). I want to send the by-reference message queues to the one-panel VI so it has access to the real-world instruments via the Instrument Loop's API. I also want the by-reference message queues to be available to another top-level VI so messages can be injected (typically used for simulating instruments that are not present, or that don't have fine-enough controls for checking the test's logic).

    Hi todd,

    I can't comment on everything you and Daklu have been discussing, but let me make some comments on your design so far.

    I would suggest that, whatever design for a parallel process you decide on, use the same one for both "Instruments" and "Tests". They are both "actors"/"active objects"/"slaves"/whatever. Currently, you have a totally different method for running tests (dynamic launch after "Set Control Value") and running instruments (class method on-diagram), which means you basically have to debug two different, quite advanced and complex, designs. Settle on one design that works for both and perfect it.

    For example, I've been developing a design I call "Parallel Process". I have a parent class that contains the "Launch" (Create) VI that dynamically launches a VI, sets it up with a command messenger (message queue) and wraps that queue inside the parent object. All my... let's standardize and call them "actors"... inherit from this class (I have a few templates that I copy from). As each actor is standardized, each is created by the same parent "Launch.vi", and once that VI is debugged I never have to worry about the details of dynamic launching. And the child actor classes are very simple; generally with no additional private data items, no additional methods or overrides, and only one VI (the one that is dynamically launched) copied from a template. The complicated stuff is all in the parent class.

    Note that once you have the basics of a VI dynamically launched and running in parallel, sitting behind a command queue, you have all the ability to customize it. Your "Create Test.vi" could (after launching) send your Test a message containing a cluster of "Instrument" proxy objects (i.e. objects that contain the command queues of your instruments). Your "Create InstrumentA.vi" could send messages containing the information needed to configure InstrumentA.

    Regardless of what design you go with, try to get complicated stuff like dynamic launching and messaging standardized into clean reusable components (such as in a reuse library or a parent class) so that you can deal with the complexity once and then not have to worry about again.

    A second comment I have is about running multiple "Tests" in parallel, which I get the impression you're trying to do. If that is the case, it may be a good idea to add the ability for a Test to "lock" an instrument so that it can't be simultaneously used by other Tests, to prevent potentially dangerous conflicts. This is something for which a parent "Instrument" class (child of your "Actor" class) might be useful. The locking could use a semaphore or similar feature contained in the parent class.

    -- James

    Errors are in a SR because the Dequeue method looks for errors on its error in terminal. When it finds one, instead of dequeuing the next message it packages up the error in an ErrorMessage object and spits it out in a "QueueErrorIn" message. Then the error handler case is just like any other message handler. Some people put an error handler vi inline after the case structure. Nothing wrong with that. I prefer this way because it puts my error handling code on the same level of abstraction as the rest of the message handling code and it gives me a more coherent picture of what the loop is doing.

    I stole this idea for my own messaging system within about 10 minutes of downloading LapDog. I would also note that Daklu treats timeouts in a similar way: outputting a "Timeout" message rather than having an extra timeout-boolean output. A really great idea.

    -- James

  17. I just don't generally say to myself, "I think I'll use a Producer/Consumer design pattern here." I say, "I ought to feed this loop that's doing a bunch of stuff on command with a queue in another loop."

    I don't do a lot of thinking about "design patterns" when I'm coding either. Instead LVOOP extends my ability to think things like "these operations are closely related, I ought to make a simple class and inherit from it". That statement is not more complex than inter-loop communication.

    As I indicated before, I think one is better off just starting to use LVOOP as "type-def clusters+" and start exploring a (very small amount of) inheritance, LONG before ever reading about a "design pattern". Even now, the only "named" design patterns I've used are the "Decorator Pattern" and the "Command Pattern". Technically, I've probably used a few others, but independently of actually reading about them as "Patterns" (as, probably, have you).

    -- James

  18. I suppose the point of this thread is that this approach seems almost obvious to me, but I don't see it in other places. So is that because it's obvious to everyone else, or there is something wrong with it?

    I don't use your method, but mainly because I do use two closely-related methods that "bracket" yours.

    One is to extract important information from commands and keep it in a named cluster in a shift register. On timeout, the appropriate action is taken based on this information. This is more flexible than yours as the retained information can come from all the past commands, rather than just the last one.

    The other is to use a Notifier: wait on notification, and on timeout get the last notification. This is the same as using yours with a single-element lossy queue. I use this design when there is only one "command" (i.e. the parameters controlling what to do on timeout).

    So, your method is more capable than the latter (it can use alternate queue characteristics and multiple "commands") but less capable than the former (it only saves the last message). The problem is I can't think of a use case where I would want more than my notifier design, yet only care about remembering the last command on timeout.

    -- James

  19. Yes, exactly! Just have to figure out how to transport their control across BD lines. I'm picturing having the Execute method hold the class (with its private data) in a shift register - example below.

    ...

    Because I haven't figured out how to send all instruments' classes into invoked test VIs - although now I'll try bundling up all their message queues and passing them in. I like this idea because it allows me to send that bundle to the simulator loop, too!

    ...

    I haven't yet figured out where to put the Create and Execute VIs (not yet classed, as mentioned above). Perhaps a "dispatcher" loop at the top-level that receives Create-required information and spawns Execute loop VIs - don't know, yet.

    Hi todd, I'm having a hard time following what you're doing. It sounds complicated, but things always sound complicated when one can't tell what's going on. I'm not sure what exactly you are "classing" and calling your "instrument". So let me sketch out how I see things.

    The design I normally use is this:

    There is a communication reference (often a queue) that allows communication with a loop (on diagram or in a subVI). This loop contains information about, and methods to act on, a real-world instrument.

    Now, there is more than one place to use a LVOOP class in the above design, and more than one thing that can be referred to as "the instrument class". The place I think you're using is the latter part: the information about, and methods to act on, the real-world instrument. That's a perfectly good place to use a class, the most obvious place, and useful for various reasons, but I would never want to use that class as "the instrument" in the higher-level code. The entire structure is "the instrument", and the communication reference out front is a "reference to the instrument".

    If you look at my designs, the LVOOP classes with instrument names are actually just wrappers of a communication reference (or references). And these are effectively by-reference objects; they can be easily passed around and used in multiple places (no problem with branched wires). They serve as a proxy for, and encapsulate, the entire structure of communication, loop, data/methods, and real-world hardware. To interact with the "instrument", I send a message to this proxy object (or call a class method that sends a predefined message), which passes the message to the loop, which performs the appropriate action on the internal data or communicates with the real-world instrument.

    If I have a second, by-value class for the instrument data and methods, it is accessed only internal to the loop, and never "passed across BD lines". If you're trying to pass this by-value object between different loops, that sounds like a rather complicated and troublesome thing to do.

    -- James

    • Like 1
    What is the best way to organize code when your core program has to manage several instruments and device drivers? I need to have several instrument drivers up and running while my state machine operates. I have to talk to about 10 devices and coordinate their actions. I have to handle the case where I lose communication with some of them, in which case I need to be able to try to re-initiate communications. How do you handle this when you have to communicate with 100 devices? Surely this is not unique? I am looking for a modular approach so we can test and debug without too much going on in one VI.

    One thing to think about is the possibility of organizing instruments into "subsystems", so the higher level program controls subsystems and each subsystem deals with a limited number of instruments that work together. Then no part of the program is trying to juggle too many parts, and you can test the subsystems independently. It depends if your application can be organized into subsystems ("VacuumSystem", "PowerSystem"). I think that is called the "Facade Pattern" in OOP.

    Personally, I usually find it worthwhile to give each instrument a VI running in parallel that handles all communication with it. Then any feature like connection monitoring (as Daklu mentioned) can be made a part of this VI; this can include things like alarm conditions, or statistics gathering, or even active things like PID control (the first instrument I ever did this way was a PID temperature controller). Think of this as a higher-level driver for the instrument, which the rest of the program manipulates.

    You can use a class to represent or proxy for each subsystem or instrument; this class would mainly contain the communication method to the parallel-running VI. Daklu's "Slave loop" is an example. You can either write API methods for this class (as Daklu does) or send "messages" to this object (which I've been experimenting with recently).

    • Like 1
  21. Master/Slave, Producer/Consumer, Observer/Observable, Publisher/Subscriber, etc. describe the relationships between parallel processes, not the implementation. The transport mechanism is an implementation detail and entirely incidental to the relationship. No doubt it is a very important detail, but it doesn't affect the overall relationship between the processes.

    I agree, there are more significant differences than communication method. A "server" stands ready to respond to requests from a "client"; a "subject" notifies its "observers" of what it's doing (but is not acting on requests and is entirely unaffected by the observation). A "slave" serves, and is entirely dependent on, one master. "Producer" and "consumer" are dependent on each other and have no separate identity.

    They're all essentially pieces of code exchanging information, but the particular metaphors chosen carry useful, if imprecise, meaning.

    -- James

    • Like 1
  22. I did run into a bit of trouble when I went to implement an example. I didn't create protected accessors to the MessageQueue's private data. This means the PrefixedMessageQueue class has to contain an instance of its parent and you have to override every method to unbundle the parent object. Seems kind of wasteful and unnecessary...

    Ah yes, the "Decorator Pattern". I like using it but it is a royal pain if your class has a lot of methods, and that limits its use. Direct inheritance is easier.

    (I'll have to think about releasing a LapDog.Messaging minor update that includes protected accessors. That would allow you to only override the enqueue methods instead of all of them, but I need to ponder it for a bit.)

    Instead of accessors, you could alternatively make a "Create" method that accepts a MessageQueue object as input. Then you can provide a PrefixedMessageQueue constant as input and it will initialize the internal queue. Then PrefixedMessageQueue never needs to directly access the data items of MessageQueue (I think; I'm still waiting for a replacement of my development computer that broke weeks ago, so I can't experiment). You can, of course, make the "Create" method protected and wrap it in a "Create PrefixedMessageQueue" method that doesn't accept a class input, thus maintaining your current public API.

    -- James

  23. Does anyone actually use the master/slave? I find it to be useless. Actually I think notifiers in general are pretty useless, every time I've used them I've been "trying" to use them, and ultimately just find them to be less useful than other patterns.

    I wouldn't use the "master/slave" pattern provided, but I do use an alternate structure where a slave loop performs an action continuously, with control parameters provided in a Notifier. The difference being, the loop relies on the "Get Notifier Status" function rather than "Wait on Notification". Admittedly, I could probably do the same thing with a one-element lossy queue.

    -- James
