Posts posted by drjdpowell

  1. I want to create a plugin class hierarchy and have a generic config GUI. If a class overrides a specific method, I will enable a button that allows the user to open a class-specific config GUI.

    Would it be better to have a “ConfigGUI” object that was recursive (contained an optional subConfigGUI)?  Have a “Get ConfigGUI” method that has a “subConfigGUI” input.  The parent implementation would initialize the generic GUI and add the inputted subConfig GUI.  Child implementations could override the method to initialize a specific GUI and pass this in to the parent method.  The “display” (or whatever) method of the ConfigGUI object would enable the button if a non-default subConfigGUI was present.  That avoids any class introspection.  It would also work at any depth (so your more specific GUIs could themselves have even more specific sub-GUIs).
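    The recursive ConfigGUI idea can be sketched in text form. This is a minimal illustration in Python standing in for LabVIEW classes; all names here ("ConfigGUI", "get_config_gui", "FancyPlugin") are made up for the example:

```python
from typing import Optional

class ConfigGUI:
    """A configuration GUI that may contain a more-specific sub-GUI."""
    def __init__(self, title: str, sub_gui: "Optional[ConfigGUI]" = None):
        self.title = title
        self.sub_gui = sub_gui

    def display(self) -> str:
        # Enable the "More..." button only if a non-default sub-GUI is present.
        button = " [More...]" if self.sub_gui else ""
        return self.title + button

class Plugin:
    def get_config_gui(self, sub_gui: Optional[ConfigGUI] = None) -> ConfigGUI:
        # Parent: build the generic GUI, attaching any child-supplied sub-GUI.
        return ConfigGUI("Generic Config", sub_gui)

class FancyPlugin(Plugin):
    def get_config_gui(self, sub_gui: Optional[ConfigGUI] = None) -> ConfigGUI:
        # Child: build its specific GUI (which may itself wrap a deeper
        # sub-GUI), then pass it up to the parent implementation.
        specific = ConfigGUI("Fancy Config", sub_gui)
        return super().get_config_gui(specific)
```

    No introspection is needed: a child that doesn't override simply yields the generic GUI with the button disabled, and the recursion works at any depth.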

  2. Cyth SQLite Logger


    A logger and log viewer using an SQLite database.  

    The logger is a background process that writes entries about once per second.  A simple API allows log entries to be added from anywhere in a program.  

    A Log Viewer is available under the Tools menu (Tools>>Cyth Log Viewer); this can alternatively be built into a stand-alone executable. 

     

    Requires SQLite Library (Tools Network).  

     

    Notes:

    Version 1.4.0 is the last available for LabVIEW 2011.  New development in LabVIEW 2013.  

    Latest versions available directly through VIPM.io servers.


     

  3. Name: Shortcut Menu from Cluster

    Submitter: drjdpowell

    Submitted: 06 Mar 2013

    Category: User Interface

    LabVIEW Version: 2011

    License Type: BSD (Most common)

    A pair of subVIs for connecting a cluster of enums and booleans to a set of options in a menu (either the right-click shortcut menu on a control or the VI menu bar). Adding new menu options requires only dropping a new boolean or enum in the cluster.

    See original conversation here.

    I use this heavily in User Interfaces, with display options being accessed via the shortcut menus of graphs, tables, and listboxes, rather than being independent controls on the Front Panel.

    Relies on the OpenG LabVIEW Data Library.

    Click here to download this file

  4. The trick to making this work is that the DUT parent class must have an abstract dynamic-dispatch method for every child class of the Method class, and the corresponding DUT dynamic-dispatch method must be embedded into the "Execute Message" override method of each Message child class.

     

    No, the DD methods are not per child class; they are just methods to do stuff (which the children can override).  Child DUTs can also provide new methods, and messages can be written that call them by casting the DUT input to the correct child class.

     

    — James

     

    Added later: here’s an example “execute” method (though called “Do.vi”):

     

    post-18176-0-41217100-1362579061_thumb.p

     

    “VI Display name” is the message; it calls two methods on “Logger Daemon” to complete its task.

  5. Your Message class should have an “execute message” dynamic-dispatch method that has a DUT input.  Inside the execute method you call dynamic-dispatch methods of DUT.  So you can make child message classes that call different DUT methods, and child DUT classes that override those methods.
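    In textual form (Python standing in for LabVIEW classes; the method names "execute", "measure", "reset" and the class names are only illustrative), the double dispatch described above looks like this:

```python
class DUT:
    # Dynamic-dispatch methods defined by the parent; children may override.
    def measure(self) -> str:
        return "generic measurement"
    def reset(self) -> str:
        return "generic reset"

class FastDUT(DUT):
    # Child DUT overriding one parent method.
    def measure(self) -> str:
        return "fast measurement"

class Message:
    # "Execute Message" dynamic-dispatch method, with a DUT input.
    def execute(self, dut: DUT) -> str:
        raise NotImplementedError

class MeasureMsg(Message):
    def execute(self, dut: DUT) -> str:
        # First dispatch: which Message subclass ran; second dispatch:
        # which DUT implementation of measure() is called.
        return dut.measure()

class ResetMsg(Message):
    def execute(self, dut: DUT) -> str:
        return dut.reset()
```

    Each child message calls whatever DUT methods it needs, and each child DUT supplies its own implementations; the two hierarchies vary independently.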

  6. If an actor has a behavior that requires some sort of event timeout, I prefer to implement that behavior in the actor using a metronome or watchdog loop--similar to the helper VI you describe above.

    I made one yesterday. Here is the only public API method, “Send Message with Reply Timeout”, which is identical to my regular “Send Message” method but with a timeout input and an optional input for the Timeout message to send:

    post-18176-0-50840700-1362305247.png

    Works by an asynchronous call of a small “watchdog” that waits on the reply and returns either that reply or a timeout message. It then shuts down.

    post-18176-0-85607400-1362306235.png

     

    I should add that this only works for a system where the address to send replies to can be attached to the original request message.  Hard to define a “reply timeout” without a defined reply.

     

    — James
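    The watchdog idea translates directly to any language with queues and lightweight asynchronous tasks. Here is a minimal Python sketch (the names and the string "Timed Out" are illustrative, not the actual LabVIEW API):

```python
import queue
import threading

def send_with_reply_timeout(do_request, caller_inbox: queue.Queue,
                            timeout_s: float, timeout_msg="Timed Out"):
    """Send a request whose reply address is a private 'watchdog' queue;
    a small asynchronous watchdog forwards either the reply or a timeout
    message to the caller's inbox, then shuts down."""
    reply_address = queue.Queue()   # attached to the outgoing request
    do_request(reply_address)

    def watchdog():
        try:
            msg = reply_address.get(timeout=timeout_s)
        except queue.Empty:
            msg = timeout_msg
        caller_inbox.put(msg)       # forward reply or timeout, then exit

    threading.Thread(target=watchdog, daemon=True).start()
```

    The caller never blocks: it just receives either the real reply or the timeout message through its normal inbox, which is exactly what keeps the design fully asynchronous.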

  7. You are correct that if one of the Exited messages fails to arrive then the UI loop will never shut down.  That's exactly what I want to happen.  Putting a timeout in the UI loop permits a resource leak--Loop 1 can continue operating after the rest of the application has been shut down.  By not using a timeout any errors in my shutdown logic are immediately apparent.  Fail early, fail often.  The hardest bugs to fix are those I don't know about.

     

    Well, timing out would be an error, and one could handle that error in multiple ways: log the error and shut down anyway, display the error to the User and wait for input, trigger a hardware emergency safe mode.  My point is that one’s code may need to be aware when something that is supposed to happen has not happened within a reasonable time.  It’s obvious how to do that with synchronous command-response, but not so clear if one is staying fully asynchronous.

     

    I’m thinking of creating a “helper” VI for my framework that can be configured to wait for a message.  If it receives the message within the designated time it just forwards the message to its caller and shuts down; otherwise it sends a "timed-out” message instead.  That way the calling loop can send a command that it expects a reply to, and execute code to handle the case that the reply never comes, while remaining fully asynchronous and unblocked.  

  8. For this reason your messaging system either needs a long timeout...

     

    Messaging systems don't need those features any more than a person without a car needs car insurance.  The reason you need those features is because you are using a message protocol designed around synchronous query-response communication.  If your protocol is designed around asynchronous event-announcements then the need for those features goes away.

     

    Question: Doesn’t a fully asynchronous message system still need the concept of a timeout?  In your shutdown example, the UI loop is at some point waiting to receive the “Exited” messages from Loops 1 and 2.  If one of those messages fails to arrive, won’t it be waiting forever?  

  9. Well the act of delegating to a private subActor (or any private secondary asynchronous task) only hides the extra layer. The public Actor might indeed respond to the message in short order, but there's no getting around the fact that actually acting on that message takes time. Ultimately, if some sort of message filtering has to be done at any layer because it just doesn't make sense to process everything, you're back to the original argument. If I can't do stuff like this easily with an Actor and my Actors are just hollow shells for private non-Actor tasks, I might not see a benefit to even using the Actor Framework in these cases.

     

    I was thinking more of the use of a message queue as a job queue for the actor, rather than what to do about filtering messages, but the general idea would be to have the actor’s message handler serve as supervisor or manager of a specialized process loop.   The manager can do any filtering of messages, if needed, or it can manage an internal job queue.  It can also handle aborting the specialized process by in-built means that can be more immediate than a priority message at the front of the queue (like an “abort” notifier, or directly engaging a physical safety device).  It wouldn’t be a hollow shell.

  10. This is typically only an issue when your message handling loop also does all the processing required when the message is received.  Unfortunately the command pattern kind of encourages that type of design.  I think you can get around it but it's a pain.

     

    My designs use option 5,

     

    5. Delegate longer tasks to worker loops, which may or may not be subactors, so the queue doesn't back up.

     

    The need for priority queues is a carryover from the QSM mindset of queueing up a bunch of future actions.  I think that's a mistake.  Design your actor in a way that anyone sending messages to it can safely assume it starts being processed instantly.

     

    I particularly second this.  Actors should endeavor to read their mail promptly, and the message queue should not double as a job queue.

  11. Hi Ben,

    The launch technique is stolen from mje’s “message pump” package.  It’s used by the Actor Framework, also.  

     

    The Actor Manager installs in a different location, and should be available under the Tools menu:

    post-18176-0-74878200-1361877295.png

    Please note that the Actor Manager is badly in need of rewriting.  It’s not pretty.

     

    What’s your use case for TCP?  I have TCP Messengers in the package (which use TCP actors to run the client/server) that are intended to seamlessly substitute for other messengers (handling replies and observer registrations).  At some point I will write the code to launch an actor sitting behind one of these servers.  Do you want a TCP actor to talk to external non-LabVIEW code?

     

    BTW, I’m in the midst of writing a talk on this package that I’m going to present at the European CLA Summit.  Of course, this has made me relook at lots of things I did and want to change them  :) .  I’m going to upload a new version before that summit in April.  

     

    — James

  12. In the framework I’ve developed, I get a lot of use out of subclassing the central enqueuer-like class (called, perhaps too simply, “Send”).  Below is the class hierarchy.  But “assertions of correctness”, what’s that?  Breaking down some walls will certainly lose something to what AQ is trying to do.  Personally, I think the tradeoff in flexibility would be worth it, but it would mean that that flexibility would be used to build some problematic code.  

     

    post-18176-0-38696600-1361873605_thumb.p

     

     

  13. I tend to agree; Send.vi is an invocation of the message transport mechanism -- Messenger.lvclass -- and Construct.vi (I prefer this terminology to Write.vi) is a member of a concrete instance of Message.lvclass -- something I realized a while back after naïvely convolving the message with the messenger.

     

    This aside, I still want to impose 'Must Implement' on Construct.vi for Message.lvclass, yet it clearly cannot be 'Must Override' because message construction has a unique ConPane for each concrete message type.

     

     

    It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

     

    But aren’t you in danger of being the overzealous designer, Jack?  :)  You want to impose “Must Implement” of a “Construct.vi” on “Message”, a class that I don’t believe even has a constructor at the moment.  And at least initially, you imagined this required constructor to be “Send”.  What requirements could you have made at that point that were not, in hindsight, either blindingly obvious (“we need to construct our objects”), or overzealous (“must be called Send or Constructor”, "must have an Enqueuer input”)?  You can’t enforce “must actually use this function”, so any error in requirements will just lead to unused “husk” methods made only to satisfy the requirements.  

     

    — James

     

    BTW: There is an example of this very thing in the Alpha/Beta Task example in 2012, where “Alpha Task Message” has two independent constructors: a “Send Alpha Task”, following the standard pattern (not enforced, of course), and then a “Write Data” constructor written when it became necessary to write a message without sending it.

  14. Consider for Actor Framework, if Message.lvclass were to define 'Must Implement' on Send.vi (as it already specifies 'Must Override' on Do.vi). Do you agree this as a good use of the contract to make subclass creation more robust and even simpler? Does this example better explain my sentiment for wanting 'Must Implement'?

    A good example, because “Send” being a method of Message has always looked wrong to me.  Messages are written; sending is an action of a communication channel.  The act of sending should be independent of the message type.  I don’t want to implement Send.vi; I want to implement Write.vi.  How will “Must Implement Send.vi” feel about that?

    Also, what about messages that have no data, and thus don’t need a creation method of any kind?  They don’t need to implement Send or Write.  

  15. I do have an example of a parent-class restriction that I wish I could make.  I have an abstract “address” object that defines a method for sending a message.  The framework that uses this parent assumes that “Send.vi” is non-blocking (and reasonably fast).  But there is nothing stopping a child-class being implemented with a blocking enqueue on a size-limited queue, other than documentation.
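    The restriction I'd like but can't enforce is behavioral, not structural, which is why it can only live in documentation. A Python sketch of the two child implementations (names illustrative):

```python
import queue

class Address:
    """Abstract address: the framework assumes send() is non-blocking
    and reasonably fast -- but only documentation says so."""
    def send(self, msg):
        raise NotImplementedError

class NonBlockingAddress(Address):
    def __init__(self, maxsize: int = 0):
        self.q = queue.Queue(maxsize)
    def send(self, msg):
        # Honors the contract: raises queue.Full rather than stalling
        # the sender when the queue is size-limited and full.
        self.q.put_nowait(msg)

class BlockingAddress(Address):
    def __init__(self, maxsize: int = 1):
        self.q = queue.Queue(maxsize)
    def send(self, msg):
        # Nothing in the parent prevents this: a blocking enqueue on a
        # size-limited queue, which can stall the entire sending loop.
        self.q.put(msg)
```

    Both children satisfy every structural requirement the parent could state; only the timing behavior differs, and that is exactly the part no class contract can check.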

  16. I am curious why you used the In Place Element Structure for accessing the sqlite3.dll?

     

    One of the topics in the thread following this post by MattW is a possible reduction in performance due to LabVIEW needing to check if the dll path input has changed between calls.  In this later post by me I explained why I switched to using an in-place structure:

     

    "I found through testing that some of my subVIs ran considerably slower than others, and eventually identified that it was due to details of how the class wire (from which the library path is unbundled) is treated. Basically, a subVI in parallel to the CLN node (i.e., not forced by dataflow to occur after it) would cause the slowdown. I suspect some magic in the compiler allows it to identify that the path has not changed as it was passed through several class methods and back through a shift register, and this magic was disturbed by the parallel call.

    This being a subtle effect, which future modifiers may not be aware of, I’ve rewritten the package to use In-Place-Elements to access the library, thus discouraging parallel use."

    In the Execute Prepared SQL (string results).vi you didn't wire the EI/EO terminals on the Get Column Count.vi. I assume that was an oversight?

    Oops!

    I noticed in the Pointer-to-C-String to String.vi you added 1 to the Length before the MoveBlock call, but elsewhere you used the MoveBlock directly then converted to a string. Additionally, in the Pointer-to-C-String to String.vi you preallocate the U8 array, but not elsewhere. This is an inconsistency that's not explained. Could you elaborate on this?

    I really should document more.   This was only my second “wrap a C dll” job, and the first time I’ve used “MoveBlock”.  The issue is the fact that C strings have an extra 00 byte at the end and thus are one byte longer than their LabVIEW string length.  I’m not sure I’m doing it correctly, but in “Pointer-to-C-string to String” I’m walking along the string to find the 00 byte, while in the other MoveBlock uses I’m getting the exact string length from sqlite.  
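    The two cases can be illustrated in Python with ctypes, whose memmove is the analogue of LabVIEW's MoveBlock (function names here are mine, not the package's):

```python
import ctypes

def c_string_known_length(ptr: int, length: int) -> bytes:
    """sqlite gave us the exact length: copy just those bytes.
    The trailing 00 terminator is NOT part of the string data,
    so the C buffer is one byte longer than the string."""
    buf = (ctypes.c_char * length)()
    ctypes.memmove(buf, ptr, length)        # analogue of MoveBlock
    return bytes(buf)

def c_string_unknown_length(ptr: int) -> bytes:
    """No length available: walk along the string until the 00 byte,
    as 'Pointer-to-C-String to String.vi' does."""
    out = bytearray()
    byte = ctypes.c_ubyte()
    offset = 0
    while True:
        ctypes.memmove(ctypes.byref(byte), ptr + offset, 1)
        if byte.value == 0:
            return bytes(out)
        out.append(byte.value)
        offset += 1
```

    When the length comes straight from sqlite (e.g. sqlite3_column_bytes), the exact copy is both simpler and safer; the byte-walk is only needed for bare NUL-terminated pointers.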

    Hm, I'm very sorry. I made a demonstration project and there everything works fine.

    Strange, I'll analyse my VIs.

    Check that you are “resetting” any statements when finished (or you’ll be holding a Read lock on the database file), and that you aren’t holding an SQL transaction open (a Write lock).  

     

    — James

  17. When inserting data into the database through producer consumer loops, the producer loop gets paused during the calling of the sqlite execution VI.

    Could you post some code illustrating the problem?  Or images of your producer and consumer code?  I can only guess that you're holding a lock on the database open somehow.
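    The lock behavior at issue is easy to demonstrate with two connections to the same SQLite file; here in Python's sqlite3 (autocommit mode, so transactions are explicit; the schema and short busy-timeout are just for the demo):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, timeout=0.2, isolation_level=None)
other = sqlite3.connect(path, timeout=0.2, isolation_level=None)
writer.execute("CREATE TABLE log (msg TEXT)")

# Holding a transaction open means holding a write lock...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO log VALUES ('a')")
try:
    other.execute("INSERT INTO log VALUES ('b')")
    blocked = False
except sqlite3.OperationalError:    # "database is locked"
    blocked = True

# ...and the other connection stays blocked until the commit
# (un-reset statements similarly hold a read lock on the file).
writer.execute("COMMIT")
other.execute("INSERT INTO log VALUES ('b')")
```

    A producer loop pausing during inserts is exactly what the busy-wait on that lock looks like from the outside: keep transactions short and reset/finalize statements promptly.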
