
Dequeue Element Timeout State Control



See snippet below.

[attached image: post-181-0-12225400-1319468915.png — snippet of the dequeue-with-timeout pattern]

I have used this approach a few times to dequeue elements while they exist and to simply maintain the previous state when they don't. However, I cannot remember seeing this in any high level asynchronous communication approaches. Most everything I see only reacts to additional messages being sent to the 'slave' process (Daklu's slave loops, actor framework, even QMH). Correct me if I'm mistaken of course. Perhaps I'm over thinking this, but is there anything fundamentally wrong with an approach like this? Like I said, I've used this basic idea a few times before and it has served me well.

The use case is when you want the other loop to continue its previous command when nothing is left in the queue rather than pausing. A simple example could be a DAQ loop that is restartable/reconfigurable. You can send it all the configuration commands, which will be transferred without loss, but then when you start the acquisition it simply continues acquiring without any help from the other loop.
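The snippet above is LabVIEW block-diagram code, but a rough Python sketch may make the idea concrete. This is only my reading of the pattern; the command names and the `run_loop` helper are illustrative, not from the original diagram:

```python
import queue

def run_loop(cmd_queue, handlers, timeout_s=0.01, max_iterations=100):
    """Dequeue with a timeout; on timeout, repeat the previous command."""
    last_cmd = None
    for _ in range(max_iterations):
        try:
            # A freshly enqueued command always takes priority...
            last_cmd = cmd_queue.get(timeout=timeout_s)
        except queue.Empty:
            # ...but on timeout we keep last_cmd and simply run it again,
            # so e.g. an "acquire" command keeps acquiring unattended.
            pass
        if last_cmd == "stop":
            break
        if last_cmd is not None:
            handlers[last_cmd]()
```

Configuration commands drain through losslessly; once "acquire" is the last command seen, every timeout re-executes it without any further help from the other loop.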


I don't know about this one. In most situations my state is maintained separate from the actual dequeued element: state is a calculation based off history, and rarely defined exclusively from a single element. So during a timeout I use the calculated state and the previous element is likely irrelevant.


I don't think there's anything inherently wrong with it. It's a clever improvement over QSM implementations I've seen where the default case has an internal case structure to handle the many different "continue doing the same thing until told otherwise" situations.

The use case is when you want the other loop to continue its previous command when nothing is left in the queue rather than pausing.

IMO, what you have is definitely an improvement but--as the quoted text indicates--you're still using a QSM. Some people swear by them. Personally I find them more trouble than they're worth.

[Even though I don't use and (still) don't like QSMs, that seems to be enough of an improvement to deserve a :star: .]


I don't think there's anything inherently wrong with it. It's a clever improvement over QSM implementations I've seen where the default case has an internal case structure to handle the many different "continue doing the same thing until told otherwise" situations.

IMO, what you have is definitely an improvement but--as the quoted text indicates--you're still using a QSM. Some people swear by them. Personally I find them more trouble than they're worth.

[Even though I don't use and (still) don't like QSMs, that seems to be enough of an improvement to deserve a :star: .]

I mentioned your slave loop architecture because from what I saw when getting into it, the execution loop waits to dequeue a 'message' and then act upon it. Is this not quite similar to a QSM? It's entirely possible that I've missed something important about the architecture as I've only had time to give the thread and the code a cursory look. I also brought up the Actor framework because while even more abstracted from a traditional LV architecture, it seems to me that the actors are simply waiting for messages as well. Objects/text/enums/variants/whatever, it seems to me that you don't necessarily want the other code to simply wait for a new message all the time. Now I'm probably beating a dead horse, but I did want to explore how this affects the other architectures that are more complex than just a QSM.

Edit//

Oh! Thanks for the stars :cool:


I mentioned your slave loop architecture because from what I saw when getting into it, the execution loop waits to dequeue a 'message' and then act upon it. Is this not quite similar to a QSM?

It looks very similar, but the way it's used is entirely different. QSMs are based on commands. You send a command and the loop executes it and waits for the next command. That part is similar.

The difference is in expectations and how states are represented. The QSM doesn't have explicit "states," it has commands. People try to emulate states by issuing the same command(s) repeatedly until it is time to move to a new "state." (I'll call it a "fuzzy state" for reasons that hopefully will become clear.)

For simple state machines it can work okay, but it can quickly and easily grow out of control. For example, I can continuously queue up three commands, GetData, ProcData, and SendData, and conceptually they are a single CollectingData fuzzy state. I could also have a fuzzy state called StreamingFromDisk that repeatedly queues LoadData and SendData commands, and another fuzzy state StreamingToDisk that repeats the GetData and SaveData commands.

So now you have 5 cases: GetData, ProcData, SendData, LoadData, and SaveData. In QSM terminology these are referred to as "states." I'd be really interested in seeing someone create a state diagram using those five states. (I don't think it can be done, but I'm not a state machine expert.) The three states that do exist, CollectingData, StreamingFromDisk, and StreamingToDisk, aren't shown anywhere in the code. It's up to the developer reading the code to mentally put the pieces together to form an image of the state machine. The states are "fuzzy" because they're not well defined. The lack of definition makes it very easy to break the state machine without realizing it.
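Purely for illustration, the fuzzy states can be written out as data (a Python sketch using the names from the example above): each conceptual state is just a command sequence the QSM keeps re-enqueuing, and the three real states never appear as cases in the QSM itself.

```python
# The QSM's cases are the five commands; the three conceptual states
# exist only as the sequences a developer keeps re-enqueuing.
FUZZY_STATES = {
    "CollectingData":    ["GetData", "ProcData", "SendData"],
    "StreamingFromDisk": ["LoadData", "SendData"],
    "StreamingToDisk":   ["GetData", "SaveData"],
}

def expand(state, cycles):
    """The flat command stream a QSM executes while 'in' a fuzzy state."""
    return FUZZY_STATES[state] * cycles
```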

The slave loop example I posted includes a couple different ideas that probably aren't explained very clearly. (I've been thinking about blogging a series of articles about it...) These aren't new ideas, but I'll define them here for clarity. First, in my terminology a "slave" is a parallel process where all messages to and from the slave are routed through a single master process. A single loop is the smallest component that can be a slave, but there could also be multiple loops in a slave. A slave can be wrapped in a vi or a class, or it can simply be a separate loop on a block diagram.

Second is the "message handler," shown below. Most people look at it and think it is a QSM. If you look closely you'll notice I don't extend the queue into the case structure as many QSMs do. I don't allow the message handler to send messages to itself because that's a primary cause of broken QSMs.

More importantly, it's a reminder to me that each message must be atomic and momentary. Atomic in that each message is independent and doesn't require any other messages to be processed immediately before or after this message, and momentary in that once the message is processed it is discarded. It's not needed anymore. (Tim and mje do the same thing I think.)

[attached image: post-7603-0-48078500-1319220273.png — message handler block diagram]
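A hedged Python sketch of that message-handler rule (my reading of the diagram, not the diagram itself): the queue reference never reaches the handlers, so the loop cannot enqueue messages to itself.

```python
import queue

def message_handler(msg_queue, handlers):
    """Each message is atomic and momentary: handle it, then discard it."""
    while True:
        msg, payload = msg_queue.get()   # wait for the next message
        if msg == "Exit":
            break
        # Handlers receive only the payload, never msg_queue, so this
        # loop has no way to send messages to itself.
        handlers[msg](payload)
```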

The third is the "simple state loop." It's been around near enough to forever that everyone should be aware of it. NI includes a "Standard State Machine" template with LabVIEW.

[attached image: post-7603-0-27858600-1319220294.png — simple state loop]

When I'm coding, a new loop usually starts life as a message handler. Sometimes as the project progresses I discover the loop needs states, so I add a simple state loop around the message handler as shown below to create a state machine. How do I know if I need states? It's kind of intuitive for me, but a good clue that you're transitioning from message handler to state machine is needing a shift register (or feedback node) to maintain flags or other information about itself.

[attached image: post-7603-0-11595100-1319222046_thumb.pn — state loop wrapped around a message handler]
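One way to sketch that evolution in Python (illustrative names; the shift register becomes a local variable): an external message can force a state, and between messages the loop follows its own internal transitions.

```python
import queue

def state_machine(msg_queue, transitions, initial="Idle", max_passes=20):
    """A message handler wrapped in a simple state loop.

    transitions: state -> next state to take when no message arrives.
    """
    state = initial              # the value a shift register would carry
    visited = []
    for _ in range(max_passes):
        try:
            msg = msg_queue.get_nowait()   # the message-handler part
            if msg == "Exit":
                break
            state = msg                    # a message forces a new state
        except queue.Empty:
            state = transitions.get(state, state)  # internal transition
        visited.append(state)
    return visited
```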

State machines and message handlers serve very different purposes. The QSM looks like it was created in an attempt to add stateful information to message handlers as requirements changed. I hold a minority opinion, but personally I don't think they are a very good solution. There are other reasons why the QSM is a poor substitute for state machines, but that's a different discussion.


Second is the "message handler," shown below. Most people look at it and think it is a QSM. If you look closely you'll notice I don't extend the queue into the case structure as many QSMs do. I don't allow the message handler to send messages to itself because that's a primary cause of broken QSMs.

More importantly, it's a reminder to me that each message must be atomic and momentary. Atomic in that each message is independent and doesn't require any other messages to be processed immediately before or after this message, and momentary in that once the message is processed it is discarded. It's not needed anymore. (Tim and mje do the same thing I think.)

This was the disconnect between my understanding and your implementation I was expecting. With the assumption that all messages are atomic and momentary, of course there is no need to do anything but wait for a new message to execute. Also, I may have unintentionally limited the original question to 'state' machines. I've found that it can be useful with unreliable network communications where the information being transmitted is simple control signals that can be lossy. Things time out every once in a while; just ignore it and continue with the previous commands. RT targets and such. This may open the door to a myriad of ways to handle such situations.

I suppose the point of this thread is that this approach seems almost obvious to me, but I don't see it in other places. So is that because it's obvious to everyone else, or there is something wrong with it?


Somewhat on this topic... do you ever add some kind of scheduled/timed tasks to your command handling loop? I have a similar structure with a timeout wired to the dequeue, and I recently added code to the timeout (no command) case that runs through an array of "timed tasks" and any that have reached their "next execution time" are added to the command queue. This seems to be a flexible way to add tasks that need to be executed at regular but not precise intervals. Adding a scheduled task is as simple as enqueueing a "Start Scheduled Task" command with a few parameters including a unique name, and stopping it just requires a similar "Stop Scheduled Task" command with that name as a parameter to remove it from the list. Does anyone else use a similar structure? Have you implemented something similar in a different way?
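A minimal Python sketch of that timed-task scan, assuming a task table of name -> (next_due, interval); `service_schedule` is a hypothetical helper standing in for the code in the timeout case:

```python
import queue
import time

def service_schedule(tasks, cmd_queue, now=None):
    """Run from the 'no command' timeout case: enqueue any task whose
    next execution time has passed, then advance its due time."""
    now = time.monotonic() if now is None else now
    for name, (due, interval) in tasks.items():
        if due <= now:
            cmd_queue.put(name)                  # execute it as a command
            tasks[name] = (due + interval, interval)
```

A "Start Scheduled Task" command then just inserts an entry into `tasks`, and "Stop Scheduled Task" pops it by name.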


I suppose the point of this thread is that this approach seems almost obvious to me, but I don't see it in other places. So is that because it's obvious to everyone else, or there is something wrong with it?

I don't use your method, but mainly because I do use two closely-related methods that "bracket" yours.

One is to extract important information from commands and keep it in a named cluster in a shift register. On timeout, the appropriate action is taken based on this information. This is more flexible than yours as the retained information can be from all the past commands, rather than just the last one.
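That first method might look like this in Python (the settings record stands in for the named cluster in a shift register; the field names are made up):

```python
def fold(settings, command):
    """Merge one command's parameters into the retained settings record.

    On timeout, the loop acts on `settings`, which reflects every past
    command rather than just the most recent one.
    """
    updated = dict(settings)   # copy, like writing to a shift register
    updated.update(command)
    return updated
```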

The other is to use a Notifier: wait on notification, and on timeout get the last notification. This is the same as using yours with a single-element lossy queue. I use this design when there is only one "command" (i.e., the parameters controlling what to do on timeout).
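The Notifier comparison can be sketched as a single-element lossy queue (an illustrative, single-reader Python analogue of waiting on notification with a timeout; not hardened for concurrent senders):

```python
import queue

class Notifier:
    """Single-element lossy queue: send overwrites any unread value,
    and wait falls back to the last notification on timeout."""

    def __init__(self):
        self._q = queue.Queue(maxsize=1)
        self._last = None

    def send(self, value):
        try:
            self._q.get_nowait()       # lossy: drop any unread value
        except queue.Empty:
            pass
        self._q.put_nowait(value)

    def wait(self, timeout):
        try:
            self._last = self._q.get(timeout=timeout)
        except queue.Empty:
            pass                       # timed out: reuse last notification
        return self._last
```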

So, your method is more capable than the latter (it can use alternate queue characteristics and multiple "commands") but less capable than the former (it only saves the last message). The problem is I can't think of a use case where I would want more than my notifier design, yet only care about remembering the last command on timeout.

-- James

Edited by drjdpowell

I suppose the point of this thread is that this approach seems almost obvious to me, but I don't see it in other places. So is that because it's obvious to everyone else, or there is something wrong with it?

It wasn't obvious to me--primarily because I haven't run into situations where I needed it--but I don't think anything is wrong with it if you need latching commands.

Somewhat on this topic... do you ever add some kind of scheduled/timed tasks to your command handling loop? I have a similar structure with a timeout wired to the dequeue, and I recently added code to the timeout (no command) case that runs through an array of "timed tasks" and any that have reached their "next execution time" are added to the command queue.

I've done that in the past, but in general I try to write my code so it doesn't depend on having the queue timeout. The timeout is unreliable--it resets every time a message is received. Having critical code in the timeout case opens the door for unexpected (and unidentified) bugs. You can write reliable software with critical code in the timeout handler, but you have to always be aware of whether or not the new code is preventing the timeout from firing on schedule. It leaks implementation details and you have to tightly control all the conditions in which messages are sent. Here's what I like to do instead:

[attached image: post-7603-0-43300800-1319295769_thumb.pn — block diagram of the main UI panel]

This is the block diagram from the main UI panel for an app I'm currently working on. Look at the bottom two loops. The ImgDisp loop's main responsibility is to refresh the front panel display with images from a camera. This being a UI, it's pretty critical that it refreshes consistently. Notice there's no timeout. Right below it is a separate loop whose only job is to send "RefreshDisplay" messages to the ImgDisp loop on a regular schedule.

Earlier in this project I did have the RefreshDisplay functionality in the ImgDisp loop's timeout handler. The only external message the loop handled was "Exit," so it wasn't a problem. A late feature request came that required a box be drawn on the image under the mouse pointer during certain alignment processes. That required adding new messages to the loop: SetPinSize, MouseMove, and MouseExit. MouseMove messages are going to come fast and would have disrupted the timeout process, so I just moved that functionality from the "QueueTimeout" message handler to the "RefreshDisplay" message handler and added the heartbeat loop. Problem solved.
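The heartbeat loop reduces to something like this in Python (the threading details are my assumption; the message name follows the post):

```python
import queue
import threading
import time

def heartbeat(msg_queue, period_s, stop_event):
    """Send RefreshDisplay at a steady rate, independent of how much
    traffic the display loop's own queue sees."""
    # Event.wait returns True once stop_event is set, ending the loop.
    while not stop_event.wait(period_s):
        msg_queue.put("RefreshDisplay")
```

Because the schedule lives in its own loop, a flood of MouseMove messages to the display loop can no longer starve the refresh.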

There are a couple of things about this loop you might notice. First, I'm operating on an ImgDisp class. This particular class is just a glorified cluster. I'm only using it for the abstraction. I don't expect to ever create a child class for it, but it's way easier for me to understand the code at a glance. The other benefit is it's obvious the functionality encapsulated by the DrawAlignmentPinBox is intended for only this small part of the application. It's kind of a built-in aid for project organization.

Second, the ImgDisp loop behavior changes based on the DrawAlignmentPinBox flag. *sniff* *sniff* It smells like a state machine, but it's implemented as a message handler. (I've broken my own rules! Oh, the horror!) In this particular case I allow it because the ImgDisp loop is so simple and there's only one flag. In the back of my head I know if I add more functionality to that loop I'll need to refactor it into a state machine.

Third, in this example I'm putting the message directly on the ImgDisp queue, which violates the "all messages come from one source" rule of slave loops. I consider the heartbeat loop "attached" to the ImgDisp loop. In other words, if I were to wrap the ImgDisp loop in a slave class the heartbeat loop would go with it. Depending on the situation I might route the heartbeat message through the mediator instead and the heartbeat loop would not be attached to the ImgDisp loop.

I also brought up the Actor framework because while even more abstracted from a traditional LV architecture, it seems to me that the actors are simply waiting for messages as well.
Although it is true that the loop that handles messages does wait for the next message, each Actor may have one or more additional control loops in its Actor Core.vi. I suggest that the reason you don't see anything like what you suggest in the Actor Framework is the existence of these separate control loops. If I needed any "do this until you hear otherwise" behavior in one of my actors, I would encode it in the control loop. An example of that is in this Actor Framework demo in the Temperature Sensor class, where the sensor polls for the current sensor value every five seconds in the absence of any other message.

Although it is true that the loop that handles messages does wait for the next message, each Actor may have one or more additional control loops in its Actor Core.vi. I suggest that the reason you don't see anything like what you suggest in the Actor Framework is the existence of these separate control loops. If I needed any "do this until you hear otherwise" behavior in one of my actors, I would encode it in the control loop. An example of that is in this Actor Framework demo in the Temperature Sensor class, where the sensor polls for the current sensor value every five seconds in the absence of any other message.

Ah. Thanks AQ.

