
Sequencing alternatives to the QSM



Hi LAVAs,

 

I have an application which I have constructed around slave loop objects (Daklu's approach) using the LapDog messaging library, and I'm currently struggling to decide on the best approach for implementing a sequence of operations.

 

The application involves doing some DAQ with a USRP device, whereby I require it to sweep an entire spectral range and acquire samples for each subset: seemingly standard state machine material. I currently have a slave process/loop named "scanner", which I planned to have sit there and wait for a "start scan" message from the powers that be. Upon receiving this start command, it would need to initialise, acquire, move the local oscillator, acquire, move the local oscillator, acquire, and so on until it reaches the end of the range.

 

Aside from the slave loop using its own input queue to enqueue messages to itself and carry out this sequence, how else could it be approached? I ask because I've been reading around the topic of QSMs for quite some time, and I can see they are widely disliked for reasons I can generally appreciate. However, I'm struggling to think of a better way for a slave loop in this scenario to carry out a sequence of operations when instructed, i.e. for a mediating loop to send it the "start scan" command and for the scanner to then return a message via its output queue saying it's done, along with the results.

 

Many thanks in advance for any contributions,

Link to comment
Does the sequence need to change dynamically? If not, then a chain of sub VIs is hard to beat :-)

Thanks for your reply.

 

The sequence doesn't need to change dynamically, but certain operations have to be looped until a condition is reached (acquire, move local oscillator, acquire, move local oscillator, acquire...). Are you suggesting simply placing a state machine within a subVI which carries out the entire sequence when called? What if I wanted to abort the sequence while it was running? With a QSM, the calling process can order the abort by adding an abort message to the front of the input queue.

 

I'm now wondering in general what the best approach would be for a scenario where you have a sequence of operations you wish to "trigger", but still want the option to intervene and cancel once it's running. Say the sequence is finite but could take a while to complete; you'd need to be able to abort it somehow. My current thinking is to have the slave process wait on an "initiate" message and subsequently enqueue the next stage of the sequence onto its own message queue. The loop would then churn away doing its job until it finishes or receives an abort message which cancels the sequence. This practice of a loop adding messages to its own queue, and the potential consequences of doing so, is exactly what seems to be argued against, so I'm trying to gain some insight into the alternative, 'safer' methods.
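For illustration only, the single-queue self-enqueue pattern described above could be sketched like this in Python (message names such as "initiate" and "move LO" are made up for the example; the real thing is a LabVIEW queue-driven loop):

```python
from collections import deque

def scanner_loop(q: deque):
    """Single message queue: the loop enqueues its own steps; an
    external abort is pushed onto the FRONT so it preempts them."""
    log = []
    while q:
        msg = q.popleft()
        if msg == "abort":
            q.clear()                 # cancel the pending sequence
            log.append("aborted")
        elif msg == "initiate":
            # the loop enqueues its own sequence onto its own queue
            q.extend(["init", "acquire", "move LO", "acquire"])
        else:
            log.append(msg)           # stand-in for the real work
    return log
```

An external caller would abort with `q.appendleft("abort")` so the abort jumps ahead of the self-enqueued steps, which is the front-of-queue trick mentioned above.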

Link to comment

It appears that the latter posts of this thread on the QSM (Producer/Consumer) discuss the kind of thing I'm talking about. One of the more satisfactory alternatives discussed there is to keep the public message queue and the internal functions queue separate. That way the QSM can enqueue its own sequence of states internally when a public message tells it to do so. By adding a check for an external interrupt on each iteration of the QSM loop, a sequence can also be interrupted by a public message. This snippet of code from Daklu in that thread shows the idea.
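As a rough text-form sketch (not Daklu's actual code), the separated-queues idea might look like this in Python, with the public queue polled on every iteration so an external message can always get in:

```python
from collections import deque

def run_sequencer(public_msgs):
    """Two queues: public messages from other loops, and a private
    internal queue of steps. All message names are illustrative."""
    public = deque(public_msgs)
    internal = deque()
    log = []
    while public or internal:
        # check the public queue first on EVERY iteration, so an
        # external message can interrupt a running sequence
        if public:
            msg = public.popleft()
            if msg == "abort":
                internal.clear()
                log.append("aborted")
                continue
            if msg == "start scan":
                internal.extend(["init", "acquire", "move LO", "acquire"])
                continue
        if internal:
            log.append(internal.popleft())   # one step per iteration
    return log
```

Because the internal queue is private, outside code can never inject states into the middle of a sequence, which removes the main hazard of the classic QSM.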

 

What I'm interested to know is what forum members such as Daklu, who have said they would seek an alternative to the QSM in any scenario, would implement to achieve the same functionality. That is: you need to be able to trigger a sequence to run, but it must also be possible to interrupt it (it can't just be wrapped in a subVI, as that would lock the QMH loop until it completes).

Link to comment

I'm not familiar with LapDog so I don't know if this is useful, but I've been using the approach shown below. Inside the Fire VI (this application controls a system that fires droplets of ink) you can have a state machine where each iteration of the main while loop executes only a single iteration of the state machine. It's easy to interrupt but because all the actions are inside one VI, it can't be interrupted at the wrong time (as can happen when you queue your states). You could trivially add a "Cleanup" VI that would run on the class stored in the shift register whenever the dequeue did not time out and the previous action was not yet Done.

[attached image: block diagram screenshot]
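A text-form analogue of this loop (a hypothetical Python sketch; the real thing is a LabVIEW while loop with a dequeue-with-timeout and an action class in a shift register) could be:

```python
import queue

class Action:
    """Stand-in for the action class; step() runs ONE iteration."""
    def __init__(self, steps):
        self.steps = list(steps)
    def done(self):
        return not self.steps
    def step(self, log):
        log.append(self.steps.pop(0))

def fire_loop(mailbox, log):
    current = Action([])                    # idle to start
    while True:
        try:
            msg = mailbox.get(timeout=0.01)  # dequeue with timeout
        except queue.Empty:
            msg = None
            if current.done():
                return       # simplification: exit when idle and empty
        if msg is not None:
            current = msg    # a newly received action replaces the old
        if not current.done():
            current.step(log)   # exactly one iteration per loop pass
```

Because an incoming message is only examined between single steps, the action can be interrupted at any step boundary but never mid-step; in the real design a Stop action would end the loop rather than the exit-on-idle simplification used here.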

I do use this code for sweeping various parameters. The class hierarchy has a simple Fire action at the top. One child of Fire is Stop. Another child is a Measure class which takes a single measurement. Below that is a Measure with Sweep class that can sweep through various parameters, executing the parent Measure method at each step. When the parent Measure object is Done, the sweep moves to the next parameter.
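The hierarchy described might be sketched like so (hypothetical Python; the actual classes are LVOOP, and the step names are invented):

```python
class Fire:
    """Base action: fire a droplet. step() returns True when done."""
    def step(self, log):
        log.append("fire")
        return True

class Measure(Fire):
    """Child of Fire: take a single measurement."""
    def step(self, log):
        log.append("measure")
        return True

class MeasureWithSweep(Measure):
    """Runs the parent Measure at each sweep point; when the parent
    measurement is Done, advances to the next parameter."""
    def __init__(self, points):
        self.points = list(points)
    def step(self, log):
        if Measure.step(self, log) and self.points:
            log.append(f"set {self.points.pop(0)}")  # next parameter
            return False                             # sweep continues
        return not self.points
```

Driving `MeasureWithSweep([1, 2])` one step at a time yields measure, set 1, measure, set 2, measure, then reports Done, which matches the parent-method-per-step behaviour described.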

Link to comment
It appears that the latter posts of this thread on the QSM (Producer/Consumer) discuss the kind of thing I'm talking about. One of the more satisfactory alternatives discussed there is to keep the public message queue and the internal functions queue separate. That way the QSM can enqueue its own sequence of states internally when a public message tells it to do so. By adding a check for an external interrupt on each iteration of the QSM loop, a sequence can also be interrupted by a public message. This snippet of code from Daklu in that thread shows the idea.

 

What I'm interested to know is what forum members such as Daklu, who have said they would seek an alternative to the QSM in any scenario, would implement to achieve the same functionality. That is: you need to be able to trigger a sequence to run, but it must also be possible to interrupt it (it can't just be wrapped in a subVI, as that would lock the QMH loop until it completes).

I'd use normal VI sequencing and dynamically launch it. If you want to stop it just Abort/close the VI. Quite often there will be a "controller" to launch/abort which the rest of the system messages to, effectively making it a module with an API.

Link to comment
I'm not familiar with LapDog so I don't know if this is useful, but I've been using the approach shown below. Inside the Fire VI (this application controls a system that fires droplets of ink) you can have a state machine where each iteration of the main while loop executes only a single iteration of the state machine. It's easy to interrupt but because all the actions are inside one VI, it can't be interrupted at the wrong time (as can happen when you queue your states). You could trivially add a "Cleanup" VI that would run on the class stored in the shift register whenever the dequeue did not time out and the previous action was not yet Done.

 

I do use this code for sweeping various parameters. The class hierarchy has a simple Fire action at the top. One child of Fire is Stop. Another child is a Measure class which takes a single measurement. Below that is a Measure with Sweep class that can sweep through various parameters, executing the parent Measure method at each step. When the parent Measure object is Done, the sweep moves to the next parameter.

Thanks ned, this sounds like an interesting approach. Would you be able to post a generic template of the code so I can take a look? 

 

 

 

I'd use normal VI sequencing and dynamically launch it. If you want to stop it just Abort/close the VI. Quite often there will be a "controller" to launch/abort which the rest of the system messages to, effectively making it a module with an API.

Ah yes, I never thought much about dynamic launching. This makes sense. Also, I guess if you don't want to abort the VI and its sequence abruptly, you could always use the 'Control Value: Set' invoke method on the VI reference to send a boolean which causes a cleanup case to be entered. Thanks for your help  :)

Link to comment
Thanks ned, this sounds like an interesting approach. Would you be able to post a generic template of the code so I can take a look?

Sorry, I don't have a generic template at the moment, and I'm a bit on the busy side right now, but I'll post something if I can find time to put it together.

Link to comment
What I'm interested to know is what forum members such as Daklu, who have said they would seek an alternative to the QSM in any scenario, would implement to achieve the same functionality. That is: you need to be able to trigger a sequence to run, but it must also be possible to interrupt it (it can't just be wrapped in a subVI, as that would lock the QMH loop until it completes).

Apologies for the late response. I've been neck deep in real life stuff (like keeping customers happy so I can pay the bills) and haven't been on LAVA much the past several months.

Interrupts don't exist in LabVIEW or, as near as I can tell, in any dataflow language. However, modern event-driven user interfaces require some mechanism for interrupting a process the user wants to cancel. All solutions essentially boil down to the same thing...

 

Basic Sequence Loop

1. Execute a functional step.

2. If user requested interrupt, exit.

3. Else, goto 1.
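The loop above is easy to state in any language; here is a minimal Python rendering, with the `interrupted` callable standing in for polling a UI queue or flag:

```python
def basic_sequence_loop(steps, interrupted):
    """Run callable steps in order, polling for an interrupt
    between steps; returns the results of the steps that ran."""
    log = []
    for step in steps:
        log.append(step())    # 1. execute a functional step
        if interrupted():     # 2. if user requested interrupt, exit
            break             # 3. else, continue with the next step
    return log
```

Every design discussed in this thread reduces to some placement of that interrupt check relative to the functional steps.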

The one exception is Shaun's method, which pretty much just executes (in the criminal justice sense, not the computer science sense) the running VI when it's no longer needed. Depending on what your process is doing, that may not be a safe thing to do. Also, assuming you only need one instance of the process, dynamically launching and using VI Server (Control Value: Set) to interrupt execution doesn't buy you anything over just setting up a parallel loop in the original VI. It's still using an execute-check loop. All you're doing is adding complexity and latency.

There are lots of ways you can implement the basic sequence loop. If you look, you can see the example you linked to on the other thread and ned's example are just different implementations of it. The details are different but conceptually they are the same. In general, the process I use for interruptible sequences plays out something like this:

1. User clicks button to execute sequence ABCD.

2. Controller receives message from UI to start sequence ABCD.

3. Controller sends "Do A" message to execution loop.

4. Execution engine does step A and sends "A complete" message to controller.

5. Controller sends "Do B" message to execution loop.

6. Execution engine does step B and sends "B complete" message to controller.

7. Controller sends "Do C" message to execution loop.

8. Execution loop starts doing step C.

9. Controller receives "User Interrupted" message from UI.

10. Execution loop finishes and sends "C complete" message to controller.

11. Controller, understanding that the user interrupt takes precedence over the remainder of the sequence, doesn't send the "Do D" message, effectively interrupting the sequence.

The responsiveness of the application to the user's interrupt is directly related to how fine-grained each functional step is. If you have a functional step that takes 5 seconds to execute, then the user might have to wait 5 seconds after hitting the cancel button before control is returned to them. Usually I have several levels of abstraction between the UI and the steps that are executed. That allows me to keep the high-level code coarse-grained without imposing extended waits on cancel operations.
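Collapsed into a single function for illustration, the controller/execution-loop handshake above might look like this (hypothetical Python; in the real design the two sides are separate loops exchanging messages):

```python
def run_sequence(steps, interrupt_after=None):
    """Simulate the Do X / X complete handshake. `interrupt_after`
    names the step during which the UI's "User Interrupted"
    message arrives at the controller."""
    log = []
    interrupted = False
    for step in steps:
        if interrupted:
            break                        # controller withholds the next "Do"
        log.append(f"Do {step}")         # controller -> execution loop
        if step == interrupt_after:
            interrupted = True           # UI message arrives mid-step
        log.append(f"{step} complete")   # execution loop -> controller
    return log
```

Note that the step in flight always runs to completion; the interrupt only prevents the controller from dispatching the next one, which is exactly why step granularity governs cancel latency.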

 

 

[Edit]

BTW, I don't *think* I've said I would never use a QSM... at least not since '08 or whenever I first started ranting about QSMs. The QSM design is exceptionally good at one thing in particular: time to implement. If you want to test some sort of functionality while spending as little time on it as possible, the QSM is probably your guy. I just make it very clear to the customer that this is prototype code and it will be thrown away when the prototype is complete. It is not the start of the application, and we will not be adding new features to it.

Link to comment

One other useful technique is to create an "abort" notifier in your "Controller" process, pass it as part of a "Do <whatever>" message, and have the execution loop check this notifier at appropriate points. It can even use "Wait on Notification" in place of any "Wait" nodes it might require. Package the "Wait on Notification" inside a subVI that outputs a "process aborted" error cluster, and you can stick it in any chain of subVIs sequenced by the error wire.
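In Python terms, this abortable wait could be sketched with a `threading.Event` standing in for the notifier; the returned string plays the role of the "process aborted" error cluster, so the calls chain along an error "wire":

```python
import threading

def abortable_wait(abort: threading.Event, seconds, error=None):
    """Drop-in replacement for a plain Wait: wakes early and returns
    an "error" as soon as the abort notifier fires."""
    if error is not None:
        return error                   # upstream error: do nothing
    if abort.wait(timeout=seconds):    # True means the notifier fired
        return "process aborted"
    return None                        # full wait elapsed, no abort
```

Each step in a chain takes the previous step's error output, so once the abort fires the remaining steps fall through immediately, just as subVIs on an error wire do.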

 

— James

 

PS to Daklu> I was expecting you to mention a true state machine as a “Sequencing alternative to a QSM”.

Link to comment
PS to Daklu> I was expecting you to mention a true state machine as a “Sequencing alternative to a QSM”.

 

Ahh... my reputation precedes me.  :lol:

 

I do often use a true state machine (I call them "behavioral state machines" to differentiate them from the QSM) in places where others would reach for a QSM, but that's mostly a matter of personal preference and situational considerations. I'm not sure I would use a BSM for a general-purpose sequencer in that situation. An interruptible BSM still follows the same basic sequence loop (BSL), and there are other BSL implementations that are lighter weight.

 

Several years ago I prototyped a general-purpose sequencer based on the composite pattern. The idea was to mimic TestStand, in that each step in the sequence could be a single fine-grained step or a subsequence containing several steps and/or further subsequences. I don't remember where I left it. At the time I was building lots of sequencers. Since then... not so much.

Link to comment
  • 2 weeks later...

Thank you all very much for your suggestions and contributions, I appreciate you taking the time. I'm continually trying to improve my knowledge on good architecture design for larger applications, so it's really great to hear a variety of different approaches and gain insight from experienced developers.

Link to comment
  • 4 months later...

Hey guys,

Bumping this a bit because I'm in a similar scenario. I've developed a distributed app for controlling an instrument. The UI runs on a Windows machine and connects to an LVRT machine via TCP. Up till now I've driven it manually, pressing all the buttons to test the functionality. Now I'm trying to actually use it to do experiments, and have hit on the idea of writing simple plain-text .csv files which are parsed to generate a list of commands. As with Paul, I'm struggling a little on exactly how to handle the sequencing logic. Specifically, how do I handle the distinction between messages sent as part of a sequence and the same messages sent by the user pressing a button?

I've injected the sequencer at the level of the main message routing loop running in Windows (if Daklu is watching, I've also refactored to use a stepwise routing schema so the Windows routing loop has access to all messages). The sequencer generates an array of commands which is stepped through one by one (in theory).

So far, I've tried setting a "Sequencing?" flag, but that's not really sufficient to deal with cases where a message may arrive as part of a sequence but then also arrive when the user pushes the button, resulting in a superfluous call to the sequencer to grab another job. Perhaps I should disable the UI interactions which generate the events I'm passing as part of a sequence (e.g. "enable motors")?

I know the distinction between "Command" and "Request" messaging has been discussed a lot recently, and as I understand it, there doesn't seem to be much in the way of a hard distinction for telling whether a message is a "Command" or a "Request". What I'm wondering is, should I perhaps bump the "Sequencing?" flag, or something similar, down to the level of the individual process? So that the individual processes only send "CommandAcknowledged" when they're sequencing?

Intuitively, I'm rejecting that idea because what if I want to do things with "CommandAcknowledged" type messages while the instrument is being driven manually?

What techniques do you guys use to distinguish when a message is sent as part of a command sequence or when it's sent manually? Alternatively, how do you avoid the issue entirely?

Edited by AlexA
Link to comment

Hmm, another thought struck me, though this might be more work than it's worth. Perhaps I should extricate the sequencer from the Windows routing loop and make it its own process. Then, when it's online, Ack messages which would normally flow to the respective UIs are also copied to the sequencer. Thoughts on this idea?

Link to comment


 

Hey guys,

Bumping this a bit because I'm in a similar scenario. I've developed a distributed app for controlling an instrument. The UI runs on a Windows machine and connects to an LVRT machine via TCP. Up till now I've driven it manually, pressing all the buttons to test the functionality. Now I'm trying to actually use it to do experiments, and have hit on the idea of writing simple plain-text .csv files which are parsed to generate a list of commands. As with Paul, I'm struggling a little on exactly how to handle the sequencing logic. Specifically, how do I handle the distinction between messages sent as part of a sequence and the same messages sent by the user pressing a button?

I've injected the sequencer at the level of the main message routing loop running in Windows (if Daklu is watching, I've also refactored to use a stepwise routing schema so the Windows routing loop has access to all messages). The sequencer generates an array of commands which is stepped through one by one (in theory).

So far, I've tried setting a "Sequencing?" flag, but that's not really sufficient to deal with cases where a message may arrive as part of a sequence but then also arrive when the user pushes the button, resulting in a superfluous call to the sequencer to grab another job. Perhaps I should disable the UI interactions which generate the events I'm passing as part of a sequence (e.g. "enable motors")?

I know the distinction between "Command" and "Request" messaging has been discussed a lot recently, and as I understand it, there doesn't seem to be much in the way of a hard distinction for telling whether a message is a "Command" or a "Request". What I'm wondering is, should I perhaps bump the "Sequencing?" flag, or something similar, down to the level of the individual process? So that the individual processes only send "CommandAcknowledged" when they're sequencing?

Intuitively, I'm rejecting that idea because what if I want to do things with "CommandAcknowledged" type messages while the instrument is being driven manually?

What techniques do you guys use to distinguish when a message is sent as part of a command sequence or when it's sent manually? Alternatively, how do you avoid the issue entirely?

 

Just inspect the SENDER part of the message ;)

Link to comment
Of course...

Edit: Well, actually, that doesn't really work. Ack messages come back from the same sender, regardless of whether it was the sequencer or a person who triggered it.

Just append the SENDER (as received by the handler) to the ACK message.
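For illustration, with dictionary messages (all field names here are hypothetical, not LapDog's), echoing the original SENDER in the ACK lets the routing loop tell sequencer-initiated traffic from manual button presses:

```python
def make_msg(sender, name):
    """A command message carrying the name of the loop that sent it."""
    return {"sender": sender, "name": name}

def make_ack(handler, original):
    """The handler is the ACK's sender; the ORIGINAL sender of the
    command rides along so routing can tell who triggered it."""
    return {"sender": handler, "name": "CommandAcknowledged",
            "in_reply_to": original["sender"]}

ack = make_ack("MotorLoop", make_msg("Sequencer", "enable motors"))
# the routing loop forwards this ACK to the sequencer only when
# ack["in_reply_to"] == "Sequencer"; a manual press would carry "UI"
```

This avoids any global "Sequencing?" flag: the provenance travels with each message, so manual and sequenced commands can be in flight at the same time without confusing the sequencer.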

Link to comment
