
QSM (Producer/Consumer) Template



That's correct. However, you have two message receiving mechanisms in that loop: the queue and the event structure. You have to make sure both are being serviced. If you don't have a timeout on the event structure none of the messages from the lower loop will be processed until the user initiates some action on the UI.

I've thought of two possible solutions to this.

1) Only use User Events to pass data around and don't use queues.

2) Or have the queue fire a User Event, telling the event structure to go read a queue message.

The obvious benefit from this is no polling. Your VI will sit idle until either a command comes in, or the UI event is fired. Of course if you need polling you can enable it by giving your event case a timeout.
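Roughly, in text form (a Python stand-in for the LabVIEW loop; the queue names and the 100 ms timeout are just illustrative), that timeout fallback looks like this:

    import queue, time

    ui_events = queue.Queue()   # stands in for UI events arriving at the event structure
    commands = queue.Queue()    # stands in for the queue fed by the lower loop

    def handler_loop(run_for_s=2.0):
        # One loop servicing both mechanisms: block briefly on the "event structure",
        # then poll the command queue in the timeout case so its messages aren't starved.
        deadline = time.time() + run_for_s
        while time.time() < deadline:
            try:
                event = ui_events.get(timeout=0.1)   # 100 ms "event structure" timeout
                print("handle UI event:", event)
            except queue.Empty:
                while not commands.empty():
                    print("handle queued command:", commands.get_nowait())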

I've heard that behind the scenes user events are just queues, so I wouldn't expect much (if any) performance difference.

Edited by hooovahh

Further to my latest post, attached is the ACTUAL project that I have just done using the latest template. In the ZIP file you should find "Track History" and "User ID" folders that I haven't yet implemented in the code.

The "Track History" was initially made by a forum member called WMassey from the post below:

http://lavag.org/top...__pm341__st__40 (zero-tolerance was me back then)

This was quite a while back, when I was a student just learning LabVIEW; among others, he was the first one who guided me through the initial stages of learning.

Now, if I implement the "Track History" (which is designed to record anything that is sent or read from VISA), I fear that the application is going to accumulate all the information in memory and slow the whole application down.

Is there a way of implementing this without impeding the run speed of the main GUI? I thought of saving the info every, let's say, 100 lines to a temp .txt file and clearing the memory, but again, I'm not sure if this is the best way.
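The buffering idea would look roughly like this (a Python sketch only, since the real thing would be LabVIEW; the file name and the 100-line threshold are made up):

    class TrackHistory:
        # Buffer VISA traffic in memory and flush every N lines so memory stays bounded.
        def __init__(self, path="track_history_temp.txt", flush_every=100):
            self.path = path
            self.flush_every = flush_every
            self.lines = []

        def record(self, direction, data):
            self.lines.append(f"{direction}: {data}")
            if len(self.lines) >= self.flush_every:
                self.flush()

        def flush(self):
            if not self.lines:
                return
            with open(self.path, "a") as f:
                f.write("\n".join(self.lines) + "\n")
            self.lines.clear()   # release the buffer instead of accumulating forever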

Also, I wanted to see if anyone knows of example code (or a VI) that would monitor the main GUI in a similar way to how "Track History" does. Basically, once the software is deployed and the users start using it, if any errors are generated I can easily see which case structure/function generated the error, what the main GUI was doing beforehand (i.e. a record of previous states that were executed), etc. This is probably where proper error handling and recording comes into play. I know it's not very nice to ask for ready-made stuff, but I thought this might save me some time in creating one, and this way I can concentrate on the other projects I'm developing. There is quite a bit of machinery in our Antenna department whose control software they want updated. We're still using basic HP :P .

As for the "User ID", it currently just serves as a filtering mechanism making sure that certain people/groups have access only to certain control software. The admin of course being the main Lab Manager (not me by the way).

Since I'm doing all this, I thought it's time to update the "User ID" section as well and make it more advanced than it already is. Again, it's not about absolute secrecy and hiding the password from hackers/crackers etc. For this, I was thinking of starting to use "Database VIs" for recording and retrieving information rather than a simple "*.txt" file.

Currently, scalability is an issue. For now there are only two groups, i.e. Admin and Users. I want to make this more dynamic, where an admin can add/remove groups in the User ID (e.g. the admin can add temporary users whose login expires after, let's say, one week, add department groups where any user registered to that department would have access, etc.). Anyway, this is my end game, but any ideas towards it would be great.

I am a PhD research assistant in physics and not really a full-time LabVIEW user (even though I love it). That is why, even though I'm involved in these freebie projects for the university, my main time is spent on research.

Finally, for me this thread is "Christmas come early": having high-caliber programmers discussing their issues and voicing their thoughts on something that I'm directly working on is amazing.

Edited by Kas

I just don’t see it. The 5 subVIs chained on some clustersaurus object don't seem that much clearer than a JKI macro: SimulateReactor, NormalOperatingModel, InstabilityModel, ReadReactorTemp, IsTempOverLimit?

Let me explain in a little more detail. On the left are the cases from the QSM reactor controller. On the right are the cases from a functionally identical reactor controller using sub vis. (You can review both loops in the attached project.)

[Attached images: the case list of the QSM reactor controller (left) and of the functionally identical sub vi-based controller (right).]

With the QSM the case structure has four "private" cases that should only be enqueued when ReadReactorTemp is executed. (IsTempOverLimit?, SimulateReactor, NormalOperatingModel, and InstabilityModel.) When I open the list of cases there's nothing to indicate which of them should be private. Contrast that with the list of cases when using sub vis. Every one of the cases handles a message received from an external entity. [Edit - Every one of the cases I added. The first two sets were built into the template and I didn't mess around with those.]

Once I've examined the loop and am convinced it works correctly I don't care about the details. With the QSM every time I open the list of cases I'm faced with implementation details. In the past I have tried various things to differentiate "public" cases from "private" cases in a QSM, like indenting the names of private cases. It helps a bit, but in the end the "loop api" is much easier for me to use and understand when I don't have to sift through all the private cases. When private cases exist it can be very difficult to present an appropriate level of abstraction to the developer.

And reordering the sequence or adding/removing actions is very fast with a macro.

I'll just go ahead and pretend you didn't say this. :P I've said before the QSM is great if writing code fast is your goal. I read far more code than I write, so spending a few extra minutes to make the code readable is a trade I'll take every time.

Ah, yes, the ability to connect directly to UI indicators and locals is one of the reasons I like using the JKI cases instead of subVIs for the high-level code that interfaces with the UI. But often times a particular UI update needs to be triggered by more than one action; in your design your updates only happen in one message case.

In that particular case, yes. If I had to do that exact same fp update in response to more than one message, the first thing I'd do is copy and paste using local variables. Eventually I'd move the limit checking logic to a sub vi. Yeah, it does cause some code duplication. IMO that's the lesser of two evils in this situation.

QSM vs Sub VI.zip

If only one queue is used, how would you control which consumer loop should dequeue the message that the producer sent? This is of course provided that more than 2 consumer/message handler loops are used.

I apologize if I was not clear. Yes, each loop needs to have its own receive queue; however, each queue does not need its own set of enqueue/dequeue vis. You can use the same vis for all the queues in your app. Each time the Obtain Queue function executes it creates a new queue, as long as you are not using named queues.
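As a loose text analogue of that point (Python standing in for the VIs; the names are illustrative): one shared pair of enqueue/dequeue routines can service any number of independently obtained, unnamed queues.

    import queue

    def obtain_queue():
        # Each call returns a brand-new, independent queue, like an unnamed Obtain Queue.
        return queue.Queue()

    def enqueue(q, message):          # one shared "enqueue" used by every loop
        q.put(message)

    def dequeue(q, timeout=None):     # one shared "dequeue" used by every loop
        return q.get(timeout=timeout)

    # Two consumer loops, two queues, but the same enqueue/dequeue routines.
    acq_queue = obtain_queue()
    log_queue = obtain_queue()
    enqueue(acq_queue, "Read")
    enqueue(log_queue, "Write File")
    print(dequeue(acq_queue), dequeue(log_queue))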

I've thought of two possible solutions to this.

1) Only use User Events to pass data around and don't use queues.

2) Or have the queue fire a User Event, telling the event structure to go read a queue message.

I'm not quite following #2. Do you mean have the dequeue fire a user event when it receives a message on the queue, in order to release the event structure block? It doesn't work. The dequeue vi has to be executing to check the queue or fire a user event, and it won't execute until the event structure releases its block.

There is another strategy several of us prefer. Use user events to send messages to event handling loops and queues to send messages to other loops.

I've heard that behind the scenes user events are just queues, so I wouldn't expect much (if any) performance difference.

I have a hunch you'd see very significant performance problems if you implemented strategy #2. :P If you have multiple receive mechanisms in a single loop, you *have* to use a timeout and polling. (Unless you implement something like AQ's priority queue.)


Hi Daklu.

In your latest example, while your message handler implementation makes it easier to understand and cleaner, I somehow feel like its use is limited.

If, for instance, more than one mechanism/object/device needs to read the reactor temperature ("ReadReactorTemp"), with macros you can just fire the initial sequence of executions (where each device or mechanism would have its own sequence), and one of those sequences would include "ReadReactorTemp" before going on to do other things. This however cannot be done as easily using your "atomic message handler", since using SubVIs you're mixing more functions into a single case. Whereas if each case has a single function, you can then refer to and call them, or mix and match them however you want, with macros.

In my latest attachment you can see that I'm polling the device status quite frequently. If I was to put that in a SubVI, I would've had to put this SubVI all over the place. With macros however, I just call "Orbit Status: Get" whenever needed and the macro goes and takes care of everything else.

Sorry, maybe this is not what you guys were discussing but this is what sprang to my mind when I first compared the examples.

Regards

Kas

Edited by Kas

Hi Daklu.

In your latest example, while your message handler implementation makes it easier to understand and cleaner, I somehow feel like its use is limited.

If, for instance, more than one mechanism/object/device needs to read the reactor temperature ("ReadReactorTemp"), with macros you can just fire the initial sequence of executions (where each device or mechanism would have its own sequence), and one of those sequences would include "ReadReactorTemp" before going on to do other things. This however cannot be done as easily using your "atomic message handler", since using SubVIs you're mixing more functions into a single case. Whereas if each case has a single function, you can then refer to and call them, or mix and match them however you want, with macros.

The point (I think) that Daklu is trying to make is that some operations ARE atomic and, additionally, sequential - which is implied by the relegation of multiple functions to a single sub-vi.

Let's consider a very simple example of an environmental chamber where you want to step it through a temperature profile. You have to set the temperature you want, wait for the chamber to reach that temperature, wait for it to settle, then save the actual temperature to a file. And you continue like this until you have covered your profile. So we have atomic and sequential operations.

Here is the code.
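(The block diagram is attached as an image; purely as a rough text stand-in - Python with an invented Chamber stub rather than the actual VIs - the sequential, atomic structure is just straight-line code:)

    class Chamber:
        # Stand-in for the real chamber driver; the method names are assumptions.
        def set_setpoint(self, t): self.sp = t
        def wait_until_at_temp(self): pass    # would poll until the chamber reaches sp
        def wait_until_settled(self): pass    # would dwell until the reading is stable
        def read_temperature(self): return self.sp

    def run_profile(chamber, profile, log_path="profile_log.csv"):
        # Each step completes before the next starts: set, reach, settle, log -- repeat.
        with open(log_path, "w") as log:
            for setpoint in profile:
                chamber.set_setpoint(setpoint)
                chamber.wait_until_at_temp()
                chamber.wait_until_settled()
                log.write(f"{setpoint},{chamber.read_temperature()}\n")

    run_profile(Chamber(), [25, 40, 60, 85])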

Now do this with the QSM thingy without just copying my VIs into cases (don't forget to have cases for open, write and close file as well - after all, it is more "flexible" :) )

What is the problem with my code that the QSM thingy solves?


If the above example is taken strictly as you explained, then yes, you are correct. However, if let's say the user changes the target setpoint 2 or 3 times continuously, then "Dwell.vi" would continuously be executed as well. Placing the functions of the three VIs in separate cases (or the VIs themselves), using the "QSM THINGY", I would have thought you have more control over what-is-executed-when. In this case you have control over the "Dwell" case, where you wouldn't execute it until the user is finished with the target SP (this may not be the best example to show what I'm thinking).

My experience with LabVIEW comes from hardware control and operations. I tend to find it easier if I leave the basic operations at the top level and refer to them through macros along the way (as shown in my previous attachment).

By comparison, I'm in no way good at programming; slowly, however, I'm trying to get there :) .

(don't forget to have cases for open, write and close file as well - after all, it is more "flexible" :) )

Heh... nice..


Now, if I implement the "Track History" (which is designed to record anything that is sent or read from VISA), I fear that the application is going to accumulate all the information in memory and slow the whole application down.

Heh... I like to think I'm a reasonably proficient developer. After spending the last couple hours looking at the Track History QSM, I think your fears are justified. I even took the time to map out the execution flow and I still don't know if it will do what it's supposed to do. If my map of the execution flow is correct, then I don't think it will slow the whole app down. As near as I can tell it terminates fairly quickly. It really depends on how you integrate it into your app.

[Attached image: a map of the Track History QSM's execution flow.]

Track History is a perfect example of why I dislike QSMs so much. It might work, but it's a huge pain in the neck trying to figure out how it works and it can be nearly impossible to tell if a change I have made breaks something else. Whenever I have to work on a non-trivial QSM like this one I'm forced to adopt a code-and-fix development pattern, and I don't have nearly the confidence in the code that I do using other techniques.

I was going to map out your application too, but honestly I just don't have the time or motivation. I really, really, don't like QSMs. Maybe I'm just too stupid to wrap my head around these things... *shrug*

Is there a way of implementing this without impeding the run speed of the main GUI?

Yeah, but my implementations don't fit very well with QSMs so anything I told you to do would require significant refactoring. If you don't want to impede the main UI make sure nothing blocks execution in the UI loop. You can launch Track History dynamically and send it data on a queue.

I am a PhD research assistant in physics and not really a full-time LabVIEW user (even though I love it).

It's one thing to write some data collection and display tools--LabVIEW is great at helping engineers and scientists do these things quickly. However, your requests are moving beyond the realm of quick tools and into software engineering. There is much, much, more to designing good software than figuring out what procedures to string together.

Please believe me when I say I don't mean this to be derogatory in any way, but I think you may be getting in over your head a bit. Physicists have specific training and knowledge that allows them to perform their job. So do software engineers. Based on some of your comments on this thread my guess is you'd eventually be able to implement some of what you're looking for, but it will never work quite like you wanted. It will crash, lock up, or maybe the front panel won't respond correctly all the time. Furthermore, you won't know why it has problems. New features will be added a bit at a time until the entire project becomes so rigid and fragile nobody can make any changes without breaking it. Anyone who has been around software development long enough has seen this pattern repeat itself countless times.

Can you learn the skills and knowledge required? Absolutely. Do you want to spend months or years studying software engineering so you can write the application yourself? I don't know, that's your call. If this software is important for the physics lab, I suggest you find an experienced LV architect to come in for half a day and help you break down your app into a manageable design. It will save you scads of time and frustration in the long run. I'm sure there are good developers in the UK here on Lava that could help you out. (*cough* Shaun *cough*) ;)

If budgets are too tight, check your local LV user group for experienced developers. You might find someone to help you out for an hour or two for the cost of lunch.

Again, I don't mean for this to be condescending or dismissive. You're starting a journey across the Atlantic and I'm just trying to point out that a rowboat isn't an ideal vessel for the trip.

I somehow feel like its use is limited.

I understand, but I respectfully disagree. If I don't know how to fly an airplane does that mean airplanes are limited to the ground? Or does it just mean I need to learn how to fly an airplane if I want to be able to use it?

If I was to put that in a SubVI, I would've had to put this SubVI all over the place.

Why is that bad?

With macros however, I just call "Orbit Status: Get" whenever needed and the macro goes and takes care of everything else.

Yes, you can do that. And the poor sod trying to figure out what's going on (that would be me) has to trace into "Orbit: Send & Receive", then follow that into "Orbit Param: Proc.increment", only to discover you're eventually sending the execution back to "Orbit: Send & Receive." (Perhaps after a brief detour to "VNA: Read" and "Graph: Display", which I also have to examine to see if the execution is diverted anywhere else.) All this, just to figure out you're doing a simple loop.

What advantage are you getting out of having "Orbit Status: Get" turn around and queue up a bunch of additional cases? Flexibility? You know what else is flexible? Jello. Doesn't make it a good building material. The ability to interrupt processes like you mentioned with users changing the temperature of Shaun's oven? Nope, not with any sort of reliability anyway.

Take your exit button as an example. When the user hits the exit button the UI loop puts "Macro: Exit" on the front of each of the queues. Lots of people do this with the expectation that it will be the next message read and everything will shut down nicely. They're all wrong. (Most of the time. It can work under very limited restrictions that nobody ever follows.)

Suppose the bottom loop has just dequeued the "Orbit Status: Get" message, but has not entered the case structure. Then the event loop places "Macro: Exit" on the queue. How many iterations will the lower loop go through before "Macro: Exit" is processed? Here's a hint--you've built a race condition into your app. Were you aware of that? (Kudos if you were. I've talked to many experienced developers who didn't realize it.)
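To make the race concrete, here is a toy rendering of that timing window (Python; the case names are taken from the macro calls described above, and whether the follow-up steps land ahead of "Macro: Exit" depends entirely on timing):

    from collections import deque

    sequence = deque()                      # the QSM's single message/sequence queue

    current = "Orbit Status: Get"           # lower loop dequeued this, hasn't run it yet
    sequence.appendleft("Macro: Exit")      # UI loop now pushes Exit onto the FRONT

    # The already-dequeued case runs anyway and, QSM-style, enqueues its follow-up
    # steps on the front of the queue -- landing them ahead of the exit message.
    for step in reversed(["Orbit: Send & Receive", "VNA: Read", "Graph: Display"]):
        sequence.appendleft(step)

    print(list(sequence))
    # ['Orbit: Send & Receive', 'VNA: Read', 'Graph: Display', 'Macro: Exit']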

Here's an exercise for you. Trace out the execution flow of your lower loop like I did in the diagram above. Now, for each arrow leading from one case to another, figure out what happens when each of the messages from the UI loop is placed on the queue during that transition. You've got five different messages the UI loop can send. On the Track History vi there are roughly 40 transitions, so that would be 200 different potential race conditions I'd need to verify will not cause problems. Off the top of my head I'd guess your app has roughly the same number of transitions.

Oh, and there's not really any way to automate this testing, so you have to verify it by hand. Be sure to take good notes, because every time you make a change to the execution flow diagram or add another message from the UI, you'll have to go through the process again.

If you find it easier to develop your stuff using QSMs, that's great. They can work sufficiently well for small, uncomplicated applications with no growth potential. In my experience they don't scale well at all and, like a Jello house, collapse under their own weight long before they become useful to me.


If the above example is taken strictly as you explained, then yes, you are correct. However, if let's say the user changes the target setpoint 2 or 3 times continuously, then "Dwell.vi" would continuously be executed as well.

The user cannot change anything at all after the vi is started, so it will always execute as expected and the dwell times will always be as they are in the array.

Placing the functions of the three VIs in separate cases (or the VIs themselves), using the "QSM THINGY", I would have thought you have more control over what-is-executed-when. In this case you have control over the "Dwell" case, where you wouldn't execute it until the user is finished with the target SP (this may not be the best example to show what I'm thinking).

So. You slap my 3 vis into cases. What do you do about getting them to execute in the right order with the right parameters?

My experience with LabVIEW comes from hardware control and operations. I tend to find it easier if I leave the basic operations at the top level and refer to them through macros along the way (as shown in my previous attachment).

You mean turn a graphical language into a scripting language? Maybe LuaVIEW is of interest :)

Heh... nice..

You may laugh. But that is exactly what you will find in some of the examples using the JKI QSM on this very board.

The answer I was looking for is that you cannot stop it (without pressing the LabVIEW stop button) once it has started. But that is the only problem.

The point I was trying to make is why break up ordered and sequential things that can be defined by the language, so that you have to write a load of code to do the same thing, make it difficult to understand and (as Daklu has pointed out) introduce race bugs? Admittedly it was a contrived example, but it was to make the point. Daklu and I disagree on many things, but we agree on form over function where the choice is arbitrary.

Have you seen the event structure state machine?


you've built a race condition into your app. Were you aware of that?

Hah.. I had no idea. It never even came to mind that such things can cause race conditions. Since this is the case, it looks like my whole QSM Template structure is just a ticking time bomb. From this aspect, the amount of macros involved is nothing less than spaghetti code.

Amazing, the more I learn the dumber I feel.

Thank you Daklu for taking your time, you've certainly put things into perspective.

By the way, what software did you use for that flow diagram? It looks different from general flow diagrams.

As for the projects themselves, they're not all that important; they've just mildly asked me if I can do something to bring things back to the modern age. The original HP Basic code they have still works for them. This was just me trying to mix things up a little by stepping out of my comfort zone a bit (JKI Template :)).

The answer I was looking for is that you cannot stop it (without pressing the LabVIEW stop button) once it has started. But that is the only problem.

The point I was trying to make is why break up ordered and sequential things that can be defined by the language, so that you have to write a load of code to do the same thing, make it difficult to understand and (as Daklu has pointed out) introduce race bugs? Admittedly it was a contrived example, but it was to make the point. Daklu and I disagree on many things, but we agree on form over function where the choice is arbitrary.

Now I see what you guys meant. Daklu hit the nail on the head with his last post. I guess before I start LabVIEW, I need to learn the lingo first. hehe.... it's what you get when dealing with a beginner :D.

It will take me some time to TRULY digest everything you have all mentioned here. But this thread has given me an amazing start.

Finally, to Daklu, ShaunR, and everyone participating in this thread: :worshippy: :worshippy: :worshippy: :worshippy: :worshippy: :worshippy: :worshippy: :worshippy:

In words, a BIG thank you for having exemplary responses and patience in dealing with us (newbies of course).

Regards

Kas


Hah.. I had no idea. It never even came to mind that such things can cause race conditions. Since this is the case, it looks like my whole QSM Template structure is just a ticking time bomb. From this aspect, the amount of macros involved is nothing less than spaghetti code.

Amazing, the more I learn the dumber I feel.

Don't be too hard on yourself. Everyone has to go through the same learning curve. A couple years ago I was helping out on a project implemented as a QSM. It had grown so convoluted it took me over a day just to map the execution flow from one message. That was the day I decided QSMs were not, contrary to popular opinion, the bees knees and started looking for something better.

By the way, what software did you use for that flow diagram? It looks different from general flow diagrams.

yEd

This was just me trying to mix things up a little by stepping out of my comfort zone a bit (JKI Template :)).

So learning multi-loop programming is the next step for you. You don't have to throw out all the work you've done. You can start by condensing your QSM cases so every case in the structure represents a single public message it can receive from an external source, like what I did in the example above. If you do that and follow the three guidelines I gave earlier, that should go a long way towards eliminating race conditions.

Another thing you can do is learn what real state machines are--not the horribly and incorrectly named Queued State Machine. (Or as I sometimes call it... Hector.) Try modelling your application's behavior as a state machine on paper before writing any code. If you can create the correct behavior on paper, writing the code is easy.
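For contrast with the QSM, in a real state machine the next state is purely a function of the current state and the incoming event; there is no queue of pre-planned "states". A minimal sketch in Python with made-up states and events:

    # Transition table: (current state, event) -> next state.
    TRANSITIONS = {
        ("Idle",     "start"):   "Heating",
        ("Heating",  "at_temp"): "Dwelling",
        ("Dwelling", "settled"): "Logging",
        ("Logging",  "done"):    "Idle",
        ("Heating",  "abort"):   "Idle",
        ("Dwelling", "abort"):   "Idle",
    }

    def step(state, event):
        # Unknown events leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    state = "Idle"
    for event in ["start", "at_temp", "settled", "done"]:
        state = step(state, event)
        print(event, "->", state)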


Another thing you can do is learn what real state machines are--not the horribly and incorrectly named Queued State Machine. (Or as I sometimes call it... Hector.) Try modelling your application's behavior as a state machine on paper before writing any code. If you can create the correct behavior on paper, writing the code is easy.

Indeed. It is a Queue [based] Sequence Engine. Although I am also guilty of calling it a QSM.


If I had to do that exact same fp update in response to more than one message, the first thing I'd do is copy and paste using local variables. Eventually I'd move the limit checking logic to a sub vi. Yeah, it does cause some code duplication. IMO that's the lesser of two evils in this situation.

To me the evils are in the opposite order. Having the case-structure cases all correspond to atomic operations is certainly nice, but I really don’t like copying the UI update code. I don’t particularly like doing UI work by property nodes in a subVI either (I like to be able to find the property nodes by right-clicking on the control, or vice versa).

BTW, looking at the “Track History” VI does make me admit something: QSMs give you enough rope to hang yourself. QSMs give you a lot of flexibility; they don’t force you to follow narrow rules. But this doesn’t mean you shouldn’t be following the rules! Most of the time, at least. And if you don’t know what the rules are then the QSM structure isn’t really going to guide you to them. So I would advise anyone using QSMs to understand the good arguments against them, not necessarily to stop using them, but to understand the rules, which you need to understand to use QSMs effectively.


Although I am also guilty of calling it a QSM.

As do we all. If the goal is to communicate with other developers we don't really have a choice.

[From my Hector post]

"I could start calling an apple an "orangutan" and those who know me well would understand my meaning, but when somebody in an online forum asks for a dessert suggestion because the the in-laws are in town and I respond with "orangutan pie," it's bound to cause all sorts of trouble. (Especially when I recommend using peeled orangutans and removing the stem. blink.gif ) I know I'm unlikely to change the world and get everyone to start calling it a "function machine." I'm sure there are those reading this (if anyone is left now that we've completely derailed the original topic) who dismiss this as a minor semantics issue. I disagree--what we choose to call things often conveys information about that thing. "QSM" implies that pattern is, in fact, a type of state machine when in reality it is not. If instead of "QSM" we referred to that pattern as "Hector," I would consider that a better (if rather arbitrary) name. "QSM" is an extremely poor name for the pattern."

I hope that someday the term QSM will be universally recognized as inappropriate and replaced with something better. I don't see that happening without a conscious effort from NI. I've heard some of their courses actively taught the QSM by that name. I don't know if they still do.

[As an aside, while the QSM Hector's unpredictability is what trips people up, it also intrigues me a bit. I have this nagging idea in the back of my head to try using it for some sort of machine learning applications.]


[As an aside, while the QSM Hector's unpredictability is what trips people up, it also intrigues me a bit. I have this nagging idea in the back of my head to try using it for some sort of machine learning applications.]

Well. I'm not sure that is such a good idea (I'm gonna call a hector a QSE from now on. People will think I've just mistyped it so it may permeate :)). But it is suited to a parser so I could quite easily see a good use for it as an emulator.

Edited by ShaunR

And if you don’t know what the rules are then the QSM structure isn’t really going to guide you to them. So I would advise anyone using QSMs to understand the good arguments against them, not necessarily to stop using them, but to understand the rules, which you need to understand to use QSMs effectively.

In principle I agree with you. In practice it doesn't work. Nobody understands the rules. I've tried to come up with an appropriate set of guidelines newbies can follow to avoid getting into trouble. It either ends up being so restrictive it borders on uselessness or too difficult to be practical. For example:

Simple Rules:

1. Only the producer (UI loop), not the consumer (QSM), is permitted to put items on the queue.

2. Always put items on the rear of the queue.

If developers follow these rules they will not have race conditions. But nobody does because it takes away one of the primary features people like about QSMs--the ability to interrupt an ongoing process.

Difficult Rule:

1. It is okay for both the producer and consumer loops to manipulate the queue as needed, provided the manipulation does not introduce unintended side effects.

The rule is correct. Race conditions will exist but become irrelevant if you follow this rule. Except... how does one know if there are unintended side effects? Grab a big stack of paper and start manually tracing through your code like I suggested to Kas earlier, keeping track of the contents of the queue as you go along. There is no other way.

There are situations where it is perfectly safe for either the producer or consumer to add items to the front of the queue, to the rear of the queue, or even to flush the queue. When I try to explain to people why QSMs are dangerous and show samples illustrating the problem, invariably I get objections from seasoned developers who say things like, "That's no big deal. I would just..." Usually they are correct. The solution they propose will work in this simple example application for this specific problem, but what I have a hard time getting across is it is not a viable general solution to the problem.

Unfortunately there are also situations where it is not safe for the producer or consumer to do any of those queue manipulations. For any non-trivial QSM it is extremely difficult to tell when it is safe and when it is not. If you don't have an execution flow diagram it is borderline impossible, and it's not necessarily easy even when you do.

When I was attempting to come up with general guidelines for safely using a QSM, I quickly realized the QSM can receive any message at any time. In other words, guaranteed sequences of messages are a myth. If I need to guarantee a sequence of procedures occurs in order, I have to put them in a single case instead of spreading them between multiple cases. That, along with posts from other forumers (like this one), led to the idea of atomic messages and ultimately blew up the entire concept of QSMs. When I cleared away the ashes and debris, what I ended up with is what I now call a Message Handling Loop.
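A minimal sketch of that Message Handling Loop idea (a Python analogue; the message names echo the reactor example above, everything else is illustrative): every case handles one message from an external source, runs to completion, and never enqueues "states" for itself.

    import queue

    def read_reactor_temp(payload):
        # In the sub vi version the whole SimulateReactor / IsTempOverLimit? chain runs
        # in here, atomically, instead of being spread across private queue states.
        pass

    def message_handling_loop(inbox):
        handlers = {"ReadReactorTemp": read_reactor_temp}
        while True:
            msg, payload = inbox.get()       # block until an external message arrives
            if msg == "Exit":
                break
            handlers[msg](payload)

    inbox = queue.Queue()
    inbox.put(("ReadReactorTemp", None))
    inbox.put(("Exit", None))
    message_handling_loop(inbox)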

At the end of the day the best rule I've been able to come up with for guaranteeing a QSM application works correctly is,

1. Don't use a QSM.

However, many developers still swear by them, so I am hopeful guidelines exist allowing developers to easily use a QSM correctly--I just haven't seen any. I'm keenly interested in any rules you believe are appropriate.

(I'm gonna call a hector a QSE from now on. People will think I've just mistyped it so it may permeate :)).

That would be better, but personally I'd really like to see the "Queued" part of the name dropped from all the labels. IMO it focuses too much on the specific implementation rather than the important part--what it is intended to do.

Take the QMH. What if I decide to use notifiers or an RT FIFO for a messaging transport? Does it then become an NMH or RTFIFOMH? What they all have in common is they receive and process messages, so why not just call it a Message Handler?


Simple Rules:

1. Only the producer (UI loop), not the consumer (QSM), is permitted to put items on the queue.

2. Always put items on the rear of the queue.

If developers follow these rules they will not have race conditions. But nobody does because it takes away one of the primary features people like about QSMs--the ability to interrupt an ongoing process.

Difficult Rule:

1. It is okay for both the producer and consumer loops to manipulate the queue as needed, provided the manipulation does not introduce unintended side effects.

The rule is correct. ...

Like the simple rules; don’t like the difficult one. And the simple rules are followed by the JKI template. (2) is absolute because the message “queue” in the JKI is an event registration and you can only add to the end of the queue. (1) can be broken with Value(signaling) nodes and User Events (I sometimes use the first to initialize the UI) but it is easy to follow with the JKI.

The simple rules can be followed with the JKI because it uses a separate queue for internal “operations”. This has different rules:

1. Only write from one process (enforced in the JKI by using a by-value queue implementation).

2. Always put items on the front of the queue.

However, these rules aren’t the ones I was talking about, as they CAN be built into a good “QSM” design. And should be. Like the JKI.

[Edit: actually, JKI doesn’t enforce or guide one to put items only on the front of the internal queue. Wish it did.]

— James


So I've mentioned before my criticisms are aimed at the multi-loop QSMs commonly produced by beginning/intermediate developers. Single loop QSMs like the JKI SM do solve the problem of race conditions by virtue of separating the message transport and the function sequence, and they are harder to screw up than the typical multi-loop QSM. On the other hand, combining the UI event producer (the event structure) and event consumer (the case structure) into a single loop also has side effects that may not be acceptable to the end user. (Like an unresponsive UI.)

Like the simple rules; don’t like the difficult one. And the simple rules are followed by the JKI template.

No they aren't. JKI splits the message transport and function sequence into separate elements. That's good; it helps eliminate race conditions. But they specifically allow items to be placed on either the front or the rear of the function sequence in any case statement. If it can be done there need to be guidelines explaining when it should or should not be done.

(1) can be broken with Value(signaling) nodes and User Events (I sometimes use the first to initialize the UI) but it is easy to follow with the JKI.

Okay, so sometimes you break the rules. We all do. The question is how do you know when it is okay to break them? What do you do to make sure breaking the rule will not introduce unwanted side effects?

The simple rules can be followed with the JKI because it uses a separate queue for internal “operations”. This has different rules:

1. Only write from one process (enforced in the JKI by using a by-value queue implementation).

2. Always put items on the front of the queue.

I'm interpreting this to mean these rules are sufficient to implement any arbitrary sequence of functions. After all, the most often cited benefit of the QSM is its flexibility, right?

Suppose I have a QSM with cases A, B, C, etc. I have two sequences I want the QSM to execute based on UI inputs:

Sequence 1: A; B; C; If results of C = 4, then D, else E;

Sequence 2: A; B; C; If results of C = 4, then F, else G;

QSMs can work well when the entire sequence is known up front. Once you introduce branching logic based on information obtained during the sequence things get much more difficult. Yes, you can implement this with a QSM. For example, you could add clustersaurus elements for 'Next Step if C=4' and 'Next Step if C != 4.' Or you could add new cases for 'Test C Results for Sequence 1' and 'Test C Results for Sequence 2.' Would you consider either of these solutions as satisfactory or as easy to understand as connected sub vis and a case structure testing C's output?
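For comparison, this is the shape the sub vi version takes (Python stand-ins for the abstract functions; the only point is where the branch lives):

    def A(): pass
    def B(): pass
    def C(): return 4          # stand-in result; in practice it comes from the process
    def D(): print("D")
    def E(): print("E")
    def F(): print("F")
    def G(): print("G")

    def sequence_1():
        A(); B()
        D() if C() == 4 else E()   # the branch sits in plain dataflow after C

    def sequence_2():
        A(); B()
        F() if C() == 4 else G()

    sequence_1()   # -> D
    sequence_2()   # -> F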

Here's another one:

Sequence 3: A; X; B; X; C; X; D; Where X=True aborts the remainder of the sequence.

Can you implement this in JKI's QSM? Yep, but you can't implement it if you only put items on the front of the queue. Your rules are incomplete. You have to be able to flush the queue too. [Edit: In retrospect, actually you *can* implement it by only putting items on the front of the queue, but the code is so obfuscated as to be impractical.]

[Edit]

And another one:

Sequence 4: A; B; if B = True then append D; C; if C = 4 then append E; {D;} {E;}

There's no obvious way to implement this functionality at all without being able to add things to the end of the queue or examining the contents of the queue and maintaining sequence specific information in the clustersaur.

However, these rules aren’t the ones I was talking about, as they CAN be built into a good “QSM” design. And should be. Like the JKI.

Any design imposing rules to ensure safe use will also limit the developer's ability to add new capabilities the customer may require. QSM developers like the flexibility and from what I've experienced are loath to give up any aspect of it.

The challenge stands: Show me a QSM template and set of rules suitable for beginning/intermediate level programmers that provides flexibility, maintainability, and predictability. Until I see that I'll continue to oppose the belief that QSMs are appropriate constructs for developers at those levels.


So I've mentioned before my criticisms are aimed at the multi-loop QSMs commonly produced by beginning/intermediate developers. Single loop QSMs like the JKI SM do solve the problem of race conditions by virtue of separating the message transport and the function sequence, and they are harder to screw up than the typical multi-loop QSM. On the other hand, combining the UI event producer (the event structure) and event consumer (the case structure) into a single loop also has side effects that may not be acceptable to the end user. (Like an unresponsive UI.)

Sorry, I didn’t understand this comment. It’s easy to make a multi-loop with a function sequence independent of the inter-loop message queue.

No they aren't. JKI splits the message transport and function sequence into separate elements. That's good; it helps eliminate race conditions. But they specifically allow items to be placed on either the front or the rear of the function sequence in any case statement. If it can be done there need to be guidelines explaining when it should or should not be done.

I agree. I realized my mistake there and edited my post.

Okay, so sometimes you break the rules. We all do. The question is how do you know when it is okay to break them? What do you do to make sure breaking the rule will not introduce unwanted side effects?

...

I'm interpreting this to mean these rules are sufficient to implement any arbitrary sequence of functions. After all, the most often cited benefit of the QSM is its flexibility, right?

I was thinking more about code clarity. Do it the clear and simple way and reserve the arbitrary sequence for when it is really necessary. They’re not rules, they’re guidelines. And they aren’t complete. Sorry, I realize I wasn’t really meeting your challenge.

Here's another one:

Sequence 3: A; X; B; X; C; X; D; Where X=True aborts the remainder of the sequence.

Can you implement this in JKI's QSM? Yep, but you can't implement it if you only put items on the front of the queue. Your rules are incomplete. You have to be able to flush the queue too.

“Abort” does involve manipulation of the queue (partial flush) that would be against the normal “rules”. No enqueuing on the end, though.

[Edit]

And another one:

Sequence 4: A; B; if B = True then append D; C; if C = 4 then append E; {D;} {E;}

There's no obvious way to implement this functionality at all without being able to add things to the end of the queue or examining the contents of the queue and maintaining sequence specific information in the clustersaur.

Is this kind of thing common? Why can’t D happen right after B? Why does C have to happen in between? And why can’t E happen after C? Even if you did this in subVIs it would seem a bit strange to me. Though certainly far clearer in subVIs.

Gotta go...

— James


First off I want to make it clear I am not claiming anyone is wrong for using JKI's SM, or a traditional two-loop, single-queue (2L1Q) QSM if that's what they like. I'm not claiming their code is "bad." I've often stated that if the code meets the requirements (both stated and unstated) it is good by definition.

Second, my thoughts and comments about state machines are almost always directed at 2L1Q QSMs, not JKI's SM. However, since you brought it up we should probably spend some time on it, but it wasn't my original intent.

As I've studied the QSM over the past several years there are two things in particular that stand out as high risk areas:

1. Race Conditions: Using a single queue for both the message transport and function sequencing is a huge red flag.

2. Branching Execution: Implementing conditional logic that changes the sequence after it has started is prone to errors.

JKI's SM takes care of the first problem. It does not take care of the second.

“Abort” does involve manipulation of the queue (partial flush) that would be against the normal “rules”.

Okay, so we have an exception to the rule. "Sometimes it's okay to flush the queue." That's good. We all know there are exceptions; the trick is identifying and characterizing the exceptions as completely as possible. Is it ever not safe to flush the queue on abort? Can the queue be flushed for operations other than abort? I don't claim to know the answers. These are questions I've asked myself and been unable to answer to my satisfaction.

No enqueuing on the end, though.

I assume this is a rule you've adopted to help you use QSMs safely. I don't deny that it works for you, but that rule eliminates a huge chunk of the flexibility that makes the QSM so attractive to users. By restricting yourself to only enqueuing to the front of the queue, all your in-sequence branching is in the form of "branch immediately and return." Other kinds of branching logic are very hard (and may be impossible) to write, and even harder to read.

As near as I can tell branch immediately and return is not a sufficiently powerful operation to meet the QSM developer's flexibility requirements. To be able to implement the kind of branching options available using a message handler and sub vis you need to be able to add items to both the front and rear of the queue, and maybe even examine and modify the contents of the queue.

Is this kind of thing common? Why can’t D happen right after B? Why does C have to happen in between? And why can’t E happen after C?

You're thinking too specifically. Maybe the output from B is being fed into C and C has to occur before D. It doesn't matter. This is just a way to talk about sequences and functions abstractly without getting caught up in the details. Why the requirement exists is irrelevant; it only matters that the requirement does exist.

The sequences I presented are all valid requirements for an application's execution flow. Can the QSM handle them as cleanly and safely as a message handler and sub vis? I don't know. I doubt it, but someone might prove me wrong.

Branching logic is easy to implement when using sub vis, in part because there is execution time in between subsequent vis where the branching logic can be placed. In a QSM there is no in between time to put the branching logic. It has to be built into the function itself, which obviously limits its ability to be reused. I guess infinitely short transitions is the one way in which a QSM *is* similar to a real state machine. Too bad that leftover property has such a negative effect.

QSM implementations seem to have this weird tension going on. You want to break down each function into small pieces to improve reusability, but it's very hard* to implement branching based on results from a previous function, so that pushes you toward larger functions. (*When I say hard I mean it is hard to do it in a way that does not obfuscate and clutter up the code.)


Since this might have a short answer, I thought it might be best if I just continue here rather than open a new thread.

Based on one of my previous post about the user login: http://lavag.org/top...dpost__p__98442

Do you think it might be best to use the power of the DSC module for the user-login system implementation, or should I start looking into creating my own login system?

One thing I can see while reading about the DSC module is that I cannot add/modify user details at run time through a user VI on the block diagram; I would have to access the "Domain Account Manager" through Tools>>Security.

I thought that the DSC module could help greatly in reducing the workload of creating a new login system from scratch. If, however, what I've just mentioned is true, has anyone managed to find a workaround?

I haven't personally spent too much time on the DSC module. I'm just trying to determine if this is something that I can start using, and if so, then I can concentrate on learning more about the DSC and building the system around it. If not, I'll have to spend my efforts on creating a new login system that would probably end up doing the same thing as what the DSC security module does, but with more leniency.

Regards

Kas

Edited by Kas

First off I want to make it clear…

I think I should restate how I use the JKI.

Mostly I use it for something that either is a UI or incorporates the UI. I use the cases of the case structure not to replace subVIs in general, but instead:

1) code closely related to the UI (where with a case one can use terminals and locals, and one can find the property nodes from the front-panel control right-click menu).

2) high-level code that if in a subVI would be a method of the “clustersaurus” cluster/object and would mainly be called in a simple chain connected by the clustersaur. Calling these instead as a “macro” doesn’t really lose any clarity.

At finer levels of abstraction I use subVIs (basically, once I’m writing methods on components of the clustersaur). I do not, thus, "break down each function into small pieces to improve reusability”, at least not with cases. Your complex, enqueue-at-back example strikes me as something that should really be handled at this level (as a single case or clustersaur method). Or simplified, such as by subsuming both C and D under B (so B enqueues in front either C or CD). I am not an advocate of very complex branching logic in the QSM.

As I use cases as an alternate to (some) subVIs, I tend to use them like subVIs in that one subVI can “contain” another subVI call. By enqueuing on front, and not having external writers to the internal queue, I can mentally abstract “macros” as unified operations independent of other macros (example: “Macro: refresh main table” refreshes the main display table, it doesn’t depend on what I enqueue after it).

Second, my thoughts and comments about state machines are almost always directed at 2L1Q QSMs, not JKI's SM.

Using a single queue for both internal sequencing and inter-loop communication is a major flaw, IMO. All “QSM”s should use two queues.

— James


There's a lot of good discussion going on here and I don't want to introduce more topics; I just wanted to clarify what I meant.

I'm not quite following #2. Do you mean have the dequeue fire a user event when it receives a message on the queue...

No, what I meant was to have a subVI that enqueues data, but just after it enqueues the data onto the queue, it then generates a user event. The consumer loop is not polling anything; it has a while loop with an event structure with no timeout. But it does handle the custom user event. In the custom user event case it dequeues the data from the queue.

I wouldn't go this route, but for those that like working with queues this would be a middle ground between user events and queues. I would take it a step further and not use a queue, but use a user event for consumer/producer data which means no polling, and all events (UI events or other) are all handled in one event structure, without polling.

I have a hunch you'd see very significant performance problems if you implemented strategy #2. If you have multiple receive mechanisms in a single loop, you *have* to use a timeout and polling. (Unless you implement something like AQ's priority queue.)

Hopefully with my clearer explanation you'd see that there would be no polling with #2, and no timeout. There are multiple receiving mechanisms, but one triggers the other, so in reality you are only waiting for one to fire, and when it does, go handle the other.

Attached is a very quick test of methods 1 and 2 I mentioned earlier (in 2011). Just open the main and run it. It will enqueue 5 things to handle (500 ms apart), and the consumer loop on top handles them one at a time and then waits for user interaction to hit the stop button (without polling). Again, I would use method 1, which only uses user events; method 2 uses user events but queues as well.

Queue Event Test.zip
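For anyone who doesn't want to open the ZIP, here is a rough text-only analogue of the two methods (Python threads standing in for the loops; this is not the attached LabVIEW code). In method 2 the "user event" only says "go read the queue"; in method 1 the event would carry the data itself.

    import queue, threading, time

    events = queue.Queue()   # stands in for the user event the event structure waits on
    data_q = queue.Queue()   # method 2 only: the data still travels on a separate queue

    def producer():
        for i in range(5):
            time.sleep(0.5)
            data_q.put(i)                      # method 2: enqueue the data...
            events.put(("check queue", None))  # ...then fire an event to go read it
            # method 1 would instead be: events.put(("data", i))
        events.put(("stop", None))

    def consumer():
        while True:                            # no timeout, no polling: block on events
            kind, payload = events.get()
            if kind == "stop":
                break
            if kind == "check queue":
                print("dequeued", data_q.get_nowait())
            elif kind == "data":
                print("received", payload)

    threading.Thread(target=producer).start()
    consumer()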


Mostly I use it for something that either is a UI or incorporates the UI.

I have heard a few (too few, imo) other developers say they limit it to UI code too. Unfortunately this information seems to get lost in the enthusiasm of "it's *so* flexible" thinking and people tend to use it as the basis for their entire application. (Because it's "scalable," right?) I cringe whenever I hear someone say, "I'm using a QSM architecture."

At finer levels of abstraction I use subVIs (basically, once I’m writing methods on components of the clustersaur).

I take this to mean any code that reads or writes to the clustersaur is encapsulated in a sub vi? I can see how that would be useful for a UI-oriented QSM. I'm not sure how useful it would be in non-UI related QSMs--seems like most of the cases will be doing that to some extent.

This idea is helpful though. For a long time I've had this feeling that QSM states need to have some sort of classification if suitable rules are going to be created, but I haven't been able to nail down anything that felt right. The only two classifications I've been able to come up with so far are splits (cases that direct execution down one of multiple possible paths) and joins (cases where multiple execution paths converge.) Splits isn't a very good classification--the case that makes the decision which execution path to take isn't necessarily the case where the execution path actually diverges. Maybe there needs to be a modifying classification for those that change the shift data?

(I get the feeling I'm treading down a path well travelled by theoretical computer scientists. Too bad I have no background in theoretical computer science to help me find the way...)

Or simplified, such as by subsuming both C and D under B (so B enqueues in front either C or CD).

You can only do that if B is always followed by the C/CD choice. The primary feature of QSMs is flexibility. People like being able to call any function at any time by simply typing the name. They want to be able to create sequence ABx, where x is any sequence of zero or more additional functions.

As I use cases as an alternate to (some) subVIs, I tend to use them like subVIs in that one subVI can “contain” another subVI call.

Right, a sequence might inject a subsequence in the front of the queue. In practice I suspect most of the time that would be safe. Is it always safe? Nope. Can we identify the conditions under which it is not safe?

What are your personal rules for reusing a function? If you have a macro A that in turn enqueues BCD, are you free to use any of those in another macro? What about macro m that enqueues arbitrary functions xyz? Are there limitations on what functions m can enqueue? I can think of one off the top of my head--m can't enqueue any function that in turn results in m being enqueued, (unless one is implementing a recursive algorithm and a terminating condition is also implemented.) When I join a project and open the QSM for the first time to a list of 30+ functions, how can I figure out how the functions are related to each other so I know which ones are safe to call?

example: “Macro: refresh main table” refreshes the main display table, it doesn’t depend on what I enqueue after it

Your macro has the characteristics I consider necessary for my messages: atomicity and independence. It completes the entire operation before returning control to the message/event receiver and it doesn't require other functions to precede or follow it to ensure correct operation. Do all your message handling cases invoke macros, or do they sometimes invoke an arbitrary sequence directly? Do you make sure each of your functions is also independent and will operate correctly regardless of the function called immediately before it?

Using a single queue for both internal sequencing and inter-loop communication is a major flaw, IMO. All “QSM”s should use two queues.

[Note: I've been trying to write a paper capturing my various thoughts on QSMs. Last night I started referring to single-queue style QSMs as having a "public sequence," and those with a double queue (like the JKI SM) as having a "private sequence." It seems to accurately describe the functionality without specifying the exact implementation.]

I agree with the first, I disagree with the second. If you accept that QSMs are an appropriate style of programming what basis do you have for rejecting public sequence QSMs? Race conditions are a major (and underappreciated) concern with them, but public sequence QSMs can be used correctly when appropriate restrictions are observed, and private sequence QSMs can be used incorrectly when appropriate restrictions are not observed. Furthermore, public sequence QSMs have the feature everybody wants: interruptability.

I wouldn't go this route, but for those that like working with queues this would be a middle ground between user events and queues. I would take it a step further and not use a queue, but use a user event for consumer/producer data which means no polling, and all events (UI events or other) are all handled in one event structure, without polling.

Thanks for the example. I now see what you mean. I wouldn't use #2 either. Sending a message (event) to tell the receiver to check their messages (queue) is redundant. True, it does do it without polling, but what's the point?

It reminds me of some business communication. How many times have you had someone send you an email, then call you up (or visit your desk) and tell you they just sent you an email, but because you haven't read it in the 12 seconds you had between receiving the email and getting interrupted by their phone call, they go ahead and repeat everything they said in the email? Hey, if you were going to repeat the contents of the email to me over the phone, why'd you send the email in the first place?

