hooovahh Posted September 18, 2012 Thanks for the example. I now see what you mean. I wouldn't use #2 either. Sending a message (event) to tell the receiver to check their messages (queue) is redundant. True, it avoids polling, but what's the point? Absolutely. The custom user events mechanism is very powerful when combined with a UI, and I think it would be a great evolution of the QSM. It isn't suited to every situation (someone's signature comes to mind), but I don't know why I would use native queues anymore, other than possibly to interface with other already-written tools. And even then, a wrapper could be written to convert an incoming queue to a user event.
Daklu Posted September 18, 2012 The custom user events mechanism is very powerful when combined with a UI and I think it would be a great evolution to the QSM. Evolution? User events and queues are both just transports. Obfuscation (chaining functions instead of using sub VIs) and insufficient decomposition (cramming too much into a single loop) are the two main implementation issues I encounter when I work with existing QSMs. Switching from queues to user events doesn't help either of those problems. What problem do events solve that you don't like about queues? Dynamic registration and one-to-many delivery are the main functional differences of events. How do these features help you write good code? I don't know why I would use native queues anymore... If events are better suited to your development style than queues, I don't know why you'd use queues either. I'm more curious about the development style in which events are easier than queues. Do you use hierarchical messaging, or are your designs based more on direct messaging?
Kas Posted September 18, 2012 (Author) ShaunR: In your latest example, you have a case structure named "Destroy" in your "Queue.vi". In there you flush the queue and then destroy it. Is this normal when working with queues? It's just that in the majority of the examples I've seen with queues, they just seem to release the queue at the end. Kas
PaulL Posted September 18, 2012 ...but in my mind I don't know why I would use native queues anymore, other than possibly interfacing with other already written tools. I thought I should clarify: I liked hooovahh's post above not because of the QSM comment but because I think he is on to something here (and above) when talking about user events. The components in our system use strictly events, no queues. This isn't because I think there is anything bad about queues--I don't think that at all--but because we were already relying on events for items on the view (button clicks and so on) and for networked shared variable events. I found it easier to do everything in terms of events (creating user events when necessary) than to support another data transport mechanism. (Also, I am generally more familiar with events.) Our controllers never poll repeatedly (although we do pull preexisting values of shared variables on a few transitions when we haven't previously monitored the value). We also have a very few instances where, for convenience, our view code polls shared variable values on a timeout event, but almost all our views are entirely event-driven. Again, I don't have anything against queues. For me it is just easier to limit the number of message transport mechanisms; events have met our needs quite well, and I think the event concept is relatively easy to understand. I haven't used a queue in production code in years. I'm not disparaging the use of queues in any way (and if you have an application--probably rare, though--that absolutely requires reordering messages in transport, then you probably need a queue); I am saying it is entirely possible and reasonable to build an event-driven system and ignore queues altogether.
ShaunR Posted September 18, 2012 ShaunR: In your latest example, you have a case structure named "Destroy" on your "Queue.vi". In there you flush the queue and then destroy it. Is this normal when working with queues? Its just that majority of the examples I've seen with queues, they just seem to release the queue in the end. Kas Releasing a queue doesn't destroy it unless the reference count falls to zero, you wire True to the "Force Destroy" input, or all VIs that hold references to the queue go out of memory (too many bullets in the gun to shoot yourself in the foot with, IMHO). The obtain only returns a reference to a queue, not the queue itself (which can cause memory leaks). This means you have to be very aware of how many "obtains" versus "releases" there are in your code and, if you pass the queue ref around to other VIs, ensure that you don't release it too many times and make the reference invalid. Since the VI obtains a new reference and releases it on every call if a reference already exists (which makes it atomic and ensures there is always one and only one reference), you only need to get rid of the last reference once you are finished with the queue completely, and you don't have to worry about matching releases to obtains (i.e. it protects you from the ref count inadvertently falling to zero, VIs going out of memory and invalidating a queue reference, or leakage). The flush is purely so that when you destroy the queue you can see what was left in it (if anything). The upside to this is that you can drop the queue VI into any VI without all the wires, and you only need to destroy the queue once when you are finished with it (or, as I have used it in the example, to clear the queue).
Kas Posted September 18, 2012 (Author) Releasing a queue doesn't destroy a queue unless the reference count falls to zero... Thanks. P.S. Your "Windows API" and "Transport" libraries are very useful.
Daklu Posted September 18, 2012 The components in our system use strictly events, no queues. I agree it's easier if you choose a single transport mechanism in your code, and since you already use events for your back end (NSV) I understand why you'd decide to go that route. Since NSV events are the backbone of your system, do you find your systems are heavily event-based rather than command/request-based? Not events as the transport mechanism, but events in the general sense that each component is telling others something just occurred ("I just stubbed my toe.") instead of instructing another component to do something ("Go get me a bandaid."). I use about an even mix of request and status (or event) messages. For most applications, request messages go down the tree and status messages go up. I've read material that speaks highly of pure reactive systems that use only status messages. Thinking about all those direct connections makes me shudder, and I'm curious what your experience is. I am saying it is entirely possible and reasonable to build an event-driven system and ignore queues altogether. I don't disagree with you at all. There are some implementation decisions that can have a huge effect on how easy they are to use. When I first played around with user events my instinct was to create a unique user event for each custom message sent to the event loop. I hated it. They became much more usable once I started creating a single generic user event and sending all custom messages on that. Do you use generic user events or create one for each message? I'm also curious how you manage the interface ownership between two components. Which component gets to decide what the message looks like? In my applications a component owns and defines the request messages it will honor and the status messages it will provide. If there is an interface mismatch, the other component has to implement an adapter of some sort.
Events more naturally align with status messages, but if all you use is status messages and status messages are owned by the sender, I would think it would lead to dependency pollution. (Interface dependency, not static dependency.) I know you guys use abstract classes to define interfaces for components, but it's not clear to me whether you do that for messaging interfaces or just for method interfaces, and if you do define abstract classes for messaging interfaces, which component owns the class?
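Daklu's "single generic user event" idea can be pictured like this: instead of registering a separate event per message type, every message travels as a generic envelope (name plus payload) on one channel, and the receiving loop dispatches on the name. A rough Python sketch of the shape (handler names and return strings are invented for illustration, not a LabVIEW API):

```python
import queue

# One generic channel; every message is an envelope of (name, payload).
# This mirrors sending all custom messages on a single user event rather
# than creating a unique user event for each message.
events = queue.Queue()

def send(name, payload=None):
    events.put((name, payload))

def handle_one():
    """Dequeue one envelope and dispatch on its name."""
    name, payload = events.get()
    handlers = {
        "StartBtnPressed": lambda p: "starting",
        "StatusUpdate":    lambda p: f"status={p}",
    }
    # Unknown message names are ignored rather than crashing the loop.
    return handlers.get(name, lambda p: "ignored")(payload)
```

The trade-off is that the one envelope type must be generic enough to carry every payload, which is why this pairs naturally with a common message class or variant data.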
AlexA Posted September 19, 2012 Another interesting discussion on multi-loop applications. Personally, I don't see anything wrong with writing my cases as sub VIs which execute on the data cluster for that loop. There's no reason you can't write a single-op command by creating another, appropriately named case and dropping the method sub VI in there. I agree with Daklu that it's easier to understand "macro"-type behaviour when you can see it chained out as sub VIs. My question to those who advocate "macros" as implemented via a chain of case structures: what advantage do you gain by separating sequentially dependent operations into their own cases? A program should have strict operational interactions for the user: press this button, get this result. If it doesn't make sense for a user to trigger some behaviour while other behaviour is occurring, then it's better to constrain the user (by greying out buttons, for example, or, as Daklu argues, by making cases atomic sequences of sub VI executions) than to try to handle every, potentially extremely stupid, thing a user can do to interfere with an executing macro.
drjdpowell Posted September 19, 2012 You're not going to get much argument, Alex. I'm the one advocating QSMs, yet I'm strongly against the common single-queue designs where user input can interfere with an executing macro (in the JKI template, for example, the Event structure only runs in the "idle" case, which executes only when the internal sequencing queue is empty). I don't even like macros potentially interfering with each other by enqueuing on the back.
Daklu Posted September 19, 2012 then it's better to constrain the user (by greying out buttons for example... Disabling buttons is one of the things I often see people do to work around the problems with public sequence (single queue) QSMs. It can work in very limited circumstances, but it doesn't scale well and is not good design practice. [Note: the examples use a UI event loop as the sender, but it could be any kind of process executing in parallel.] Disabling the button can be done either in the receiving loop (shown on the left) or in the sending loop (shown on the right). In the example on the left, it is easy to see that it is possible for the sender to send multiple StartBtnPressed messages before the button is disabled. If the receiving loop is waiting for a message, the user will not be able to click fast enough to generate multiple messages, but what if the receiving loop is executing a case and won't return to service the queue for another 0.5 seconds? Multiple StartBtnPressed messages. The problems with the example on the right are a little more subtle. Users cannot enqueue multiple StartBtnPressed messages, but what happens if the StartBtnPressed message is invalid for some other reason? Maybe the receiving loop has entered an error condition users need to take care of before starting the test. Preventing users from pressing the Start button while in an error condition requires the receiving loop to set the button's disabled state, which, as I showed in the example on the left, still exposes you to race conditions. Q: So if neither solution is safe, how do you prevent users from pressing the Start button? A: You don't. The problem with both of these solutions is that they attempt to implement correct behavior by preventing the message from being sent. I call this "sender-side filtering."
In order for a message sender to know whether or not a message should be sent, it needs to know what state the receiver will be in when it processes the message. In the examples above, the event loop needs to implement this logic:

if QSM.StateWhenMessageIsProcessed != (TestRunning OR Error) then
    send(StartBtnPressed)
end

Clearly this is impossible. When using parallel independent loops, sender-side filtering cannot prevent race conditions. Ever. The best you can hope for is that nobody will accidentally trigger them. As the application grows, that gets harder and harder to justify. If you have messages that sometimes should not be processed, the only way to implement that without race conditions is receiver-side filtering. The receiving loop is the only one that knows what state it is in when the message is read; that's where the message filtering needs to be implemented. I implement message filtering using a Behavioral State Machine (BSM). Here is a slightly simplified version of what I usually use. In this implementation it doesn't matter how many times the StartBtnPressed message is put on the queue. As soon as the BSM enters a state where the StartBtnPressed message should not be processed, it won't process them; they get ignored. If I want to disable the Start button to give visual feedback to the users, the BSM can send a "DisableStartButton" message (or something along those lines) to the event loop when it enters a state where the button shouldn't be used, similar to this: Here each loop is a separate and distinct entity with clearly defined responsibilities. The event loop is responsible for the state of the user interface--detecting events, setting control properties, displaying data, etc.
The BSM is responsible for managing the state of the underlying business logic and responding correctly to requests to do something. Again, public sequence QSMs like those in the first example can be used successfully as long as certain restrictions are maintained. As the app grows and message filtering becomes necessary, you can't help but introduce race conditions. (That's on top of having to trace through the maze of conditionals buried in each case to figure out the overall logic.) When people claim it "scales well" they must be using a different interpretation than what I'm used to--it's clear to me it does not scale well at all. I have no problem if people use public sequence QSMs (provided I don't have to work on them), just as long as they understand they are painting themselves into a corner. Unfortunately the limitations I've described here are not well understood by the community at large, and the full set of limitations isn't understood by anybody.
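Receiver-side filtering, as described above, can be sketched in a few lines: the receiving loop decides, based on its own current state, whether a message is acted on, so senders never have to guess what state the receiver will be in. This is an illustrative Python sketch of the idea (the state and message names are invented, not Daklu's actual BSM):

```python
from enum import Enum

class State(Enum):
    IDLE = 1
    RUNNING = 2
    ERROR = 3

def handle(state, message):
    """The receiver filters: a message is only acted on if it is valid
    for the receiver's current state; otherwise it is silently ignored."""
    if message == "StartBtnPressed":
        if state is State.IDLE:
            return State.RUNNING      # valid: start the test
        return state                  # RUNNING or ERROR: ignore duplicates
    if message == "TestFinished" and state is State.RUNNING:
        return State.IDLE
    return state                      # anything unrecognized is ignored too

def run(messages):
    state = State.IDLE
    for msg in messages:
        state = handle(state, msg)
    return state
```

Notice that any number of queued StartBtnPressed messages is harmless: the first one transitions to RUNNING and the rest are dropped, with no race condition no matter how fast the sender enqueues them.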
PaulL Posted September 19, 2012 Since NSV events are the backbone of your system, do you find your systems are heavily event-based instead of command/request-based? Not events the transport mechanism, but events in the general sense that each component is telling others something just occurred ("I just stubbed my toe.") instead of instructing another component to do something ("Go get me a bandaid.") I use about an even mix of request and status (or event) messages. For most applications request messages go down the tree and status messages go up. I've read stuff that speaks highly of pure reactive systems that only use status messages. Thinking about all those direct connections makes me shudder and I'm curious what your experience is. I will attempt to answer your question, but I think I will need a little clarification from you. 1) All inputs to a component (e.g., a subsystem) in our system go through the same processing pipeline, as you suggest. 2) Stated another way, all state machines are reactive systems (vs. transformational systems--think of data processing), in that they execute behavior based on events. 3) I consider all events to be inputs of some data (although a simple command may be just the command name, or type). 4) Some of that data I think of as commands. In other words, commands are a subtype of data. (Some are obviously commands, like "Enable"; some are obviously status, like "state=Enabled"; some are maybe somewhere in between, like "positionSetpointsToWhichYouWillAddCorrections." I have changed the names here to protect the innocent. Lol.) 5) Yes, a higher-level component can send commands (in the strictest sense) to its children (i.e., it can control/coordinate its children). Children, on the other hand, are unaware of their parents. 6) All communication is publish-subscribe.
I'm confused by the last part of your question: "I've read stuff that speaks highly of pure reactive systems that only use status messages. Thinking about all those direct connections makes me shudder and I'm curious what your experience is." I'm not sure what you mean by "direct connections" here. In particular, our components publish status values so that they are available as needed by subscribers. (Of course, we are careful about how we design the system so that components have exactly the data they need to do what they need to do.)
Kas Posted September 19, 2012 (Author) I'm hoping that it's ok to bring this discussion back to basics for a bit. Based on the feedback provided in this thread, I have amended the initial template to use three loops instead. Furthermore, I've eliminated one set of enqueue/dequeue sub VIs by making the other set reentrant (JKI, I believe, does this for theirs). I am, however, finding it difficult to implement other rules: 1. Don't execute macros within macros, i.e. where one state decides what set of macros to execute depending on the instrument reply or other states. As an example, for this project, I send commands to and receive replies from the instrument through one set of sub VIs located in "Orbit: Send & Receive". However, upon receive, I continuously check whether the instrument (Orbit FR controller) has triggered an interrupt; if that's the case, I then go and read the VNA reply, and if not, the motion/operation continues without reading anything from the VNA. Since there are various motion modes that Orbit can perform, if you set the controller to a certain motion mode, i.e. sector scan, only then would I expect an interrupt to be triggered. If, however, the motion is just "Move to Position", i.e. send motors to zero position, then I know there is no interrupt from Orbit. In the end, I use this "interrupt" behavior of the Orbit controller to decide when to read from the VNA. Furthermore, I check whether the scan/motion has finished and, if so, send a set of macros that saves the result or goes to idle at the end, etc. 2. Don't execute macros from within the same loop, but instead let the main producer loop decide what to do. Again, because of the example stated above, I'm kind of forced to break this rule. There are way too many macros going back and forth, and forcing the main producer loop to be the message sender becomes impractical. My understanding of the solution to the above: use sub VIs instead for a set of direct executions, i.e.
instead of a set of macros where the execution pattern never changes. While this certainly eliminates or reduces a lot of the macros being executed, I cannot see that it would completely eliminate the problem stated in 1, since there are two or three critical points of the program that are user-independent and, based on the instrument's reply, decide what the program should do next, i.e. interrupt, scan finished, etc. Furthermore, there is some low-level code, i.e. "Orbit Mode: Stand By", which I execute through macros from various points in the program, i.e. before I prepare Orbit for receiving a set of "Load" commands, before a measurement starts, etc. I now understand the reasons and the pros/cons that QSMs have, as well as the solutions and the ways that QSMs should be used in general terms. My problem is applying these guidelines more concretely to my QSMs. From what I can see, I cannot use a QSM safely without sub VIs (with built-in queues that refer back to the main QSM when finished) for tasks such as "Start measurement", which in itself has a set of sub VIs that refer back to the main QSM to check whether the measurement is finished. Is there a safe way of implementing problem 1 above using QSMs? If so, would it be possible to see an example of how you would accomplish this? I see that Daklu refers to private and public messages. Can you explain a little as to what private and public messages are? Thank you, Kas
PaulL Posted September 19, 2012 Do you use generic user events or create one for each message? I'm also curious how you manage the interface ownership between two components. Which component gets to decide what the message looks like? In my applications a component owns and defines the request messages it will honor and the status messages it will provide. If there is an interface mismatch the other component has to implement an adapter of some sort. Events more naturally align with status messages, but if all you use is status messages and status messages are owned by the sender, I would think it would lead to dependency pollution. (Interface dependency, not static dependency.) I know you guys use abstract classes to define interfaces for components, but it's not clear to me if you do that for messaging interfaces or just for method interfaces, and if you do define abstract classes for messaging interfaces, which component owns the class? Part 2, I guess: There is one shared variable event per event structure. When we send commands (properly speaking) via user events within a component (or between components via shared variable events--see below), yes, the type is the top-level command class, so we also have only one event here, too. "In my applications a component owns and defines the request messages it will honor and the status messages it will provide." This is essentially our approach as well. We define external interfaces through data sent via shared variables in one of two approaches: 1) Not objects: a) primitives, b) typedefs (enums, clusters). We define the external interface typedefs in a library for that component (and those are the only items in the library). That means at most another component will have this library in its dependencies. (We convert the values to command objects internally.) 2) Command objects: for certain complex external interfaces we have command objects (flattened) on the shared variables.
In these instances, interfacing components have the library of interface commands in their dependencies. So, yes, generally each component defines its own interfaces. It is easy for other LabVIEW components to use these interfaces (via libraries and via aliased shared variables). For non-LabVIEW components we convert the same data to XML (not always easy, as we have discussed in other threads). Disabling buttons: I agree with Daklu that the controller should not assume the user interface will never send a command that is invalid at that moment. There is no way to ensure that under all circumstances. As Daklu also says, a true state machine processes only the triggers that are valid for its current state. (The default behavior for any trigger on the highest-level state is to do nothing.) We do use a combination of invisibility (often for a subpanel) and button disabling on views. The views update the display based on the state of the controller.
Daklu Posted September 19, 2012 I'm confused by the last part of your question: "I've read stuff that speaks highly of pure reactive systems that only use status messages. Thinking about all those direct connections makes me shudder and I'm curious what your experience is." I'm not sure what you mean by "direct connections" here. In particular, our components publish status values, so that they are available as needed by subscribers. (Of course, we are careful about how we design the system so that components have exactly the data they need to do what they need to do.) I didn't explain it very well. "Direct connections" means each component sends messages directly to the component that ultimately receives them--there's no need to forward messages as you have to do with hierarchical messaging. With hierarchical messaging, an owner needs to know specifics about its subcomponents. To explain a little more, in the context of what I read, a "pure reactive system" (or "event-based system") is one in which all messages, at the abstraction level you're looking at, are status messages. They are announcements that something interesting just happened to the message sender. ("I stubbed my toe.") There are no messages requesting specific actions from another component. ("Get me a bandaid.") The idea is that by being purely reactive, each component is naturally decoupled from the others--since it isn't explicitly sending messages to them--and more reusable. Hierarchical messaging requires message routing, which in turn requires owners to know specific information about their subcomponents. In the literature I've read, it was implied that event-based systems are not organized in a hierarchy (at the abstraction level at which you're looking), so there is no concept of ownership or subcomponents. Without ownership, each component must send messages directly to other components.
But in order to establish communication links between components without creating static dependencies, you have to dynamically register all the events at runtime in source code that is part of neither component's library. (I imagine this code would be implemented in the application's initialization routine.) The last bit, and the part I'm most fuzzy on, is that the semantics of the code establishing the communication links were such that it was possible to link an arbitrary event generator with an arbitrary event consumer, regardless of whether or not they had compatible interfaces. Obviously some sort of adapter code would have to be implemented to make the translation, but you didn't see evidence of that while creating the links. I don't recall if it included source code, but it could look something like this:

Nurse.TreatInjury handles Me.IStubbedMyToe

Anyway, I'm mostly just thinking out loud. It sounds like you guys are at least part of the way there. I haven't needed that kind of functionality yet, but I'm curious what it would look like in LabVIEW.

5) Yes, a higher-level component can send commands (in the strictest sense) to its children (i.e., it can control/coordinate its children). Children, on the other hand, are unaware of their parents.

Do you use hierarchical messaging throughout your application, or do you have abstraction layers or subsets of components that use direct messaging?

6) All communication is publish-subscribe.

Publish-subscribe implies to me dynamically registering for messages at runtime, i.e. each component sends a message to another component saying, "add me to the list of receivers when you send message m." Is that how you are using it? (I know we've talked about it before, but it's so hard to keep everyone's interpretations straight.)

I'm hoping that it's ok to bring this discussion back to basics for a bit.

Well, it's your thread. The rest of us are just hijacking it.
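The "link an arbitrary generator to an arbitrary consumer at init time" idea above can be sketched with a tiny publish-subscribe registry. This is a hypothetical Python illustration of the wiring shape, not any real framework's API: components register handlers at runtime, publishers never reference subscribers, and a small adapter bridges incompatible interfaces at link time.

```python
# Minimal publish-subscribe registry: topic -> list of handler callables.
subscribers = {}

def subscribe(topic, handler):
    """Runtime registration: 'add me to the receivers for this topic.'"""
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, payload=None):
    """The publisher announces; it knows nothing about who is listening."""
    for handler in subscribers.get(topic, []):
        handler(payload)

# Wiring done in the application's init code, outside both components.
log = []
def treat_injury(body_part):            # the "Nurse" consumer's interface
    log.append(f"bandaid on {body_part}")

# Adapter: the "IStubbedMyToe" event carries no payload, so a small
# lambda translates it into TreatInjury's expected input at link time.
subscribe("IStubbedMyToe", lambda _: treat_injury("toe"))
```

Publishing `"IStubbedMyToe"` then drives `treat_injury` even though neither side references the other; the adapter lambda is the only code that knows both interfaces, and it lives in neither component's library.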
I see the Daklu refers to Private and Public messages. Can you explain a little bit as to what private and public messages are?

Sure. In the context of a QSM, a public message (or state) is any message that external loops are allowed to put on the queue. Private messages are the "sub VI" messages that external loops should not put on the queue. One of the difficulties with public sequence (single queue) QSMs is that there is no way to prevent external loops from sending a private message. Therefore all messages are, by definition, public, and all message handling code should be written with the expectation that it can be called at any time.

Is there a safe way of implementing problem 1 above using QSMs? If so, would it be possible to see an example of how you would accomplish this?

There's a lot of detail here that will take me a while to absorb, and unfortunately I've spent far too much time on this thread today. My initial reaction to your question is: if you are using a public sequence QSM, no. The cost of using a public sequence QSM to allow interrupting a sequence is being very restricted in where you can put decision-making logic and what that logic can do. Race conditions will breed like rabbits. If you are using a private sequence QSM, yes, but you lose interruptability. Until I understand the entire scope of your problem I can't give too many specifics, but here are a few things to get you started.

1. Create an execution flow diagram showing how the cases transition from one to another. Don't worry about race conditions for the moment. Just map out the logic in your QSM loop. You have to understand what you've implemented if you want to continue on this path. (And I have to understand it before I can give you any specific advice.) If a case is used in more than one execution sequence, create duplicates on the diagram. In the example I posted, all the non-orange cases represent duplicates.

2. Create an execution flow diagram modelling how you want the application to behave. Focus on what happens when users take each action available to them. Break the actions down into however many steps you're comfortable with. Usually this is an Idle case with a branch for each event the QSM responds to. Each branch goes through a sequence of functions before returning to the Idle case. It ends up looking a bit like a flower, with each branch being a petal.

3. Post both diagrams.

--------------

[I wrote the following, then was going to delete it, as there are way too many details and exceptions for this to be a "good" guide. I decided to let it remain in case you wanted to give it a go... But beware, there be dragons down here.]

4. Save your project and back it up. Refactoring a QSM is tricky and I haven't developed an easy list of steps to follow. The idea is to simplify your execution flow diagram by reducing the number of cases to the bare minimum. The following is my general approach, but there are lots of things that can trip you up.

5. Create a sub VI out of any duplicate cases on your execution diagram. For example, if you have a case for "IncCounter," select all the code inside the case structure and use Edit >> Create SubVI. Save the sub VI as IncCounter.vi. Now you're going to try to move the IncCounter functionality from its own case into other cases so the IncCounter case can be removed. This takes some judgement and requires understanding what you have implemented.

6. Find all the places where IncCounter is put on the queue. Figure out what case immediately precedes IncCounter. Let's call it GetNextWidget. (If IncCounter is the first item put on the queue in response to a user input, skip to 8. IncCounter is a public message and the case cannot be removed.)

6a. If GetNextWidget appears multiple times in the execution flow diagram and it is not always immediately followed by IncCounter, skip to 7. There's additional branching logic that needs to be figured out.

6b. If GetNextWidget appears only once in your execution flow diagram, place IncCounter.vi in the GetNextWidget case so it is the last action before exiting the case. Remove IncCounter from the list of items placed on the queue.

6c. If GetNextWidget appears multiple times in the execution flow diagram, but it is always followed immediately by IncCounter, place IncCounter.vi in the GetNextWidget case so it is the last action before exiting the case. Remove IncCounter from the list of items placed on the queue. (If you've already created a sub VI for GetNextWidget, put IncCounter.vi in GetNextWidget.vi.)

7. Repeat step 6 for every case just before IncCounter on your execution flow diagram.

8. If every iteration through 6 resulted in 6b or 6c (in other words, if you are no longer enqueuing IncCounter anywhere) you can delete the IncCounter case.

9. Repeat steps 5-7 for every duplicate case in your diagram.

10. Update your execution flow diagram.

-------------

To be honest, I suspect you'll get stuck on step 2. That's where I got stuck when I struggled with QSMs. QSMs are good at implementing flow charts, and flow charts are good ways to model QSMs. The problem is that flow charts are useless for modelling the modern event-based behavior users expect, because flow charts have no concept of interrupts. If you can't model it using a flow chart, you shouldn't implement it using a QSM.
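The refactoring above amounts to making private steps uncallable from outside. In a "private sequence" design, internal steps are ordinary subroutine calls (sub VIs) that the outside world cannot enqueue, so only genuinely public messages ever appear on the queue. A Python sketch of the idea, with the case names from Daklu's example (the state-cluster shape is invented for illustration):

```python
import queue

# Private "sub VI" steps are plain functions, never queue messages, so no
# external loop can inject them mid-sequence.
def inc_counter(state):
    state["count"] = state.get("count", 0) + 1

def get_next_widget(state):
    state["widget"] = state.get("widget", 0) + 1
    inc_counter(state)          # private step: called directly, not enqueued
    return state

def run_qsm(msgs):
    """Drain the queue; only public messages are dispatched, everything
    else (including attempts to enqueue private steps) is ignored."""
    state = {}
    public_handlers = {"GetNextWidget": get_next_widget}
    while not msgs.empty():
        msg = msgs.get()
        if msg in public_handlers:
            public_handlers[msg](state)
    return state
```

Here an external loop that maliciously (or accidentally) enqueues "IncCounter" has no effect: the counter is only ever incremented as the last action of GetNextWidget, exactly as step 6b/6c prescribes.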
Kas Posted September 20, 2012 Author Report Share Posted September 20, 2012 Daklu, if you decide not to go ahead with this, believe me I understand. I'm putting myself in your shoes and I'm in two minds myself about whether I would've helped at this level of detail (especially after I looked at the flow chart). The flow chart is just "hairy". I've attached the "yEd" flow chart because as an image it's too big. The only thing I haven't implemented is the "STOP" UI button, which would just flush the queue and send a "Stand By" command to the Orbit instrument, no matter where the execution process is in the third loop. This acts as a "soft emergency" procedure if the user sees that the motion is becoming dangerous (i.e. cables stuck on the rotation table, etc.). This project could be implemented using the JKI template, but the "STOP" button would have to be monitored regularly, since there's no way of coming out of a macro sequence, even if a User Event is triggered. Furthermore, this software will have other capabilities (not yet implemented), i.e. plot, compare, calculate and manipulate previously measured data from the database (I call it a database but it's only going to be a common folder on the hard drive), and this is independent of whatever the measurement loop (3rd loop) is doing. Which is why I opted for a producer/consumer QSM-style template as the backbone for this project. Seeing how easily this can turn into a disaster (if it already isn't one), this type of template might not really be the best way forward. Attached is also the 2nd flow chart, where I tried to simply show the steps that the software should ACTUALLY go through. Since all of the communication protocols/settings and any other processes required are already done, it might be best to simply "abandon ship" and use a different programming style that could be extrapolated from the 2nd flow chart (using it as a guide). Every time I think of a different programming structure, I come back to QSMs and macros.
Since I've used them for nearly everything till now, I just cannot think of any other way. It's as though it has a hold on me and I cannot choose. In terms of knowledge, I pretty much understand what most of the functions in LabVIEW are, i.e. semaphores, calling VIs dynamically, the queue system, triggering events dynamically or via signalling (i.e. the Value (Signaling) property), sub panels, etc. - effectively the basic day-to-day operations that LabVIEW has. I believe that so long as the code doesn't end up in OOP style, I should be able to follow. Thanks Kas P.S. Every time I load the "Main - GUI.vi" it keeps asking for "dotNET ChartXControl.xctl" (used to be a dotNET graph), but I no longer use that control and I cannot find it on the FP to delete. The program will still work if you simply ignore this. Quote Link to comment
Daklu Posted September 20, 2012 Report Share Posted September 20, 2012 Daklu, if you decide not to go ahead with this, believe me I understand. I'm putting my self in your shoes and I'm in two minds myself if I would've helped or not in this detailed level (specially after I looked at the flow chart). I'll help as much as I can, but to be honest I doubt it will be as much as either of us would like. I look to be pretty busy over the next week and a half. The flow chart is just "Hairy". It's big, but it's actually not bad. I've seen (and helped create) far less organized QSMs. The only thing I haven't implemented is the "STOP" UI button, where it would just flush the queue and send a "Stand By" command to the Orbit instrument, no matter where the execution process is on the third loop. There's no way to implement a reliable STOP function with the queue manipulations you are doing. Even if you flush the queue and put a Stop message on it, your QSM loop could put other messages in front of the Stop message. A quick scan through loop 3 shows a lot of places where you are putting messages on the front of the queue. If you want to use a Stop interrupt, the only way to make sure it will be the next message processed is by putting all your other messages on the rear of the queue. Once you put any other message on the front of the queue you've lost that guarantee. This project could be implemented using JKI Template, but the "STOP" button would have to be monitored regularly, since there's no way of coming out of a macro sequence, even if a User Event is triggered. Yep, the JKI template doesn't support interrupts. If you think about it for a bit you'll realize neither does dataflow programming. Public sequence QSMs pretend interrupts exist by frequently checking the queue to see if an "interrupt" occurred. Functionally it's not much different from you checking a Stop button local variable regularly. 
In fact, checking the local variable is probably safer, since the interrupt will only occur when the button is clicked instead of for any arbitrary message put on the front of the queue. I'd really prefer to explain how to break down your application into message handlers, state machines, continuous loops, and metronomes. I'd like to show you how to combine these different kinds of loops to build up the functionality you need. Unfortunately I haven't figured out how to explain it to beginning and intermediate level programmers yet. It's not that the ideas are too difficult to understand; I just haven't been able to figure out exactly how much information they need to understand how to get started using them. Since the goal is to get you going on a path you're comfortable with, I'm going to recommend switching back to a private sequence QSM. You can use JKI's template as long as the front panel events are not handled in the same loop as all the business logic. You can implement interrupts using something like this: [WARNING - I have never implemented this nor have I analyzed it much. There might be bad effects I haven't thought of.] I'm off to bed. But first, in the flow chart showing the desired behavior you have several "Interrupt?" decision points that feed into "Read from VNA." Are these interrupts from the UI or something else? And what happens after "Read from VNA?" Quote Link to comment
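Daklu's point about the Stop guarantee can be sketched with a Python deque standing in for the LabVIEW queue. The message names are illustrative; the only thing the sketch demonstrates is his rule: a Stop enqueued at the front is only guaranteed to be processed next if every other message goes on the rear:

```python
from collections import deque

q = deque()

def enqueue(msg):
    # All normal messages go on the REAR of the queue...
    q.append(msg)

def enqueue_stop():
    # ...so a Stop placed on the FRONT is guaranteed to be processed next.
    q.appendleft("Stop")

enqueue("GetNextWidget")
enqueue("IncCounter")
enqueue_stop()
next_msg = q.popleft()  # "Stop" is next only because no handler ever
                        # used appendleft() for anything else
```

The moment any handler starts pushing its own messages with `appendleft()`, another message can land in front of the Stop and the guarantee is gone, which is exactly the failure mode described above.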
drjdpowell Posted September 20, 2012 Report Share Posted September 20, 2012 Yep, the JKI template doesn't support interrupts. If all one wants is “abort” functionality then that is easily addable to a JKI template. One has to use a separate method of receiving the abort command (such as the notifier in Daklu’s example, or just a terminal to poll). I have in the past used something like this, with three extra cases added to the template:

1) “Check for Abort”, which checks whatever you use as an abort signal and, if true, calls “Abort”.
2) “Abort”, which if called takes the sequence queue (actually a string) and throws away all text up to the word “Jump_Here_On_Abort:” (throwing away everything if that word is not present).
3) “Jump_Here_On_Abort:”..“Jump_Here_On_Abort:~”, which doesn’t actually contain any code; it’s just a marker used by “Abort”.

Note that any statement placed after Jump_Here_On_Abort: will execute only if “Abort” is called. Then you write macros where you explicitly check for abort, at places in the sequence where you are sure it is OK to abort. For example:

Macro: Ready equipment
Check for Abort
Macro: Step one
Check for Abort
Macro: Step two
Check for Abort
Macro: Step three
Jump_Here_On_Abort:
Macro: Execute this only on abort
Macro: Equipment to Standby

One can add similar functionality to an all-subVI design by using a “Check for Abort” subVI that throws an error on abort, with Jump_Here_On_Abort: replaced by your error-handling code. — James Quote Link to comment
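James's marker scheme translates naturally to text processing, since the JKI sequence queue is actually a string. A minimal Python sketch (the marker text is from his post; the `abort` function and sample sequence are illustrative assumptions about how the string is laid out):

```python
MARKER = "Jump_Here_On_Abort:"

def abort(sequence: str) -> str:
    # Throw away all text up to the marker; if the marker is absent,
    # throw everything away.
    i = sequence.find(MARKER)
    return sequence[i + len(MARKER):].lstrip() if i != -1 else ""

seq = ("Macro: Ready equipment\n"
       "Check for Abort\n"
       "Macro: Step one\n"
       + MARKER + "\n"
       "Macro: Execute this only on abort\n"
       "Macro: Equipment to Standby")

remaining = abort(seq)  # only the post-marker cleanup steps survive
```

Because everything before the marker is discarded, the cleanup macros after `Jump_Here_On_Abort:` are the only statements left to run, which is the behavior James describes.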
PaulL Posted September 20, 2012 Report Share Posted September 20, 2012 It sounds like you guys are at least part of the way there. I haven't needed that kind of functionality yet, but I'm curious what it would look like in LabVIEW. Do you use hierarchical messaging throughout your application, or do you have abstraction layers or subsets of components that use direct messaging? Publish-subscribe implies to me dynamically registering for messages at runtime, i.e., each component sends a message to another component saying, "add me to the list of receivers when you send message m." Is that how you are using it? (I know we've talked about it before, but it's so hard to keep everyone's interpretations straight.) Well, we have a completely working system and we are fully confident in the architecture. (We build our components on a completely functional template we developed.) Yes, each component dynamically registers for events. This effectively means each component has a list of shared variables to which it wants to subscribe and calls a method to establish subscription. The shared variable engine really handles all the connections. I think our component interactions are a lot simpler than that (because of very careful design upfront). Let me see if I can explain. A component controller subscribes to certain inputs to which it responds (we call this SubData), and it publishes other data (PubData). When we develop a component most SubData comes from the user interface for that component, but once in the system those messages can come from its parent or even a cousin. There is a sense of hierarchy in the system design (not the messaging per se) in that, for instance, the top-level component can send an enable signal to its immediate children, who on their transition to enabled send an enable signal to their children, and so on, so that we have a cascade effect.
We very carefully designed our system so that components only need to know what they absolutely need to know about each other. The messaging as we have it is quite straightforward once we have identified the signals each component needs to send and receive. Quote Link to comment
PaulL Posted September 20, 2012 Report Share Posted September 20, 2012 Thinking about this a little more: A component really only "talks to" the shared variable engine, never another component. Which components send and receive which messages (the communications map) is part of our system design. Quote Link to comment
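PaulL's arrangement, where components register subscriptions with a central engine and never talk to each other directly, amounts to a publish/subscribe broker. A minimal Python sketch of the idea (the `Engine` class and the "Enable" signal name are my illustrations, not NI's shared variable engine API):

```python
from collections import defaultdict

class Engine:
    """Stand-in for the shared variable engine: it owns all the
    connections; components only ever talk to the engine."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, name, callback):
        # A component registers for the SubData it responds to.
        self._subs[name].append(callback)

    def publish(self, name, value):
        # A component publishes its PubData; the engine routes it
        # to every registered subscriber.
        for cb in self._subs[name]:
            cb(value)

engine = Engine()
child_log = []
engine.subscribe("Enable", child_log.append)  # child registers at runtime
engine.publish("Enable", True)                # parent publishes; cascade begins
```

Note how the communications map lives entirely in the subscribe calls, matching PaulL's point that which components send and receive which messages is a system design decision, not something components know about each other.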
Kas Posted September 23, 2012 Author Report Share Posted September 23, 2012 (edited) But first, in the flow chart showing the desired behavior you have several "Interrupt?" decision points that feed into "Read from VNA." Are these interrupts from the UI or something else? And what happens after "Read from VNA?" Apologies for not replying earlier. I was a little tied up with something else. As for the question, the interrupts are not from the UI. They are instrument (Orbit FR) interrupts. For example, if I set a scan from 30 degrees to 90 degrees, and I also set the interrupt to 1 degree, then the instrument will send an interrupt through the GPIB every 1 degree as the motor moves from 30 through to 90 degrees. The format of these interrupts is a purely numeric string. Out of all of the replies I get from the instrument/controller, the interrupt is the only one made up of digits alone; the others are letters and numbers combined, or just letters, or binary (status response). Which is why, in "Orbit: Send & Recieve", I check if the reply contains only numbers, and if so, I process the interrupt, and I also know that I should read the VNA results as well. This interrupt acts like a control to let me know when I should read from the VNA, and it is sent by the controller itself. During motion, I continuously interrogate the controller for position, speed, status etc. If, however, the controller's increment was triggered (i.e. the interrupt) and the reply happened to be in numeric format, I then read the VNA. However, after I process the interrupt, the reply that I SHOULD HAVE received instead of the interrupt is still waiting to be read from the controller's buffer, which is why, in "Orbit Param: Proc. incremet", I go back to "Orbit: Send & Recieve" to make sure that I read whatever else is left (i.e. position, velocity etc.). The graph is also amended (attached). Basically, every time I read an interrupt, the sequence takes a "detour" (i.e.
reads the VNA, updates the graph, etc.) and then comes back to doing what it was supposed to do before the interrupt was detected (i.e. goes back to the original sequence). As for the UI, the user no longer needs to do anything during the measurements. The measurements themselves can take hours, maybe even days for accurate readings, and only in emergency situations would the user have to press the stop button to halt the operations. So, for a normal operation, once the user presses the "start" button and gives a measurement name, the rest is pretty much automatic. As for the template that you've shown, I have a concern because even after the user has pressed the emergency stop, the sequences on the left "QSM functions" will still continue to be executed, since the QSM Functions queue and the emergency stop queue are both independent of each other. If all one wants is “abort” functionality then that is easily addable to a JKI template. One has to use a separate method of receiving the abort command (such as the notifier in Daklu’s example, or just a terminal to poll). With JKI in the past, this was the way I implemented it. However, I now need to be able to do control, data acquisition and other UI responses at the same time, where one does not interfere with the other. Basically, the abort button is not the only thing that I would have to implement; however, in the attached project, this is the only thing I have so far implemented. The analysis section that I have to implement takes a bit of time, and I wanted to make sure I have some sort of a template in place before I continue with the rest. Effectively, the user can go and do other things with the software WHILE the measurements are still going on. Hence the multi-loop QSM idea. Flow Chart - Basic Idea.zip Edited September 23, 2012 by Kas Quote Link to comment
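Kas's digits-only test for distinguishing Orbit interrupts from normal replies, and the "detour then re-read" behavior, can be sketched in Python. The function names and sample replies are illustrative; the classification rule (interrupts are the only purely numeric replies) is from his description:

```python
def is_interrupt(reply: str) -> bool:
    # Orbit FR interrupts are the only replies made of digits alone;
    # position/velocity/status replies contain letters as well.
    return reply.isdigit()

def process_reply(reply: str, log: list) -> None:
    if is_interrupt(reply):
        # Detour: read the VNA, update the graph, ...
        log.append("read VNA")
        # ...then go back to Send & Receive: the reply we SHOULD HAVE
        # received is still waiting in the controller's buffer.
        log.append("re-read controller")
    else:
        log.append("normal reply: " + reply)

log = []
process_reply("31", log)       # digits only -> interrupt, take the detour
process_reply("P31.5V2", log)  # mixed characters -> ordinary status reply
```

One caveat worth noting: this scheme relies on the instrument never producing a purely numeric non-interrupt reply, which matches what Kas says about the Orbit protocol but would not generalize to other instruments.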
Kas Posted September 23, 2012 Author Report Share Posted September 23, 2012 Basically, every time I read an interrupt, the sequence takes a "detour" (i.e. reads VNA, updates the graph etc) and then comes back to doing what it was supposed to do before the interrupt was detected (i.e. go to the original sequence). Which is also why in situations like these I have to add sequences/macros at the front of the queue rather than at the back. Kas Quote Link to comment
Daklu Posted September 24, 2012 Report Share Posted September 24, 2012 Apologies for not replying earlier. I was a little tied up with something else. Just a quick note to let you know it will be at least another week before I'll be able to look at your stuff again. Hopefully others will be able to give you some guidance in the meantime. Effectively, the user can go and do other things with the software WHILE the measurements are still going on. Hence the multi-loop QSM idea. You are correct that you need multiple loops. How many do you think you'll need? However many tasks you want to execute in parallel is the absolute minimum number of loops you'll need. I always have additional loops handling messages and coordinating the actions of subsystems to create the high-level behavior I want. My last medium-sized application had ~a dozen independent loops, and I don't think that's all that much. You need to learn how to create loops that are robust and self-deterministic. By that I mean the loop will always behave in a predictable and correct way, regardless of what message it receives from an external entity. Instead of just thinking of loops as a string of functions that execute in parallel, think of each loop as an independent entity with its own data space. Design your loops around a piece of functionality and/or data the loop provides to the rest of the application. As for the template that you've shown, I have a concern because even after the user has pressed the emergency stop, the sequences on the left "QSM functions" will still continue to be executed, since the QSM Functions queue and the emergency stop queue are both independent of each other. No they won't. The sequence is contained in the string array, and the string array is emptied when the EStop notifier is processed. The message queue may still have messages waiting to be processed, but if EStop halts everything and shuts down the application those messages will never be read.
If you're not planning on shutting down the app in response to an EStop, and are (rightfully) concerned messages on the queue will be processed when you don't want them to be, you need to implement some sort of message filtering in the receiver. I do that with the behavioral state machine I showed in an earlier post. There are probably other ways to implement it as well. Quote Link to comment
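The message filtering Daklu mentions, where the receiver discards stale queued messages after an EStop instead of processing them, might look like this in Python. The state and message names are made up for illustration; the pattern is simply a handler that consults its current state before acting on a message:

```python
state = "Running"

def handle(msg: str) -> str:
    global state
    if state == "EStopped" and msg != "Reset":
        return "filtered"        # stale messages are discarded after EStop
    if msg == "EStop":
        state = "EStopped"
        return "estop handled"
    if msg == "Reset":
        state = "Running"
        return "reset"
    return "handled: " + msg

# Messages already on the queue when EStop arrives are simply filtered out.
results = [handle(m) for m in ["Measure", "EStop", "Measure", "Reset", "Measure"]]
```

This is the essence of a behavioral state machine as a message receiver: the same message produces different outcomes depending on the loop's current state, so leftover queue contents can do no harm.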
Kas Posted September 25, 2012 Author Report Share Posted September 25, 2012 Thanks Daklu. In the meantime, I'll continue to update the software in general and see how I can move it to a friendlier multi-loop interface, namely the one you proposed or the one that ships with LabVIEW 2012. I'll also try to minimize the cases inside the QSM with sub VIs as you've suggested (as carefully as I can). Regards Kas Quote Link to comment
WC Leong Posted February 27, 2013 Report Share Posted February 27, 2013 Hi all, Attached is a basic construction for a Fuzzy Logic Controller, along with the data I collected from the interface and processed using the Fuzzy System Designer. Does anyone know how to attach the collected data to the Fuzzy Logic block? Please don't hesitate to suggest an easier or simpler solution regarding this matter. Thanks. Best regards, WC Leong. Quote Link to comment
Yair Posted February 27, 2013 Report Share Posted February 27, 2013 Duplicate - http://lavag.org/topic/16614-fuzzy-logic-controller/ WC, keep your question in that thread. It has nothing to do with these other threads. Quote Link to comment