Popular Content

Showing content with the highest reputation on 06/11/2012 in all areas

  1. When developing apps with multiple parallel processes, many developers prefer messaging systems over reference-based data. One of the difficulties in messaging applications is how to handle request-reply sequences between two loops. Broadly speaking, there are two approaches: synchronous messages and asynchronous messages.

In some ways synchronous messaging is easier to understand. The sender asks the receiver for a piece of data and waits for a response before continuing. The downside is that the sender isn't processing new messages while it's waiting. Important messages will go unnoticed until the sender receives the data or gets tired of waiting. It can also be hard to verify correctness in a synchronous messaging system. Cross-loop execution dependencies may hide deadlocks in your code, and they are very hard to identify.

Asynchronous messaging sidesteps the issue of deadlocks. Each loop executes on its own schedule, so there's no danger of multiple loops waiting on each other. The drawback? There are a lot more messages to handle, and the bookkeeping for sequences of data exchanges can get messy. For example, if a data consumer, C, needs data from a data producer, P, it might send a GetData message and then continue servicing messages as they arrive. When P processes the GetData message, it in turn sends a HereIsData message back to C. C then not only needs to implement a HereIsData message handler, but also needs to keep track of the reason why it wanted the data in the first place so it can continue its previous task--there is more state to manage. Since synchronous messaging processes the returned data inline with the request, it doesn't have this problem.

Often I eliminate the need for additional state management by having P broadcast new data to all C's every time it changes. That ensures each C has the latest data and can use its internal copy inline instead of requesting it and waiting for a response. Sometimes, though, that is not a very good solution.
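The GetData/HereIsData exchange described above can be sketched as a rough Python analogue, with queues standing in for the loops' message queues. All names here (GetData, HereIsData, pending_reason) are illustrative, not part of any real API:

```python
# Sketch of the asynchronous request-reply pattern: C must remember
# *why* it asked for the data so it can resume later (extra state).
import queue
import threading

p_inbox = queue.Queue()   # messages destined for the producer P
c_inbox = queue.Queue()   # messages destined for the consumer C

def producer():
    msg = p_inbox.get()
    if msg[0] == "GetData":
        c_inbox.put(("HereIsData", 42))   # reply asynchronously

threading.Thread(target=producer).start()

# C sends the request, then must remember why it asked -- extra state.
pending_reason = "update_display"
p_inbox.put(("GetData",))

# Later, C services its own inbox and resumes the deferred task.
kind, data = c_inbox.get()
if kind == "HereIsData":
    print(f"resuming '{pending_reason}' with data {data}")
```

The `pending_reason` variable is the "more state to manage": with several outstanding requests it grows into a lookup table keyed by request.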
Consider the case of a high-output P and a C that sometimes, but rarely, needs the data. That's exactly what I ran into recently with a motion control system where the motor positions are updated every 10 ms or so but the UI needs to know position data maybe 5 times every 30 minutes. Continuously broadcasting position data to the UI seemed like a waste of resources.

Where am I going with all this? A little while ago I ran across a concept called "futures." A future is essentially a promise to supply needed data at some point in the future when it is not available at that instant. The future doesn't have the data now, but it will when it is redeemed. Rather than broadcasting thousands of unnecessary messages or creating lots of extra states to manage, I used futures to get the readability of synchronous messaging while (mostly) maintaining the natural parallelism of asynchronous messaging.

I don't have time to code up an example right now, so let me try to describe it. (If there's interest I'll try to post an example later.) The process flow I used for my futures is similar to this:

1. UI determines it needs position data from the motion controller (MC).
2. UI creates a future and keeps one copy for itself, sending the other copy to the MC as asynchronous message data.
3. UI continues processing, not caring about the specific data right now but trusting it will be there when needed.
4. MC eventually processes the message and "fills" the future.
5. UI gets to the point in its execution where it needs the data, so it "redeems" its copy of the future and obtains the data filled by MC.

Compare that with synchronous messaging and the difference becomes clear:

1. UI determines it needs position data from the motion controller (MC).
2. UI requests position data from MC.
3. UI sleeps while waiting for a response.
4. MC eventually processes the message and sends a response.
5. UI continues processing, confident it has the data it needs for future processing.
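The five-step futures flow can be sketched in Python, with a single-slot queue standing in for a LabVIEW notifier. The names (GetPosition, motion_controller) and the position values are illustrative:

```python
# Minimal sketch of the futures flow: UI creates the future, sends it
# to MC inside the request message, and redeems it only when needed.
import queue
import threading

def make_future():
    return queue.Queue(maxsize=1)   # one "fill", one "redeem"

mc_inbox = queue.Queue()

def motion_controller():
    # Step 4: MC eventually processes the message and fills the future.
    kind, future = mc_inbox.get()
    if kind == "GetPosition":
        future.put({"x": 1.5, "y": 2.5})

threading.Thread(target=motion_controller).start()

# Steps 1-2: UI creates the future and sends a copy to MC.
pos_future = make_future()
mc_inbox.put(("GetPosition", pos_future))

# Step 3: UI keeps servicing its own work here without blocking.
# Step 5: when the data is actually needed, redeem the future.
position = pos_future.get()   # blocks only if MC hasn't filled it yet
```

Note that the UI carries no "why" state between steps 2 and 5; the pending work simply continues in-line once the future is redeemed.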
I don't claim this idea as my own, or even that it's new. I implemented the future using a notifier. In fact, under the hood a future looks a lot like a synchronous message. The important difference is where the waiting takes place. Synchronous messaging forces the sender to wait for a response to the message before continuing. Futures give the sender more control over its own execution: it can redeem the future immediately and behave like a synchronous message, or it can redeem it in the future and continue doing other stuff.

It turns out I unknowingly implemented futures about a year ago as a way to have synchronous messaging with LapDog. At the time I was focused on obtaining the response before continuing, so it never occurred to me to defer reading the notifier until the data was actually used.

I've just started using this idea, so I don't really know where it'll lead. I don't think it's a "safe" replacement for synchronous messaging; there is still the danger of deadlocks if futures are used extensively. I think they're better used as a lightweight request-response mechanism when implementing a new state is too heavy and broadcasting is too resource intensive.
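The "where the waiting takes place" point can be made concrete with a minimal notifier-style future. This is a hypothetical sketch (class and method names are mine, not LabVIEW's): redeeming right away behaves like a synchronous message, while deferring the redeem preserves the asynchronous flow.

```python
# A one-shot, notifier-style future: filled once by the supplier,
# redeemed whenever the holder chooses.
import threading

class Notifier:
    def __init__(self):
        self._filled = threading.Event()
        self._value = None

    def fill(self, value):
        # Supplier side: store the data and wake any waiting redeemer.
        self._value = value
        self._filled.set()

    def redeem(self, timeout=None):
        # Holder side: call immediately for synchronous behavior,
        # or defer the call to keep working asynchronously.
        if not self._filled.wait(timeout):
            raise TimeoutError("future was never filled")
        return self._value
```

Using a timeout on `redeem` gives the holder the "gets tired of waiting" escape hatch that a bare synchronous wait lacks.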
  2. This thread finally made it to the front of my queue of "topics to dig into". Let's take the basic idea that a future is implemented using a notifier. Needy Process is the process that needs information from another process; Supplier Process is the process supplying that information. I am choosing these terms to avoid conflict with producer/consumer terminology, especially since the traditional producer loop could be the needy loop in some cases.

First I want to highlight one variation of asynchronous messages, a particular style of doing the asynchronous process that Daklu describes in his first post. If Needy Process is going to get information from Supplier Process using asynchronous messages, it might do this:

1. Needy creates a message to send to Supplier that includes a description of the data needed and a block of data we'll call "Why" for now.
2. Supplier receives the message. It creates a new message to send to Needy. That message includes the requested data and a copy of the Why block.
3. Needy receives the message. The Why block's purpose now becomes clear: it is all the information Needy had, at the moment it made the request, about why it was making the request and what it needed to do next. Needy now takes that block in combination with the information received from Supplier and does whatever it was wanting to do originally.

There's nothing revolutionary about those steps -- please don't take this as me trying to introduce a new concept (especially not to Daklu, who knows this stuff well). I'm highlighting this pattern because it shifts responsibility for storing the state data from the Needy Process' own state to the state of the message class. This technique can dramatically simplify the state data storage problem because Needy no longer needs to store an array of "Why" blocks and figure out some sort of lookup ID to match each response from Supplier with its task.
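The Why-block variation above can be sketched the same way as before; here the request message itself carries the continuation context, so Needy stores no lookup table. All names and the payloads are illustrative:

```python
# Sketch of the "Why" block pattern: state rides along in the message
# instead of living in Needy's own state data.
import queue
import threading

supplier_inbox = queue.Ueue() if False else queue.Queue()
needy_inbox = queue.Queue()

def supplier():
    request, why = supplier_inbox.get()
    # Reply includes the requested data plus the untouched Why block.
    needy_inbox.put((f"data for {request}", why))

threading.Thread(target=supplier).start()

# Needy packs everything it knows about its intent into the message.
supplier_inbox.put(("motor position", {"next_step": "refresh plot"}))

# When the reply arrives, the Why block says how to resume -- Needy
# kept no array of pending requests and no lookup IDs.
data, why = needy_inbox.get()
```

With several simultaneous requests, each reply still carries its own Why block, so no matching logic is needed on Needy's side.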
It also means that most of the time, Needy isn't carrying around all that extra state data during those times when it isn't actively requesting information from Supplier.

Why is this variation of interest when thinking about futures? I'm OK with the general concept of futures -- indeed, without actually naming them as such, I've used variations on this theme. But I do want to highlight some details that I think are noteworthy.

Do futures really avoid saving state when compared to asynchronous messages? I will agree that the *type* of the state information that must be stored is different, but not necessarily the quantity or complexity. Needy Process creates a notifier and sends that notifier to Supplier Process. And then Needy Process has to hold onto the Notifier refnum. That's state data right there. That four-byte number has to be stored as part of Needy Process, whether it is in the shift register of the loop itself or stored in some magic variable. If there are multiple simultaneous requests to Supplier for different bits of information, then it becomes an array of Notifier refnums.

In the original post, Needy is described as "knowing that it will eventually need information". But something still has to trigger it to actually try to use that information. In both of Daklu's posts, there is a secondary *something* that triggers that data to be used. In one, it is the five-second timeout that says, "OK, it's a good time for me to get that data." In the other, it is an event, "MeanCalculated", that fires. Both of those event systems have state overhead. It is state behind the scenes of LabVIEW, which does mean you, as a programmer, do not have to write code to store it, but it is there.

Finally, be careful that these futures do not turn into polling loops.
It would be very easy to imagine Needy creating the notifier, sending it to Supplier, then going off to do something, coming back, checking the notifier with a timeout of zero milliseconds to see "is it ready yet?", and then rushing off to do some other job if it isn't ready. If you have to introduce a new state just to check the notifier, you're on a dark, dark path. And I've seen this happen in code. In fact, it happens easily.

The whole point of futures is that Needy *knows* it will need this data shortly. So it sends the request, then does as much work as it can, but eventually it comes around to the point where it needs that data. What happens when Needy gets to the Wait For Notification primitive and the data isn't ready yet? It waits. And right there you have defeated much of the purpose of the rest of your asynchronous system.

Now, you can say, "Well, I got all the work I knew about done in the meantime, and this process doesn't get instructions from the outside world, so if it waits a bit, I've still done everything I could in the meantime." But there is one message, one key message, that you can never know whether it is coming or not: Stop. The instruction to Stop will not wake up the Wait For Notification primitive. Stop will sit in Needy's message queue, waiting to be processed, but it gets ignored because Needy is waiting on a notifier.

Is that a crisis? It depends on the application. Certainly it can lead to a sluggish UI shutdown. If you want an example of that bad behavior, come August, take a look at the new shipping example I've put into LabVIEW 2012. The user hits the stop button and the app can hang for a full second because of one wait instruction deep in one part of the code. I've thought about refactoring it, but it makes a nice talking point for an example application.

So, in my opinion, this concept of futures is a good concept to have in one's mental toolbox, but one that should be deployed cautiously.
I'd put it on the list of Things We Use Sparingly as less common than Sequence Structures but more common than global variables.
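The shutdown hazard described above can be reproduced in miniature. In this hypothetical sketch, a Stop message is already sitting in Needy's inbox, but Needy is blocked redeeming the future and cannot service it until the supplier finally fills the data:

```python
# Sketch of the Stop hazard: redeeming a slow future blocks the loop
# that should be handling the Stop message.
import queue
import threading
import time

future = queue.Queue(maxsize=1)   # single-slot queue as the "notifier"
needy_inbox = queue.Queue()

needy_inbox.put("Stop")           # Stop arrives first...

# ...but the supplier won't fill the future for another half second.
threading.Timer(0.5, lambda: future.put("position")).start()

start = time.monotonic()
data = future.get()               # Needy is stuck here the whole time
blocked_for = time.monotonic() - start

# Only now does Needy get around to its inbox and discover the Stop --
# a sluggish-shutdown in miniature.
stop_msg = needy_inbox.get_nowait()
```

Redeeming with a timeout (and re-checking the inbox on timeout) mitigates this, at the cost of edging back toward the polling loop warned about above.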