
Futures - An alternative to synchronous messaging



Due to popular demand (**snicker**) I put together a (very) quick example illustrating how and why I used a future. No time for a detailed explanation beyond what I posted above, but I will say this...

In the examples the UI loop retains an array of futures and then sends them all back to the producer loop so it can redeem them and perform a calculation on the values. This is what my app required, but I suspect the "normal" use case would be for the UI loop to redeem the futures.

[attached image]

What's the recommended action when the consumer reaches the point of needing the information, but the producer hasn't gotten around to filling the future yet? Do you end up having to detect stale data?

Futures, at least the way I implemented them, don't have stale data. It's a one shot notifier, so the data is either there or it isn't. I put a timeout input on Future.Destroy for just those situations. (Not shown in the example.) In general I think it's up to the developer to be reasonably certain the data will be there when needed (while putting in proper error handling code just in case it isn't.)

It turned out in my case I didn't have to worry about the data not being there. The UI loop requested all the futures representing the MC data, then sent them back to the MC for processing. Since all the initial future requests preceded sending the array of futures for calculation, by definition all the future requests will be serviced before the calculation request.

FutureExample.zip


Here's another question - in your example you wanted to have the updated position data, but if I understand your implementation correctly, the future is only updated once (when the MC loop processes the request), so if the UI loop only gets to process the response a minute later, it will get data which is a minute old instead of data which is current. Does that mean that this type of architecture can only be used where you're willing to accept that the response may be old?


For fun, I whipped up a version of a "Future" in my Messaging Library. It takes my existing "Query" function and splits it between two VIs, connected by a "Future Reply" object. In the example below I have a process that takes a second per received message to reply. I start 10 queries, sending the "SendTimeString" message each time, then simulate doing something else for 2.5 sec, then I wait for the futures to be completed. This is similar to the way "Wait on Async Call" works.

[attached image]
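For readers without the LabVIEW attachment handy, here is a rough text-language analogue of that behaviour in Java (a sketch only; the class name, message text, and timings are illustrative, not part of the library described above):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureQueryDemo {
    public static void main(String[] args) throws Exception {
        // Single-threaded "message handler" standing in for the process that
        // needs about a second to reply to each received message.
        ExecutorService handler = Executors.newSingleThreadExecutor();

        // Fire off 10 queries; each submit returns a Future ("future reply") immediately.
        List<Future<String>> replies = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            final int id = i;
            replies.add(handler.submit(() -> {
                Thread.sleep(1000);              // simulate 1 s of work per message
                return "reply to query " + id;
            }));
        }

        Thread.sleep(2500);                      // simulate doing something else for 2.5 s

        // Redeem the futures: the first couple are already done, the rest block in turn.
        for (Future<String> reply : replies) {
            System.out.println(reply.get());
        }
        handler.shutdown();
    }
}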


Just as an aside, the new "Wait on Asynchronous Call" Node is another example of a type of "future".

Good point, the difference being it spawns a new execution path rather than linking into an existing one. (Well, not that call directly, but isn't it designed to be used with the new 'Run Without Waiting' function?)

Here's another question - in your example you wanted to have the updated position data, but if I understand your implementation correctly, the future is only updated once (when the MC loop processes the request), so if the UI loop only gets to process the response a minute later, it will get data which is a minute old instead of data which is current. Does that mean that this type of architecture can only be used where you're willing to accept that the response may be old?

To clarify, futures aren't an architecture and I'd actively discourage anyone from building an application primarily on futures. It's a concept that can be applied to solve specific difficulties in asynchronous systems. I'd describe my current project as having an "event-oriented messaging" architecture.

Your assessment is correct. If the UI waits a minute before redeeming the future the data will be a minute old. The same thing would happen if the UI loop got the data immediately but waited a minute before using it. Any time you make copies instead of using references to share data between parallel loops you run the risk of operating on "old" data. (Edit... ahh, now I understand what jzoller meant by 'stale' data. Duh.)

If you need absolutely-the-latest-data-available-at-this-exact-point-in-time, creating references as close to the data source as possible is the way to go. I don't think I've ever needed that level of instantaneousness. Because messages are routed through dedicated message-handling mediator loops, as a practical matter they propagate through the system instantly. Usually the end consumer also handles the message close enough to instantly as well. On rare occasions the end consumer may be engaged in a relatively long process and use "old" data while new data is waiting on the queue. There are several potential solutions I can think of:

1. You can replace the last step in the message chain with a DVR to get the time critical data to the data consumer. If I can't change the message handlers to make the consumer more responsive this is often my first step. The second or third time the loop needs the DVR is a clue it's time to refactor it away. A very simplified example looks like this:

[attached image]

2. You can set up a direct data pipe between the data producer and data consumer. I use the messaging system to send the producer and consumer each end of the pipe--usually a data-specific notifier or SEQ. Then the consumer queries the pipe when it needs the data instead of using an internal copy (a rough sketch of this idea follows the list below). I do this if I'm worried about a data stream loading down the messaging system.

3. Hmm... I seemed to have forgotten what it was while I was putting together the example...
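For readers more comfortable with a text language, here is a minimal sketch of option 2, using Java's AtomicReference as a rough stand-in for a DVR or single-element queue (class name, timings, and the simulated position update are all illustrative assumptions, not code from the posts above):

import java.util.concurrent.atomic.AtomicReference;

public class LatestValuePipe {
    // The "pipe": the producer keeps overwriting the latest value,
    // the consumer reads it only at the moment it actually needs the data.
    private static final AtomicReference<Double> latestPosition = new AtomicReference<>(0.0);

    public static void main(String[] args) throws InterruptedException {
        // Producer thread stands in for the data source publishing fresh readings.
        Thread producer = new Thread(() -> {
            double pos = 0.0;
            while (!Thread.currentThread().isInterrupted()) {
                latestPosition.set(pos += 0.001);       // simulated position update
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            }
        });
        producer.start();

        Thread.sleep(100);                               // consumer busy with other work

        // Consumer asks the pipe only when it needs the value, so it sees the
        // freshest reading rather than a possibly stale message copy.
        System.out.println("position now = " + latestPosition.get());
        producer.interrupt();
    }
}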


There's still something I don't understand about your original example - when does the UI loop send the request for the updated data? Since we can assume the MC loop will handle it "instantly", the request can't be made a long time before you need it, because you want updated data, but it also can't be made too late, because that's exactly what you wanted to avoid, so how do you know when to send the request? Where would it fit into the code? It's easy to request the data when the code starts running or when you actually need the data, but I'm having a hard time imagining a piece of code that says "I know I'm going to need the position data 5 seconds from now, so I'm going to ask for it now", since code is generally not aware of upcoming events or states unless you would add a special "pre-check" event/state/whatever which will be responsible only for this.


I'm having a hard time imagining a piece of code that says "I know I'm going to need the position data 5 seconds from now, so I'm going to ask for it now"

There are two variations to this scenario:

1. In 5 seconds I'm going to need to know where you are at that time.

2. In 5 seconds I'm going to need to know where you are now.

My current use case is the second scenario. Technically there's nothing preventing the producer from solving scenario 1 by continuously updating the notifier data until the consumer reads the future, but I don't know if that's their intended use.

Boring Details

In this app I need to place an overlay over a video image and align it with the part attached to the motion control system. The only way to do that is to have the user click on the screen to identify fiducials--key locations on the part whose locations relative to each other are known. By matching the location of the mouse click with the position of the mc at the instant the mouse was clicked and doing a bunch of coordinate transforms, I can determine the location on the mc system where the user clicked.

It turns out the fiducials I have to work with are not in a fixed position. They can be (and frequently are) out of position, so I have to capture 4 fiducials and do some averaging to minimize the error. I can't do the averaging until I have data from all the fiducials, but I also can't just save the location of the mouse clicks and send them all at once because the position of the mc system will have changed to bring other fiducials into the field of view. Each data point needs to contain both the mouse click location (in screen coordinates) and the mc position at the moment in time when the mouse was clicked.

Originally I had the UI sending each mouse click to the mc system. The mc system would do the coordinate transform and store each data point until it received a command to do the averaging. Then it would average the data and apply the overlay in the correct location. It worked, but I didn't like it because it had the mc system handling temporary user data--a UI responsibility. By using futures I was able to bring the responsibility for maintaining the fiducial data back to the UI component, simplify the messaging interactions, and remove chunks of mc code dedicated to keeping track of the UI state.

----------

One other thing... you might wonder why, if my messaging loops respond "instantly," I don't just use a regular asynchronous message exchange or a synchronous message. In truth I probably could have without a noticeable effect. However, UIs tend to receive a lot of update messages, and with screen refreshes and UI thread blocking my sense is there's a higher potential for the UI input queue getting backed up, especially as users ask for more functionality. I try to write my UIs so if the queue does get backed up there's no noticeable user impact. In general I'm less concerned about the UI loop's message response time. A 50 ms delay in a screen update is no big deal, but a 50 ms delay in getting the mc position is deadly when a 5 um error makes the system unusable.


So the answer to my question (from the UI's loop perspective) is "I send the request when the user clicks the image, but since I only use the result later, I can keep moving on". This simplifies the issue I had, because the system doesn't need to know something is going to happen in advance - the request is only sent after the user clicked the image.


So the answer to my question (from the UI's loop perspective) is "I send the request when the user clicks the image, but since I only use the result later, I can keep moving on".

Yep. Right now it's not clear to me how often I'll have these kinds of situations. Maybe it's a side effect of the way I think about the components, maybe it's a unique situation that won't arise very often. I guess time will tell.

  • 7 months later...

This thread finally made it to the front of my queue of "topics to dig into".

Let's take the basic idea that a future is implemented using a Notifier. Needy Process is the process that needs information from another process. Supplier Process is the process supplying that information. I am choosing these terms to avoid conflict with producer/consumer terminology, especially since the traditional producer loop could be the needy loop in some cases.

First I want to highlight one variation of asynchronous messages, a particular style of doing the asynchronous process that Daklu describes in his first post. If Needy Process is going to get information from Supplier Process using asynchronous messages, it might do this:

  1. Needy creates a message to send to Supplier that includes a description of the data needed and a block of data we'll call "Why" for now.
  2. Supplier receives the message. It creates a new message to send to Needy. That message includes the requested data and a copy of the Why block.
  3. Needy receives the message. The "Why" block's purpose now becomes clear: it is all the information that Needy had at the moment it made the request about why it was making the request and what it needed to do next. It now takes that block in combination with the information received from Supplier and does whatever it was wanting to do originally.

There's nothing revolutionary about those steps -- please don't take this as me trying to introduce a new concept (especially not to Daklu who knows this stuff well). I'm highlighting this pattern because it shifts responsibility for storing the state data from the Needy Process' own state to the state of the message class. This technique can dramatically simplify the state data storage problem because Needy no longer needs to store an array of "Why" blocks and maintain some sort of lookup ID to match each response from Supplier with its task. It also means that most of the time, Needy isn't carrying around all that extra state data during those times when it isn't actively requesting information from Supplier.
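As a concrete illustration of those three steps, here is a minimal Java sketch (my own, not anything from the posts above): the request carries its "Why" block and the reply simply echoes it back, so Needy keeps no lookup table of its own. Record syntax assumes Java 16 or later.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WhyBlockDemo {
    // The request carries a "Why" block: everything Needy will need in order to
    // finish the job once the answer comes back.
    record Request(String dataNeeded, String why) {}
    record Reply(String data, String why) {}

    public static void main(String[] args) throws Exception {
        BlockingQueue<Request> toSupplier = new LinkedBlockingQueue<>();
        BlockingQueue<Reply>   toNeedy    = new LinkedBlockingQueue<>();

        // Supplier: answers the request and echoes the Why block back untouched.
        Thread supplier = new Thread(() -> {
            try {
                Request r = toSupplier.take();
                toNeedy.put(new Reply("42.7 mm", r.why()));
            } catch (InterruptedException ignored) {}
        });
        supplier.start();

        // Needy: sends the request together with its context, then later handles
        // the reply using only what arrived in the message.
        toSupplier.put(new Request("current position", "apply overlay at click (312, 188)"));
        Reply reply = toNeedy.take();
        System.out.println("Got " + reply.data() + " in order to: " + reply.why());
        supplier.join();
    }
}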

Why is this variation of interest when thinking about futures? I'm ok with the general concept of futures ... indeed, without actually naming them as such, I've used variations on this theme. I do want to highlight some details that I think are noteworthy.

Do futures really avoid saving state when compared to asynch messages? I will agree that the *type* of the state information that must be stored is different, but not necessarily the quantity or complexity.

Needy Process creates a notifier and sends that notifier to Supplier Process. And then Needy Process has to hold onto the Notifier refnum. That's state data right there. That four byte number has to be stored as part of Needy Process, whether it is in the shift register of the loop itself or stored in some magic variable. If there are multiple simultaneous requests to Supplier for different bits of information, then it becomes an array of Notifier refnums.

In the original post, Needy is described as "knowing that it will eventually need information". But something still has to trigger it to actually try to use that information. In both of Daklu's posts, there is a secondary *something* that triggers that data to be used. In one, it is the five second timeout that says, "Ok, it's a good time for me to get that data." In the second, it is an event "MeanCalculated" that fires. Both of those event systems have state overhead. Now, it is state behind the scenes of LabVIEW, and that does mean you, as a programmer, do not have to write code to store that state, but it is there.

Finally, be careful that these futures do not turn into polling loops. It would be very easy to imagine Needy creates the Notifier, sends it to Supplier, and then goes and does something, comes back, checks the Notifier with a timeout of zero milliseconds to see "is it ready yet?" and then rushes off to do some other job if it isn't ready. If you have to introduce a new state to check the notifier, you're on a dark dark path. And I've seen this happen in code. In fact, it happens easily.

The whole point of futures is that Needy *knows* it will need this data shortly. So it sends the request, then it does as much work as it can, but eventually it comes around to the point where it needs that data. What happens when Needy gets to the Wait For Notifier primitive and the data isn't ready yet? It waits. And right then you have defeated much of the purpose of the rest of your asynchronous system. Now, you can say, "Well, I got all the work I knew about done in the meantime, and this process doesn't get instructions from the outside world, so if it waits a bit, I still have done everything I could in the meantime." But there is one message, one key message, that you can never know whether it is coming or not: Stop. The instruction to Stop will not wake up the Wait For Notification primitive. Stop will be sitting in Needy's message queue, waiting to be processed, but gets ignored because it is waiting on a notifier.

Crisis? Depends on the application. Certainly it can lead to a sluggish UI shutdown. If you want an example of that bad behavior, come August, take a look at the new shipping example I've put into LabVIEW 2012. User hits the stop button and the app can hang for a full second because of one wait instruction deep in one part of the code. I've thought about refactoring it, but it makes a nice talking point for an example application.

So, in my opinion, this concept of futures is a good concept to have in one's mental toolbox, but one that should be deployed cautiously. I'd put it on the list of Things We Use Sparingly as less common than Sequence Structures but more common than global variables.


Thanks, this is interesting.

What's the recommended action when the consumer reaches the point of needing the information, but the producer hasn't gotten around to filling the future yet? Do you end up having to detect stale data?

Joe Z.

I only know Futures from Java, where they are an integral part of the java.util.concurrent package. The established way in Java for this is a blocking get() method on the Future, with an optional timeout. The Future also has a cancel() method. Coding that up without potential race conditions is not trivial, however, and Sun certainly needed a few tries before getting it really working.
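A minimal sketch of that Java API, for readers who haven't seen it (the 200 ms / 500 ms values are arbitrary, chosen so the timeout path actually fires):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class JavaFutureBasics {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<String> future = pool.submit(() -> {
            Thread.sleep(500);                 // the supplier takes a while
            return "the data";
        });

        try {
            // Blocking get with a timeout: either the value arrives in time,
            // or we give up and cancel the outstanding work.
            String value = future.get(200, TimeUnit.MILLISECONDS);
            System.out.println("got: " + value);
        } catch (TimeoutException e) {
            future.cancel(true);               // interrupt the supplier if it is still running
            System.out.println("timed out, cancelled: " + future.isCancelled());
        }
        pool.shutdown();
    }
}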


Thanks, Daklu, for the interesting post! I really enjoyed reading about your problem & solution — same goes for AQ's response. About the latter, I have a small follow-up question:

Finally, be careful that these futures do not turn into polling loops. It would be very easy to imagine Needy creates the Notifier, sends it to Supplier, and then goes and does something, comes back, checks the Notifier with a timeout of zero milliseconds to see "is it ready yet?" and then rushes off to do some other job if it isn't ready. If you have to introduce a new state to check the notifier, you're on a dark dark path. [...]

Could you maybe elaborate on that? Why would polling the future's state every once in a while be such a Bad Thing? In my view, you're just pointing out a very fundamental problem of retrieving information from a parallel process, one that has nothing to do with futures in particular. It's the problem of having to wait for the requested info to come back, without locking up. That problem is universal to all approaches, right? Be it async messages, synchronous messages or futures.

So I'm a bit confused: what's especially bad about this use of Futures?

Thanks a lot :)

Onno


So I'm a bit confused: what's especially bad about this use of Futures?

With the asynch messaging, there is no polling. The process has one place where it waits for incoming messages. At some point, the asynch message "I have the data you asked for" arrives and the process can act on the delivered data. Until then, the process is asleep, pending a new message, and takes no CPU. Contrast this with the "polling for futures" case, which is "send request to other process, check for messages, if no messages, check future, if no future, check messages, repeat until either a new message or the future is available." The process never really goes to sleep. It is constantly burning CPU flipping back and forth between the two polls. Futures are a fine idea unless they lead to that fairly expensive polling loop.
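To make the contrast concrete, here is a small Java sketch (an illustration of the point, not anyone's production code; delayedExecutor needs Java 9+). The commented-out line is the polling anti-pattern; the version below it turns the future's completion into a message, so the process has exactly one blocking wait.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollingVsBlocking {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        CompletableFuture<String> future = new CompletableFuture<>();

        // Some other process eventually fills the future and sends an unrelated message.
        CompletableFuture.delayedExecutor(300, TimeUnit.MILLISECONDS)
                .execute(() -> { future.complete("the data"); mailbox.offer("SomeOtherMessage"); });

        // Anti-pattern: spin between the two sources, never actually sleeping.
        // while (!future.isDone() && mailbox.peek() == null) { /* burns CPU */ }

        // Better: fold the future's completion back into the one place the
        // process already waits, so it sleeps until something actually happens.
        future.thenAccept(data -> mailbox.offer("IHaveTheDataYouAskedFor: " + data));

        String next = mailbox.take();            // single blocking wait, no polling
        System.out.println("woke up on: " + next);
    }
}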

So, in my opinion, this concept of futures is a good concept to have in one's mental toolbox, but one that should be deployed cautiously.

Agreed. I haven't needed futures again in the 7 months since the original post.

  • Needy creates a message to send to Supplier that includes a description of the data needed and a block of data we'll call "Why" for now.
  • Supplier receives the message. It creates a new message to send to Needy. That message includes the requested data and a copy of the Why block.
  • Needy receives the message. The "Why" block's purpose now becomes clear: it is all the information that Needy had at the moment it made the request about why it was making the request and what it needed to do next. It now takes that block in combination with the information received from Supplier and does whatever it was wanting to do originally.

Conceptually I understand what you're saying, but I'm having a hard time figuring out if it would have been a simpler solution in my particular case. (In fact, my actual implementation does use something similar to what you describe, though in my case it is simply a data identifier and is more of a "what" block than a "why" block.) Probably not, but let me think out loud for a bit. Anyone can jump in with alternatives or to point out flaws in my reasoning.

The most obvious difference between query/response messaging and futures messaging is the number and sequence of messages transmitted between the two actors. Here are sequence diagrams for synchronous messages, asynchronous query/response messages, and asynchronous futures messages.

[attached image: sequence diagrams]

One of the issues I've struggled with in actor based programming is message sequences. Understanding an individual actor's behavior in response to a message is fairly straightforward, but combining actors to create higher level behaviors often requires coordinating actors' actions using specific message sequences. It can be quite difficult to figure out a higher level behavior by reading source code when you're having to trace messages all over the place.

With the query/response messages, the behavioral code is split between the MouseBtnClick event and the ResolvedPosition message handler. Not only do I have to mentally combine the behaviors from those two locations, but I also have to figure out if there are conditions where MotionController will not send ResolvedPosition messages and what happens in the UI if it doesn't receive all the ResolvedPosition messages. This is not insurmountable, but it does require a fair bit of work and I'm not entirely happy with the solutions I've used.

The "fire and forget" nature of futures allows me to write the behavior's implementation largely within the code handling the button click event. There is no ResolvedPosition message handler, so I can look at that one block diagram and see the entire sequence of actions that occur in the UI component as a result of the mouse click. In this particular case the code is much easier to read and reason about. (At least it is for me, but I wrote it so I'm hardly an unbiased observer.)

There are other difficulties with using query/response messages here. My UI's display and behavior change when going through this process, so it is implemented as a separate "Alignment" state in UI code. From the user's perspective the process is complete as soon as the fourth mouse click is done. From the programmer's perspective the process is complete when the DoCalc message is sent. The mismatch makes defining an Alignment state exit condition problematic.

In my state machine implementations data that is specific to one state is not available to other states. The ResolvedPosition data will be discarded when the UI switches from the Alignment state to another state. On the one hand I can't leave this state until all four ResolvedPosition messages have been received and the DoCalc message is sent. On the other hand, the user expects the application's UI to revert back to normal after the fourth button click. How do I satisfy both requirements?

Potential options include:

- After the fourth button click, wait an arbitrary amount of time for the remaining ResolvedPosition messages.

- Move the ResolvedPosition data from state specific data to state machine data, so the following state can continue the Alignment process.

- Refactor the UI layer's state machine so UI Display and UI Behavior are separate state machines.

None of the options were particularly appealing at the time. The first two seemed like bad ideas (still do) and the third was too time-consuming and adds a level of complexity that wasn't needed, since this is the only place where changes to UI behavior and UI display aren't simultaneous.

Personally I'm not a big fan of query/response message sequences. I'm not opposed to them on principle, but they don't seem to fit very well with how I do event-based programming. In general I try to push data out to where it will be needed rather than pull it in by querying for it. Like I said earlier I haven't used futures again since my original post, and to be honest I'm not entirely sure what combination of conditions makes it seem like the right solution in this case.

It could be that the UI doesn't actually use (or even see) the data in the future--it just holds on to them until the user has clicked the button four times and sends them to the Motion Controller for processing. It could be that I wanted to keep the transformed data type out of the UI code to help maintain encapsulation. It could be that the UI already had some inherent statefulness I wanted to leverage. It could be that subconsciously I had a solution looking for a problem and I've become attached to it. It could be all of the above or none of the above. I dunno...

Do futures really avoid saving state when compared to asynch messages? I will agree that the *type* of the state information that must be stored is different, but not necessarily the quantity or complexity.

I think we're talking about slightly different ideas of "state." An application's or component's "state" can be said to have changed any time a piece of its data in memory has changed. I'll give that concept the adjective-noun description, "data state."

In this thread when I'm referring to state I mean "behavioral state," where an application or subsystem responds differently to a given input. I usually implement that by changing the set of message handlers processing incoming messages using techniques I've described elsewhere (though when faced with time constraints or very simple behavior changes I might grit my teeth and use a flag instead of implementing an entirely new state.) My remark about implementing a new state being "too heavy" refers to adding a new behavioral state to a state machine, not simply adding a flag to change the way a specific message handler works.

I agree the total amount of memory used by the application to store the application's data state isn't reduced by an appreciable amount, if at all. Run-time decision making has to be based on a value somewhere in memory regardless of the implementation. I'm leaning towards disagreement on the question of complexity, but source code complexity is mostly subjective anyway, so I'll not belabor the point.

If you have to introduce a new state to check the notifier, you're on a dark dark path. And I've seen this happen in code. In fact, it happens easily.

I assume the "new state" you are referring to is being implemented as a flag of some sort, such as a "waiting for message" boolean. I've seen code go down that path (and written some myself) and agree it leads to a tangled mess if one does not keep the larger architecture in mind.

In my terminology, a chunk of code is only a state machine if it can be described by a state diagram. (Note to readers: A flow chart is not a state diagram.) Systems whose behavior is controlled by a flag parade often cannot be represented by a state diagram. Adding a "new state" to code that is not implemented as a state machine first requires a state machine be implemented. And then the new state must fit in with the rest of the state diagram, with clearly defined entry conditions, exit conditions, and behaviors while in that state.

The whole point of futures is that Needy *knows* it will need this data shortly. So it sends the request, then it does as much work as it can, but eventually it comes around to the point where it needs that data. What happens when Needy gets to the Wait For Notifier primitive and the data isn't ready yet? It waits. And right then you have defeated much of the purpose of the rest of your asynchronous system.

Actually, I set the timeout to zero on the WFN function specifically because I didn't want to bind them together in time. The notifier was just a convenient way to make the future a by-ref object. In retrospect it would have made more sense to make my future using a DVR instead of a notifier, since the option of waiting until the notifier is sent is unnecessary in my case.

If the application logic made it possible to use the future before it was filled, I'd probably try to do some sort of data validation at the point where the future was redeemed. The Future.GetValue method could return an error if the notifier has not been set. MotionController would detect that and send out an "AlignmentCalculationFailed" message. The error would be forwarded to the UI, which would then alert the user that the alignment needs to be redone. (That's just off the top of my head though... no idea if it would work well in practice.)


Thoughts on Futures, as I understand them (and without reexamining Daklu’s implementation):

1) Isn’t the point of futures to block (not poll), just like synchronous operations, but to delay the block until the information is needed, rather than when it is requested? It’s “lazy blocking”. And very similar to standard LabVIEW data flow (blocking till all inputs available).

2) One use of Futures I can think of is if I wish to request information from several processes, and perform some action only when I receive all replies. I can send all the requests and pass the array of futures to a spawned “Wait on all Futures” process/actor that sends a single bundled-reply message back to the original process when all the futures are filled. This would be much easier than having to record each reply and checking to see if I have all of them.
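A sketch of what such a "Wait on all Futures" helper could look like in Java, using CompletableFuture.allOf to produce the single bundled reply (the process names "A", "B", "C" are placeholders):

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class WaitOnAllFutures {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // One future per queried process; each completes whenever its reply arrives.
        List<CompletableFuture<String>> futures = List.of("A", "B", "C").stream()
                .map(name -> CompletableFuture.supplyAsync(() -> "reply from " + name, pool))
                .collect(Collectors.toList());

        // The "Wait on all Futures" step: one combined future that completes only
        // when every individual future has, then delivers the bundled reply.
        CompletableFuture<List<String>> bundled =
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                        .thenApply(v -> futures.stream()
                                .map(CompletableFuture::join)
                                .collect(Collectors.toList()));

        System.out.println(bundled.join());     // e.g. [reply from A, reply from B, reply from C]
        pool.shutdown();
    }
}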

— James


drjpowell:

Re: 1) Yes.

Re: 2) Yes, it is easier to code than watching for all the messages to come back. I wonder, though, if it might also be easier to design a "round robin" message: create a message with a list of processes to visit, send the message to the first one, it adds its info, then passes the message to the next process on the list, coming back to the original process when it is done. That would reduce the "do I have them all yet" bookkeeping and still be consistent with asynch messaging. I've never tried to build anything like that.


1) Isn’t the point of futures to block (not poll), just like synchronous operations, but to delay the block until the information is needed, rather than when it is requested? It’s “lazy blocking”. And very similar to standard LabVIEW data flow (blocking till all inputs available).

The information I read about them described them like an IOU in a "I don't have the data now so I'll give you this instead, and later on you can turn it in to get the data" kind of way. The point was to avoid blocking, not just postpone it. That said, I've not used them in any other language nor seen how they are implemented, so what do I know? For all I know what I implemented aren't really futures.


The information I read about them described them like an IOU in a "I don't have the data now so I'll give you this instead, and later on you can turn it in to get the data" kind of way. The point was to avoid blocking, not just postpone it. That said, I've not used them in any other language nor seen how they are implemented, so what do I know? For all I know what I implemented aren't really futures.

I only know them from Java and only enough to use them in not too complicated ways. What they actually do, as far as I understand it, is run as a separate thread (which can be a dedicated thread or one from a shared thread pool or a few other variants thereof from the java.util.concurrent package) and do whatever they need to do in the background. If they are there to produce something that will eventually be used at some point in the application, you can end up with a blocking condition nevertheless. But they are very powerful if the actual consumer of the future result is not time constrained, such as multiple HTTP downloads for instance. If the HTTP client library supports Futures you can simply shoot off multiple downloads by issuing downloads in a tight loop, letting the Future handle the actual download in the background. It could for instance save the received data to disk and terminate itself after that, freeing the used thread again. If you don't need the actual data in the application itself you could then forget the Future completely once you have issued it.

The way I understand it, a Future is some sort of callback that is running in its own thread context for the duration of its lifetime and has some extra functions to manage it, such as canceling, checking its status (isDone(), isCancelled()), and even waiting for its result if that need should arise. What the callback itself does is entirely up to the implementer.

Depending on the chosen executor model the HTTP client could still block such as when using a bounded thread pool and you issue more downloads than there are threads available in the thread pool.
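A rough Java sketch of that fire-and-forget pattern, assuming plain URL streams rather than any particular HTTP client library (the URLs are placeholders and error handling is omitted):

import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BackgroundDownloads {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // bounded pool: threads are reused
        List<String> urls = List.of("https://example.com/a.dat",
                                    "https://example.com/b.dat"); // placeholder URLs

        // Tight loop: issue every download and move on; each task saves its
        // result to disk and frees its pool thread when it finishes.
        for (String url : urls) {
            pool.submit(() -> {
                Path target = Paths.get(Paths.get(URI.create(url).getPath()).getFileName().toString());
                try (InputStream in = URI.create(url).toURL().openStream()) {
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
                return target;                                    // only matters if someone redeems the Future
            });
        }

        pool.shutdown();                                          // nobody waits on the individual futures
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}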

All that said, I do think there are problems with implementing Futures in such a way in LabVIEW. LabVIEW does automatic multithreading management in its diagrams with fixed, bounded thread pools. There is no easy way to reconfigure that threading on the fly. So there is no generic way to implement Futures that run off a theoretically unbounded thread pool should that need arise, and if you start to use Futures throughout your application in many other classes you run into thread pool exhaustion quickly anyhow. So I don't see how one could implement Futures in LabVIEW in the same way and still stay generic, which is the idea of the Future implementation in Java. The Future itself does not even define the datatype it operates on, but leaves that to the implementer of the class using the Future. Those are all concepts that LabVIEW can't fully keep up with (and one of the reasons I'm not convinced diving into LVOOP is really worth the hassle :cool: )


I only know them from Java and only enough to use them in not too complicated ways.

After my last post I tried to find the information I had originally read. Searching for "software engineering futures" or any of the many variations I tried returns long lists of articles on the future of the industry or stock market trading. The one small piece of information I could find about Futures as a software construct was in Java documentation. That described blocking as an integral part of Futures.

If the HTTP client library supports Futures you can simply shoot off multiple downloads by issuing downloads in a tight loop, letting the Future handle the actual download in the background. It could for instance save the received data to disk and terminate itself after that, freeing the used thread again. If you don't need the actual data in the application itself you could then forget the Future completely once you have issued it.

To me, not wanting the results from the Future defeats the point of using the Future in the first place.

All that said, I do think there are problems with implementing Futures in such a way in LabVIEW.

I agree implementing the Java model in Labview is problematic, but by stepping back and looking at the intent of Futures instead of their behavior in Java I think we can come up with reasonably good alternatives. Their intent appears to be to avoid blocking the current thread while waiting for information not currently available by creating a token that will contain the information later. What I did is apply the concept of Futures to messaging between existing threads. The implementation is very different from Java's but the intent is the same--avoid blocking the current thread by waiting for information.

I don't do Java programming, but my feeble understanding of the normal programming model is the app has a main thread and the developer creates new threads for specific asynchronous tasks. From that point of view it makes sense for Futures to automatically spawn new threads--the asynchronous code requires it. As near as I can tell the Java implementation closely follows the Asynchronous Call and Collect model made available in LV2011. In Labview we can write asynchronous code without dynamically creating a new thread (or Start Asynch Call in LV) so I don't think it is a necessary feature in a LV implementation.

LabVIEW does automatic multithreading management in its diagrams with fixed, bounded thread pools. There is no easy way to reconfigure that threading on the fly. So there is no generic way to implement Futures that run off a theoretically unbounded thread pool should that need arise, and if you start to use Futures throughout your application in many other classes you run into thread pool exhaustion quickly anyhow.

I know you have a much better understanding of the low level interactions between LV and the OS than I do, so this is a request for clarification, not an attempt to correct you.

Even though there is a theoretically unbounded pool of operating system threads, the hardware still limits the number of parallel activities that can take place. Java may allow an unbounded number of operating system threads while LV transparently manages a fixed number, but a program written in G can have an unbounded number of "Labview threads." I don't see how there is much difference between Java's OS thread pool and LV's technique of dynamically mapping parallel execution paths to a fixed thread pool. They are both constrained by the same resource--CPU threads. Can you elaborate?

So I don't see how one could implement Futures in LabVIEW in the same way and still stay generic

I agree achieving the same level of genericism isn't possible in Labview. Given NI's focus on providing a safe environment for novice programmers it may never be possible. I don't agree the concept of Futures is useless for LV developers simply because we can't implement it in the same way.

and one of the reasons I'm not convinced diving into LVOOP is really worth the hassle

My car isn't an Audi, but it still beats using a tricycle for transportation. :)


To me, not wanting the results from the Future defeats the point of using the Future in the first place.

Well, you may want it at some point, but not necessarily in the same application and just as a copy of files on the hard disk. But you are right, whatever you want, it is usually never as trivial as just shooting off the request and forgetting it. You usually want to be informed, for instance, if one of the downloads didn't succeed, and you also want a way to not have the library wait forever on never-received data.

I don't do Java programming, but my feeble understanding of the normal programming model is the app has a main thread and the developer creates new threads for specific asynchronous tasks. From that point of view it makes sense for Futures to automatically spawn new threads--the asynchronous code requires it. As near as I can tell the Java implementation closely follows the Asynchronous Call and Collect model made available in LV2011. In Labview we can write asynchronous code without dynamically creating a new thread (or Start Asynch Call in LV) so I don't think it is a necessary feature in a LV implementation.

Java threading can be a bit more powerful than just blindly spawning threads. If you make consistent use of the java.util.concurrent package, you can create very powerful systems that employ various forms of multithreading with very little programming effort. At the core are so-called executors that can have various characteristics such as single threads, bounded and unbounded thread pools, and even scheduled variants of them.

Thread pools especially are very handy, since creation and destruction of threads is a very expensive operation, specifically under Windows. By using thread pools you only pay that penalty once and can still dynamically assign new "Tasks" to those threads.
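For reference, a small purely illustrative sketch of those executor flavours from java.util.concurrent:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorFlavours {
    public static void main(String[] args) throws Exception {
        // A few of the standard executor characteristics:
        ExecutorService single    = Executors.newSingleThreadExecutor();  // one worker, tasks run in order
        ExecutorService bounded   = Executors.newFixedThreadPool(4);      // bounded pool, threads are reused
        ExecutorService unbounded = Executors.newCachedThreadPool();      // grows on demand, reuses idle threads
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        Future<String> f = bounded.submit(() -> "ran on a pooled thread");
        scheduled.schedule(() -> System.out.println("ran 100 ms later"), 100, TimeUnit.MILLISECONDS);

        System.out.println(f.get());
        Thread.sleep(200);                       // let the scheduled task fire before shutting down
        for (ExecutorService e : new ExecutorService[]{single, bounded, unbounded, scheduled}) {
            e.shutdown();
        }
    }
}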

Even though there is a theoretically unbounded pool of operating system threads, the hardware still limits the number of parallel activities that can take place. Java may allow an unbounded number of operating system threads while LV transparently manages a fixed number, but a program written in G can have an unbounded number of "Labview threads." I don't see how there is much difference between Java's OS thread pool and LV's technique of dynamically mapping parallel execution paths to a fixed thread pool. They are both constrained by the same resource--CPU threads. Can you elaborate?

You are fully right here, but the actual thread pool configuration in LabVIEW is very static. There is a VI in vi.lib that you can use to configure the number of threads for each execution system, but this used to require a LabVIEW restart in order to make those changes effective. Not sure if that is still the case in the latest LabVIEW versions.

LabVIEW itself still implements some sort of cooperative multithreading on top of the OS multithread support, just as it always did even before LabVIEW got OS thread support in 5.0 or 5.1. So I would guess the situation is not necessarily as bad, since you can run multiple VIs in the same execution system and LabVIEW will distribute the available threads over the actual LabVIEW clumps, as they call the indivisible code sequences they identify and schedule in their homebrew cooperative multitasking system. You do have to be careful, however, about blocking functions that cause a switch to the UI thread, as that could completely defeat the purpose of any architecture trying to implement parallel code execution.

Not sure how the new Call Asynchronous fits into this. I would hope they retained all the advantages of the synchronous Call By Reference but added some way of actually executing the call in some parallel-thread-like system, much like the Run Method does. But I haven't looked into that yet.

Edit: I just read up on it a bit, and the Asynchronous Call by Reference looks very much like a Future in itself. No need to employ LVOOP for it, jupiieee!

I agree achieving the same level of genericism isn't possible in Labview. Given NI's focus on providing a safe environment for novice programmers it may never be possible. I don't agree the concept of Futures is useless for LV developers simply because we can't implement it in the same way.

I never intended to say it would be useless, just not as generic and powerful as the Java version.

My car isn't an Audi, but it still beats using a tricycle for transportation. :)

Lol, I very much missed the hooded smiley here that is sometimes available on other boards. Should have scrolled through the list instead of assuming it isn't there. I knew I was poking some people's cookies with this, and that is part of the reason I even put it there. :P


Re: 2) Yes, it is easier to code than watching for all the messages to come back. I wonder, though, if it might also be easier to design a "round robin" message: create a message with a list of processes to visit, send the message to the first one, it adds its info, then passes the message to the next process on the list, coming back to the original process when it is done. That would reduce the "do I have them all yet" bookkeeping and still be consistent with asynch messaging. I've never tried to build anything like that.

A “round robin message” would work, but would be serial, rather than parallel. And I suspect a “Wait on all Futures” actor would be just as simple.

The information I read about them described them like an IOU in a "I don't have the data now so I'll give you this instead, and later on you can turn it in to get the data" kind of way. The point was to avoid blocking, not just postpone it. That said, I've not used them in any other language nor seen how they are implemented, so what do I know? For all I know what I implemented aren't really futures.

I suspect we all have somewhat different ideas about what “futures” might be. My first reading on futures was some webpage (which I can’t find again) that gave a pseudocode example like this:

future A=FuncA()

future B=FuncB()

…do something else...

C = FuncC(A,B)

Here FuncA and FuncB run in parallel, and the code blocks at the first use of the results. Note that we can already do this kind of stuff in LabVIEW due to dataflow.
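That pseudocode maps fairly directly onto Java's CompletableFuture; here is a sketch (funcA/funcB/funcC are trivial stand-ins):

import java.util.concurrent.CompletableFuture;

public class DataflowStyleFutures {
    static int funcA() { return 2; }
    static int funcB() { return 3; }
    static int funcC(int a, int b) { return a * b; }

    public static void main(String[] args) {
        // future A = FuncA(); future B = FuncB();  -- both start in parallel
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(DataflowStyleFutures::funcA);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(DataflowStyleFutures::funcB);

        // ...do something else here...

        // C = FuncC(A, B): blocks only at the point where both results are first used,
        // which is what LabVIEW dataflow gives you for free on a single diagram.
        int c = a.thenCombine(b, DataflowStyleFutures::funcC).join();
        System.out.println("C = " + c);
    }
}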

