
Futures - An alternative to synchronous messaging



A “round robin message” would work, but would be serial, rather than parallel. And I suspect a “Wait on all Futures” actor would be just as simple.

I suspect we all have somewhat different ideas about what “futures” might be. My first reading on futures was some webpage (which I can’t find again) that gave a pseudocode example like this:

future A = FuncA()
future B = FuncB()
...do something else...
C = FuncC(A, B)

Here FuncA and FuncB run in parallel, and the code blocks at the first use of the results. Note that we can already do this kind of thing in LabVIEW thanks to dataflow.
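For readers more at home in textual languages, the pseudocode above maps onto something like Python's `concurrent.futures` (here `func_a`, `func_b`, and `func_c` are stand-ins for the FuncA/FuncB/FuncC placeholders; this is a sketch, not anyone's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def func_a():  # stand-in for FuncA in the pseudocode
    return 2

def func_b():  # stand-in for FuncB
    return 3

def func_c(a, b):  # stand-in for FuncC
    return a + b

with ThreadPoolExecutor() as pool:
    future_a = pool.submit(func_a)   # starts running immediately
    future_b = pool.submit(func_b)   # runs in parallel with func_a
    # ...do something else...
    # .result() is the first use of the values; it blocks only if
    # the corresponding function has not finished yet.
    c = func_c(future_a.result(), future_b.result())

print(c)  # 5
```

This is exactly the "lazy blocking" discussed later in the thread: the block is deferred from the point of request to the point of first use, which is also what a LabVIEW wire gives you for free.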

Good point that dataflow inherently solves some cases that futures might be used for in conventional languages.


When developing apps with multiple parallel processes, many developers prefer messaging systems over reference-based data. One of the difficulties in messaging applications is how to handle request-reply sequences between two loops. Broadly speaking there are two approaches: synchronous messages and asynchronous messages.

I might be missing something here, since I think the solution is very simple, and as Daklu says: "If the solution looks simple, I don't know enough about the problem".

Here is how I would solve it:

Run an async visitor design pattern with a db class input to all the called objects. The db class is passed as a DVR, and actions on it are done inside the In Place Element structure to avoid races.

The db class contains:

0. A dynamic user event wired to the needy VI's event loop.

1. The information relevant to this part/topic of the code.

2. Member-access VIs whose write increments an update counter.

3. A counter-update member-access VI whose write checks for scenarios such as: did all observer objects update their data, did a specific observer update, or was something specific updated?

4. Once the requirements are met, this last member-access VI's write triggers the needy's event to check the result.

Thus, I send the db DVR to all objects and they each update the section relevant to them. If the In Place Element structure is not enough to prevent races because the manipulation must be serial, I can add a serialization mechanism or wrap each object's entire code in an in-place structure. Once the event is triggered, the needy checks the result and destroys the DVR.
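The scheme described above can be sketched in textual form: shared data behind a lock (playing the role of the DVR plus In Place Element structure), a write that increments an update counter, and an event that fires for the "needy" consumer once every observer has reported. All names here (`SharedDB`, `all_updated`, etc.) are invented for illustration, not taken from anyone's project:

```python
import threading

class SharedDB:
    """Illustrative stand-in for the DVR-wrapped db class described above."""
    def __init__(self, expected_updates):
        self._lock = threading.Lock()          # plays the role of the in-place structure
        self._data = {}
        self._updates = 0
        self._expected = expected_updates
        self.all_updated = threading.Event()   # the 'dynamic user event' to the needy loop

    def write(self, key, value):
        # Member-access write that also increments the update counter and
        # checks whether every observer has reported in.
        with self._lock:
            self._data[key] = value
            self._updates += 1
            if self._updates == self._expected:
                self.all_updated.set()         # trigger the needy's event

    def read_all(self):
        with self._lock:
            return dict(self._data)

db = SharedDB(expected_updates=3)
threads = [threading.Thread(target=db.write, args=(f"obs{i}", i * 10))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
db.all_updated.wait(timeout=1)   # the needy's event loop would wake up here
print(db.read_all())
```

The lock makes each write atomic, but as the post notes, if observers must run serially a lock around individual writes is not enough and a broader serialization mechanism is needed.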

It is also possible to send along a future/why block for general analysis, or results in a general event case; there I would use an expert system to define rules over those scenarios.

Now, please tell me what I missed :)


For kicks and grins I mocked up a simplified project (Async Messaging Problem.zip) with all the major elements related to the problem I was faced with. It contains an implementation using asynchronous query-response messaging and explains the flaw in that design. The task is to fix the flaw. I'll post my solution using Futures soon. I'd be interested in seeing how others fix the problem.

Notes:

* The top level vi is located in User Interface >> UserInterface.lvlib >> Main.vi

* Code is written in LV2009. I use LapDog for the messaging, but this project has a private copy so you don't need to install the package.

* It does not have extensive documentation describing how the application works. There is a fair bit of application framework code required to implement the design. I hope between the documentation that is there and stepping through the code it will be easy enough to understand. In particular, the UI State Machine is implemented using a less-well-known design. (Mostly it's just known by me.)

* The sections that will most likely need changing are explicitly identified.

If you make consistent use of the java.util.concurrent package, you can create very powerful systems that employ various forms of multithreading with very little programming effort.

In LV we have queues, notifiers, etc. for exchanging information between parallel threads. Does Java have something similar?

You are quite right here, but the current thread-pool configuration in LabVIEW is very static.

Yeah, but the 21 threads available in LabVIEW's thread pool still exceed the number of simultaneous threads most processors can run. Doesn't an Intel processor with Hyper-Threading essentially support two threads per core? That's only 8 threads total on a quad-core processor. What advantages would you get by increasing the size of LV's thread pool and managing threads directly instead of letting LV's scheduler take care of it for you? Is it mainly for when you need to squeeze the last drop of performance out of a machine?

I knew I was poking some people's cookies with this

My cookies defy poking. :P

Here is how I would solve it:

...

Now, please tell me what did I miss

If you are missing anything I'd guess it's that I wasn't starting with a clean slate. I had to fit the changes into existing code. To be honest I'm having a hard time visualizing what your proposed solution would look like in my project. If you have time would you mind trying to implement it in the attached project?

------------

Edit - My solution is posted here.

Async Messaging Problem.zip


In LV we have queues, notifiers, etc. for exchanging information between parallel threads. Does Java have something similar?

Let's see. I'm by no means a Java expert and am not sure I ever will be. My interest in Java only started when I wanted to do some stuff under Android, mostly for fun so far. Incidentally, looking at the Java scene, I think the flair of the days when Sun wielded the scepter is mostly gone. Oracle seems to be seen by many as an unfriendly king, and by some even as a hostile tyrant. Google sort of took over a bit but tries to keep a low profile in terms of Java. They just use the idea but don't really promote it at all, probably also because of the lawsuits. I think Oracle's recent actions have been the single most effective move to kill Java as an idea.

But let's get back to your question:

There is the java.util.Queue interface with its many implementations, like AbstractQueue, ArrayBlockingQueue, ConcurrentLinkedQueue, DelayQueue, LinkedBlockingQueue, LinkedList, PriorityBlockingQueue, PriorityQueue, and SynchronousQueue.
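The blocking-queue family (ArrayBlockingQueue and friends) behaves much like a LabVIEW queue: bounded enqueue blocks when full, dequeue blocks (optionally with timeout) until an element arrives. Sketched here in Python, whose `queue.Queue` has the same semantics (the "stop" sentinel message is my own convention for this example):

```python
import queue
import threading

q = queue.Queue(maxsize=8)           # bounded, like ArrayBlockingQueue

def producer():
    for i in range(3):
        q.put(("data", i))           # blocks if the queue is full
    q.put(("stop", None))            # sentinel, like destroying a LV queue

def consumer(results):
    while True:
        msg, payload = q.get(timeout=1.0)  # blocks until an element arrives
        if msg == "stop":
            break
        results.append(payload)

results = []
t = threading.Thread(target=producer)
t.start()
consumer(results)                    # runs in the main thread
t.join()
print(results)  # [0, 1, 2]
```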

Then there is the built-in synchronized(obj) { } block, which can be used on any object to protect a region of code. Only a single thread can be inside a synchronized block for a specific object at any time.

And last but not least there are the notify() and wait() methods implemented by java.lang.Object, from which every other Java class is derived, directly or indirectly. These two methods are a bit special since they work together with the synchronized(obj) keyword: they must be called while holding the object's monitor (i.e. inside a synchronized block on that object), otherwise the JVM throws an IllegalMonitorStateException.
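Python's `threading.Condition` has the same shape as Java's wait()/notify(): you must hold the associated lock (the equivalent of being inside the synchronized block) before calling wait or notify, or you get a RuntimeError, much like Java's IllegalMonitorStateException. A minimal sketch:

```python
import threading

cond = threading.Condition()   # pairs a lock with wait/notify, like a Java monitor
ready = False
seen = []

def waiter():
    with cond:                 # must hold the lock, as with synchronized(obj)
        while not ready:       # guard against missed/spurious wakeups, as in Java
            cond.wait()        # releases the lock while waiting, reacquires on wake
        seen.append("woke")

def notifier():
    global ready
    with cond:
        ready = True
        cond.notify()          # like obj.notify() inside synchronized(obj)

t = threading.Thread(target=waiter)
t.start()
notifier()
t.join()
print(seen)  # ['woke']
```

The `while not ready` loop matters: wait() may return even when the condition does not hold, so both Java and Python idiom is to re-test the predicate after every wakeup.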

Yeah, but the 21 threads available in LabVIEW's thread pool still exceed the number of simultaneous threads most processors can run. Doesn't an Intel processor with Hyper-Threading essentially support two threads per core? That's only 8 threads total on a quad-core processor. What advantages would you get by increasing the size of LV's thread pool and managing threads directly instead of letting LV's scheduler take care of it for you? Is it mainly for when you need to squeeze the last drop of performance out of a machine?

You probably have a point there. All I can say is that it seems cleaner to me to have a specific object interface manage its thread needs on its own by using an appropriate thread policy, such as a thread pool with whatever limits seem useful, and let the proven OS implementation handle the distribution of the threads across whatever cores are available. It probably won't speed things up at all, but I just like the idea of being in control if I feel the need is there. :rolleyes:

On the other hand, I can't complain about how LabVIEW handles concurrent parallel execution so far. It seems just as capable of handling multiple code parts that can operate in parallel with the available CPU power as a Java program that uses several threads. So in the end there may be nothing really left but the feeling of having more control over things, which is also sometimes a reason for me to implement certain things in C and incorporate them into LabVIEW as a shared library.

My cookies defy poking. :P

And because of that you respond to it!? :P

I probably will try to take a look at your project. It's always interesting to see new things, even though I'm not sure I will grok the whole picture.


All I can say to that is that it seems cleaner to me...

...there may be nothing really left but the feeling of having more control over the things...

Having confidence your code will behave correctly is a perfectly valid reason to favor direct thread control even if there is no performance benefit. (I doubt it will be enough to convince NI to expose more direct thread control to users, but maybe you can get them to implement a super-secret "rolf" ini switch for you.)

And because of that you respond to it!?

Only to return the poke.

I probably will try to take a look at your project. It's always interesting to see new things, even though I'm not sure I will grok the whole picture.

Feel free to ask questions if you don't understand something. The only dynamic dispatching is related to the messaging framework, which fundamentally behaves very similarly to normal string/variant messaging. I suspect you won't have too much trouble with the OO nature of the app, though you might have questions about some of the application framework code.


I need to read up on this more. A perfect example of where this would be valuable: we had handshaking in a system where one loop managed the DAQ and another managed the logging. We had to handshake with a PLC using relays controlled in the DAQ loop based on different logging conditions (the log file was opened successfully, closed successfully, FTP'd successfully, etc.). Right now the code queues up a command to the logging loop to FTP a file, but then sits waiting for a response (success or fail) and does not move on! Talk about possibilities for locking up. Then after the file is FTP'd to the host, the buffer has overflowed on the DAQ side and needs to be reconfigured. For this application it's not a big deal because the file is only FTP'd after the test is completed. But for future reference it may be beneficial.

Edit: After reading through everything more in depth, maybe I did not have a great grasp on it at first. Seems to me that in my case I don't need some current data at some time in the future. There may be better methods for doing what I need to do, but I'm not sure this solution is necessarily what I want. I'll keep reading and rereading to make sure I have a full understanding of everything and whether this will "fit the bill".

Edited by for(imstuck)

I need to read up on this more. A perfect example of where this would be valuable is...

The implementation of Futures I used in my project would not help you. It's a fairly unique set of circumstances that makes it work in my original project. Namely, the actor setting the Future's value is also the actor that eventually reads the Future. Using normal asynchronous messages this would be like making 1.5 round trips between the two actors instead of just a single round trip:

Async Message Sequence:

1. UI sends the untransformed data to the Model.

2. Model transforms the data and returns it to the UI.

3. UI stores all transformed data until it has enough, then sends all transformed data back to the Model.

Equivalent Future-based Sequence:

1. UI creates a DVR (Future) and sends the untransformed data and a copy of the DVR to the Model.

2. Model transforms the data and puts it in the DVR.

3. UI stores all created DVRs until it has enough, then sends them all to the Model.

Because the Future is being set and read by the same thread, I can guarantee the Futures have valid data before being read by sending messages to the Model in the correct sequence. On the other hand, if the UI were reading the Futures (a single round trip), the Model would still need some way to notify the UI the Future is set, which mostly defeats the whole thing in the first place. (It can still work if you're willing to fail the operation requiring the Future or block execution until the Future is set.)
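The Future-based sequence above can be sketched textually. Here a Future is just a one-slot box (the analogue of the DVR); because a single message-handling loop processes the "fill" messages before the "redeem" message, the boxes are guaranteed to be full when read. The names (`Future`, `TransformData`, `DoCalc`) follow the thread's terminology, but the code itself is an illustrative sketch:

```python
import queue
import threading

class Future:
    """One-slot box, analogous to the DVR used in the project."""
    def __init__(self):
        self._value = None
        self._filled = False

    def fill(self, value):
        self._value = value
        self._filled = True

    def redeem(self):
        if not self._filled:
            raise RuntimeError("future redeemed before it was filled")
        return self._value

def model(inbox, results):
    # Single message-handling loop: messages are processed in arrival order,
    # so every TransformData is handled before the DoCalc that redeems it.
    while True:
        msg, payload = inbox.get()
        if msg == "TransformData":
            data, fut = payload
            fut.fill(data * 2)           # 'transform' the data and fill the future
        elif msg == "DoCalc":
            results.append(sum(f.redeem() for f in payload))
        elif msg == "stop":
            return

inbox, results = queue.Queue(), []
t = threading.Thread(target=model, args=(inbox, results))
t.start()
futures = []
for x in (1, 2, 3):                      # the UI side: create futures, send DVR copies
    f = Future()
    futures.append(f)
    inbox.put(("TransformData", (x, f)))
inbox.put(("DoCalc", futures))           # arrives after all fills: safe to redeem
inbox.put(("stop", None))
t.join()
print(results)  # [12]
```

The ordering guarantee comes entirely from the FIFO inbox: if DoCalc could arrive on a different loop, `redeem()` could fire before `fill()`, which is exactly the failure mode discussed later in the thread.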

My Socratic question to you is, why does the DAQ loop care whether or not the FTP transfer was successful?


1) Isn’t the point of futures to block (not poll), just like synchronous operations, but to delay the block until the information is needed, rather than when it is requested? It’s “lazy blocking”. And very similar to standard LabVIEW data flow (blocking till all inputs available).

The information I read about them described them like an IOU in a "I don't have the data now so I'll give you this instead, and later on you can turn it in to get the data" kind of way. The point was to avoid blocking, not just postpone it. That said, I've not used them in any other language nor seen how they are implemented, so what do I know? For all I know what I implemented aren't really futures.

If something takes 10ms and one delays blocking for 11 ms, then one has avoided blocking all together. I hadn’t appreciated, though, that you are filling your futures in the same message handler that is redeeming them, and thus in your case there is no possibility of ever actually blocking on the redeeming of the futures. Clever, and I can’t think of a cleaner way of doing it.


Here's my Futures solution to the problem code given above. The changes required are pretty minimal I think... especially when compared to any other solution I can think of. I'm still interested in seeing other people's ideas if they're willing to share.

A couple notes:

- This solution changes the Model interface. Specifically, it changes the payload of the TransformData and DoCalc messages. If Model were a shared component or service, obviously this would cause problems. I haven't thought at all about what the implementation would look like if the Model had to support Futures without breaking backwards compatibility.

- I'm really curious what a solution would look like if changes were restricted to UserInterface.lvlib. The problem given in the example code feels like a UI issue, so I'd think it would be solvable by changing UI code. The reliance on asynchronous messages leads me to think I need to better separate the UI's display state from the UI's behavioral state for that to be applicable in the general case. Hmm... I'll have to think about that for a while.

Asynch Messaging Solution - Daklu.zip


Here's my Futures solution to the problem code given above. The changes required are pretty minimal I think... especially when compared to any other solution I can think of. I'm still interested in seeing other people's ideas if they're willing to share.

A couple notes:

- This solution changes the Model interface. Specifically, it changes the payload of the TransformData and DoCalc messages. If Model were a shared component or service, obviously this would cause problems. I haven't thought at all about what the implementation would look like if the Model had to support Futures without breaking backwards compatibility.

- I'm really curious what a solution would look like if changes were restricted to UserInterface.lvlib. The problem given in the example code feels like a UI issue, so I'd think it would be solvable by changing UI code. The reliance on asynchronous messages leads me to think I need to better separate the UI's display state from the UI's behavioral state for that to be applicable in the general case. Hmm... I'll have to think about that for a while.

Indeed. You would find it much easier with a "dispatcher" so you could have asynchronous to the UI and synchronous to the device.


Indeed. You would find it much easier with a "dispatcher" so you could have asynchronous to the UI and synchronous to the device.

Two questions:

1. I assume by "dispatcher" you mean inserting an abstraction layer between the UI and the Model?

2. Do you mean easier than the solution I used given the conditions, or easier if I could only change UI code?

I'm fairly certain I'll eventually have to insert another abstraction between the UI and Model. Often I have a Controller component coordinating information between the UI and Model. This project evolved in a way that left the Model with no public representation of individual states. As far as the UI was concerned the Model was a simple message handler rather than a state machine. (Though some Model subcomponents are implemented as state machines and this is known to other subcomponents.) All the application states the user is aware of are implemented in UI code, because they represent changes in UI behavior.

Inserting a Controller layer just to manage the UI pseudostate between "last TransformData message sent" and "last DataTransformed message received" seems like overkill. I know complex refactorings often have to start by adding code that looks out of place or makes the code harder to follow. This project is at the point where development could stop for years after each new release, but no feature list for the release is comprehensive enough to justify doing the entire refactoring in one fell swoop. I'm reluctant to start a refactoring process that I'm not sure I'll be able to finish before putting the source code on the shelf. It's probably psychological weakness on my part...


Hey Daklu,

I finally had time to check out this interesting post. Your code looks very nice, yet very different from my solution.

Once you posted your implementation I understood your question much better and I see that my version had a similar problem. Thanks!

Even though the Logic and Model of my MVC wouldn't change after the 4th click the UI loop might still send user actions that won't be reflected and create a difference between the UI and the Model.

I guess I need to add a semi-logic loop to the UI loop, called "instinctive common UI behavior", that will handle such fast cases while interfacing between the UI msg queue and the logic msg queue.

Once added, the difference between your solution and mine will be that the 5th click won't require a LV thinking process (the mouse pointer turning into a loading circle).

I would love to post a sample as you requested, however, I'll have time to attend to it only at the end of June.

GOOfy Wires.


Two questions:

1. I assume by "dispatcher" you mean inserting an abstraction layer between the UI and the Model?

2. Do you mean easier than the solution I used given the conditions, or easier if I could only change UI code?

No 1 to start with. A dispatcher enables you to have both the synchronous and asynchronous.

No 2 Not really. I just think it is incomplete and is limited by its message "depth".

The problem with futures is that they can "sometimes" be all of the diagrams you depicted earlier. There has to be logic, state, and sequencing somewhere at some point, otherwise one of the three can be chosen for all. The key (I think) is to isolate the synchronous/stateful from the asynchronous and, to do that, you need a "broker" or "dispatcher" to consume un-propagated asynchronous messages and to sequence sequential ones.

The argument then becomes how much logic you put in your dispatcher (or whether you should rename it "controller"). For small apps you can put it all in there (i.e. a sequence engine). I would argue, though, that there should be none and that the logic should be in a controller so that it is modularised with the device (whether that be a piece of hardware or a UI). This is, however, getting more complex.

Saying that though. Dispatcher in the CR uses the former (logic in the dispatcher). So I really should listen to my own arguments :P

I'm fairly certain I'll eventually have to insert another abstraction between the UI and Model. Often I have a Controller component coordinating information between the UI and Model. This project evolved in a way that left the Model with no public representation of individual states. As far as the UI was concerned the Model was a simple message handler rather than a state machine. (Though some Model subcomponents are implemented as state machines and this is known to other subcomponents.) All the application states the user is aware of are implemented in UI code, because they represent changes in UI behavior.

I'm not sure of your terminology here. When I talk about controllers, I think in terms of "device" controllers: a piece of code that sequences and abstracts complex operations into simple API commands (messages). If you are thinking in terms of the Muddled Verbose Confuser, then that is different, since it is a logical separation rather than a functional one, aimed mainly at UIs. I, however, view a UI like a "device", so the UI will have a "controller" but no "model". The closest thing to a "model" would be the messaging construct itself. However, I use string messages, so the "logic" involved in the dispatcher is trivial.

Inserting a Controller layer just to manage the UI pseudostate between "last TransformData message sent" and "last DataTransformed message received" seems like overkill. I know complex refactorings often have to start by adding code that looks out of place or makes the code harder to follow. This project is at the point where development could stop for years after each new release, but no feature list for the release is comprehensive enough to justify doing the entire refactoring in one fell swoop. I'm reluctant to start a refactoring process that I'm not sure I'll be able to finish before putting the source code on the shelf. It's probably psychological weakness on my part...

Indeed. It all depends on whether you will use it in other apps or whether it is a throw-away architecture. I assume you are looking for a generic solution.

The following is the messaging architecture I have used in all apps for the last 3-4 years. All of the controllers, the drivers (the "device 1/2"), the dispatcher, and the TCPIP module can be brought directly into other apps. The UI, however, cannot (every UI is different), but there is only one module to create. This might be overkill for your app, but I hope it demonstrates the role the dispatcher plays in separating asynchronous from synchronous.


Even though the Logic and Model of my MVC wouldn't change after the 4th click the UI loop might still send user actions that won't be reflected and create a difference between the UI and the Model.

Yeah, coordinating the UI's state (which may or may not be implemented as a state machine) with the Model's state while keeping them lightly coupled is a tricky problem. I think this is the first app where I've implemented the UI as a state machine instead of just as a straight message handler.

While implementing the example solution I discovered a behavior that could be a problem in certain situations. If you click the Set Pt button four times quickly then immediately click the Gather Data button, you'll get an Unhandled Message error from the UI for the debug message returning the four transformed values from the Model. Unhandled messages are intentional in my state machine implementations. Any message a state shouldn't react to is unhandled. But if for some reason that return message were necessary for the application to work right that state machine implementation starts to break down and begins requiring duplicate code. (Identical message handlers in different states.)

There are ways to eliminate duplicate message handlers. Often I'll encapsulate each state as a class with EntryAction, ExitAction, and MessageHandler methods. The base class implements message handlers for messages that should always be handled. Child classes (the states) implement message handlers with a Default case that calls the parent class's message handler. That's not a perfect solution either; it works if your states' behavior can be organized as a tree structure, which isn't always true. I've toyed with the idea of creating a separate MessageHandler class structure and injecting the correct MessageHandler object into the state at runtime. Of course, this all adds a lot of complexity and is far beyond what I wanted to illustrate with the example code.
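The state-as-class arrangement can be sketched like this (EntryAction/ExitAction omitted for brevity; every state and message name here is a placeholder, not from the actual project): each state handles only what it cares about and defers everything else to the base class, which owns the always-handled messages and the intentional "unhandled" fallback.

```python
class BaseState:
    """Handlers for messages every state must accept."""
    def handle(self, msg):
        if msg == "Shutdown":
            return "shutting down"
        # Unhandled messages are intentional: states ignore what
        # they shouldn't react to, as described in the post.
        return f"unhandled: {msg}"

class IdleState(BaseState):
    def handle(self, msg):
        if msg == "Start":
            return "starting"
        return super().handle(msg)       # the Default case calling the parent

class RunningState(BaseState):
    def handle(self, msg):
        if msg == "DataTransformed":
            return "storing point"
        return super().handle(msg)

idle = IdleState()
print(idle.handle("Start"))            # starting
print(idle.handle("Shutdown"))         # shutting down -- inherited, no duplication
print(idle.handle("DataTransformed"))  # unhandled: DataTransformed
```

As the post notes, this only composes cleanly when state behavior forms a tree; injecting a separate handler object at runtime trades that restriction for more moving parts.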

In any event, the debug message wasn't important in the context of this example, so I chose to call it "expected behavior" instead of "bug."

I assume you are looking for a generic slution.

Not really. My original post was just to share a neat concept I had run across and used. Now I'm trying to gain a deeper understanding of the applicability of Message Futures. Are they a "good design idea" or simply an "acceptable temporary solution" to a design that is starting to crack around the edges? Originally I thought "good design idea." The recent discussion is pushing me toward "acceptable temporary solution."

I'm not sure of your terminology here.

Aye. Imprecise terminology has got to be the single biggest problem in software engineering. (At least it is for me.) It doesn't help that I'm not very consistent with my terminology, even in this thread. (It's so much easier in mechanical engineering. A gear is a gear whether I'm in Arkansas or Argentina. I don't have to abstract it into a "rotational load transfer device" and figure out what it looks like in a universe with different laws of physics.)

I don't consider my applications as having an MVC architecture, but that's the closest representation most people are familiar with so I try to accommodate it. Usually they are more of a Model-Mediator/Controller-UI. At the highest level, the Mediator/Controller is the glue code between the UI and the Model. In general I try to write my Model so I could use it as-is in an application with similar functionality. It does not necessarily include all the non-UI logic and data the system requires. The difference between a Mediator and a Controller is that a Mediator's primary responsibility is to translate messages so the UI and Model can talk to each other. It's more of a facilitator. (i.e. A "BtnClicked" message from the UI means I need to send a "DoThis" message to the Model.) A Controller does that as well, but it exerts more direct control and manages higher-level states that cannot be completely described in the UI or Model by themselves. There is a lot of fuzziness surrounding the two ideas--I just wave my hands and say when a Mediator starts tracking state information to filter messages or change message handlers it becomes a Controller.

As near as I can tell my high level Mediator/Controller (or "Application Controller") largely fills the same role as your Dispatcher. To add to linguistic confusion, I also use the term Controller for any functional component that manages a subsystem, whether it is a single device or several devices. Each controller has to be written specifically for the subsystems it manages, but most controllers also expose a public messaging api to allow external systems to interface with it. (AppControllers don't typically expose a public api, but the option is there for when I need it.)

(Your message naming convention reveals possible differences between our design philosophies. Having a device controller accept a "MoveBtnClick" message and send a "DisplayPosition" message indicates that the device controller has knowledge of how the device is being used in the application. The message phrasing implies the device controller is responsible for adapting to meet the dispatcher's api rather than the other way around. My controller components encapsulate higher-level behavior for the calling component, but their messaging interfaces remain agnostic of the code using the controller. The messages between the device controller and dispatcher would have names like "MoveToPosition" and "PositionUpdated.")

Here's a diagram somewhat representative of the current communication map after various evolutions and refactorings. You can see there's no dedicated AppController. The UI Controller has filled that role so far, and while the design is not ideal it has been sufficient.

post-7603-0-58090700-1339962278.png

The argument then becomes how much logic do you put in your dispatcher (or should you rename it "controller").

That's the $100 question. Below is a state diagram for the example problem's UI display states. Using query/response async messages requires recognizing the trivial "user has sent four data points to be transformed but the Model hasn't finished them yet" state. Where should that new state go? Conceptually it doesn't fit with the UI state machine very well and I can't think of a suitable set of transition conditions that would give me the desired behavior without making everything a lot more complicated.

post-7603-0-36122400-1339962279.png

I could have the Model store the transformed data points and avoid the asynchronous problem entirely. In fact early implementations did just that. I switched responsibility for maintaining the transformed data points to the UI because it feels like that's where it should be. The Model doesn't care about each point individually; it only cares about the complete set of points.

That leaves me with the options of using a Future or refactoring to implement an AppController/Dispatcher.

The following is the messaging architecture I have used in all apps for the last 3-4 years. All of the controllers, drivers (the "device 1/2") , the dispatcher, and TCPIP module can be brought directly into other apps.

Okay, this needs clarification. What do you mean by "can be brought directly into other apps?" Do you just mean each of those has no static dependencies on other code? Or are you saying they are generic enough you can pull them in and use them without editing their source code and messaging api? I can see how certain components could be easily reused as-is, such as device 1/2 and the TCPIP module, but I don't see how you avoid having app-specific code in your device controllers or dispatcher.

The thing I thought was unusual about your architecture is that the device controllers sit between the device and dispatcher as I'd expect, but the UI and TCPIP controllers do not. Maybe I'm missing something fundamental about what you are doing?

The problem with futures is that they can "sometimes" be all of the diagrams you depicted earlier. There has to be logic, state and sequencing somewhere at some point otherwise one of the three can be chosen for all.

I agree there has to be logic, state, and sequencing somewhere, but I'm having a hard time understanding the rest. Can you explain what you mean by "they can sometimes be all of the diagrams you depicted earlier?" Do you mean a Future could be a synch message, asynch query/response message, or a fire-and-forget message?


Hi Daklu,

A comment:

If your “Model” was a complex, multi-loop construct like your last post diagram, it is possible that you might put your future-filling logic (“TransformData”) in a different loop than the Future-redeeming logic (“DoCalc”). It would then be possible for the future to be redeemed before it is filled, which for your DVR design would return default data, followed by an “invalid DVR" error message from “TransformData". A “future” based on a Notifier would instead just block momentarily if this happened, and would be a much more widely applicable construct because of that. Your DVR future can only be used in cases where it is filled and redeemed in the same loop, or can otherwise be assured it is filled before redeemed.

— James

Link to comment

It would then be possible for the future to be redeemed before it is filled, which for your DVR design would return default data, followed by an “invalid DVR" error message from “TransformData". A “future” based on a Notifier would instead just block momentarily if this happened, and would be a much more widely applicable construct because of that. Your DVR future can only be used in cases where it is filled and redeemed in the same loop, or can otherwise be assured it is filled before redeemed.

I agree that if you want blocking behavior a notifier is a better option. I'm a little gun-shy about blocking on one-off notifiers like that: if for some reason the notifier is never filled, the blocked code will never release. In general I prefer to implement a solution that lets the redeem operation fail if the Future hasn't been filled. The process can be retried if necessary. Failing in TransformData is no big deal; I'd just trap the error in the message handler and discard it.

In certain situations I'd probably even choose polling over blocking. (Blasphemy!) Blocking is clearly an easier implementation though, so in simple applications that can easily be verified for correctness I might go with a blocking notifier.
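The trade-off being discussed, redeeming immediately and failing if the Future is empty (the DVR design) versus blocking until it is filled (the Notifier design), can be sketched in Python. This is a hypothetical illustration, not the LabVIEW implementations; `Future`, `fill`, and the two redeem methods are assumed names:

```python
import queue

class Future:
    """Hypothetical single-slot future, illustrating two redeem styles."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)

    def fill(self, value):
        self._slot.put_nowait(value)        # raises queue.Full if filled twice

    def redeem_or_fail(self):
        """DVR-style: return immediately; error if not yet filled."""
        try:
            return self._slot.get_nowait()
        except queue.Empty:
            raise RuntimeError("future not yet filled")

    def redeem_blocking(self, timeout=None):
        """Notifier-style: block until filled, or raise queue.Empty on timeout."""
        return self._slot.get(timeout=timeout)

f = Future()
try:
    f.redeem_or_fail()                      # fails: nothing filled yet
except RuntimeError as e:
    print(e)                                # future not yet filled
f.fill(42)
print(f.redeem_blocking(timeout=1))         # 42
```

The fail-fast variant lets the caller retry or discard; the blocking variant is simpler to use but hangs forever (or until timeout) if the filler dies first.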

Link to comment

One could also have a reusable process that waits on an array of futures, then forwards the results to the process that needs them. Then that simple process (that has nothing else it needs to do) would do the blocking. And it would have a timeout, of course, after which it would send an error message.
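Such a reusable waiter might look like the following Python sketch, assuming each future is represented as a single-element queue and errors are forwarded as a message on the consumer's queue (all names here are hypothetical):

```python
import queue
import threading

def wait_on_futures(futures, out_queue, timeout_s):
    """Hypothetical reusable helper: block on each future (a single-element
    Queue) in turn, then forward the ordered results to the consumer's
    queue; on timeout, send an error message instead."""
    def worker():
        results = []
        try:
            for f in futures:
                results.append(f.get(timeout=timeout_s))
            out_queue.put(("results", results))
        except queue.Empty:
            out_queue.put(("error", "timed out waiting on a future"))
    threading.Thread(target=worker, daemon=True).start()

# Usage: the futures may be filled from other threads in any order.
futs = [queue.Queue(maxsize=1) for _ in range(3)]
outbox = queue.Queue()
wait_on_futures(futs, outbox, timeout_s=1.0)
for i in (2, 0, 1):                     # filled out of order
    futs[i].put(f"reply {i}")
kind, payload = outbox.get(timeout=2)
print(kind, payload)                    # results ['reply 0', 'reply 1', 'reply 2']
```

Only this small helper blocks; the consumer just receives one ordinary message containing either the ordered results or an error.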

Link to comment

Yeah, coordinating the UI's state (which may or may not be implemented as a state machine) with the Model's state while keeping them lightly coupled is a tricky problem. I think this is the first app where I've implemented the UI as a state machine instead of just as a straight message handler.

This is why the UI Controller is separate in my architecture. The UI itself just sends messages (it doesn't generally receive them). The state machines (or whatever) are contained in the UI controller. It gets a bit icky at that point since manipulation of the UI is contained in the controller due to LabVIEW's insistence on some controls using references (like tree controls). So the partitioning isn't as I would like. The compromise I took was that the message also contains the name of the control that sent it, and the Controller updates the UI with property nodes. What this gives me is the ability to update the display via TCPIP.

Not really. My original post was just to share a neat concept I had run across and used. Now I'm trying to gain a deeper understanding of the applicability of Message Futures. Are they a "good design idea" or simply an "acceptable temporary solution" to a design that is starting to crack around the edges? Originally I thought "good design idea." The recent discussion is pushing me toward "acceptable temporary solution."

I think the issue is more about encapsulation of a non-encapsulatable process. Whilst on the surface it looks like "most" scenarios can be catered for, in reality there are too many "edge" cases that require detailed knowledge of, and compliance from, other parts of the system to operate effectively. Using a messaging "architecture" rather than "future" messages means that all the scenarios can be catered for. Well. That's my very cursory impression at least. It just looks like too much effort for little reward (mainly due to the edge cases), and the "Shoot Yourself In The Foot" factor is quite high.

I don't consider my applications as having an MVC architecture, but that's the closest representation most people are familiar with, so I try to accommodate it. Usually they are more of a Model-Mediator/Controller-UI. At the highest level, the Mediator/Controller is the glue code between the UI and the Model. In general I try to write my Model so I could use it as-is in an application with similar functionality. It does not necessarily include all the non-UI logic and data the system requires. The difference between a Mediator and a Controller is that a Mediator's primary responsibility is to translate messages so the UI and Model can talk to each other. It's more of a facilitator. (i.e. A "BtnClicked" message from the UI means I need to send a "DoThis" message to the Model.) A Controller does that as well, but it exerts more direct control and manages higher level states that cannot be completely described in the UI or Model by themselves. There is a lot of fuzziness surrounding the two ideas--I just wave my hands and say when a Mediator starts tracking state information to filter messages or change message handlers it becomes a Controller.

As near as I can tell my high level Mediator/Controller (or "Application Controller") largely fills the same role as your Dispatcher. To add to linguistic confusion, I also use the term Controller for any functional component that manages a subsystem, whether it is a single device or several devices. Each controller has to be written specifically for the subsystems it manages, but most controllers also expose a public messaging api to allow external systems to interface with it. (AppControllers don't typically expose a public api, but the option is there for when I need it.)

Indeed. I think we are, in fact, using very similar architectures. In fact my diagram is a little incomplete. Quite often (but not always) there is a "sequence Engine" module. As I discussed previously about "logic in the dispatcher", this removes that logic into a separate unit. In this case, the Dispatcher is merely a "router" passing messages back and forth asynchronously between the various modules. It is the equivalent of your "Subsystem Controller" but operates on messaging rather than devices.

(Your message naming convention reveals possible differences between our design philosophies. Having a device controller accept a "MoveBtnClick" message and send a "DisplayPosition" message indicates the device controller has knowledge of how the device is being used in the application. The message phrasing implies the device controller is responsible for adapting to meet the dispatcher's api rather than the other way around. My controller components encapsulate higher level behavior for the calling component, but their messaging interfaces remain agnostic of the code using the controller. The messages between the device controller and dispatcher would have names like "MoveToPosition" and "PositionUpdated.")

That is probably me trying too hard to fit with your example. In fact the messages look nothing like yours. They would be of the form "TARGET->SENDER->CONTROL->OPERATION->PAYLOAD". They bear little resemblance to the actual UI operation. They are sent from the UI to the UI Controller (via the Dispatcher) and it decides what it means.

Here's a diagram somewhat representative of the current communication map after various evolutions and refactorings. You can see there's no dedicated AppController. The UI Controller has filled that role so far, and while the design is not ideal it has been sufficient.

post-7603-0-58090700-1339962278.png

Indeed. Very, very similar, except that I also break the link with the UI so that TCPIP (for example) can manipulate the system just as the UI would, by sending the same messages.

That's the $100 question. Below is a state diagram for the example problem's UI display states. Using query/response async messages requires recognizing the trivial "user has sent four data points to be transformed but the Model hasn't finished them yet" state. Where should that new state go? Conceptually it doesn't fit with the UI state machine very well and I can't think of a suitable set of transition conditions that would give me the desired behavior without making everything a lot more complicated.

post-7603-0-36122400-1339962279.png

Hmm. For most UI stuff I don't use state machines. I rely on LabVIEW's in-built state management. If I were to have a similar feature, then if the button were pressed the acquire module would be launched (dynamic loading), and when it was depressed it would just crowbar that module. Any events that came in between those "two" states would be displayed, obviously.

I could have the Model store the transformed data points and avoid the asynchronous problem entirely. In fact early implementations did just that. I switched responsibility for maintaining the transformed data points to the UI because it feels like that's where it should be. The Model doesn't care about each point individually; it only cares about the complete set of points.

That leaves me with the options of using a Future or refactoring to implement an AppController/Dispatcher.

I suppose that is what I'm alluding to: that to implement futures in LabVIEW you have to use an AppController/Dispatcher anyway. I don't really see any other way of resolving the "sometimes synchronous", "sometimes asynchronous" behaviour without it. Looking forward to the result once you have chewed it over, because I'm sure if there is another way, you will find it :)

Okay, this needs clarification. What do you mean by "can be brought directly into other apps?" Do you just mean each of those has no static dependencies on other code? Or are you saying they are generic enough you can pull them in and use them without editing their source code and messaging api? I can see how certain components could be easily reused as-is, such as device 1/2 and the TCPIP module, but I don't see how you avoid having app-specific code in your device controllers or dispatcher.

Well. For example. The "Device" is developed as a completely separate project and runs as a process. It is a totally self-contained module. To launch it you just lay the controller on the diagram (or, more often, launch it dynamically) and manipulate it using the messaging API. Dependency on static code? Well. Difficult to say, since it relies on my utilities which are used everywhere, but it is not dependent on any application-specific code.

The dispatcher is the same (if being used as a router, as I mentioned earlier). I may modify it with specific code and use it like a "framework", but in the former case it doesn't need to know what the messages are, only where they need to go. As I have standardised much of my API messaging across all devices/interfaces, it can be launched and used as-is. Although there is a huge temptation to "quickly" add application-specific filtering to it, I persevere with not doing that so it remains generic.

The thing I thought was unusual about your architecture is the device controllers sit betweeen the device and dispatcher as I'd expect, but the UI and TCPIP controllers do not. Maybe I'm missing something fundamental about what you are doing?

The image only shows messaging. It's not really hierarchical; it's more of a "Plan View" with the controllers and interfaces sitting around a central "hub" (the dispatcher). If the UI were drawn as you have it, it would imply that the UI cannot send messages directly to the other Controllers. It can. The UI Controller is more like the "Sub System" block in your diagram, which would have the graceful shutdown code for the "Exit Message", for example. However, for getting a status value from the Device, the UI could send that directly without going through the controller. In fact, it is possible for all interfaces and controllers to communicate directly with each other (Device 2 could send a message to Device 1). But this feature has to be used sparingly. I consider it OK, for example, for the UI controller to send messages to all devices to exit, but not for Device 2 to send a "Move" command to Device 1.

Link to comment

I was thinking about this a while last night, and I wondered if the real value of “futures” is in defining an ordered grouping of otherwise independent asynchronous messages. Imagine, for example, that one process needs to make requests of several other processes, with responses to these requests being dealt with all at once.

post-18176-0-80164700-1340109411.png

The problem here is that the Response messages come individually and in any time order, meaning that “Consumer” needs to have logic to identify and store the messages, and determine when all are available.

The advantage of using an array of Futures here (passed between Requestor and Consumer) is the very fact that it is an array; it is grouped and has a defined order. Thus Consumer need only index out the elements of this array and need not have any complex logic.

The array of Futures serves to predefine a grouping of multiple asynchronous messages that have yet to be sent.

As is, the Futures have the downside of requiring potential blocking or polling in Consumer. However, this can be avoided by using a small helper process that is dedicated to waiting on the array of Futures and forwarding the resulting array of messages:

post-18176-0-33235000-1340110037_thumb.p

Note that the “Wait on Responses” Actor is serving to group and order the messages, before passing them to the Consumer. Requestor makes a set of requests, and Consumer receives a corresponding set of responses.
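The ordering argument can be illustrated with Python's built-in `concurrent.futures.Future`: the actors fill their futures after random delays, in arbitrary order, yet the consumer simply indexes the pre-ordered array. This is a sketch of the concept, not the LabVIEW implementation shown in the images:

```python
import random
import threading
import time
from concurrent.futures import Future

def actor(name, fut):
    """Each 'actor' fills its future after a random delay, so replies
    complete in arbitrary order."""
    def run():
        time.sleep(random.uniform(0, 0.1))
        fut.set_result(f"response from {name}")
    threading.Thread(target=run, daemon=True).start()

# Requestor: create an ordered array of futures, one per request.
futures = [Future() for _ in range(3)]
for name, fut in zip("ABC", futures):
    actor(name, fut)

# Consumer: index out the elements in order; no matching logic needed.
responses = [f.result(timeout=1) for f in futures]
print(responses)        # always ordered A, B, C regardless of fill order
```

The array itself carries the grouping and the ordering, so the consumer needs no logic to identify, store, or match individual responses.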

— James

Link to comment

In fact the messages look nothing like yours. They would be of the form "TARGET->SENDER->CONTROL->OPERATION->PAYLOAD".

I think I need to spend time revisiting your dispatcher in the CR. It looks like your messages include destination information that allows the dispatcher to read and route the message appropriately? That certainly allows you to write a generic dispatcher, but I still don't see how you manage to customize a particular message's destination for each application. There needs to be glue code somewhere. Nevertheless, I'll put off further questions until I review your dispatcher code.

Hmm. For most UI stuff I don't use state machines.

I don't either, but in this app the UI very clearly has different display states so I went ahead and refactored it into a state machine.

The state machines (or whatever) are contained in the UI controller.

Same for me. The difference is while my UI controller is logically separated from my UI front panels, it is not necessarily physically separated from them. Often the UI controller is implemented on the same block diagram as the UI. However, the UI controller loop only interacts with front panel controls via messages to/from the other UI loops.

The UI itself just sends messages (it doesn't generally receive them)... It gets a bit icky at that point since manipulation of the UI is contained in the controller due to LabVIEW's insistence on some controls using references (like tree controls).

I guess I don't understand the benefit of physically separating the code doing the low level UI display control from the display itself. Maybe if you reuse the same display in different applications...

The compromise I took was that the message also contains the name of the control that sent it, and the Controller updates the UI with property nodes. What this gives me is the ability to update the display via TCPIP.

This seems... peculiar? The notion of directly updating a display over TCPIP conflicts with how I define the responsibilities of components. For example, the networked component would send event messages to the UI controller (or AppController if there is one) like "OvenSetpointChanged." It's up to the UI/App controller to decide how to respond to the event.

Using a messaging "architecture" rather than "future" messages means that all the scenarios can be catered for.

Messaged Futures aren't intended to be a replacement for an asynchronous messaging system. They are an alternative to synchronous messages when you don't want your message handling loop to block. I agree there is a lot of potential to shoot yourself in the foot if overused.

However, for getting a status value from the Device, the UI could send that directly without going through the controller.

This comment reinforces my thought that we use fundamentally different messaging paradigms. Getting a status value implies query/response messages. I develop using request/event messages. In my apps the UI rarely needs to ask for a status value, as it would be automatically notified when it changed. I find it much easier to keep my components decoupled this way.

I was thinking about this a while last night, and I wondered if the real value of “futures” is in defining an ordered grouping of otherwise independent asynchronous messages.

I would say the "real value" of Futures is in the abstract concept. It's the idea of creating and holding an object that doesn't have the value you need yet, but it will sometime in the future. How you use it to solve a specific problem is something different and unique to each developer/project. I've decided to call what I've shown in this example a "messaged Future" (as in "a Future sent via messaging") in an attempt to separate the idea from the implementation.

As is, the Futures have the downside of requiring potential blocking or polling in Consumer. However, this can be avoided by using a small helper process that is dedicated to waiting on the array of Futures and forwarding the resulting array of messages:

I did think about that while writing my solution. Unless I'm misunderstanding you, this is an extension of normal query/response messaging patterns. There are organizational advantages (or disadvantages, depending on one's preferences) to using a helper actor in that it physically separates the code responsible for receiving the responses from the Consumer (or Requestor). Other than that you still have to write all the same functional code that you would if you had the Consumer collecting the responses, and Futures become unnecessary. (Unless, as you pointed out, there was some metadata associated with the responses that couldn't be retained in the response itself or reasonably communicated to the helper via a message.)

If you're already using an asynchronous query/response messaging paradigm there is probably limited value in messaging Futures. An event-based messaging system occasionally benefits from query/response style messages, and messaging Futures is a way to get the functional equivalent without implementing query/response logic, in certain situations.

Link to comment

I would say the "real value" of Futures is in the abstract concept. It's the idea of creating and holding an object that doesn't have the value you need yet, but it will sometime in the future.

Yeah, but how useful is that in LabVIEW?

The basic use for “futures” in step-by-step text languages is very similar to the dataflow already present in LabVIEW. Only once we’re talking about message-handling loops does a “future” become interesting, and in that case it’s hard to see how useful they are when we’re already using asynchronous messaging. In your example application, it’s only the fact that you need multiple TransformData messages for only one DoCalc message that makes the futures solution interesting. It’s that you can pass an array of futures to DoCalc, and thus gather your four separate TransformData responses together, that is something you can’t otherwise do as easily.

Other than that you still have to write all the same functional code that you would if you had the Consumer collecting the responses, and Futures become unnecessary. (Unless, as you pointed out, there was some metadata associated with the responses that couldn't be retained in the response itself or reasonably communicated to the helper via a message.)

Not really. The helper actor I’m thinking of would be fully generic and reusable; it would be dynamically launched and configured with an array of Futures and index over them to get the array of messages. Its API would be very simple.

...and messaging Futures is a way to get the functional equivalent without implementing query/response logic, in certain situations.

I noticed that your futures were very similar to the “message reply” system I use. I attach a “return address” to the message, and you attach the future. Both allow the direction of responses to arbitrary recipients. Though with futures, the recipient has to be written to specifically handle futures, while with replies it’s just an ordinary message.
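The "return address" alternative mentioned here can be sketched as a message that carries its own reply queue; the recipient needs no special future-handling code, and the reply arrives as an ordinary message. All names below are hypothetical:

```python
import queue
import threading

def server(inbox):
    """Message handler supporting replies via a 'return address'
    attached to each request (the reply-to pattern described)."""
    while True:
        msg = inbox.get()
        if msg is None:                 # shutdown sentinel
            break
        payload, reply_to = msg
        reply_to.put(payload.upper())   # the reply is just an ordinary message

inbox = queue.Queue()
threading.Thread(target=server, args=(inbox,), daemon=True).start()

reply_q = queue.Queue()                 # the 'return address'
inbox.put(("hello", reply_q))
print(reply_q.get(timeout=1))           # HELLO
inbox.put(None)
```

With a future, the filler writes into a token the requestor created; with a return address, it sends to a queue the requestor named. Both direct the response to an arbitrary recipient, as noted above.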

— James

Link to comment

The helper actor I’m thinking of would be fully generic and reusable; it would be dynamically launched and configured with an array of Futures and index over them to get the array of messages. Its API would be very simple.

So I took the time to actually do it. Reworked the prototype “Futures” implementation I mentioned at the start of this conversation so that it had a helper actor.

post-18176-0-16493400-1340204655_thumb.p

The above code implements this diagram (though I didn’t make the “Requestor” a message handler, it could be):

post-18176-0-79753100-1340204689_thumb.p

Note the random delays in the three Actors; the reply messages are sent in arbitrary order, yet the set of messages received by the Consumer are always ordered A, B, C.

The “helper actor” (not really a full actor, just an async subVI) is quite simple (though I have yet to complete error handling):

post-18176-0-80263800-1340205280.png

“Redeem Future Tokens.vi” both waits for the futures to be filled, and destroys the Future Token (internally, the Future is a single-element queue). This deliberately makes it impossible to use polling on the Future.
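The described behaviour, a future backed internally by a single-element queue whose token is destroyed on redeem so that polling is impossible, might look like this in a Python sketch (`FutureToken` and its methods are assumptions mirroring the VI's description, not its actual code):

```python
import queue

class FutureToken:
    """Sketch of a future backed by a single-element queue; redeeming
    destroys the token so it cannot be polled or redeemed twice."""
    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def fill(self, value):
        self._q.put_nowait(value)

    def redeem(self, timeout=None):
        if self._q is None:
            raise RuntimeError("token already destroyed")
        q, self._q = self._q, None      # destroy the token on first redeem
        return q.get(timeout=timeout)   # blocks until filled (or timeout)

tok = FutureToken()
tok.fill("data")
print(tok.redeem())                     # data
# tok.redeem()                          # would raise: the token no longer exists
```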

— James

Link to comment

Yeah, but how useful is that in LabVIEW?

Well, it was really useful for me 7 months ago. :P Beyond that it hasn't been particularly useful... yet.

The basic use for “futures” in step-by-step text languages is very similar to the dataflow already present in LabVIEW.

I agree, but you're focusing on how they are implemented and used in other languages, not on the conceptual idea they represent. The whole idea of blocking while waiting for a Future to be filled implies manipulating data in space. In other words, data in this thread needs to be given to that thread. We already have lots of ways of moving data in space: queues, notifiers, network streams, etc. We don't need more of them.

Conceptually the purpose of Futures is to manipulate data in time. I can grab a piece of data and do things with it. I can copy it. I can send it somewhere else. I can even "perform calculations" on it--using decorators--if I want to. I can do all of this before I know what the value of the data is, or even before the value has been assigned. The idea's applicability isn't restricted to asynchronous applications. It can be used any time you know what you want to do with the data before you know what the data actually is, even in single threaded applications.
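The idea of performing calculations on a value before it exists can be shown by composing a transformation onto an unfilled future via a callback; `map_future` below is a hypothetical helper for illustration, not part of any library discussed:

```python
from concurrent.futures import Future

def map_future(fut, fn):
    """Hypothetical 'decorator': build a new future whose value will be fn
    applied to fut's eventual value, composed before either value exists."""
    out = Future()
    fut.add_done_callback(lambda f: out.set_result(fn(f.result())))
    return out

raw = Future()
doubled = map_future(raw, lambda x: x * 2)  # decided before raw has a value
raw.set_result(21)                          # the value arrives later in time
print(doubled.result())                     # 42
```

The calculation on `doubled` was specified at a point in time when `raw` held no data at all, which is the "manipulating data in time" notion in a nutshell.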

Manipulating data in space and manipulating data in time are separate (but not completely independent) issues. In my example I'm combining Futures with messaging to manipulate data in both time and space. The simplest way to implement what I needed was by using a by-ref data object, but it's not the only way to implement them, nor is it a requirement for something to be a Future. The implementation details depend on how you need to manipulate the data through space and time.

For me, learning about Futures opened me up to thinking about problems in a different way. That's why I said the real value is in the concept as opposed to any specific way the concept is used.

[The helper actor's] API would be very simple.

The “helper actor” (not really a full actor, just an async subVI)

This is where the potential problem lies. Its API is too simple and loses robustness. I assume Redeem Future Tokens.vi blocks execution waiting until the Future is filled. What happens if a Future is never filled? Maybe an actor has an error and shuts down after the message is sent but before it has a chance to fill the Future. How do you tell the async subvi to quit waiting so your application has a chance to recover? Granted, the example is simple enough that the correct behavior can be verified visually, but in a larger application that won't necessarily be true.

Off the top of my head I can think of two options if you don't want to poll the Future:

1. Use a "fail and notify" system like I mentioned earlier. The async subvi waits for a finite amount of time for the data, and if the data isn't ready it automatically fails and alerts the Requestor of the failure.

2. Promote the async subvi to an actor by implementing a message handler. Add an "Exit" message so it can accept an external shutdown message. Implement code tracking the progress of the Futures through the application so someone can figure out whether or not they need to send an Exit message to the helper actor.

Both options require adding more overhead code to manage the process. I think overall Option 1 requires roughly the same amount of code as having the Requestor receive by-value responses from the actors, while at the same time obscuring the code's intent and execution flow a bit more. Option 2 seems to require a lot more code for little benefit. I'd have to have a really good reason for choosing that option.

Link to comment

So I took the time to actually do it. Reworked the prototype “Futures” implementation I mentioned at the start of this conversation so that it had a helper actor.

post-18176-0-16493400-1340204655_thumb.p

The above code implements this diagram (though I didn’t make the “Requestor” a message handler, it could be):

post-18176-0-79753100-1340204689_thumb.p

Note the random delays in the three Actors; the reply messages are sent in arbitrary order, yet the set of messages received by the Consumer are always ordered A, B, C.

The “helper actor” (not really a full actor, just an async subVI) is quite simple (though I have yet to complete error handling):

post-18176-0-80263800-1340205280.png

“Redeem Future Tokens.vi” both waits for the futures to be filled, and destroys the Future Token (internally, the Future is a single-element queue). This deliberately makes it impossible to use polling on the Future.

— James

If you also route the "requestor" requests through the "Wait on Responses" (no need for your dotted line then) the you end up with the "Dispatcher" that I've been describing.

Edited by ShaunR
Link to comment

This is where the potential problem lies. Its API is too simple and loses robustness. I assume Redeem Future Tokens.vi blocks execution waiting until the Future is filled. What happens if a Future is never filled? Maybe an actor has an error and shuts down after the message is sent but before it has a chance to fill the Future. How do you tell the async subvi to quit waiting so your application has a chance to recover? Granted, the example is simple enough that the correct behavior can be verified visually, but in a larger application that won't necessarily be true.

I was just going to use the Timeout, which would throw an error message.

Off the top of my head I can think of two options if you don't want to poll the Future:

1. Use a "fail and notify" system like I mentioned earlier. The async subvi waits for a finite amount of time for the data, and if the data isn't ready it automatically fails and alerts the Requestor of the failure.

Simpler to throw the error message downstream to the Consumer. One could add another input for a queue to send the error messages, but I’m thinking of going the simple route. If the Consumer is a standard Actor design of mine, it will publish received error messages, and Requestor can register for error messages if it wants them.

2. Promote the async subvi to an actor by implementing a message handler. Add an "Exit" message so it can accept an external shutdown message. Implement code tracking the progress of the Futures through the application so someone can figure out whether or not they need to send an Exit message to the helper actor.

I have a “Cancel Future” VI that can be applied to invalidate the future tokens if one needs this. This immediately causes the helper to error out and shutdown, having the same effect as an “Exit” message. If the VI hierarchy that created the futures goes idle, that will also invalidate the queue references inside the futures and shut the helper down. So “Exit” functionality is already there if you want it and there is an automatic exit feature. Otherwise there is the timeout.
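This "Cancel Future" behaviour, where invalidating the tokens makes any blocked helper error out immediately, has a close analogue in Python's `concurrent.futures` (a sketch; `cancel_futures` is an assumed helper name):

```python
from concurrent.futures import CancelledError, Future

tokens = [Future() for _ in range(3)]

def cancel_futures(futs):
    """Assumed 'Cancel Future' operation: invalidate every token so any
    helper blocked on them errors out immediately."""
    for f in futs:
        f.cancel()                  # succeeds because nothing is running yet

cancel_futures(tokens)
try:
    tokens[0].result(timeout=0)     # a helper waiting here would hit this
except CancelledError:
    print("helper errors out and shuts down")
```

Cancellation thus provides the same effect as an explicit "Exit" message, without the helper needing its own message handler.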

Both options require adding more overhead code to manage the process. I think overall Option 1 requires roughly the same amount of code as having the Requestor receive by-value responses from the actors, while at the same time obscuring the code's intent and execution flow a bit more. Option 2 seems to require a lot more code for little benefit. I'd have to have a really good reason for choosing that option.

But the helper is reusable. Once it works, I don’t care how complex it is internally because no-one needs to look inside it. And I only have to write it once; “Requestor” is code that needs to be written for each application. Instead of internal complexity, I care about the clarity and simplicity of the API.

If you also route the "requestor" requests through the "Wait on Responses" (no need for your dotted line then) the you end up with the "Dispatcher" that I've been describing.

I had meant to ask you if your framework supports replies to messages. I would imagine it does if your messages are of the form “Target->Sender…” and can easily be reversed. But can your dispatcher gather replies into ordered groups?

Link to comment
