Futures - An alternative to synchronous messaging

Does anyone else use “Futures”-like features in their code? If so, a question about terminology. Four years after this conversation, “Future Tokens” are a significant part of my reuse code (“Messenger Library”), and I use them often for handling multiple “actor” loops as arrays. I was just reading the docs for the Akka actor extension for Java and Scala, and they use slightly different terminology for their equivalent feature. They “await” a “future”, while I “redeem” a “future token”. Which is better, or more intuitive?

I like “token” because this is a physical thing that represents something else (a message) that has yet to exist.  But I like “await” because it stresses the fact that we block until the message actually arrives.
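As a rough text-language sketch of these semantics (Python stands in for LabVIEW here, and the names are purely illustrative, not the actual Messenger Library API), a future token is essentially a one-element, write-once queue that the receiver blocks on:

```python
import threading

class FutureToken:
    """A write-once, destroy-on-reading container: a one-element queue."""
    def __init__(self):
        self._filled = threading.Event()
        self._value = None

    def fill(self, value):
        # Write once: the sender delivers the message that did not yet exist.
        self._value = value
        self._filled.set()

    def redeem(self, timeout=None):
        # Block until the message actually arrives ("await"), then consume it.
        if not self._filled.wait(timeout):
            raise TimeoutError("future token was never filled")
        value, self._value = self._value, None  # destroy on reading
        return value

token = FutureToken()
threading.Thread(target=lambda: token.fill("reply")).start()
print(token.redeem(timeout=1.0))  # prints "reply"
```

The terminology question is visible right in the method names: `redeem` emphasises the token being exchanged for the message, while Akka's `await` would emphasise the blocking in `wait`.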

Edited by drjdpowell

1 hour ago, drjdpowell said:

Does anyone else use “Futures”-like features in their code? If so, a question about terminology. Four years after this conversation, “Future Tokens” are a significant part of my reuse code (“Messenger Library”), and I use them often for handling multiple “actor” loops as arrays. I was just reading the docs for the Akka actor extension for Java and Scala, and they use slightly different terminology for their equivalent feature. They “await” a “future”, while I “redeem” a “future token”. Which is better, or more intuitive?

I like “token” because this is a physical thing that represents something else (a message) that has yet to exist.  But I like “await” because it stresses the fact that we block until the message actually arrives.

It doesn't matter. If it's for communication, then prepare the listener for your definition (whatever that may be), use it, and move on to the important things about the code; just be consistent.

I get fed up being asked to ponder philosophical significance in OOP.

Edited by ShaunR


Wow... was it really four years ago that we talked about this?  Time flies when you get old...

I don't think I've used futures again since this thread** so I can't speak to the attitudes of the larger community, but my terminology preference is "future token" and "redeem."  I do agree with Shaun though, it probably doesn't matter as long as you communicate it effectively and are consistent.

[**My development focus over the last several years has shifted from pushing boundaries on actor-oriented and messaging systems to figuring out how to refactor, componentize, and deploy an NI-RT application across different target platforms.]

 



[**My development focus over the last several years has shifted from pushing boundaries on actor-oriented and messaging systems to figuring out how to refactor, componentize, and deploy an NI-RT application across different target platforms.]

Have you built a VM with the NI-RT Linux? I had a go but they are using an old-ass version of Yocto.

12 hours ago, Daklu said:

I can't speak to the attitudes of the larger community, but my terminology preference is "future token" and "redeem."  I do agree with Shaun though, it probably doesn't matter as long as you communicate it effectively and are consistent.

Searching “Futures” implementations in other languages, I couldn’t find a standard set of terms.  Some even used the generic and non-evocative “Get” for the part where one waits for a future to become an actual thing.  So I’ll stick with redeeming my tokens.  Thanks.


Interesting discussion.

Seeing how much I utilise user events for inter-process communication, spawning callbacks dynamically (which then write to a notifier, or whatever method is preferred) means it should be rather simple to implement this feature. I'm hugely in favour of callbacks for this functionality anyway, because of the ability to properly hide the user event refnum from the listener (a major leak in the otherwise very useful implementation of user event registrations). I might just give it a try at some stage.


My “reply addresses” in “Messenger Library” are actually also callbacks. Callbacks are very flexible.

Just as a note, my experience with “futures” (implemented as write-once, destroy-on-reading queues) is that I mostly use them to easily implement the Scatter-Gather messaging pattern, plus occasionally a Resequencer. Both come up when one is interacting with multiple loops and reply messages can arrive in arbitrary order; “future tokens” allow enforcement of a specified order.
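A minimal sketch of that Scatter-Gather use, again in Python as a stand-in for the LabVIEW implementation (names are illustrative only): one token is scattered to each loop, and gathering by redeeming the tokens in order resequences the replies.

```python
import random
import threading
import time

class FutureToken:
    """Write-once, destroy-on-reading queue standing in for a not-yet-sent reply."""
    def __init__(self):
        self._filled = threading.Event()
        self._value = None
    def fill(self, value):
        self._value = value
        self._filled.set()
    def redeem(self, timeout=None):
        if not self._filled.wait(timeout):
            raise TimeoutError("future token was never filled")
        return self._value

def actor_loop(index, token):
    # Each "loop" replies after an arbitrary delay, so replies
    # arrive in arbitrary order.
    time.sleep(random.uniform(0.0, 0.05))
    token.fill(f"reply from loop {index}")

# Scatter: send one request (carrying a token) to each of three loops.
tokens = [FutureToken() for _ in range(3)]
for i, tok in enumerate(tokens):
    threading.Thread(target=actor_loop, args=(i, tok)).start()

# Gather: redeeming in token order enforces a defined order,
# regardless of when each reply actually arrived.
replies = [tok.redeem(timeout=2.0) for tok in tokens]
print(replies)
```

The gathered array always comes out in loop order 0, 1, 2, which is the whole point: the tokens, not the arrival times, define the order.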

On 11/23/2016 at 2:25 AM, ShaunR said:

I get fed up being asked to ponder philosophical significance in OOP.

You must be constantly fed up :)

On 11/27/2016 at 1:44 AM, crelf said:

You must be constantly fed up :)

You haven't noticed? :D It probably has something to do with being one of the 5%. ;)

Edited by ShaunR


I didn’t get the OOP reference, and don’t want the casual reader to think that “Futures” are an OOP concept.  They’re a Dataflow concept.  We don’t have “Futures” explicitly in LabVIEW, because every wire is implicitly a future.

34 minutes ago, drjdpowell said:

I didn’t get the OOP reference, and don’t want the casual reader to think that “Futures” are an OOP concept.  They’re a Dataflow concept.  We don’t have “Futures” explicitly in LabVIEW, because every wire is implicitly a future.

Why would you want a dataflow construct when the language supports it implicitly, then? Unless it is to fix the breaking of that dataflow caused by the LVPOOP ideology.

However, if you are trying to make a distinction between OOP and OOD, then I am in agreement, since OOP is not required for the latter.

Edited by ShaunR


Have you built a VM with the NI-RT Linux? I had a go but they are using an old-ass version of Yocto.

So, without having actually tried, I suspect I can help out with this. I have a cDAQ-9132 on hand, which is the x64 Atom-based controller running Linux RT. I feel like I should be able to install some software that allows me to take an image of the hard drive, and then boot that image in a VM. My concerns with this are the legality of me sharing it, and the distribution method for a large HD image.

3 hours ago, ShaunR said:

Why would you want a dataflow construct when the language supports it implicitly, then?

It doesn’t fully support it as message flow between loops. A message to a loop that prompts further messages from that loop is like a subVI call in regular LabVIEW programming, in that the action waits for the “input” to be available before it happens. However, this only works for ONE input; a subVI can wait for multiple inputs before it executes. My primary use of Futures is to extend this so that a loop can take an action only when multiple input messages have been received. It also helps with ordering arrays. If I call three subVIs in parallel and combine their outputs in an array, that array has a defined order. But if I send a message to three loops, the reply messages can come in arbitrary order, unless I use Futures to specify the order.

There are no OOP concepts involved.
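The multiple-input point above can be sketched in text form (Python standing in for LabVIEW; the message names are invented for illustration): like a subVI with two input terminals, the downstream action fires only once every input message has arrived, whichever order they arrive in.

```python
import threading
import time

class FutureToken:
    """Write-once container used to wait on one not-yet-received message."""
    def __init__(self):
        self._filled = threading.Event()
        self._value = None
    def fill(self, value):
        self._value = value
        self._filled.set()
    def redeem(self, timeout=None):
        if not self._filled.wait(timeout):
            raise TimeoutError("future token was never filled")
        return self._value

# Two independent loops will each send one input message.
config_msg = FutureToken()
data_msg = FutureToken()

threading.Thread(
    target=lambda: (time.sleep(0.02), config_msg.fill({"gain": 2}))).start()
threading.Thread(target=lambda: data_msg.fill([1, 2, 3])).start()

# Like a subVI with two inputs, the action fires only once
# ALL input messages have been received.
gain = config_msg.redeem(timeout=2.0)["gain"]
result = [x * gain for x in data_msg.redeem(timeout=2.0)]
print(result)  # prints [2, 4, 6]
```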

On ‎11‎/‎23‎/‎2016 at 2:08 PM, ShaunR said:

Have you built a VM with the NI-RT Linux? I had a go but they are using an old-ass version of Yocto.

No, I haven't tried to do this.  I didn't know ARM-based VMs were available.

3 hours ago, Daklu said:

No, I haven't tried to do this.  I didn't know ARM-based VMs were available.

That's not the idea. Getting an ARM emulator to run inside an x86 or x64 VM is probably a pipe dream.

However the higher end cRIOs (903x and 908x) and several of the cDAQ RT modules use an Atom, Celeron or better x86/64 compatible CPU with an x64 version of NI-Linux. That one should theoretically be possible to run in a VM on your host PC, provided you can extract the image.

13 minutes ago, rolfk said:

That one should theoretically be possible to run in a VM on your host PC, provided you can extract the image.

We are starting to get a bit off topic, so I posted what I tried for getting a VM of the Linux RT x64 running. (Spoiler: I haven't figured it out.)


2 hours ago, rolfk said:

That's not the idea. Getting an ARM emulator to run inside an x86 or x64 VM is probably a pipe dream.

I was thinking more along the lines of a direct ARM VM, not an ARM emulator running inside an x86 VM.  There are a few out there... I haven't had a chance to try any of them yet.

8 hours ago, Daklu said:

I was thinking more along the lines of a direct ARM VM, not an ARM emulator running inside an x86 VM.  There are a few out there... I haven't had a chance to try any of them yet.

But on which hardware? You can't run an ARM virtual machine on a PC without some ARM emulation somewhere. Your PC uses an x86/64 CPU that is architecturally very different from ARM, so there needs to be some kind of emulation in the stack: either an ARM VM inside an ARM-on-x86 emulator, or an ARM emulator inside the x86 VM.

There might be ways to achieve that with things like QEMU, ARMware and the like, but it is anything but trivial, and it is going to add even more complexity to the already difficult task of getting the NI Linux RT system running under a VM environment. Personally, I wonder if downloading the sources for NI Linux RT and recompiling them for your favourite virtual machine environment would not be easier! :D And no, I don't mean to imply that that is easy at all; just easier than also adding an emulation layer to the whole picture and getting that to work as well.

Edited by rolfk


I agree none of this is a trivial task and recompiling NI Linux RT to run on an x86 processor might be easier.  I haven't looked at the source code (and have never recompiled Linux), do you have any idea how much code would need to be changed to support an x86 processor?  My gut says more than I'd be comfortable with.

FWIW, QEMU looks the most promising of the VMs I looked at.

40 minutes ago, Daklu said:

I agree none of this is a trivial task and recompiling NI Linux RT to run on an x86 processor might be easier.  I haven't looked at the source code (and have never recompiled Linux), do you have any idea how much code would need to be changed to support an x86 processor?  My gut says more than I'd be comfortable with.

FWIW, QEMU looks the most promising of the VMs I looked at.

Actually, you should not really need to change anything code-wise. The Linux kernel sources can be compiled for just about any architecture out there, even CPUs for which you would nowadays be hard-pressed to find hardware to run them on. Of course, depending on where you got your kernel sources, they might not contain support for all possible architectures, but the kernel project supports a myriad of target architectures, provided you can feed the compiler toolchain the correct defines. Now, figuring out all the necessary defines for a specific piece of hardware is a real challenge; for many of them, the documentation really exists mostly in the source code only. This is where the various build systems come into play: they promise to make this configuration easier by letting you select settings from a menu and then generating the necessary build scripts to drive the C toolchain.

The real challenge is the configuration needed to tell the make toolchain which target architecture you want to compile for, which hardware modules to link statically, and which modules (if any) to compile as dynamic kernel modules. Without a thorough understanding of the various hardware components that are specific to your target, that can be a very daunting task. Obviously, for certain popular targets you will find sample configuration scripts more readily than for others. To make matters even more interesting, there isn't just one configuration/build system.

Yocto, which is what NI uses, used to be a pretty popular one for embedded systems a few years ago, but lost a bit of traction for a while. It seems to be active again, but the latest version is not backwards compatible with the version NI used for their NI Linux RT system, and NI probably does not see any reason to upgrade to the newest version as long as the old one works for what they need. It uses various tools from other projects, such as OpenEmbedded and BitBake, internally. Buildroot is another such build system for creating recipe-based builds of embedded Linux.

The real challenge is not changing the C code of the kernel to suit your specific hardware (that should basically not be necessary, except for adding driver modules for hardware components that the standard kernel does not support out of the box). It is getting the entire build toolchain installed correctly so that you can actually start a build successfully, and, once you have that, selecting the correct configuration settings so that the compiled kernel will run on your hardware target and not just panic right away.

This last part should be fairly simple for a VirtualBox VM, since the hardware that is emulated is very standard and shouldn't be hard to configure correctly.

 

 

On 11/28/2016 at 4:49 PM, drjdpowell said:

It doesn’t fully support it as message flow between loops. A message to a loop that prompts further messages from that loop is like a subVI call in regular LabVIEW programming, in that the action waits for the “input” to be available before it happens. However, this only works for ONE input; a subVI can wait for multiple inputs before it executes. My primary use of Futures is to extend this so that a loop can take an action only when multiple input messages have been received. It also helps with ordering arrays. If I call three subVIs in parallel and combine their outputs in an array, that array has a defined order. But if I send a message to three loops, the reply messages can come in arbitrary order, unless I use Futures to specify the order.

There are no OOP concepts involved.

It doesn't support it at all, because messaging is a method of breaking dataflow. If I am feeling generous, it is an equivalent of dataflow with one input to satisfy.

The idea that dataflow is "data flowing" (moving from one place to another) is a simplification used to teach the concepts. In fact, it is about "state". What defines a dataflow language is that program execution continues when all inputs are satisfied. Execution state is manhandled in other languages and paradigms, by ordering function calls (procedural) or unwinding a call stack (functional), and it still proves to be their main problem today. This is why we say that dataflow languages have implicit, rather than explicit, state: specifically, "execution state" is implicit, rather than "system state".

From this perspective, you have broken dataflow for excellent reasons and are proposing to add it back in with added complexity so that it "looks" like dataflow again: a problem of your own creation, like so many other mainstream, non-dataflow concepts when applied to LabVIEW. The solution will be a service, actor, or whatever you want to call it, that has visibility of global execution state. In classical LabVIEW we would just call a VI as non-reentrant from the three loops and allow the implicit nature to take care of the ordering and progress of the loops.

However, I understand the desire for "completeness" of your API, and that's fine. But Futures are a fix for yet another self-inflicted problem of OOP dogma, so I don't agree that there are no OOP concepts involved. In LabVIEW, futures are an architectural consideration, not one of implementation.

Edited by ShaunR


Sorry, Shaun, I didn’t really follow (though I happily nod along with you bringing up “execution state”).  I still don’t know what this has to do with OOP (or “services” or “non-reentrant subVIs” for that matter).  

2 hours ago, drjdpowell said:

Sorry, Shaun, I didn’t really follow (though I happily nod along with you bringing up “execution state”).  I still don’t know what this has to do with OOP (or “services” or “non-reentrant subVIs” for that matter).  

Since I haven't been able to successfully convey the fundamental difference between LabVIEW and, say, C[++], Pascal, and the many other procedural languages that OOP was proffered as a solution for, you should perhaps put my comments to one side while you fill out the feature set of the API.

 

I will leave you with this, though. Why isn't a VI an object?

Edited by ShaunR

13 hours ago, shoneill said:

Why isn't a VI an Object? It is, last time I looked.

Exactly!

