
Actor-Queue Relationship in the Actor Framework



I'm considering a third iteration of a messaging library we use at my company and I'm taking a close look at what I feel are the limitations of all the frameworks I use, including the Actor Framework. What are the perceived limitations? How can they be addressed? What are the implications of addressing them or not addressing them? I have a good laundry list for my own library, but since it's not widely used I'd like to focus on the Actor Framework in this post.

 

I'm wondering why the communication mechanism with an Actor is so locked down. As it stands the Message Priority Queue is completely sealed to the outside world since it's a private class of the Actor Framework library. The classes which are exposed, Message Enqueuer and Message Dequeuer, do not define a dynamic interface that can be extended. This seems entirely by design given how the classes are architected-- and it's one of the things that has resulted in a fair amount of resistance to me applying the AF wholesale to everything I do. Well that and there's a fair amount of inertia on projects which predate the AF.
 
Consider a task where the act of responding to a message takes considerable time. Enough time that one worries about the communication queue backing up. I don't mean anything specific by "backing up" other than for whatever reason the programmer expects that it would be possible for messages deposited into the communication queue to go unanswered for a longer than expected amount of time. There are a few ways to tackle this.
 
1) Prioritize. This seems already built into the AF by virtue of the priority queue so I won't elaborate on implementation. However prioritizing isn't always feasible; maybe allowing low priority messages to pile up, for example, is prohibitive from a memory or latency perspective. Or what if the priority of a message can change as a function of its age?
 
2) Timeouts. Upon receipt of a message the task only records some state information, then lets the message processing loop continue to spin with a zero or calculated timeout. When retrieving a message from the communication queue finally does produce a timeout, the expensive work is done and state info is updated which will trigger the next timeout calculation to produce a longer or indefinite value. I use this mechanism a lot with UI related tasks but it can prove useful for interacting with hardware among other things.
 
3) Drop messages when dequeuing. Maybe my task only cares about the latest message of a given class, so when a task gets one of these messages during processing it peeks at the next message in the communication queue and discards messages until we have the last of the series of messages of this class. This can minimize the latency problem but may still allow a significant backup of the communication queue. A backed-up queue might even be desired, for example if deciding whether to discard messages depends on state; but if your goal is to minimize the backup, dropping when dequeuing might not work.
 
4) Drop messages when enqueuing. Similar to the previous, maybe our task only cares about the latest message of a given class, but it can say with certainty that this behavior is state invariant. In that case, during enqueuing the previous message in the communication queue is peeked at and discarded if it's an instance of the same class before enqueuing the new message.
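To make option 4 concrete, here's a rough sketch of the idea in Python-style pseudocode (the AF is LabVIEW, so nothing below is its actual API; every class and method name is invented):

```python
import threading
from collections import deque

class CoalescingEnqueuer:
    """Hypothetical enqueuer that keeps only the newest pending message of
    certain classes. Not AF code; just the shape of the idea."""

    def __init__(self, coalesce_classes=()):
        self._queue = deque()
        self._cond = threading.Condition()
        self._coalesce = tuple(coalesce_classes)

    def enqueue(self, msg):
        with self._cond:
            if isinstance(msg, self._coalesce):
                # State-invariant rule: an older pending message of the
                # same class is superseded by this one, so drop it.
                self._queue = deque(
                    m for m in self._queue if type(m) is not type(msg))
            self._queue.append(msg)
            self._cond.notify()

    def dequeue(self, timeout=None):
        with self._cond:
            if not self._queue:
                self._cond.wait(timeout)
            return self._queue.popleft() if self._queue else None
```

The point is just that the coalescing rule lives in the enqueue path and never has to consult the receiver's state.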
 
These items aren't exhaustive but they frame the problem I've had with the AF-- how do we extend the relationship between an Actor and its Queues?
 
I'd argue one of the things we ought to be able to do is have an Actor specify an implementation of the Queue it wishes to use. As it stands the programmer can't without a significant investment-- queues are statically instantiated in Actor.vi, which is statically executed from Launch Actor.vi, and the enqueue/dequeue VIs are also static. Bummer. Basically to do any of this you're redefining what an Actor is. Seems like a good time to consider things like this though, since I'm planning on iterating an entire framework.
 
How would I do this? Tough to say. At first glance, Actor.vi is already supplied with an Actor object, so why couldn't it call a dynamic dispatch method that returns a Message Priority Queue rather than statically creating one? The default implementation of the dynamic method could simply call Obtain Priority Queue.vi as is already done, but it would allow an Actor's override to return an extended Queue class if it desired-- assuming the queue classes are made to have dynamic interfaces themselves. Allowing an actor to determine its timeout for dequeueing would also be nice.
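In rough textual pseudocode the shape I'm imagining looks like this (Python purely for illustration; every name here is hypothetical, not an existing AF VI or class):

```python
import queue

class MessageQueue(queue.Queue):
    """Stand-in for the AF's Message Priority Queue (illustration only)."""

class Actor:
    # --- proposed extension points ---------------------------------
    def create_message_queue(self):
        """Overridable factory; the default mirrors today's behavior of
        statically obtaining the stock priority queue."""
        return MessageQueue()

    def dequeue_timeout(self):
        """Overridable; None means wait forever, as the AF does today."""
        return None

    # --- core receive loop ------------------------------------------
    def on_timeout(self):
        pass                      # an override could do deferred work here

    def handle(self, msg):
        pass                      # stand-in for message dispatch

    def run(self):
        msgs = self.create_message_queue()
        while True:
            try:
                msg = msgs.get(timeout=self.dequeue_timeout())
            except queue.Empty:
                self.on_timeout()
                continue
            if msg == "stop":
                break
            self.handle(msg)

class MyActor(Actor):
    def create_message_queue(self):
        # An override could return an extended queue class instead, e.g.
        # one that coalesces duplicate messages on enqueue.
        return MessageQueue()
```

The default behavior stays exactly what it is today; a subclass only steps in when it actually wants a different queue or a finite timeout.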
 
I'm not saying these changes are required to do any of the behaviors I've outlined above; adding another layer of non-Actor asynchronicity can serve to get reasonably equivalent behavior into the existing AF in most cases. However the need to do this seems inelegant to me compared to allowing this behavior to be defined through inheritance, and it raises the question of whether the original task even needs to be an Actor in this case.
 
The argument can also be made: why should an Actor care about the communication queue? All an Actor cares about is protecting the queue, and it does so by only exposing an Enqueuer, not the raw queue. If a task is using an Actor and gums up the queue with spam by abusing the Enqueuer, that's its own fault and it should be the responsibility of the owning task to make sure this doesn't happen. This is a valid premise and I'm not arguing against it.
 
All for now, I'm just thinking out loud...
Link to comment

Yes, though that version posted is the first generation. The second generation I created never became public due to some limitations I was never satisfied with, mostly stemming from switching to a DVR based architecture. It was used quite successfully though by us internally.

Link to comment
I'm wondering why the communication mechanism with an Actor is so locked down.

 

Because (I believe) Stephen's focus was to create a framework that prevented users from making common mistakes he's seen, like having someone other than the actor close its queue.  I'm not criticizing his decision to go that route, but the consequences are that end users are limited in the kind of customizations they're able to easily do.

 

 

What are the perceived limitations?

 

  1. Lack of extendability and customizability.  More generally, it forces me to adapt how I work to fit the tool instead of letting me adapt the tool to fit how I work.
  2. It encourages a lot of static coupling.  Requires excessive indirection to keep actors loosely coupled. 
  3. Somewhat steep learning curve.  There's a lot of added complexity that isn't necessary to do actor oriented programming.  It uses features and patterns that are completely foreign to many LV developers.

 

 

it's one of the things that has resulted in a fair amount of resistance to me applying the AF wholesale to everything I do.

 

Ditto.  I've talked to several advanced developers who express similar frustrations.

 

 

Consider a task where the act of responding to a message takes considerable time.  There are a few ways to tackle this.

1. Timeouts

2. Priorities

3. Drop messages when enqueueing

4. Drop messages when dequeueing

 

This is typically only an issue when your message handling loop also does all the processing required when the message is received.  Unfortunately the command pattern kind of encourages that type of design.  I think you can get around it but it's a pain.

 

My designs use option 5,

 

5. Delegate longer tasks to worker loops, which may or may not be subactors, so the queue doesn't back up.

 

The need for priority queues is a carryover from the QSM mindset of queueing up a bunch of future actions.  I think that's a mistake.  Design your actor in a way that anyone sending messages to it can safely assume the message starts being processed instantly.

 

 

I'd argue one of the things we ought to be able to do is have an Actor specify an implementation of the Queue it wishes to use.

 

Agreed, though I'd generalize it a bit and say an actor should be able to specify its message transport whether or not it is a queue.

 

 

How would I do this?

 

I'd do it by creating a messaging system and building up actors manually according to the specific needs.  (Oh wait, I already do that. :) )

 

 

Allowing an actor to determine its timeout for dequeueing would also be nice.

 

That kind of customization perhaps should be available to the developer for special situations, but I'd heavily question any actor that has a timeout on the dequeue.  Actors shouldn't need a dequeue timeout to function correctly.

Link to comment

Before everything else: Have you looked at experimental version 4.3? Does the option to add actor proxies satisfy your use cases?

 

If that does not address your use cases...

There's a whole lot of thinking behind the walls in the Actor Framework. I'll try to walk through them.

 

Up front, I want to say that I'm totally open to changing parts of the AF... lots of it has already changed over the last two years of user feedback. These are the arguments for why it is the way it is now. They are not necessarily reasons for why it has to stay that way.

 

1) Assertions of correctness. Can you guarantee the correctness of a message queue that drops messages? Maybe but not necessarily... the message that gets dropped might be the Stop message. Allowing the pluggability of arbitrary communications layers into the framework breaks the assertions that allow the framework to make promises. I've tried to make sure that no one can accidentally reintroduce the errors that the AF is designed to prevent (a slew of deadlocks, race conditions and failures-to-stop, documented elsewhere). "The queue works like this" is a critical part of those assertions. What I found was that too much flexibility was *exactly* the problem with many of the other communications frameworks. When people tried to use them, they quickly put themselves in a bind by using aspects of the system without understanding the ramifications. This is an area where even very seasoned veterans have shown me code that works most of the time but fails occasionally... generally because of these weird timing problems that cropped up from mixing different types of communications strategies.

 

2) Learnability of apps written with the AF. My goal was to build up a framework that could truly be used by a wide range of users, such that a user studying an app written with the AF can rely on certain basics being true. I wanted debugging to be straightforward. I wanted a module written as an actor to be usable by other apps written as a hierarchy of actors. Plugging in an arbitrary communications link causes problems with that.

 

3) Prevent Emergency Priority Escalation. I went to a great deal of trouble to prevent anyone from sending messages other than Emergency Stop and Last Ack as emergency priority messages. Lots of problems arise when other messages start trying to play at the same priority level as those two. In early versions of the AF, I didn't have the priority levels at all, and when I added them, the successful broadcast of a panic stop was a major problem that I kept hearing about from users developing these systems. An actor that mucks with this becomes an actor that breaks the overall promise of the system to respond instantly to an emergency stop. "But I don't want my actor to respond to an emergency stop instantly!" Well, tough. Don't play in a system that uses emergency stops... play in a system that only sends regular stops or has some other custom message for stopping. Actors are much more reusable in other applications when they obey the rules laid down for all actors.

 

4) Maximize Future Feature Options. The Priority Queue class is completely private specifically because it was an area that I expected to want to gut at some point and put in something different. Maybe it gets replaced with primitives if LabVIEW introduces a native priority queue. Maybe it gets an entirely different implementation. I did not want anyone building anything that depended upon it because that would limit my ability to change that out for some other system entirely or to open up the API in a different way in the future. I firmly believe in releasing APIs that do *exactly* what they are documented to do and keeping as much walled off as possible so that once user experience feeds back to say, "This is what we would like better," you don't find yourself hamstrung by some decision you didn't intend to make just yet. When I add a flexibility point, I prefer to do it in response to some user need, not just on the off chance that someone might need it, because every one of those points of flexibility for an end user becomes a point of inflexibility for the API developer, and, ultimately, that limits the ability of the API to flex to meet use cases.

 

5) Paranoid about performance. Dynamic dispatching is fast on a desktop machine. Very low overhead. But I was writing a very low level framework. Every dispatch, every dynamic binding to a delegate, gets magnified when it is that deep in the code. I kept as much statically linked as possible, adding dynamic dispatching only when a use case required it.

 

6) Auto Message Dropping Is A Bad Idea. There's a long discussion about message filtration in the http://ni.com/actorframework forum. It's generally a bad idea to try to make that happen with any sort of "in the queue" system for any sort of command system. The better mechanism is putting the filtration into the handler by using state in the receiver... things like "Oh, I've gotten one of these recently and I'm still working on it so I'll toss this new one." Or by introducing a proxy message handler... a secretary, you might say... who handles messages. Putting such a proxy system together is what I was working on with people when we put together the networking layer that I published in January as version 4.2. (I added a cut point in response to a use case.)

 

7) Lack of use case for replacing the queues means lack of knowledge about the right way to add that option. Who is the expert about the type of communications queue? The sender? The receiver? Or the glue between them? MJE, you mention querying the actor object for which type of queue to use. Is that really the actor that should have the expertise? Perhaps Launch Actor.vi should have a "Queue Factory" input allowing the caller to specify what the comm link should be. Honestly, I don't know the right way to add it because no actual application that I looked at when modeling the AF had any need to replace the queue. What they generally needed instead was one type of queue instead of the three or four they were using (i.e. communications through a few queues, some events, a notifier or two, and some variables of various repute).
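Very roughly, and only to show where the decision would sit, the caller-supplied shape might look like this (Python pseudocode; every name is hypothetical, this is not a proposed API):

```python
import queue
import threading

class DefaultQueue(queue.Queue):
    """Stand-in for the stock priority queue (illustration only)."""

def launch_actor(actor_run, queue_factory=DefaultQueue):
    """Hypothetical "Queue Factory" input on Launch Actor: the caller,
    not the actor, chooses the communication link."""
    msgs = queue_factory()
    threading.Thread(target=actor_run, args=(msgs,), daemon=True).start()
    return msgs        # a real framework would hand back only an enqueuer
```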

 

And I just noticed Daklu's signature. In light of this discussion, it makes me giggle:

Yes, the QSM is flexible. So is Jello. That doesn't make it good construction material.
Link to comment

In the framework I’ve developed, I get a lot of use out of subclassing the central enqueuer-like class (called, perhaps too simply, “Send”).  Below is the class hierarchy.  But “assertions of correctness”, what’s that?  Breaking down some walls will certainly lose something to what AQ is trying to do.  Personally, I think the tradeoff in flexibility would be worth it, but it would mean that that flexibility would be used to build some problematic code.  

 

[Attached image: class hierarchy of the "Send" class and its subclasses]

 

 

Link to comment
This is typically only an issue when your message handling loop also does all the processing required when the message is received.  Unfortunately the command pattern kind of encourages that type of design.  I think you can get around it but it's a pain.

 

My designs use option 5,

 

5. Delegate longer tasks to worker loops, which may or may not be subactors, so the queue doesn't back up.

 

The need for priority queues is a carryover from the QSM mindset of queueing up a bunch of future actions.  I think that's a mistake.  Design your actor in a way that anyone sending messages to it can safely assume the message starts being processed instantly.

 

I particularly second this.  Actors should endeavor to read their mail promptly, and the message queue should not double as a job queue.

Link to comment

Excellent feedback, thank you.
 

Lack of extendability and customizability.  More generally, it forces me to adapt how I work to fit the tool instead of letting me adapt the tool to fit how I work.


Indeed. As soon as I find myself doing this I stop and instead ask, "Am I sure this is the right tool for the job?" Usually the answer is "No".
 

This is typically only an issue when your message handling loop also does all the processing required when the message is received.  Unfortunately the command pattern kind of encourages that type of design.  I think you can get around it but it's a pain.
 
My designs use option 5,
 

5. Delegate longer tasks to worker loops, which may or may not be subactors, so the queue doesn't back up.

 

 
 

I particularly second this.  Actors should endeavor to read their mail promptly, and the message queue should not double as a job queue.


Well the act of delegating to a private subActor (or any private secondary asynchronous task) only hides the extra layer. The public Actor might indeed respond to the message in short order, but there's no getting around the fact that actually acting on that message takes time. Ultimately if some sort of message filtering has to be done at any layer because it just doesn't make sense to process everything, you're back to the original argument. If I can't do stuff like this easily with an Actor and my Actors are just hollow shells for private non-Actor tasks, I might not see a benefit to even using the Actor Framework in these cases.

The opposite argument, that it's the responsibility of the task generating the messages to throttle them at an appropriate rate, doesn't help either: the throttling still needs to happen. It's also dangerous because it creates a type of coupling between the two tasks where the source of the messages needs to be aware of the approximate frequency with which it can send messages. Who is to say this frequency doesn't change as a function of implementation or, worse, state?
 

 
Agreed, though I'd generalize it a bit and say an actor should be able to specify its message transport whether or not it is a queue.

 
 Absolutely. I suppose I got tripped up on semantics. I didn't mean to imply any transport mechanism had to be a queue. Ideally I think an Actor should be dealing with an abstracted interface where all it cares about is a method to get the next message.
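Something like this minimal interface is what I have in mind (Python for illustration only; these classes don't correspond to anything in the AF):

```python
from abc import ABC, abstractmethod
import queue

class MessageSource(ABC):
    """All the actor sees: a way to get the next message."""
    @abstractmethod
    def next_message(self, timeout=None):
        ...

class LocalQueueSource(MessageSource):
    """One possible transport; a TCP- or network-backed source could
    implement the same interface without the actor knowing."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):                   # used by the sending side
        self._q.put(msg)

    def next_message(self, timeout=None):
        try:
            return self._q.get(timeout=timeout)
        except queue.Empty:
            return None
```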
 

...but I'd heavily question any actor that has a timeout on the dequeue.  Actors shouldn't need a dequeue timeout to function correctly.

 
Interesting assertion, care to elaborate?
 

Before everything else: Have you looked at experimental version 4.3? Does the option to add actor proxies satisfy your use cases?


Nope. I'll give it a spin as soon as possible. Regarding the rest of your reply, I intend to follow up tonight. For now I'm out of time and can't address the post in its entirety as this pesky thing called "work" demands my attention.

Link to comment
And I just noticed Daklu's signature. In light of this discussion, it makes me giggle:

 

Just out of curiosity, giggle in an "I agree" way or giggle in an "how ironic" way?

 

 

The public Actor might indeed respond to the message in short order, but there's no getting around the fact that actually acting on that message takes time.

 

True, but your original concern was with the message queue getting backed up, not actor responsiveness in general.  Designing an actor such that the message queue routinely gets backed up during normal operation creates the additional complexities you identified:

 

-What is the correct behavior when the queue is full?

-How do you enable senders to send special requirement messages like priority messages or "only do this once regardless of how many times it appears in the queue" messages?

 

Neither of these questions has an easy answer, but they must be addressed if your actor operates on the "job queue" principle.  Separating an actor's implementation into a message handling loop and "everything else" makes those questions irrelevant.

 

Ultimately if some sort of message filtering has to be done at any layer because it just doesn't make sense to process everything, you're back to the original argument.

 

Not necessarily.  An actor may receive messages faster than they can be processed sequentially, but if they can be processed concurrently by delegating to multiple loops then no throttling is required.  Fundamentally an actor is just an abstraction of some concurrent computation.  Internally the actor may implement a single loop or it may implement multiple loops.  External code doesn't know or care how many loops it uses.

 

In cases where the messages must be processed sequentially, then yes, some sort of filtering may be required.  Even delegating to a single subactor has benefits.  The filtering can be implemented in the actor instead of trying to encode it in the messaging system or leaving it up to the senders.
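As a bare-bones sketch of that receiver-side filtering (Python pseudocode, all names invented), the actor's own state decides whether a newly arrived message is delegated or simply dropped:

```python
class DisplayActor:
    """Illustration only. The message handling loop calls these handlers
    one at a time; the flag below is the filtering state."""

    def __init__(self, helper):
        self.helper = helper            # worker that does the slow redraw
        self.redraw_pending = False

    def on_redraw_request(self, msg):
        if self.redraw_pending:
            return                      # still busy with the last one; drop it
        self.redraw_pending = True
        self.helper.submit(msg)         # delegate the expensive work

    def on_redraw_done(self, msg):
        self.redraw_pending = False     # ready to honor the next request
```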

 

 

The opposite argument, that it's the responsibility of the task generating the messages to throttle them at an appropriate rate, doesn't help either: the throttling still needs to happen. It's also dangerous because it creates a type of coupling between the two tasks where the source of the messages needs to be aware of the approximate frequency with which it can send messages. Who is to say this frequency doesn't change as a function of implementation or, worse, state?

 

I agree.  I'll argue against any claim that it's the responsibility of either the sender or the receiver to make sure the queue doesn't overflow.  In the general case senders won't know what messages the receiver is getting from other sources, so they have no way of choosing an appropriate rate to send messages.  On the receiving side, all messages take some finite amount of time to process and the receiver cannot prevent a hundred senders from flooding its message queue.  Neither the sender nor the receiver has the ability to prevent a queue flood; therefore, neither of them has the responsibility to prevent it.

 

So who is responsible?  At the risk of being trite, it's the responsibility of the developer who is combining the actors into a working system.  As a practical matter no component is going to understand a complex system well enough to automatically throttle messages through all possible system states.  I suppose it's possible to build one, but I think it would add a lot of complexity for relatively little value.

 

In a hierarchical messaging system you can implement some throttling/filtering in any of the actors along the message chain.  Common places where I've implemented it are in the sender's owner (first hop), the receiver's owner (last hop), or in the lowest actor in the hierarchy that owns both the sender and receiver.  In direct messaging systems where each actor sends messages directly to the intended receiver with no intermediate actors, the options are more limited.

 

 

Ideally I think an Actor should be dealing with an abstracted interface where all it cares about is a method to get the next message.

 

I've avoided attempts to create a generalized message transport interface that can use any of the two dozen options available.  I'm of the opinion that there are too many idiosyncrasies with each transport that the sender and receiver must know about for it to be worthwhile.  (i.e. An actor whose message queue is a TCP connection is written differently than an actor whose message queue is a global variable.)  Furthermore, I don't see a lot of practical situations where there's a need to be able to change the transport.  I'm sure there are some, but I think it's better to handle them on a case by case basis rather than trying to build a general purpose message transport interface.

 

 

...but I'd heavily question any actor that has a timeout on the dequeue. Actors shouldn't need a dequeue timeout to function correctly
Interesting assertion, care to elaborate?

 

Sure.  One of the fundamental axioms of actor-oriented programming is that an actor can receive any message at any time. The actor can't control when it receives messages, so any code set to execute on a timeout runs the risk of being starved.  If the timeout code is critical to the proper operation of the actor and it doesn't get executed, the actor may not behave as expected.

Link to comment
Well the act of delegating to a private subActor (or any private secondary asynchronous task) only hides the extra layer. The public Actor might indeed respond to the message in short order, but there's no getting around the fact that actually acting on that message takes time. Ultimately if some sort of message filtering has to be done at any layer because it just doesn't make sense to process everything, you're back to the original argument. If I can't do stuff like this easily with an Actor and my Actors are just hollow shells for private non-Actor tasks, I might not see a benefit to even using the Actor Framework in these cases.

 

I was thinking more of the use of a message queue as a job queue for the actor, rather than what to do about filtering messages, but the general idea would be to have the actor’s message handler serve as supervisor or manager of a specialized process loop.   The manager can do any filtering of messages, if needed, or it can manage an internal job queue.  It can also handle aborting the specialized process by in-built means that can be more immediate than a priority message at the front of the queue (like an “abort” notifier, or directly engaging a physical safety device).  It wouldn’t be a hollow shell.

Link to comment
 If the timeout code is critical to the proper operation of the actor and it doesn't get executed, the actor may not behave as expected.

mje was talking about using a “zero or calculated timeout”, a technique I’ve also used.  If a zero timeout never executes your actor’s gonna fail anyway.

Link to comment
mje was talking about using a “zero or calculated timeout”, a technique I’ve also used.

 

It doesn't matter what the value of the timeout is.  If there is code that only executes after n ms of waiting to dequeue a message there is a non-zero chance the code will not execute.

 

 

If a zero timeout never executes your actor’s gonna fail anyway.

 

You must be making this statement in a context I'm not aware of.  I can't figure out what you're trying to say.

 

I don't think you're claiming it is universally true.  That would be akin to saying, "Every actor must have a timeout case that executes or else it will fail."  Clearly that is false.  You could be claiming that's true for implementations that use the "zero or calculated timeout" technique.  But then your statement boils down to, "In implementations where executing the code in the zero timeout is critical to the correct operation of the actor, if the zero timeout code never executes your actor will fail."  That's true, but it's not particularly informative.  Can you explain?

 

I do think putting critical code in a timeout case is a questionable design practice, but I didn't claim using a timeout is always wrong.  I said if I see a timeout attached to the actor's message handling loop it triggers a bunch of other questions that have to be answered before I'm comfortable allowing it.

Link to comment
You must be making this statement in a context I'm not aware of.  I can't figure out what you're trying to say.

I mean if you can’t service the queue as fast as elements are added, then you’ll eventually run out of memory.

 

— James

 

PS, if you recall this conversation we had, one can use a timeout in a way that is guaranteed to execute the desired code on time

Link to comment
 Personally, I think the tradeoff in flexibility would be worth it, but it would mean that that flexibility would be used to build some problematic code. 

What use cases are you trying to solve? I want paragraphs describing particular functionality that you cannot achieve with the AF as it stands before we introduce new options. I created the AF in response to one repeated observation: many users need to build parallel actor-like systems, but it takes lots of time to design one that is actually stable, and it is incredibly easy to destabilize one with the addition of features. I've built a few of these systems both with the AF and with other communications systems, and they are *hard* to debug, simply because of the nature of the problem. The more options that exist, the more you have to check all the plumbing when considering what could be wrong. We need the plumbing to be invisible!

 

I stuck "learnability" as one of the AF's top priorities. I get mocked for that claim sometimes ("You call this learnable?!") but when compared to the nature of the problem, yes, it is a very approachable solution. Introducing options is a bad thing unless we are solving a real need. So don't tell me "I can't do filtering on the queue," because that's a solution. Instead, tell me "I can't process messages fast enough" or "I need to only handle one copy of a given message every N seconds". And then we can talk through how best to implement it. In the case of filtering, there's a fairly long thread on the AF forums about various ways to do this with the current AF, and general agreement that those are *good* ways, not hacks or workarounds to compensate for a hole in the AF.

 

I'm not criticizing his decision to go that route, but the consequences are that end users are limited in the kind of customizations they're able to easily do.

But are they limited in the types of applications they are able to write? That's the real question. Yes, the AF demands a particular programming style. That consistency is part of what makes an AF app learnable -- all the parts work the same way. If there is something that cannot be written at all with the AF, that's when we talk about introducing a new option.

 

So, please, spell out for me the functionality you're trying to achieve. In terms of filtration, I think that's been amply (and successfully) answered. In terms of proxying, take a look at version 4.3. If there's something else, let me know.

Link to comment
In the case of filtering, there's a fairly long thread on the AF forums about various ways to do this with the current AF, and general agreement that those are *good* ways, not hacks or workarounds to compensate for a hole in the AF.

Could you point me to that thread?  I had a quick scan but couldn’t find it.

Link to comment
Could you point me to that thread?  I had a quick scan but couldn’t find it.

Here's the most detailed one:

 https://decibel.ni.com/content/message/33454#33454

There have been others, but they've been tangents in the middle of other threads. It's basically just me listing off the same workarounds that have been listed here, but a bit more detail about the options. So far, everyone I've pointed to that thread has found something there that works for them.

Link to comment
PS, if you recall this conversation we had, one can use a timeout in a way that is guaranteed to execute the desired code on time

 

My apologies.  I was thinking of a case structure message handler, not a command-based messaging system.

 

 

But are they limited in the types of applications they are able to write?

 

I doubt it.  There's no reason for me to believe an arbitrary application can't be implemented using the Actor Framework.  However, while I agree that's the first real question, it's not the only real question.  Once the ability to create an arbitrary application using a particular language or style has been met, the decision is based on other factors, like how productive you can be with it.

 

Again, I think the AF does an admirable job of providing a usable framework while supporting your safety goals.  An API can't be all things to all people and every design decision involves tradeoffs.  Recognizing the limitations that result from those decisions isn't a criticism.

Link to comment

OK, a lot to handle here. I'll do my best to parse this down into some manner of cohesive reply.

 

I mean if you can’t service the queue as fast as elements are added, then you’ll eventually run out of memory.

PS, if you recall this conversation we had, one can use a timeout in a way that is guaranteed to execute the desired code on time

 
James and I are on the same wavelength here-- I was referring to implementations like this. I will add however that these types of systems do not guarantee code is executed on time. If a message arrives just prior to the timeout firing and processing the message takes longer than the time remaining, the code can execute late. Similarly a backlogged queue can cause lost executions. Even worse, a busy OS can also cause lost executions. I've used mechanisms like this with the caveat of having code in place that can track when execution frames have been lost. In my experience this is very rare, but non-RT OSes are particularly notorious for just grabbing all available CPU in some situations. Invoking Windows authentication services (waking from a locked state, remote desktop connections) are cases I see regularly where seconds can go by without my sleeping tasks being serviced when I'd otherwise expect them to wake. If you do have the luxury of an RT OS, I'd argue there are better, more deterministic ways of tackling this one but I don't want to diverge too much.
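For reference, the pattern I mean looks roughly like this (a Python sketch with invented names): messages only record state, and the expensive work runs when the dequeue finally times out.

```python
import queue
import time

def run_ui_task(msgs, redraw, delay_s=0.1):
    """Sketch of the "zero or calculated timeout" pattern (names invented)."""
    pending = None            # latest state recorded from messages
    deadline = None           # when the deferred work should run

    while True:
        if deadline is None:
            timeout = None                              # wait indefinitely
        else:
            timeout = max(0.0, deadline - time.monotonic())
        try:
            msg = msgs.get(timeout=timeout)
        except queue.Empty:
            redraw(pending)                             # the expensive work
            pending, deadline = None, None
            continue

        if msg == "stop":
            break
        pending = msg                                   # cheap: just record it
        if deadline is None:
            deadline = time.monotonic() + delay_s
```

And as noted above, nothing here guarantees timeliness: a late-arriving message or a stalled OS can push the deferred work out or lose a frame entirely.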
 

1) Assertions of correctness. ...I've tried to make sure that no one can accidentally reintroduce the errors that the AF is designed to prevent (a slew of deadlocks, race conditions and failures-to-stop, documented elsewhere). "The queue works like this" is a critical part of those assertions.

 
I don’t have an answer for this. Any way of exposing an arbitrary transport layer would indeed necessitate that layer being exposed outside the Actor implementation and you’d lose the ability to make those assertions. I’m not criticizing; this is a very good reason to have the implementation the way it is. Thanks for laying it out.
 

4) Maximize Future Feature Options. The Priority Queue class is completely private specifically because it was an area that I expected to want to gut at some point and put in something different.

 
Now that is very interesting. I was actually admiring your Priority Queue implementation as being quite elegant. I like how it falls back on native queues and how priority is mutated when a task is already waiting on the queue because at that point priority is irrelevant.
 

5) Paranoid about performance. Dynamic dispatching is fast on a desktop machine. Very low overhead. But I was writing a very low level framework. Every dispatch, every dynamic binding to a delegate, gets magnified when it is that deep in the code. I kept as much statically linked as possible, adding dynamic dispatching only when a use case required it.

 
I appreciate this, I just find it shocking to hear, especially in light of other languages and libraries that throw objects at everything, often where functions are by default dynamic/overridable. Granted this is why words like "sealed" come into play...
 

6) Auto Message Dropping Is A Bad Idea. There's a long discussion about message filtration in the http://ni.com/actorframework forum. It's generally a bad idea to try to make that happen with any sort of "in the queue" system for any sort of command system. The better mechanism is putting the filtration into the handler by using state in the receiver...

 
Indeed, I was never implying that things like filtration can’t be done in the AF using such mechanisms.
 

Designing an actor such that the message queue routinely gets backed up during normal operation creates the additional complexities you identified...
 
...but they must be addressed if your actor operates on the "job queue" principle.

 
Yes. I think where I diverge from Daklu and perhaps AQ is that if I'm going to have an Actor delegate tasks to a "job queue" subordinate, I might wish this subordinate to be another Actor. For these tasks (Actor or not) the very point of their implementation isn't to service their messages as quickly as possible but to use the queue as a message/command/whatever buffer.
 

7) Lack of use case for replacing the queues means lack of knowledge about the right way to add that option. Who is the expert about the type of communications queue? The sender? The receiver? Or the glue between them? MJE, you mention querying the actor object for which type of queue to use. Is that really the actor that should have the expertise? Perhaps Launch Actor.vi should have a "Queue Factory" input allowing the caller to specify what the comm link should be. Honestly, I don't know the right way to add it because no actual application that I looked at when modeling the AF had any need to replace the queue. What they generally needed instead was one type of queue instead of the three or four they were using (i.e. communications through a few queues, some events, a notifier or two, and some variables of various repute).

 

That is an excellent point. I hadn't considered it and I can't argue it-- I retract my suggestion.


I don’t have an answer either. You mention a networking layer, which was one of the things I was thinking of. If I want to have an Actor that can be commanded through a network or locally without the TCP/IP stack the means of communication needs to be decided by the task that instantiates the Actor. Let’s complicate things even more by having a task connect to an Actor that’s already running rather than starting a new one up. Maybe you have multiple tasks communicating with an Actor, some operating in the same application instance, others remotely. Different transports would be required, so I completely agree it’s not the domain of the Actor to dictate. Maybe this is something you've already considered in these 4.2 and 4.3 versions you've mentioned, I can't say. The last version I've had the time to examine is the one shipping with 2012.

 

Finally thank you everyone for keeping this civil. I really want to keep this constructive. I'm not trying to tear down the Actor Framework. It's a very solid way of handling asynchronous tasks. I really want to share what some of my perceived limitations of it are, and get at what others think. I'm not out to write "MJE's Actor Framework", but it would be irresponsible of me not to survey some of the design decisions in the AF to consider how these decisions scope relative to the flaws I see in my own framework I'm reconsidering.

Link to comment
Yes. I think where I diverge from Daklu and perhaps AQ is that if I'm going to have an Actor delegate tasks to a "job queue" subordinate, I might wish this subordinate to be another Actor. For these tasks (Actor or not) the very point of their implementation isn't to service their messages as quickly as possible but to use the queue as a message/command/whatever buffer.

 

(I'm sure you--MJE--know most of the stuff I say below, but I'm spelling it out for the benefit of other readers.)

 

I propose that implementing an actor with the expectation that its message queue is a de facto job queue violates one of the fundamental principles of sound actor design.  Message queues and job queues have different requirements because messages and jobs serve different purposes.

 

Messages are used to transfer important information between actors.  Actors should always strive to read messages as quickly as possible since any unread messages may contain information the actor needs to know to make correct decisions.  ("ReactorCoreOverheated")  Whether or not they immediately act on the message is up to the actor, but at least it is aware of all the information available to it so it can make an informed decision about what action to take.

 

Jobs aren't serviced the same way messages are.  Jobs are a mechanism to keep worker loops busy.  They're more of a "Hey, when you finish what you're doing can you start on this" kind of thing rather than a "Here's some important information you should know about" kind of thing.  Commingling these purposes into a single entity is part of the QSM mindset I mentioned earlier and leads to additional complications, like priority queues.

 

So how do you implement a job processor if it isn't an actor?  Make it an internal component of an actor.  Create a helper loop that does nothing but dequeue and process jobs.  When the actor receives an AddJobToJobQueue message in its message handling loop, it places the job on the job queue for eventual processing by the helper loop.
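Here's a skeletal rendering of that arrangement (Python pseudocode; only the AddJobToJobQueue message name comes from the paragraph above, everything else is invented): the message loop stays free to read mail while a private helper loop drains the job queue.

```python
import queue
import threading

def job_worker(jobs, report):
    """Helper loop: does nothing but dequeue and process jobs."""
    while True:
        job = jobs.get()
        if job is None:                        # sentinel from the owner: exit
            report(("Exited",))
            break
        report(("Here'sYourData", job()))      # run the job, send the result back

def message_loop(msgs, report):
    jobs = queue.Queue()                       # private to this actor
    worker = threading.Thread(target=job_worker, args=(jobs, report))
    worker.start()
    while True:
        msg = msgs.get()                       # always read mail promptly
        if msg[0] == "AddJobToJobQueue":
            jobs.put(msg[1])                   # hand the slow work to the helper
        elif msg[0] == "Stop":
            jobs.put(None)                     # the owner controls the helper's lifetime
            worker.join()
            break
```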

 

Sound suspiciously like an actor?  It's not.  The job processor is directly controlled by the message handling loop.  Actors are never directly controlled by other components; they control themselves.  The job queue can be manipulated by the message handling loop as required, even to the point of killing the queue to trigger the job processing loop to exit.  An actor's message queue is never manipulated or killed by anyone other than the actor itself.

 

There's a lot of gray between an actor and a helper loop.  The implementations can look very similar.  I try to keep my helper loops very simple to avoid race conditions often found in QSMs.  They are very limited in what operations they perform.  They don't accept any messages and send a bare minimum of messages to the caller.  ("Exited" and "Here'sYourData.") 

Link to comment
(I'm sure you--MJE--know most of the stuff I say below, but I'm spelling it out for the benefit of other readers.)

 

I propose that implementing an actor with the expectation that its message queue is a de facto job queue violates one of the fundamental principles of sound actor design.  Message queues and job queues have different requirements because messages and jobs serve different purposes.

<snip>

 

There's a lot of gray between an actor and a helper loop.  The implementations can look very similar.  I try to keep my helper loops very simple to avoid race conditions often found in QSMs.  They are very limited in what operations they perform.  They don't accept any messages and send a bare minimum of messages to the caller.  ("Exited" and "Here'sYourData.") 

This is my fundamental objection to the Actor Framework. It blurs the line between messages and processes. It sort of funnels your application architecture to be the Actor Framework itself.
Link to comment

(Replace the lowercase actor below with process or subsystem or service... it's more general than an Actor Framework Actor.)
 
I had a brief offline conversation about this topic the other day about the semantic difference between a command and a request sent to an actor.
 
An actor receiving requests is generally in charge of its health and destiny; an actor sent commands is subject to DoS attacks or other hazards from external sources, whether incidental or malicious. As Dak mentions, separating incoming messages from the job queue is a great implementation for receiving requests (I owe a lot of my understanding and respect for this concept to the JKI State Machine).
 
The visual below represents an actor (the traffic intersection) and its response to individual messages from four distinct non-owning-but-using actors (the incoming lanes) defining their own concepts of priority (drivers with their own agenda). Were this intersection handling incoming requests rather than commands, it would perform its job more effectively (by coordinating order, rate, and even batch-ness by aggregating requests to provide efficiency) and reduce undesirable interaction between the four independent actors.
 
(Finally, I don't assert requests sent to actors are universally better than commands, since they each have merits in different problem domains; just acknowledging the existence of this concept, especially when chain-of-command and ownership does not naturally exist.)
 
[Attached image: a congested multi-lane traffic intersection. Originally from: http://chivethethrottle.files.wordpress.com/2013/01/random-t-01_18_13-920-55.jpg]

Link to comment
It blurs the line between messages and processes.

 

Minor point:  I would say it blurs the line between messages and operations (or functions, methods, etc.), not messages and processes.  In my mind process = actor and I think there's a pretty clear distinction between Actors and Messages in the AF.

 

 

(Replace the lowercase actor below with process or subsystem or service... it's more general than an Actor Framework Actor.)

 

+1.  As a general rule I try to use "actor" to mean the abstract concept and "Actor" to mean an AF actor.  I propose we adopt that as a matter of convention.

 

 

I had a brief offline conversation about this topic the other day about the semantic difference between a command and a request sent to an actor.

 

It is *extremely* important to understand this concept if you're going to do any actor-oriented programming, whether you use the AF or not.  (Important enough for me to bold and underline it.  :rolleyes: )  But it's a very subtle difference and I haven't found a really good way to get that idea across to people.  One of the best explanations I've seen was in the Channel 9 video Todd linked a while back on the AF Community Forum.  To (heavily) paraphrase, he said:

 

"Pretend that the other process is a turkey.  If the turkey isn't an actor, when Thanksgiving rolls around you go out and chop off its head.  If the turkey is an actor, then you ask it to cut off its own head."

 

I'll also add that a message phrased as a command may still be a request.  I send "Exit" messages whenever I want another actor to quit, and that sounds suspiciously like a command.  It's still a request though, because the actor may refuse to do it.  To add to the confusion, even if an implemented actor always honors the Exit message--essentially making that message a command--it is also still a request.

 

How do you tell if a message is a command or a request?  It's all about responsibility.  Figure out where the final responsibility for knowing whether a particular message can be honored lies.  If the sender has the responsibility, it's a command.  If the receiver has the responsibility, it's a request.  It is closely related to other threads where I've mentioned "sender side filtering" vs "receiver side filtering."

 

 

Finally, I don't assert requests sent to actors are universally better than commands...

 

If you won't, I will.  ;) A command makes the sender responsible for knowing whether the receiver can process the message without going into some unexpected state.  In the context of the entire system's operation, yes, the sender needs to know when it should send a particular message to a particular receiver, but it should never have to worry about breaking the receiver because it sent a message when it shouldn't have.  (Like I said, lots of gray between actor and non-actor.)

Link to comment
An actor receiving requests is generally in charge of its health and destiny; an actor sent commands is subject to DoS attacks or other hazards from external sources, whether incidental or malicious. As Dak mentions, separating incoming messages from the job queue is a great implementation for receiving requests (I owe a lot of my understanding and respect for this concept to the JKI State Machine).

I haven't worried about this aspect much because it seemed to me that any actor that needs to shield itself from incoming messages from its caller can be implemented as two actors, one that listens to the outside world and drops abusive requests, and an inner one that actually does work. The inner one doesn't even necessarily have to route through the outer one for outbound messages. Thus separating the two queues is trivial for those actors that happen to need it, but most don't.

 

Is there something wrong with that approach?

Link to comment
