
Self-addressed stamped envelopes



Hello,

The recent posts about "Message Routing" have inspired me to mention a variation on this idea. In particular, I am thinking of Daklu's description of his "Slave Loops" that only communicate with their masters (requiring the master to "mediate" communication between slaves), and Mark Balla's Message Routing Architecture.

I have been working on my own LVOOP messaging system, one in which there aren't specified "output" channels; instead, each message can contain the method to reply or respond to it. Sort of a self-addressed, stamped envelope. This allows one to build a "hierarchical tree of master/slave loops" as Daklu suggests, but with the ability to direct messages to their final recipient without mediation. A simple example:

post-18176-0-88887300-1311257202_thumb.png

In this example, the Controller (master) sends a message to Process A (label="SendTimeString") with the attached reply address (command queue) of Process B, along with an alternate label, "CurrentTime", to be used for the reply message.

Thus, Process A directly sends a message to Process B, with no mediating or routing processes in between, yet without either process knowing anything at all about the other (they don't even use the same message labels). The Controller sets up the communication link between A and B, but doesn't itself need to handle the message.
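The diagram itself can't be reproduced here, but in rough text form the idea is something like this (a Python sketch with made-up names; the actual implementation is LVOOP classes and LabVIEW queues):

```python
import queue
import threading
import time
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Address:
    target: queue.Queue          # command queue of the final recipient (Process B)
    relabel: str = ""            # alternate label to stamp on the reply

@dataclass
class Message:
    label: str                   # what the receiver should do
    data: Any = None
    reply_to: Optional[Address] = None   # the self-addressed stamped envelope

def process_a(inbox: queue.Queue):
    """Process A: knows nothing about Process B, only about the reply address handed to it."""
    while True:
        msg = inbox.get()
        if msg.label == "Stop":
            break
        if msg.label == "SendTimeString" and msg.reply_to is not None:
            reply = Message(msg.reply_to.relabel or "TimeString", time.strftime("%H:%M:%S"))
            msg.reply_to.target.put(reply)        # sent straight to B, no mediation

# Controller wiring: tell A to send its time string to B's queue, relabelled "CurrentTime".
a_queue, b_queue = queue.Queue(), queue.Queue()
threading.Thread(target=process_a, args=(a_queue,), daemon=True).start()
a_queue.put(Message("SendTimeString", reply_to=Address(b_queue, "CurrentTime")))
a_queue.put(Message("Stop"))
print(b_queue.get())             # B receives the relabelled reply directly from A
```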

One can extend this into an "Observer pattern", where the Controller registers Process B as an observer of Process A's regularly occurring "TimeString" events (complete with message relabeling to "CurrentTime"), again with neither A nor B having to know anything about the other.

-- James


I think this is a good way of doing things. With a mechanism like what you describe, asynchronous message replies become trivial, and you can even implement synchronous messages* in the same framework if you ever need them. In fact I have my own architecture which does pretty much the same thing you have done (don't we all have our own, it seems, nowadays):

post-11742-0-64391600-1311261791_thumb.png

Any message which is passed in is comprised of four things: a message identifier (string), some parameters (LabVIEW Object), an optional parameter to signal a reply with (Callback Object), and an automatically created time stamp. After every message is processed, the operated-on parameters are passed to the corresponding Callback object's broadcast method (circled in red in the screenshot above). A default Callback object's broadcast method does nothing, meaning that if you don't supply a Callback-derived class, the default configuration for passing messages is to operate in fire-and-forget mode (pass a message off and don't care about a reply).
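In rough text form (a Python sketch with hypothetical names; the real implementation is LabVIEW classes), the pump boils down to something like:

```python
import queue
import time
from dataclasses import dataclass, field
from typing import Any

class Callback:
    """Default callback: broadcast does nothing, i.e. fire-and-forget."""
    def broadcast(self, params: Any) -> None:
        pass

@dataclass
class Message:
    identifier: str                                       # message identifier (string)
    params: Any = None                                    # parameters
    callback: Callback = field(default_factory=Callback)  # optional reply mechanism
    timestamp: float = field(default_factory=time.time)   # created automatically

def handle(msg: Message) -> Any:
    return msg.params            # placeholder for the application-specific processing

def message_pump(inbox: queue.Queue):
    """Process each message, then hand the result to that message's own callback."""
    while True:
        msg = inbox.get()
        if msg.identifier == "shutdown":
            break
        result = handle(msg)
        msg.callback.broadcast(result)   # reply goes wherever the sender asked, or nowhere
```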

The magic of using a class for a reply command (or Callback in my language), is that you can extend the reply mechanism to use any type of object, from native LabVIEW synchronization primitives like notifiers, to other messaging architectures.

For example, if I want to send replies to a generic user event, I create an EventCallback class and implement the broadcast method as:

post-11742-0-91999300-1311262248_thumb.png

Using something like that, message replies can be sent pretty much anywhere, to any type of framework object or primitive construct. All of this of course means that the originator of the message needs to know where the reply should be sent. That doesn't mean you can't also have the system configured so the message handler decides what to do with a reply, though I don't advise mixing the two.
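Continuing the same sketch (again hypothetical names, not the actual classes), extending the reply mechanism is just a matter of overriding broadcast:

```python
import queue
from typing import Any, Callable

class QueueCallback(Callback):
    """Reply by enqueueing the result onto some other loop's queue."""
    def __init__(self, q: queue.Queue):
        self.q = q
    def broadcast(self, params: Any) -> None:
        self.q.put(params)

class EventCallback(Callback):
    """Reply by firing a handler function, standing in for a LabVIEW user event."""
    def __init__(self, fire: Callable[[Any], None]):
        self.fire = fire
    def broadcast(self, params: Any) -> None:
        self.fire(params)

# The sender decides where the reply lands; the message handler never needs to know.
replies = queue.Queue()
inbox = queue.Queue()        # the pump's input queue from the previous sketch
inbox.put(Message("get status", callback=QueueCallback(replies)))
inbox.put(Message("get status", callback=EventCallback(lambda p: print("status:", p))))
```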

*A word about synchronous messaging (where a message is sent and the sender blocks until a reply is obtained): be very careful. Deadlock is very easy if you introduce circular dependencies; I've pretty much learned to avoid any form of synchronous messaging, even though it's easily done with the reply-to mechanism.

In fact I have my own architecture which does pretty much the same thing you have done (don't we all have our own, it seems, nowadays):

Yes, I nearly titled this conversation "Yet another messaging system".

Looking at your "MessagePump" loop, I can see the great similarity, just with different terminology. My "MSG" message class contains a label (your "message identifier") and a "Send" object (your "Callback"); children of MSG add "Data" (your "parameters") of various kinds (the example I gave used a text-data message). "Send" is a virtual class whose central method, "Send.vi", does nothing, but its various children override Send.vi with different communication methods or other functionality. So far, I have Queue and User-Event "Messengers" (hopefully to be extended in the future to a network-enabled TCP messenger); "Parallel Process", an active-object design that uses the messengers for communication; and "ObserverSet", an extension of Send intended to allow multiple processes to receive ("observe") the same message.
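As a rough Python analogue of that class structure (names taken from the description above; the real implementation is LabVIEW classes, so treat this only as a sketch):

```python
import queue
from typing import Optional

class Send:
    """Virtual base: Send.vi does nothing; children supply the actual transport."""
    def send(self, msg: "MSG") -> None:
        pass

class MSG:
    """Base message: a label plus the Send object used to reply; children add Data."""
    def __init__(self, label: str, reply: Optional[Send] = None):
        self.label = label
        self.reply = reply or Send()

class TextMSG(MSG):
    """Example child of MSG carrying text data."""
    def __init__(self, label: str, text: str, reply: Optional[Send] = None):
        super().__init__(label, reply)
        self.data = text

class QueueMessenger(Send):
    """Messenger that delivers by enqueueing onto an in-process command queue."""
    def __init__(self):
        self.q: queue.Queue = queue.Queue()
    def send(self, msg: MSG) -> None:
        self.q.put(msg)

# A User-Event or TCP Messenger would be further children of Send; ObserverSet
# (a Send that fans out to several Messengers) is sketched further down the thread.
```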

Here's an extension of my first example, where I've added a Message Logger (Parallel Process) that "observes" the messages to both processes A and B:

post-18176-0-23809400-1311339340_thumb.png

post-18176-0-73008200-1311339716_thumb.png

Note that I made no changes to A, B or the Controller Loop; none of these processes are aware of the existence of the Message Logger.

I've been experimenting with what kind of interesting functionality I can build into the messaging system itself, while using only quite simple processes to message between. Here's a further modification of the example, where I replace the "Controller" loop with a "Metronome" parallel process that periodically sends "MetronomeTick" messages to the ObserverSet registered with it. I've used the ability of Observers to substitute an alternate message to have the Metronome send the "SendTimeString" message to Process A.

post-18176-0-78517600-1311342369_thumb.png

Note that A, B and the Metronome know practically nothing about each other. They don't use any compatible message labels, and they don't know what underlying communication method they are using. The Metronome needs to "send" updates to its ObserverSet, but doesn't know anything about the number or nature of the observers, or that its messages are being rewritten. I had originally started writing a parallel process that maintained arrays of registered observers, with arrays of alternate message labels, but realized that all this functionality could be built into an extension of the "Send" messaging class, and thus added for free to any (much simpler) process that can register one "Send" object for updates.

*A word about synchronous messaging (where a message is sent and the sender blocks until a reply is obtained): be very careful. Deadlock is very easy if you introduce circular dependencies; I've pretty much learned to avoid any form of synchronous messaging, even though it's easily done with the reply-to mechanism.

In a hierarchical master-slave tree structure, like Daklu suggested, there is less potential for circular dependencies, as any synchronous messaging would always be from master to slave. And, of course, one needs a timeout error; here's my synchronous "query":

post-18176-0-70834500-1311343507_thumb.png
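In text form, the synchronous query amounts to roughly the following (a Python sketch reusing the Message and Address classes from the first sketch above; the LabVIEW version reports the timeout on the error wire rather than as an exception):

```python
import queue

def query(target: queue.Queue, label: str, data=None, timeout_s: float = 1.0):
    """Send a message and block until the reply arrives or the timeout elapses."""
    reply_q = queue.Queue()                                    # one-shot reply address
    target.put(Message(label, data, reply_to=Address(reply_q)))
    try:
        return reply_q.get(timeout=timeout_s)
    except queue.Empty:
        raise TimeoutError(f"no reply to '{label}' within {timeout_s} s")
```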

-- James


I have not examined either this or Mark Balla's messaging framework in depth (and probably won't have time for several weeks) but I did want to make a few comments:

First, I'm thrilled to see more people adopting object-based messaging systems. :thumbup1: I find the natural encapsulation of classes makes it easier to use, read, and extend. Plus it's a relatively safe way for people to get started with object-oriented programming. I wasn't the first to propose them (I think MJE was pitching his message pump before I started using message objects) but I do believe it is a natural fit for LV's dataflow paradigm.

Second, there are a lot of cool features that can be added to a messaging framework, but... these features come at the cost of additional complexity. The complexity is found not only in how the framework internals are implemented, but also often in how the framework is used--its API. This can present a significant barrier to entry for new users, especially if they do not have experience using LVOOP. If the messaging system is intended for a small group of co-workers it's probably not a big deal. New employees can ask around to get clarification on how it works and how to use it. It might be too complicated if you're hoping to release it to a broader audience. My "ideal" messaging framework is one small enough to be picked up easily, light enough to be suitable for small projects, and flexible enough to adapt to more complex situations.

At the risk of sounding like a shill for my own messaging system, those are the things I find so valuable with LapDog's Message Framework. The barebones functionality looks very similar to standard string/variant messages, so it is familiar to non-LVOOP programmers. (I've received resistance to LVOOP from most LV programmers I've worked with. Familiarity helps reduce resistance.) If I need something a little more advanced, such as Command Pattern messages, I simply subclass the Message class to create a CommandMessage class, implement an abstract Execute method, and create all my Command classes as children of CommandMessage. Callbacks can be created in much the same way as you and MJE have done. Giving programmers the ability to use and learn only as much as they need goes a long way toward enabling adoption, imo. (Of course, LapDog Messaging has been available for over a year and I know of exactly one person other than myself who has tried it, so take that for what it's worth.)
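For illustration only, the subclassing idea looks roughly like this in text form (a Python sketch in the spirit of that description, not LapDog's actual API):

```python
from abc import ABC, abstractmethod

class Message:
    """Plain named message (only in the spirit of LapDog; not its actual API)."""
    def __init__(self, name: str, data=None):
        self.name, self.data = name, data

class CommandMessage(Message, ABC):
    """A message that knows how to execute itself against some application state."""
    @abstractmethod
    def execute(self, app_state: dict) -> None: ...

class StopAcquisition(CommandMessage):
    def __init__(self):
        super().__init__("StopAcquisition")
    def execute(self, app_state: dict) -> None:
        app_state["acquiring"] = False     # hypothetical application state

# In the receiving loop, CommandMessages are simply executed:
#     if isinstance(msg, CommandMessage):
#         msg.execute(app_state)
```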

Third, passing queues as message data can be very, very useful. If I have a lot of data going from one component to another that is far away in the hierarchy, I'll set up a dedicated data "pipe" between them to avoid overloading the control message queues. However, too much direct messaging between components makes it very hard to understand how all the components are interacting. I get a lot of value out of having all the message routing information and slave interactions contained in the mediator loop. It is relatively easy to page through the message-handling cases in the mediator loop and understand under what conditions messages are forwarded to other slaves and under what conditions they are handled differently.

Also, I have found one of the harder things in debugging asynchronous messaging is figuring out where unexpected messages originated. (I use a "Source:MessageName" notation to help with that.) With hierarchical messaging I *know* the message must have come from the master, so I open that vi and figure out what triggers the message. I can rinse and repeat through the hierarchy until I find what caused the unexpected message to be sent. With direct messaging it's much harder to trace a message back to its source.

I think direct-response messages are best used in situations where several clients need to interact with a data source on an irregular basis. Suppose we have a time provider but clients don't necessarily want to receive a time update every second. They do need to know the time occasionally, so sending a direct-response message to the time provider seems to be a reasonable solution. Lots of people use singletons or by-ref classes for instruments. I prefer to implement them as slave loops with messaging. Direct-response messages would probably work well there too when many clients need to communicate with the same instrument. However, I don't think I would be terribly thrilled with inheriting an application where the entire messaging system was direct-response.

Don't quit working on it though. One of the most important things I've learned over the past several years is that it is impossible to judge how useful and usable a reusable component will be just by considering the design and creating sample code. You have to build something real with it--better yet, build several somethings with it. I've built components that worked brilliantly in one project but failed miserably when I tried them in the next.


[...] (Of course, LapDog Messaging has been available for over a year and I know of exactly one person other than myself who has tried it, so take that for what it's worth.)

I'm at the point where I'm ready to begin trying it. I'm finishing an application where I know I could have used a better messaging architecture.
Lots of people use singletons or by-ref classes for instruments. I prefer to implement them as slave loops with messaging.

My application has five CAN channels, each of which is monitored by a re-entrant VI. This VI communicates the messages I'm interested in back to the main VI via User Events.

I read all of these wonderful threads and realize that neither of the "A"s in LAVA applies to me...

Second, there are a lot of cool features that can be added to a messaging framework, but... these features come at the cost of additional complexity. The complexity is found not only in how the framework internals are implemented, but also often in how the framework is used--its API. This can present a significant barrier to entry for new users, especially if they do not have experience using LVOOP. If the messaging system is intended for a small group of co-workers it's probably not a big deal. New employees can ask around to get clarification on how it works and how to use it. It might be too complicated if you're hoping to release it to a broader audience. My "ideal" messaging framework is one small enough to be picked up easily, light enough to be suitable for small projects, and flexible enough to adapt to more complex situations.

I agree, and I've been trying to design my messaging system in that way, so that one can use it for simple things without having to understand everything that might be done with it. My original design goal was to be able to have processes that send messages without having to know whether the messages are being sent by queue, notifier, user event, UDP or TCP. And for processes that receive messages to be able to transparently use network communication (TCP, etc.) in place of local communication. That makes things simpler, not more complicated. And my initial goal for the Parallel Processes is to have a simple template for creating "speak-only-when-spoken-to" active objects that can be "queried" by higher-level processes. Thus the synchronous, command-reply communication (shown below with text-variant messages) was the first thing I developed (which led to the idea of having each message carry its own reply method):

post-18176-0-48682500-1311586737_thumb.png

I find this synchronous communication clearer and simpler to use than asynchronous. I probably shouldn't have first introduced my messaging design with the complex example of my first post, as one would start out (and perhaps end) with much simpler uses.

Third, passing queues as message data can be very, very useful. If I have a lot of data going from one component to another that is far away in the hierarchy, I'll set up a dedicated data "pipe" between them to avoid overloading the control message queues. However, too much direct messaging between components makes it very hard to understand how all the components are interacting. I get a lot of value out of having all the message routing information and slave interactions contained in the mediator loop. It is relatively easy to page through the message-handling cases in the mediator loop and understand under what conditions messages are forwarded to other slaves and under what conditions they are handled differently.

Getting back to the more complex example of my first post: although the messaging is direct from A to B, all the routing and slave interactions are set up by the main VI.

Also, I have found one of the harder things in debugging asynchronous messaging is figuring out where unexpected messages originated. (I use a "Source:MessageName" notation to help with that.) With hierarchical messaging I *know* the message must have come from the master, so I open that vi and figure out what triggers the message. I can rinse and repeat through the hierarchy until I find what caused the unexpected message to be sent. With direct messaging it's much harder to trace a message back to its source.

In my system, one can look at the creator of the process to see who is passed the wire representing the ability to communicate with that process (hopefully, there are only a limited number of such potential message senders). And I'm hoping my "Message Logger" can be developed into a useful debugging tool. Below, I've modified the Logger to add some tracing information (the call chain of the process sending the message and the message timestamp).

post-18176-0-17775500-1311591884_thumb.png

-- James


For completeness, here is a simple example of an asynchronous query, where the controller has its own incoming messenger, which it attaches to its outgoing message to Process A, allowing it to receive the reply asynchronously:

post-18176-0-66575000-1311761119_thumb.png

This example also shows a User-Event-based Messenger.
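In the same Python sketch style as before (reusing the earlier Message and Address classes, hypothetical names), the asynchronous version just stamps the controller's own inbox on the envelope and carries on:

```python
import queue

controller_inbox = queue.Queue()

# Ask Process A for the time, but don't wait: the reply will arrive in the
# controller's own inbox, relabelled "CurrentTime", whenever A gets around to it.
a_queue.put(Message("SendTimeString", reply_to=Address(controller_inbox, "CurrentTime")))

# ...meanwhile the controller keeps handling whatever arrives, replies included.
while True:
    msg = controller_inbox.get()
    if msg.label == "CurrentTime":
        print("time is", msg.data)
        break
```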

-- James


In practice I find asynchronous mechanisms like the one James just posted almost always preferable to a synchronous request. The reason is that even in a dedicated hierarchy where each messaging component is clearly owned by a single owner, and even if the system is designed such that all messages flow from the owner down the hierarchy, there is usually a case where a component will have to send a message in reverse. Maybe a component has stopped (unexpectedly or otherwise) and the owner *must* be notified. If the owner is locked down in a synchronous call, deadlock is a very easy possibility, and debugging the deadlock can become nigh impossible as the application complexity grows.

I'm not saying synchronous calls are bad, but even if you think you know everything about a given framework, you still need to be very careful. In my experience there is rarely a true unidirectional messaging system. Some of my components will always care if their message targets return unexpectedly.


In practice I find asynchronous mechanisms like the one James just posted almost always preferable to a synchronous request.

Synchronous ONLY is perhaps not a good idea, but synchronous messaging is good for synchronizing things. As an example, suppose I had a spectrometer and a sample handler for deploying samples to be measured. I might want to deploy a calibration sample, THEN calibrate the spectrometer, THEN deploy a sample to be measured, and THEN measure the spectrum of the sample. This would be trivially easy to do with four synchronous queries chained together by their error terminals. If either the spectrometer or the sample handler had a problem, it would return an error message as its reply (such errors appear on the error-out terminal of the Query VI).

post-18176-0-95651200-1311782888_thumb.png
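Roughly, the chained queries behave like this (a Python sketch building on the query helper above; the error parameter mimics LabVIEW's error-in/error-out terminals, and the device queues are placeholders):

```python
import queue

handler_q, spectrometer_q = queue.Queue(), queue.Queue()   # serviced by the two device loops

def chained_query(target, label, data=None, error=None):
    """Skip the call entirely if an upstream step already failed, like a Query VI
    with an error on its error-in terminal."""
    if error is not None:
        return None, error
    try:
        return query(target, label, data), None
    except TimeoutError as e:
        return None, e

# Calibrate, then measure, in strict order; any failure skips the remaining steps.
_, err = chained_query(handler_q, "DeployCalibrationSample")
_, err = chained_query(spectrometer_q, "Calibrate", error=err)
_, err = chained_query(handler_q, "DeploySample", error=err)
spectrum, err = chained_query(spectrometer_q, "MeasureSpectrum", error=err)
if err:
    print("sequence failed:", err)
```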

Of course, if either component needs to signal something immediately, there needs to be a dedicated channel. In that case, neither my synchronous nor my asynchronous "reply" examples work, since in both cases the communication channel is part of the received message. I've been developing a third communication mode as part of an "Active Object" template: registering for event updates (the OOP "Observer Pattern"), which would work for this. All three modes should be useful and can be used on the same active object.

-- James


Of course, LapDog Messaging has been available for over a year and I know of exactly one person other than myself who has tried it, so take that for what it's worth.

I use it. What I like most about it is its simplicity. The community page lists 38 members. I'm sure I am not the only other guy who uses it :cool:


In practice I find asynchronous mechanisms like the one James just posted almost always preferable to a synchronous request.

I agree. I don't like locking a loop while it waits for a response from another loop at some unknown time in the future. I've used synchronous messages and sometimes you're kind of forced into them, but I prefer to design my system using asynchronous calls. When I do use synchronous messages I usually put a timeout on them to prevent perpetual hangs.

Synchronous ONLY is perhaps not a good idea, but synchronous messaging is good for synchronizing things. As an example, suppose I had a spectrometer and a sample handler for deploying samples to be measured. I might want to deploy a calibration sample, THEN calibrate the spectrometer, THEN deploy a sample to be measured, and THEN measure the spectrum of the sample. This would be trivially easy to do with four synchronous queries chained together by their error terminals. If either the spectrometer or the sample handler had a problem, it would return an error message as its reply (such errors appear on the error-out terminal of the Query VI).

To my way of thinking, the situation you're describing is sequencing a series of operations, not synchronizing them. I interpret synchronizing as making sure all the processes start at the same time or have other strict timing dependencies. Personally I'd probably opt for a Rendezvous to provide synchronization instead of a synchronous message. That way the loop sending the message isn't stuck waiting for a response.

Your example shows three different loops: the controller, the spectrometer, and the handler. Assuming sequencing--not synchronization--is the goal, you can accomplish that using asynchronous messages and a state machine in the control loop. Make the response messages from the spectrometer and the handler act as triggers for the state transitions, and then in the next state you'd send the next message in the sequence. It's a little bit more to set up on the front end, and state machine execution flow is a little more abstract than sequentially connected wires. On the other hand, I think the second option is more self-contained and less prone to unforeseen interactions. It also has the advantage that you don't have to invent alternative communication channels to work around the problem of blocking your control loop's input queue.
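A minimal sketch of that alternative, in the same Python style as the earlier examples (hypothetical message names, reusing the Message and Address classes from James's sketches):

```python
import queue

# The controller never blocks: it sends one request, returns to reading its inbox,
# and each reply message triggers the transition to the next step in the sequence.
SEQUENCE = [
    ("handler", "DeployCalibrationSample", "SampleDeployed"),
    ("spectrometer", "Calibrate", "Calibrated"),
    ("handler", "DeploySample", "SampleDeployed"),
    ("spectrometer", "MeasureSpectrum", "SpectrumReady"),
]

def controller(inbox: queue.Queue, targets: dict):
    step = 0
    name, request, _ = SEQUENCE[step]
    targets[name].put(Message(request, reply_to=Address(inbox)))
    while step < len(SEQUENCE):
        msg = inbox.get()                       # inbox stays free for other traffic too
        if msg.label == SEQUENCE[step][2]:      # the reply acts as a state-transition trigger
            step += 1
            if step < len(SEQUENCE):
                name, request, _ = SEQUENCE[step]
                targets[name].put(Message(request, reply_to=Address(inbox)))
        elif msg.label == "Abort":
            break
```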

That's not to say I think synchronous messages are bad or wrong. I don't think they are. I *do* think they are more dangerous to use, just like globals, reference data, and GOTO statements are more dangerous than alternatives. And I certainly don't feel as strongly about synchronous messages as I do about, say... queued state machines. Anyhoo, I'm enjoying your posts and seeing the designs you're implementing.

Of course, if either component needs to signal something immediately there needs to be a dedicated channel.

Psst... priority queue. ;)

Your example shows three different loops: the controller, the spectrometer, and the handler. Assuming sequencing--not synchronization--is the goal, you can accomplish that using asynchronous messages and a state machine in the control loop. Make the response messages from the spectrometer and the handler act as triggers for the state transitions, and then in the next state you'd send the next message in the sequence. It's a little bit more to set up on the front end, and state machine execution flow is a little more abstract than sequentially connected wires. On the other hand, I think the second option is more self-contained and less prone to unforeseen interactions. It also has the advantage that you don't have to invent alternative communication channels to work around the problem of blocking your control loop's input queue.

The alternative communication channel is done: an easy-to-use "Observer Registry", as illustrated in my active object design. Also, although my active object template is a simple queued message handler, one can use more complex designs like a producer-consumer, where the producer loop monitors the input queue and a "Sequencer" consumer does the synchronous calls to the spectrometer and sample handler. Then, if the producer were to receive an "Abort" message from, say, the UI, it could send "abort" to the spectrometer and sample handler, which would then error out the synchronous call from the Sequencer. Admittedly, the need to keep the synchronous stuff away from the input-queue receiver does illustrate your and mje's point about being careful with synchronous calls.

-- James (off on vacation and internetless for the next week)


Yet Another Messaging System...

In my NI Week presentation on "Trends in LabVIEW Object Oriented Programming", one of the trends is the development of object-based communications schemes that modify the traditional queue-based-state-machine architecture. I'm mentioning Actor Framework, JAMA, and LapDog. Are there others that need mentioning?

  • 4 weeks later...

An update:

As I've had the chance to improve the API for my "ObserverSet" class, here is the more complex example from above (the one where the Metronome object is used to instruct Process A to periodically send its time string to Process B), redone with improvements that hopefully make it clearer. Included are some custom probes to see what the Observers and Messages look like.

post-18176-0-88102200-1314113672_thumb.png

post-18176-0-72547400-1314113680_thumb.png

Observers (aka ObserverSets) serve as containers of any number of Messengers (or Active Objects), where sending a message to the Observer sends it to all the contained Messengers/AOs. In addition, an Observer can be set up to automatically alter ("translate") the messages sent by it: in the example they are used to relabel a message, add prefixes, and substitute one message for another. ObserverSets are internally recursive to allow multiple levels of translation.

Observers have the additional features of never throwing errors into the process using them to send (a Messenger that throws an error is just dropped internally), and never allowing the process using them to access the contained Messengers/AOs other than to send to them.
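Extending the earlier class-hierarchy sketch, an ObserverSet with message translation might look roughly like this in Python (hypothetical names; the real class is LabVIEW):

```python
import copy
from typing import Dict, List

class ObserverSet(Send):
    """A Send that fans a message out to registered Messengers, optionally
    rewriting ("translating") it first, and that never throws a Messenger's
    error back into the process doing the sending."""
    def __init__(self):
        self.observers: List[Send] = []      # may themselves be ObserverSets (recursion)
        self.relabel: Dict[str, str] = {}    # e.g. {"MetronomeTick": "SendTimeString"}
        self.prefix: str = ""

    def register(self, observer: Send, relabel: Dict[str, str] = None) -> None:
        self.observers.append(observer)
        self.relabel.update(relabel or {})

    def send(self, msg: MSG) -> None:
        translated = copy.copy(msg)          # keep the data, rewrite the label
        translated.label = self.prefix + self.relabel.get(msg.label, msg.label)
        for obs in self.observers:
            try:
                obs.send(translated)
            except Exception:
                pass                         # a failing Messenger is silently dropped
```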

-- James

  • 5 weeks later...

This topic seems to be the most similar to what I'm scratching my head over at the moment.

I can't figure out how to create a scenario where a dynamically launched sub-VI creates a data pipeline queue (for offloading processing of high-speed data to another asynchronous parallel process), then registers the queue with the mediator, which passes it to a sub-VI that asks for it. Unless I do it by name... This is the only way I can think of to pass the information about which pipeline the receiving VI should listen to without the mediator having to care about what form the data pipeline takes.

Is there something in OOP that addresses this issue? (I'm currently tinkering away in my 4 hour compile times with different ideas. Yet to strike upon a good one).


I've come up with a way to do it without OOP that seems pretty cool to me. Any parallel sub-VI is either a producer or a consumer of data. If it's a producer, when it starts it creates a data queue named after itself (with the word "data" concatenated). It then sends the queue ref back to the mediator as a variant, packaged with a "publish data queue" command for the host to process, which builds an array of data queue references. When a consumer fires up, it sends a request to the host for a data queue reference. The host then pulls all its currently listed data queue references and passes them one by one to the consumer. The consumer knows what type of data it wants, so the "Variant to Data" converter acts as the gatekeeper, generating an error for each reference that doesn't match until it finds one that does; then it flushes the remaining queue references and goes to wait with its shiny new queue ref. I intend to implement a score-keeping counter which iterates with each attempt to grab a queue reference, so I have an index of which queue was good, which I can return to the host so it can remove that queue from its list of available data queues (if I want exclusivity).

Passing every data queue ref is a little crude, but I can't think of a good way to know exactly which queue ref to pass without ruining the generality of my mediator processing case.
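For what it's worth, that scheme looks roughly like this in the same Python sketch style (hypothetical names); as the posts below point out, tying the data queue's lifetime to the producer is the weak point of this approach:

```python
import queue
from typing import Any, List, Tuple, Type

published: List[Tuple[Type[Any], queue.Queue]] = []   # host's list of (data type, data queue)

def publish_data_queue(element_type: Type[Any], q: queue.Queue) -> None:
    """Producer side: register a newly created data queue with the host/mediator."""
    published.append((element_type, q))

def request_data_queue(wanted: Type[Any]):
    """Consumer side: try each published queue until the type matches, the way the
    'Variant to Data' conversion errors out on every wrong-typed queue ref."""
    for index, (element_type, q) in enumerate(published):
        if element_type is wanted:
            return index, q    # returning the index lets the host revoke it for exclusivity
    return None, None
```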

Edit: Sorry for the side-track, it's only tangentially related to what's going on here.


AlexA: You are reinventing the wheel. No, actually, not the wheel. Messaging frameworks are fairly complex and way outstrip the wheel. But the jet engine... yes, you're reinventing the jet engine. And the problem is, when reinventing the wheel, most people get it right. Not so much on the jet engine. This is NOT meant to insult your skills. It's just that I've spent years reviewing many people's messaging frameworks, and all of the ones rolled by individuals had some serious lurking timing issues. The frameworks that were built, shared with the community and refined are solid. I don't know what you're building, but I pretty much guarantee that LapDog, JAMA or Actor Framework can handle it. AF maintains simplicity relative to the other frameworks. LapDog is more powerful, and JAMA is more powerful than that. Each power jump comes with additional complexity. Collectively, we give you enough thrust to reach the heights you aim for... or enough rope to hang yourself, depending upon your point of view. :-)

Since AF is my baby, let me lay out how you'd use it for this case:

  • Prime actor is Mediator. You send "Launch Producer" or "Launch Consumer" messages to Mediator.
  • When sending "Launch Producer", you include in the message some ID of the type of data this producer produces.
  • Producers generate data. They collect that data in their local state unless/until they receive a "Here's A Consumer" message, at which point they send all their pent-up messages into the consumer's queue (sketched below). This is important -- and is a bug that I'm guessing exists in what you proposed -- because the lifetime of the queue is tied to the consumer, not the producer, so you don't have vanishing-queue problems when a producer disconnects before the consumer is done processing all the values.
  • Consumers consume data. When Mediator spins one up, he gives the consumers the list of producer types and the producer send queues (does this directly during spin up... no need to pass a message to the consumer). The consumer picks the one she wants and sends "Here's a Consumer" message -- no need to go back to the Mediator. Thereafter, consumer just sits in the loop and eats data until she gets a Stop message.
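A rough sketch of that handshake (generic Python pseudo-actors, emphatically not the Actor Framework's actual API); the key point is that the consumer owns the data queue and hands it to the producer:

```python
import queue

def producer(inbox: queue.Queue, generate):
    """Buffer data locally until a consumer announces itself, then stream into its queue."""
    pending, consumer_q = [], None
    while True:
        try:
            cmd = inbox.get(timeout=0.1)
            if cmd[0] == "Here's A Consumer":
                consumer_q = cmd[1]              # queue created by, and owned by, the consumer
                for item in pending:
                    consumer_q.put(item)
                pending.clear()
            elif cmd[0] == "Stop":
                break
        except queue.Empty:
            pass
        item = generate()                        # produce the next piece of data
        if consumer_q is not None:
            consumer_q.put(item)
        else:
            pending.append(item)                 # hold on to data until a consumer exists

def consumer(producer_inbox: queue.Queue):
    """Create the data queue here, so its lifetime is tied to the consumer, not the producer."""
    my_data_q = queue.Queue()
    producer_inbox.put(("Here's A Consumer", my_data_q))
    while True:
        item = my_data_q.get()
        if item is None:                         # sentinel standing in for a Stop message
            break
        print("consumed:", item)
```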

If you give me a couple days, version 3 of the AF all polished up will be posted at the site I linked earlier. If you decide to use one of the others, great. Just don't build a custom system unless you just cannot make the others work. Please. We lose too many programmer heroes when they get sucked into jet engines.


Hahahaha. Thanks AQ. There is a huge smile on my face atm, purely because I feel like, while I'm stumbling along by myself, I'm following in the footsteps of people who have gone before me. It's hard to explain the feeling, but it's nice to be tinkering and know that there are people out there who can look at what you're doing and say, "that's ok... but it won't work and here's why".

You nailed in that post exactly what I've just been dealing with: specifically, I try to shut down everything at program end, but I have no control over the order of sub-VI shutdowns, hence I have been scratching my head over how to gracefully deal with the loss of queue references. In my hacked-together approach, I guess I could set some sort of "Producer or Consumer" property and then deal with it like that...

Thanks as well for the succinct overview of how to attack the problem with AF. I'll have a look over it and see if I can get my head around it. I'm definitely in the "huh?" phase of trying to think in terms of OOP.

Your post is why this forum rocks :).

Edit: Is there an AF version available for 2010?

Edit 2: Ahh well, guess this is the kick up the ass I need to finally install 2011 (incidentally, could you have a word with whoever about some sort of bulk "search and install" function which can look at every toolkit I have currently installed and install the most recent version of it?)


I can't figure out how to create a scenario where a dynamically launched sub-VI creates a data pipeline queue (for offloading processing of high-speed data to another asynchronous parallel process), then registers the queue with the mediator, which passes it to a sub-VI that asks for it.

Alex, you can guess my advice: do away with the over-complexity of dynamic launching and the like. But ignoring that...

Whatever way you do this, it is better to think of the consumer making the data pipeline and getting that reference to the producer, rather than the other way round. The consumer is dependent on the queue; the producer is not. If I were designing something like this with my messaging design shown above, the producer would send its data to an "ObserverSet", which by default is empty (data sent nowhere). The consumer would create a "messenger" (queue) and send it to the producer in a "Hey, send me the data!" message (alternately, the higher-level part of the program that creates both consumer and producer would create the queue and the message). The producer would add the provided messenger to the ObserverSet, allowing data piping to commence.

In the example below, the consumer code registers its own queue with the producer and is piped 100 data points.

post-18176-0-90101900-1317038602_thumb.png

I had a look over AF, but I just don't think I'm ready (no free time atm) to battle with the OOP concepts.

The Actor Framework is rather advanced. You might want to look at "LapDog" though, as it is a much simpler application of LVOOP that one can "get" more easily (I am surprised AQ thinks the Actor Framework is simpler).

-- James


Hey James,

I know, I know, look for simplicity. I try, I really do, but I'm also obsessed with this project as a way to learn new tricks, even though they may not be all that useful. I figure once I've got more of an understanding of what can be done, I can learn what should be done.

Anyway, my solution, without getting into any OOP, was to take the message a consumer sends acknowledging a data queue reference and do some logic in the host to add that VI to a list of consumers. Then, when shutting down, consumers are shut down before producers; finally, the main host cleans itself up.

A quick question though: when the VI that creates a queue is shut down, it obviously destroys that queue reference, as all other VIs subsequently return an error. Is this the same as calling the "Release Queue" function in terms of memory behaviour? In other words, if the queue reference is lost this way, is the memory still freed up?


A quick question though: when the VI that creates a queue is shut down, it obviously destroys that queue reference, as all other VIs subsequently return an error. Is this the same as calling the "Release Queue" function in terms of memory behaviour? In other words, if the queue reference is lost this way, is the memory still freed up?

I believe so. It's "garbage collection", where LabVIEW frees up the resources of the VI.


It's not garbage collection in the usual technical sense of the word. "Garbage collection" is used in systems where, say, two objects both use a third object. When the first object disappears, all top-level objects (like the second object) are traversed, and any objects that don't get visited are thrown out of memory. Since the third object gets visited, it stays. When the second object disappears, all objects are checked again. It's used in systems where there isn't a reference count to determine whether or not to throw an item out of memory, because incrementing and decrementing the reference count is too much thread friction. The trick is that the garbage collector isn't actually run after every top-level object gets thrown away... it runs periodically during downtime and cleans up large swaths of memory.

LabVIEW just deallocates the queue when the VI that created it goes idle. There's no garbage collector algorithm that goes hunting for all possible queues to see which ones should live and which ones should die.

And, yes, what the VI does when it goes idle is identical to calling Release Queue. The same function is invoked under the hood.


Ok, that leads me to another question. Given that my mediator/host is responsible for managing the order of shutdown, and hence preserving the integrity of the queues, should I be dropping those Release Queue operations on the block diagram as a matter of principle? Or should I just leave it? I guess that's a "best coding practice" type question.
