Notifier with buffer


Recommended Posts

Notifiers guarantee that all VIs that wait for a notification will be notified, but they do not buffer the notifications (well, you can get the previous one, but otherwise they are lossy). Queues, on the other hand, have a buffer, but do not guarantee that all VIs will see the incoming data. If one VI dequeues data from a queue, other VIs waiting for data in that queue will miss it (which makes it rather uncommon to have multiple consumers on the same queue).

Has anyone made a design pattern that combines the "delivery to all subscribers guaranteed" behaviour of the notifier with the "delivery of all data" behaviour of the (non-lossy) queue?

Here is an example of how you would use such a "Notifier with buffer":

You have a loop that measures something every now and then. To communicate the readings, you have another loop that takes the readings and writes them to a different app using TCP/IP. You implement this as a producer-consumer pattern based on a queue; this way the TCP communication does not need to poll a list of readings and check which ones are new. Instead it sits idle and waits for new data. Great. Later, however, you find that you want to add another function that needs the data. With a regular queue, this would mean modifying your producer to deliver data to an additional queue.

With a "Notifier with buffer", however, you would not need to change the code of the producer at all. You would just register a new subscription to "Notifier name" and, voila, your new code has a lossless stream of the data it needs. The fact that you do not need to edit the producer makes it possible to add such extensions to built applications, provided you have given them a plugin architecture.

Realising this idea would be possible by, for example, making a queue manager that pipes data from incoming queues out to multiple subscriber queues. Whenever you have a producer of data, you create a queue and hand a reference to it to the queue manager. If a VI wants to subscribe to the data, it asks the manager for a reference, using the name of the queue used by the producer. The queue manager looks up its own list of queues and, when it finds the named queue, creates an outgoing queue and gives the subscriber that reference. It also adds the subscriber to a list it gives to a dynamically created feeder. A feeder VI is a VI that waits for data from a given incoming queue (the producer's) and then writes that data to its list of outgoing queues.
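Since LabVIEW is graphical, here is a rough text-language sketch of that queue-manager idea in Python. All names here (QueueManager, register_feed, subscribe) are invented for illustration, not LabVIEW API: one feeder thread per registered feed broadcasts each incoming item to every subscriber queue, so each subscriber gets a lossless copy of the stream.

```python
import queue
import threading
from collections import defaultdict

class QueueManager:
    """Pipes data from named producer queues out to any number of subscriber queues."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = defaultdict(list)   # feed name -> list of outgoing queues

    def register_feed(self, name, producer_queue):
        """Producer hands over its queue; a feeder thread starts piping it out."""
        t = threading.Thread(target=self._feed, args=(name, producer_queue), daemon=True)
        t.start()

    def subscribe(self, name):
        """Subscriber asks for the feed by name and gets its own outgoing queue."""
        q = queue.Queue()
        with self._lock:
            self._subscribers[name].append(q)
        return q

    def _feed(self, name, producer_queue):
        while True:
            item = producer_queue.get()
            if item is None:                    # sentinel shuts the feed down
                return
            with self._lock:
                outgoing = list(self._subscribers[name])
            for q in outgoing:
                q.put(item)                     # every subscriber sees every item
```

Note that the producer never changes when a new subscriber appears; a new consumer just calls subscribe("name"), which is exactly the property asked for above.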

So far I've only played with this idea. Perhaps I'm overlooking an already existing alternative, or a big flaw?

Edited by Mads
Link to comment

So far I've only played with this idea. Perhaps I'm overlooking an already existing alternative, or a big flaw?

I think User Events are an already existing alternative: just split the user event wire, have each listener register itself with a separate event registration refnum, and every event sent after the registration is delivered to that listener's event structure:

(attached screenshot: post-2399-125095530615_thumb.png)

Ton

Link to comment

Has anyone made a design pattern that combines the "delivery to all subscribers guaranteed" behaviour of the notifier with the "delivery of all data" behaviour of the (non-lossy) queue?

I prototyped an Event Manager class very much like what Ton describes. The Event Manager had a separate accessor VI for each user event consumers could monitor. It followed a broadcast model: the Event Manager had no idea how many subscribers there were for each event; it just fired the event when it needed to. Consumers were responsible for handling all registering/unregistering.

I don't remember how the event producers notified the Event Manager class. If you're interested I'll try to dig it up and give you some more details.

Link to comment

User events will give you much of the "feeder" logic for free, so that's a good idea. However, you still need a manager that lets VIs register their feeds or subscribe to a feed. The manager would hold an event reference lookup table. The point of this is that you do not know how many feeds or subscribers you have at compile time; instead, the code allows you to add these dynamically, e.g. when you create plugins to your application that want access to a data stream.

Ideally this could be something packaged with LV, or with OpenG, as it would be a very useful design pattern.

Daklu, if there is not too much digging required, that would be very interesting to see, yes :-)

Mads

I prototyped an Event Manager class very much like what Ton describes. The Event Manager had a separate accessor VI for each user event consumers could monitor. It followed a broadcast model: the Event Manager had no idea how many subscribers there were for each event; it just fired the event when it needed to. Consumers were responsible for handling all registering/unregistering.

I don't remember how the event producers notified the Event Manager class. If you're interested I'll try to dig it up and give you some more details.

Link to comment

I had this problem recently.

My scenario was that the asynchronous consumers could either send a "completed" notification immediately (no part present) or after some arbitrary amount of time (part has been processed). The sequence engine would wait for all notifications from the subsystems, index a carousel and then send a "Go" notification to all of them. The problem was that if a subsystem sent its notification immediately, the chances were that the sequence engine wasn't waiting on the notifier at that moment, so on the next round it would wait indefinitely.

The elegant solution would have been "Wait on Notification with History". But this doesn't wait again unless you've cleared the notification, and since you don't know which is the last one in an asynchronous system, you don't know when to clear it. If it executed once for each notification in the history, it would be perfect! The normal notifier will only execute when you are actually waiting (if "ignore previous" is true); otherwise, again, you need to clear it (if it is false). So that didn't work either, since the sequence engine was unlikely to already be waiting if a subsystem returned immediately.

So I ended up with all the subsystems waiting on a notifier to "go", and a separate single-element lossy queue for each subsystem to send back the "completed" (basically a notifier with a history of 1). Luckily I have a Queue.vi, so it wasn't messy and just meant replacing my Notifier.vis with the queue one.
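For readers outside LabVIEW, that single-element lossy queue (a "notifier with a history of 1") can be sketched in Python like this; the class and method names are mine, for illustration only:

```python
import threading

class LossyMailbox:
    """Single-element lossy queue: send() overwrites any unread value,
    wait() blocks until a value is present and then consumes it.
    A value sent before the reader starts waiting is therefore not lost."""

    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._full = False

    def send(self, value):
        with self._cond:
            self._value = value        # lossy: overwrite whatever was there
            self._full = True
            self._cond.notify()

    def wait(self, timeout=None):
        with self._cond:
            if not self._cond.wait_for(lambda: self._full, timeout):
                raise TimeoutError("no value arrived in time")
            self._full = False         # consume the value
            return self._value
```

Unlike a plain notifier, a "completed" sent before the sequence engine reaches its wait is held until it is consumed; unlike a history that needs clearing, each wait consumes exactly one value.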

Edited by ShaunR
Link to comment

Daklu - if there is not too much digging required...that would be very interesting to see yes:-)

Here's one of the prototypes I worked up. As I was looking through it I realized my memory about what it did wasn't quite right. (Getting old is the main disadvantage of getting older.)

I developed this as a way to explore using User Events to send messages across package boundaries. My main goal was to make it as easy as possible for the subscriber while maintaining reasonably loose coupling between the producers/event monitor package and the consumer. Although you can use it for a many-to-many model, it is more geared towards registering/unregistering events at runtime. I've put more details in the Readme file in the project. As always, comments and questions are welcome.

If you need a more general purpose solution the link ESST provided is probably a better fit, although I haven't looked at the code in detail.

Publish-Subscribe with User Events.zip

Edited by Daklu
Link to comment

Option 1: Create a queue, a notifier and a rendezvous. Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special. It dequeues from the queue and sends to the notifier. All the rest of the Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

Option 2: Create N + 1 queues, where N is the number of receivers you want. Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can without regard to how far the other loops have gotten in their list of messages).
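As a non-LabVIEW illustration, Option 1 can be sketched in Python with two barriers. The second barrier stands in for the notifier broadcast, and whichever thread draws index 0 at the rendezvous plays Receiver Alpha for that round (a small liberty versus the fixed Alpha in the description above):

```python
import queue
import threading

N = 3                                   # number of receiver loops
rendezvous = threading.Barrier(N)       # all receivers meet here each round
handoff = threading.Barrier(N)          # stands in for the notifier broadcast
slot = [None]                           # latest message, written by this round's Alpha
source = queue.Queue()                  # the sender enqueues here
results = [[] for _ in range(N)]

def receiver(i):
    while True:
        if rendezvous.wait() == 0:      # one thread per round acts as Receiver Alpha
            slot[0] = source.get()      # Alpha dequeues the next message...
        handoff.wait()                  # ...and "notifies" all receivers
        msg = slot[0]
        if msg is None:                 # sentinel: shut everything down
            break
        results[i].append(msg)          # process, then loop back to the rendezvous
```

Every receiver sees every message, and because all threads regroup at the rendezvous before Alpha fetches the next message, no receiver starts message k+1 before all have finished message k.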

Link to comment

Option 1: Create a queue, a notifier and a rendezvous. Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special. It dequeues from the queue and sends to the notifier. All the rest of the Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

Option 2: Create N + 1 queues, where N is the number of receivers you want. Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can without regard to how far the other loops have gotten in their list of messages).

I'd prefer a "Wait on notifier history" that only executed the number of elements in the history. LV 2010?

Link to comment

I'd prefer a "Wait on notifier history" that only executed the number of elements in the history. LV 2010?

Me too :-) That way getting notifier references by name, and handling different data types, would be as clean and effective as it could be, and it's functionality that complements what is already available.

Link to comment

A side note - but anyway:

I did some experiments today just to see how I could implement this "Notify with History", and I played with the idea of using LVOOP for this.

For some reason I half-expected to be able to produce unlimited polymorphism this way, i.e. in this case the ability to create a notifier of any data type, while the top public VIs (Obtain Notifier, Wait On Notification, etc.) stay virtually the same regardless of the data type of the notifier. As far as I can see, that is still not possible.

Even if you limited the number of data types you supported, and had different sub-classes for notifiers of those different data types (to then extract and handle the standard queue or event reference that would in fact be at the core of the system), you would still need an old-style polymorphic VI to let the callers use the public VIs without hard-coding the creation of the correct sub-class... right?

(Have not used LVOOP much yet...)

Link to comment

For some reason I half-expected to be able to produce unlimited polymorphism this way, i.e. in this case the ability to create a notifier of any data type, while the top public VIs (Obtain Notifier, Wait On Notification, etc.) stay virtually the same regardless of the data type of the notifier. As far as I can see, that is still not possible.

Even if you limited the number of data types you supported, and had different sub-classes for notifiers of those different data types (to then extract and handle the standard queue or event reference that would in fact be at the core of the system), you would still need an old-style polymorphic VI to let the callers use the public VIs without hard-coding the creation of the correct sub-class... right?

Make the data type of your notifier LabVIEW Object. That's the constant you can find in the palette, and it is the common ancestor of all the other classes. You can now wire ANY class into that input, so you don't need to write these VIs more than once.

Note - I didn't read the rest of the thread, but I'm assuming each class (data type) will also need an overriding VI which will handle the actual data and might need a constructor.

Link to comment

Sure, making VIs that accept all of the related objects is not a problem... but you still need to create the object, and that means the callers must either deal with that directly (which we want to avoid; they should just hand off a notifier name and a user event or queue ref of any type), or you need to make one object creator per data type and join them into a polymorphic VI. The latter limits the new "Notifier with History" to the number of types you have VIs for in the polymorphic creator VI (another thing we do not want).

The primitive notifier on the other hand has an obtain function that will accept ANY data type, not just objects of one of the correct classes.

I'll probably make my Notifier with History limited to just strings this time. That will at least give more or less the same behaviour as the old notifiers/queues that used to be string-only too.

Link to comment

With a "Notifier with buffer", however, you would not need to change the code of the producer at all. You would just register a new subscription to "Notifier name" and, voila, your new code has a lossless stream of the data it needs. The fact that you do not need to edit the producer makes it possible to add such extensions to built applications, provided you have given them a plugin architecture.

Realising this idea would be possible by, for example, making a queue manager that pipes data from incoming queues out to multiple subscriber queues. Whenever you have a producer of data, you create a queue and hand a reference to it to the queue manager. If a VI wants to subscribe to the data, it asks the manager for a reference, using the name of the queue used by the producer. The queue manager looks up its own list of queues and, when it finds the named queue, creates an outgoing queue and gives the subscriber that reference. It also adds the subscriber to a list it gives to a dynamically created feeder. A feeder VI is a VI that waits for data from a given incoming queue (the producer's) and then writes that data to its list of outgoing queues.

So far I've only played with this idea. Perhaps I'm overlooking an already existing alternative, or a big flaw?

I think that network shared variables do everything that you've requested.

You can create a shared variable on machine 1, have machines 2 and 3 subscribe to it, and machines 2 and 3 will both get the same data.

The queue manager that you have hypothesized about is the Shared Variable Engine, which knows about all of the publishers and subscribers and buffers the data to make sure that all of the subscribers get the same data. You can set the buffers to arbitrary lengths.

In a consumer VI you can watch for a change event for the shared variable in question, and your consumer VI will be able to act on the data as it changes without sucking data away from any other VI. Although shared variables cannot (yet) accept custom types, you can flatten custom types to a string, pass them into the variable, and unflatten them on the other side.

Check out this link, which describes the entire process in detail: http://zone.ni.com/d...a/tut/p/id/4679

My project is using shared variables in this way to do what you have described.

Link to comment

I think that network shared variables do everything that you've requested.

I don't believe they intrinsically suspend execution until an update is received. They also have huge caveats (e.g. they cannot be dynamically created at run-time). It's a bit of a sledgehammer to crack a nut, IMHO.

Link to comment

I don't believe they intrinsically suspend execution until an update is received. They also have huge caveats (e.g. they cannot be dynamically created at run-time). It's a bit of a sledgehammer to crack a nut, IMHO.

To the best of my knowledge, ShaunR is correct. Shared variables do not provide synchronization behaviours. On a local machine, they are equivalent to global VIs. Over the network, they are a broadcast mechanism which, by polling, you can use as a trigger, but I don't think you have any way to sleep until a message is received.
Link to comment

I don't believe they intrinsically suspend execution until an update is received. They also have huge caveats (e.g. they cannot be dynamically created at run-time). It's a bit of a sledgehammer to crack a nut, IMHO.

This is correct, but I think that creating this behaviour is simple. You could use events: for instance, use an event-driven loop to wait on a variable-change event and not allow any other code execution until the event takes place (say, by using a notifier to notify local VIs!).

Edited by code ferret
Link to comment

I don't believe they intrinsically suspend execution until an update is received. They also have huge caveats (e.g. they cannot be dynamically created at run-time). It's a bit of a sledgehammer to crack a nut, IMHO.

The DSC Module allows one to create shared variable value change events that one can wire into the dynamic event terminal of an event structure.

The DSC Module also allows one to create shared variables programmatically at run-time. (I see jgcode just mentioned this.) Currently this feature only supports basic shared variable types, unfortunately.

In our code we use shared variable events a lot and they work great. In practice we haven't needed to create SVs from scratch at run-time yet. We have done something similar by programmatically copying existing shared variable libraries (with new SV names) and then deploying the copies, which is a useful way to work with multiple instances of a component.

Shared variables have come a long way from their original incarnation, and I think networked shared variables are a pretty reasonable implementation of a publish-subscribe paradigm. They can be pretty easy to implement. (Don't get me wrong, there are some things I still want to change, but we find them quite useful.) I recommend taking a fresh look at them.

Paul

Link to comment

This is correct, but I think that creating this behaviour is simple. You could use events: for instance, use an event-driven loop to wait on a variable-change event and not allow any other code execution until the event takes place (say, by using a notifier to notify local VIs!).

Ooooh. It's all got very complicated, very quickly. Now we have variables, events AND notifiers :P

Take your proposed topology, replace the variable with a queue, and don't bother with events, since the queue will wait. You then have what I said a few posts ago about a single-element lossy queue!

The only reason you can't use a single notifier is that you have to already be waiting when it fires or (if you use the history version) you have to clear it before you wait.

The DSC Module allows one to create shared variable value change events that one can wire into the dynamic event terminal of an event structure.

The DSC Module also allows one to create shared variables programmatically at run-time. (I see jgcode just mentioned this.) Currently this feature only supports basic shared variable types, unfortunately.

In our code we use shared variable events a lot and they work great. In practice we haven't needed to create SVs from scratch at run-time yet. We have done something similar by programmatically copying existing shared variable libraries (with new SV names) and then deploying the copies, which is a useful way to work with multiple instances of a component.

Shared variables have come a long way from their original incarnation, and I think networked shared variables are a pretty reasonable implementation of a publish-subscribe paradigm. They can be pretty easy to implement. (Don't get me wrong, there are some things I still want to change, but we find them quite useful.) I recommend taking a fresh look at them.

Paul

Not everyone has (or can afford) the DSC Module. Queues, notifiers and network shared variables all come as standard. Coding around it isn't difficult with the built-in tools; it's just bloody annoying when a single notifier with a history that gets checked off every time it executes would halve the code complexity. In fact, it shouldn't really be called a "notifier with history"; perhaps a better name would be "notifier that gets round the other notifier bug"... lol.

Edited by ShaunR
Link to comment

Ooooh. It's all got very complicated, very quickly. Now we have variables, events AND notifiers :P

True, it is complex! You may be right that queues are preferable when running software on only one machine or in one VI. But I'm not ready to lower the shared variable battle standard :-) . The advantage of shared variables is that you can use them across PCs on a network.

Link to comment

This is correct, but I think that creating this behaviour is simple. You could use events: for instance, use an event-driven loop to wait on a variable-change event and not allow any other code execution until the event takes place (say, by using a notifier to notify local VIs!).

Put a single network shared variable in a VI. Save it. Build it into an exe and then build an installer. How big is the installation?

Link to comment

Not everyone has (or can afford) the DSC Module.

True. If, on the other hand, it is an option, it is a good one. For the record, I think for a number of reasons that the functionality in the DSC Module ought to be part of the LabVIEW core. In particular, the functionality in the DSC Module extends existing functionality in such a way that it can take a while to figure out where the boundary lies. Moreover, the publish-subscribe option (the Observer pattern) is extremely useful, and pretty much a common programming standard, and NI ought to promote its use in most applications. I think doing so would make LabVIEW development more effective and presumably enhance its marketability in turn.

Put a single network shared variable in a VI. Save it. Build it into an exe and then build an installer. How big is the installation?

I didn't try this, but I'm guessing from your question that it will be big. Then the developer must choose whether the larger memory footprint justifies the ease of development for the particular application. (I also presume that the footprint does not scale linearly with the number of shared variables.) I think for many (most?) applications the larger footprint is not a serious issue.

Link to comment
