
Has anyone used LapDog to pass messages across a network? In my case I'm thinking of an RT system controlled from a Windows computer. The only similar feature I've seen is in the Asynchronous Message Communication Reference Library - has anyone already added something like that to LapDog? I just didn't want to spend too much time developing something that's already available. I haven't used either AMC or LapDog extensively before, but my impression is that LapDog gives a little more control and flexibility. (And the Actor Framework perhaps a little too much - admittedly a poor reason for not going that route!)


I have sent Objects using TCP/IP or Shared Variables and it works fine.

Making the network comms transparent to the local messaging framework via some proxy makes sense.

Given your preference - if AMC has TCP/IP support built in (?), then you may be able to combine it with LapDog to achieve the functionality you are after.

Additionally, it may be worth looking at LVx etc., as all this stuff may already be done for you :)

Paul posts a lot about this architecture (LVOOP over TCP/IP), so I would check out his previous posts too - it may help.

Cheers

-JG


I believe AMC uses UDP, instead of TCP, for network communication. Depending on what you are doing, you might require the lossless nature of TCP.

You should also have a look at ShaunR’s “Transport” and “Dispatcher” packages in the Code Repository; they do TCP.

I’ve done some (limited) testing of sending objects over the network (not Lapdog, but very similar), and the only concern I had was the somewhat verbose nature of flattened objects. I used the OpenG Zip tools to compress my larger messages and that worked quite well.


I’ve done some (limited) testing of sending objects over the network (not Lapdog, but very similar), and the only concern I had was the somewhat verbose nature of flattened objects. I used the OpenG Zip tools to compress my larger messages and that worked quite well.

Out of curiosity, what kind of throughput do you see? How big (generally) are your payloads?


Hey Greg,

One minor issue I've had with sending LapDog-type messages over the network (especially custom message types) is that running the program to test will lock out the classes and prevent editing. As such, I use LapDog on the RT machine, but avoid sending any LapDog messages over the network.


Thanks for these responses.

I believe AMC uses UDP, instead of TCP, for network communication. Depending on what you are doing, you might require the lossless nature of TCP.

You should also have a look at ShaunR’s “Transport” and “Dispatcher” packages in the Code Repository; they do TCP.

Yes, AMC uses UDP, and yes, lossless messaging is required, but I hadn't realised UDP could be lossy, so that's very helpful. I'll have a look at those CR packages you mention.

I’ve done some (limited) testing of sending objects over the network (not Lapdog, but very similar), and the only concern I had was the somewhat verbose nature of flattened objects. I used the OpenG Zip tools to compress my larger messages and that worked quite well.

Do you flatten to string or to XML (and does it really matter)? Sorry, I'm rather new to all this - finally stepping away from the JKI State machine architecture :)

One minor issue I've had with sending LapDog-type messages over the network (especially custom message types) is that running the program to test will lock out the classes and prevent editing. As such, I use LapDog on the RT machine, but avoid sending any LapDog messages over the network.

Can you elaborate? Do you create custom message types for each command? I had thought of using the built-in native and array types, with variants used for more complex data (clusters etc.), but can see that custom message types could be a lot neater. Also, I can't quite see why locked classes are an issue - once the message types are defined, I thought they shouldn't need to be edited?

Thanks.

Do you flatten to string or to XML

I am pretty sure there were a few bugs with XML in previous versions of LabVIEW that flattening to string avoided. Not sure if/when they were fixed (e.g. here), and I think there was a name-spacing issue (with classes wrapped in .lvlibs) too.


Can you elaborate? Do you create custom message types for each command? I had thought of using the built-in native and array types, with variants used for more complex data (clusters etc.), but can see that custom message types could be a lot neater. Also, I can't quite see why locked classes are an issue - once the message types are defined, I thought they shouldn't need to be edited?

Hi,

Yeah, for some commands that carry data with them (specifically, things that interact with the image receive a message containing the header information and the image data), I create custom messages. Locked classes are only an issue when you haven't finalised the object data of some custom message. You're right - with good design, it shouldn't be an issue.


Out of curiosity, what kind of throughput do you see? How big (generally) are your payloads?

My limited testing was mostly functional (and over a slow-uploading home broadband connection) so I can’t answer for throughput. My messages vary in size, but common small ones have at least two objects in them (the message itself and a “reply address” in the message) and so flatten into quite a long string: a “Hello World” message is 75 bytes. I wrote a custom flattening function for the more common classes of message which greatly reduced that size. Also, for larger messages that can involve several objects I found that ZLIB compression (OpenG) works quite well, due to all the repeated characters like “.lvclass” and long strings of mostly zeroes in the class version info.
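For anyone curious what that compress-before-sending step looks like in textual form, here is a minimal Python sketch using the standard zlib module (the "flattened" bytes are invented for illustration; in LabVIEW the OpenG Zip VIs do the zlib work on the output of Flatten To String):

    import zlib

    # Invented stand-in for LabVIEW's "Flatten To String" output: every
    # object in a message embeds its qualified class name plus version info
    # (runs of mostly-zero bytes), which is why the string compresses well.
    flattened = (b"MyApp.lvlib:HelloWorldMsg.lvclass" + bytes(16) +
                 b"MyApp.lvlib:ReplyAddress.lvclass" + bytes(16) +
                 b"Hello World")

    compressed = zlib.compress(flattened)
    print(len(flattened), "->", len(compressed), "bytes on the wire")

    # Receiving end: decompress before unflattening.
    assert zlib.decompress(compressed) == flattened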

Do you flatten to string or to XML (and does it really matter)? Sorry, I'm rather new to all this - finally stepping away from the JKI State machine architecture :)

I use “Flatten to String”, which works great communicating between LabVIEW instances. If you need one end to be non-LabVIEW then you’ll want something like XML or JSON.

— James


I haven't used LapDog for communicating with RT targets. For several years I've had it in mind to wrap the network stream functions in a class similar to how the queues are wrapped up, but I haven't actually done it yet. Mostly because I don't get to do many cRIO apps so I haven't had enough time to explore good implementations.

However, I have heard unconfirmed reports from a CLA I know and trust that there is a small memory leak in RT targets when using objects. The issue has to do with LV not releasing all the memory when an object is destroyed. It wouldn't be a problem for a system with objects that are supposed to persist through the application's execution time, but it could be an issue if objects are frequently created and released (like with an object-based messaging system).

Like I said, this is an unconfirmed report. The person I heard it from got the information from someone at NI who should be in a position to know, but AQ hasn't heard anything about it. (Or at least he hasn't admitted to it. :shifty:) Maybe it's a miscommunication. Once I get the name of the NI contact I'll let AQ know who it is and hopefully we'll get it cleared up. In the meantime, I'm choosing not to use LapDog on RT targets.

--- Edit ---

I added variant messages to support James' request for values with units attached to them and to give users the ability to use variants if they want to. In general creating custom message classes for different data types will give you better performance by skipping the variant conversion.


I haven't used LapDog for communicating with RT targets. For several years I've had it in mind to wrap the network stream functions in a class similar to how the queues are wrapped up, but I haven't actually done it yet. Mostly because I don't get to do many cRIO apps so I haven't had enough time to explore good implementations.

Wrapping the network stream functions sounds interesting, though I imagine it may not be easy. It would help if the network stream functions directly accepted a class as the data type, but I guess there are too many difficulties maintaining identical classes across different machines. I can't see a MessageStream class inheriting from the existing MessageQueue class, so I guess one question is whether you'd approach it by defining another MessageTransfer(?) class which both would be subclasses of? There's also a small issue of separated Reader and Writer streams.

However, I have heard unconfirmed reports from a CLA I know and trust that there is a small memory leak in RT targets when using objects.

Is this related solely to cRIO, or to all RT targets (such as the R-series board I'm using)?


I added variant messages to support James' request for values with units attached to them and to give users the ability to use variants if they want to. In general creating custom message classes for different data types will give you better performance by skipping the variant conversion.

I use variants mostly for simple values, to avoid having 101 different simple message types. I have four or five polymorphic VIs for simple messages, and having so many different message types would be unmanageable. But for any complex message, I usually have a custom message class. It’s no more complex to make a new message class than to make a typedef cluster to go inside a variant message.
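To illustrate the tradeoff in textual form, here is a rough Python sketch (class names are invented; a loosely-typed attribute plays the role of the LabVIEW variant):

    from typing import Any

    class Message:
        def __init__(self, name: str):
            self.name = name

    class VariantMessage(Message):
        # Generic carrier: 'data' stands in for a LabVIEW variant.
        def __init__(self, name: str, data: Any):
            super().__init__(name)
            self.data = data

    class MotorConfigMessage(Message):
        # Custom message class: typed fields, no variant conversion needed.
        def __init__(self, motor_id: int, max_rpm: float):
            super().__init__("MotorConfig")
            self.motor_id = motor_id
            self.max_rpm = max_rpm

    simple = VariantMessage("SetGain", 2.5)       # fine for simple values
    complex_msg = MotorConfigMessage(1, 3000.0)   # neater for complex data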

— James

However, I have heard unconfirmed reports from a CLA I know and trust that there is a small memory leak in RT targets when using objects. The issue has to do with LV not releasing all the memory when an object is destroyed. It wouldn't be a problem for a system with objects that are supposed to persist through the application's execution time, but it could be an issue if objects are frequently created and released (like with an object-based messaging system).

I’m hoping to work up to testing my object messages on an sbRIO next week. I will be sure to run a memory-leak test.


Wrapping the network stream functions sounds interesting, though I imagine it may not be easy. It would help if the network stream functions directly accepted a class as the data type, but I guess there are too many difficulties maintaining identical classes across different machines.

I used network streams for the first time a couple months ago and didn't realize they did not accept classes. I'd have to play around with different implementations to see what worked. If the app is simple my first instinct is to flatten the message object to a string and send that. On the receiving end you could (I think) unflatten it to a Message object, read the message name, and then downcast it to the correct message class. That does present problems if the app isn't deployed to all targets at once.
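A rough Python sketch of that receive-side pattern (pickle stands in for Flatten/Unflatten To String, and the class names are invented):

    import pickle  # stands in for LabVIEW's Flatten/Unflatten To String

    class Message:
        def __init__(self, name: str):
            self.name = name

    class EnableMotorMessage(Message):
        def __init__(self, motor_id: int):
            super().__init__("EnableMotor")
            self.motor_id = motor_id

    # Sender: flatten the message object and write the bytes to the stream.
    wire = pickle.dumps(EnableMotorMessage(motor_id=3))

    # Receiver: unflatten, read the message name, then act on the specific
    # class. (In LabVIEW the downcast is an explicit To More Specific Class.)
    msg = pickle.loads(wire)
    if msg.name == "EnableMotor":
        print("enabling motor", msg.motor_id)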

I can't see a MessageStream class inheriting from the existing MessageQueue class, so I guess one question is whether you'd approach it by defining another MessageTransfer(?) class which both would be subclasses of?

Close. You are correct that the MessageQueue class is not intended to be subclassed for different transport mechanisms. The methods it exposes are specific to dealing with queues, not message transports in general.

If you wanted to create a class that completely abstracts whether the transport is a queue or a stream, I'd start by creating a MessageStream class that closely mimics all the functionality of the MessageStream functions. Then I'd create a high level abstract MessageTransfer class with simplified Send and Receive methods. Finally, I'd create QueueTransfer and StreamTransfer classes that override the MessageTransfer methods. These overriding methods would in turn call the MessageQueue and MessageStream objects contained within the QueueTransfer and StreamTransfer objects. (The MessageQueue and MessageStream objects are passed in as part of the Transfer object's Create method.) I'm thinking of something like this...

post-7603-0-07975800-1342471944.png
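In textual form, the hierarchy in that diagram might look something like this Python sketch (all names are stand-ins for the LabVIEW classes being discussed, not an actual implementation):

    from abc import ABC, abstractmethod
    from queue import Queue

    class MessageTransfer(ABC):
        # High-level abstraction: just Send and Receive.
        @abstractmethod
        def send(self, msg): ...
        @abstractmethod
        def receive(self, timeout=None): ...

    class QueueTransfer(MessageTransfer):
        def __init__(self, message_queue: Queue):  # passed in at Create time
            self._queue = message_queue
        def send(self, msg):
            self._queue.put(msg)
        def receive(self, timeout=None):
            return self._queue.get(timeout=timeout)

    class StreamTransfer(MessageTransfer):
        def __init__(self, reader, writer):  # streams have separate endpoints
            self._reader, self._writer = reader, writer
        def send(self, msg):
            self._writer.write(msg)
        def receive(self, timeout=None):
            return self._reader.read(timeout)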

Would this work? I have no idea. I'd have to implement it to find out. Personally I'm not convinced creating a generalized MessageTransfer class is worth the effort. You don't want to read from multiple transports in the same loop anyway (except perhaps in very specific situations). For example, suppose I have loop A that needs to be aware of messages from loop B (via queue) AND from loop C on another target (via streams). Rather than trying to route messages directly from loop C to loop A, I'd create loop D, whose job it is to get messages from the stream and put them on loop A's queue. This also gives me someplace to put all the network connection management code if I need it, without cluttering up loop A with lots of extra stuff.
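As a sketch, "loop D" reduces to a small bridge (Python; the names are invented, and a Queue stands in for the network stream reader):

    import threading
    from queue import Queue, Empty

    def bridge_loop(stream_rx: Queue, loop_a_queue: Queue,
                    stop: threading.Event):
        # "Loop D": drain messages arriving over the network transport and
        # re-enqueue them on loop A's local queue. Connection management
        # would live here too, keeping loop A free of networking code.
        while not stop.is_set():
            try:
                msg = stream_rx.get(timeout=0.1)
            except Empty:
                continue
            loop_a_queue.put(msg)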

Is this related solely to cRIO, or to all RT targets (such as the R-series board I'm using)?

My impression is that it applied to all RT targets. The person was working specifically with an sbRIO device though, and for all I know it was limited to that hardware model.

Again, this is an unverified rumor I heard through the grapevine. I don't want to cause NI unnecessary troubles, but with object-based messaging gaining in popularity I wanted people to be aware of the possibility of there being an issue so they can test for it.

I’m hoping to work up to testing my object messages on an sbRIO next week. I will be sure to run a memory-leak test.

Any chance you could write some simple test code that creates and releases thousands of random number objects per second? I'd be a bit surprised if that revealed the leak, but it's the first step I'd take.


For example, suppose I have loop A that needs to be aware of messages from loop B (via queue) AND from loop C on another target (via streams). Rather than trying to route messages directly from loop C to loop A, I'd create loop D, whose job it is to get messages from the stream and put them on loop A's queue. This also gives me someplace to put all the network connection management code if I need it, without cluttering up loop A with lots of extra stuff.

That’s the route I took in adding TCP message communication to my system, with “loop D” being a reusable component dynamically launched in the background. Incoming TCP messages are placed on a local queue, which can also have local messages placed on it directly.

post-18176-0-02621800-1342527120.png

Because “TCPMessenger” is a child of “QueueMessenger”, I can substitute it into any pre-existing component that uses QueueMessenger, making that component available on the network without modifying it internally.
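A bare-bones Python sketch of that substitution (the class names mirror the ones above, but the implementation is invented for illustration):

    import queue

    class QueueMessenger:
        def __init__(self):
            self._q = queue.Queue()
        def send(self, msg):
            self._q.put(msg)
        def receive(self, timeout=None):
            return self._q.get(timeout=timeout)

    class TCPMessenger(QueueMessenger):
        # A background process (not shown) would accept TCP connections,
        # unflatten incoming messages, and call self.send() so they land
        # on the same local queue that local senders use. Any component
        # written against QueueMessenger can be handed one of these.
        def __init__(self, port: int):
            super().__init__()
            self._port = port  # the background listener would launch here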

— James


Because “TCPMessenger” is a child of “QueueMessenger”, I can substitute it into any pre-existing component that uses QueueMessenger, making that component available on the network without modifying it internally.

How do you write your components so they terminate in the case of unusual conditions? For example, if a governing (master) actor exits incorrectly due to a bug, how do the sub (slave) actors know they should exit? Typically my actors will execute an emergency self termination under two conditions:

1. The actor's input queue is dead. This can occur if another actor improperly releases this actor's input queue. Regardless, nobody can send messages to the actor anymore so it shuts down.

2. The actor's output queue is dead. An actor's output queue is how it sends its output messages to its governing actor. In fact, a sub actor's output queue is the same as its governing actor's input queue. If the output queue is dead, then the actor can no longer send messages up the chain of command and it shuts down.

This scheme works remarkably well for queues, but I don't think it would be suitable for other transports like TCP. What logic do you put in place to make sure an actor will shut down appropriately if the messaging system fails regardless of which transport is used?
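In rough Python terms, the two checks look like this (the queue class is invented; in LabVIEW a dead queue shows up as an error from Enqueue/Dequeue Element):

    import queue

    class MessageQueue:
        # Minimal stand-in for a queue whose refnum can go invalid.
        def __init__(self):
            self._q, self.valid = queue.Queue(), True
        def enqueue(self, msg):
            if not self.valid:
                raise RuntimeError("queue released")
            self._q.put(msg)
        def dequeue(self, timeout):
            if not self.valid:
                raise RuntimeError("queue released")
            return self._q.get(timeout=timeout)

    def actor_loop(input_q: MessageQueue, output_q: MessageQueue):
        while True:
            try:
                msg = input_q.dequeue(timeout=0.1)
            except RuntimeError:
                break        # condition 1: input queue dead, shut down
            except queue.Empty:
                continue
            try:
                output_q.enqueue(("handled", msg))
            except RuntimeError:
                break        # condition 2: output queue dead, shut down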


How do you write your components so they terminate in the case of unusual conditions? For example, if a governing (master) actor exits incorrectly due to a bug, how do the sub (slave) actors know they should exit? Typically my actors will execute an emergency self termination under two conditions:

1. The actor's input queue is dead. This can occur if another actor improperly releases this actor's input queue. Regardless, nobody can send messages to the actor anymore so it shuts down.

2. The actor's output queue is dead. An actor's output queue is how it sends its output messages to its governing actor. In fact, a sub actor's output queue is the same as its governing actor's input queue. If the output queue is dead, then the actor can no longer send messages up the chain of command and it shuts down.

This scheme works remarkably well for queues, but I don't think it would be suitable for other transports like TCP. What logic do you put in place to make sure an actor will shut down appropriately if the messaging system fails regardless of which transport is used?

Well, my current TCPMessenger is set up as a TCP Server, and is really only suitable for an Actor that itself acts as a server: an independent process that waits for Clients to connect. A break in the connection throws an error in the client, but the server continues waiting for new connections. This is the behavior I have needed in the past, where I had Real-Time controllers that must continue operating even when the UI computer goes down.

However, most of my actors (in non-network, single-app use) are intended to shut down if their launching code/actor shuts down for any reason. For this I have the communication reference (queue, mostly) created in the caller, so it goes invalid if the caller quits, which triggers shutdown as in your case 1. Case 2 doesn’t apply in my system as there is no output queue.

Now, if I wanted auto-shutdown behavior in a remote actor, then I would probably need to make a different type of TCPMessenger that worked more like Network Streams than a TCP server, so that a break in the connection is an error on both ends and the remote actor is triggered to shut down.

— James


Well, my current TCPMessenger is set up as a TCP Server, and is really only suitable for an Actor that itself acts as a server...

I see. I took your statement to mean you can take an arbitrary actor, replace the QueueMessenger with a TCPMessenger, and all of a sudden have an actor suitable for network communication. I think that would be a tough trick to pull off without adding a lot of potentially unnecessary code to every actor.


I see. I took your statement to mean you can take an arbitrary actor, replace the QueueMessenger with a TCPMessenger, and all of a sudden have an actor suitable for network communication. I think that would be a tough trick to pull off without adding a lot of potentially unnecessary code to every actor.

I was reaching a bit with the “any actor” thing. Though my actors tend to be quite “server”-like already, much more so than your “SlaveLoops” or the Actor Framework “Actors”. They publish information and/or reply to incoming messages, and don’t have an “output” or “caller” queue or any hardwired 1:1 relationship with an actor at a higher level. They serve clients. The higher-level code that launches them is usually the main, and often only, client, but this isn’t hardcoded in. Thus, they are much more suitable to be sitting behind a TCP server.

— James


How do you write your components so they terminate in the case of unusual conditions?

It is possible to use a heartbeat or something similar. There are some ideas in this thread: http://lavag.org/topic/15531-detecting-network-drop-out-from-crio/page__fromsearch__1.

You could combine this with a watchdog, I guess. I haven't done that myself.
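A crude sketch of the heartbeat idea (Python; the names and timeout values are arbitrary):

    import time
    import threading

    HEARTBEAT_TIMEOUT_S = 3.0

    def heartbeat_monitor(last_ping: dict, shutdown: threading.Event):
        # The remote peer updates last_ping["t"] whenever a ping message
        # arrives; if pings stop, the actor triggers its own shutdown.
        # A hardware watchdog could be kicked from this same loop.
        while not shutdown.is_set():
            if time.monotonic() - last_ping["t"] > HEARTBEAT_TIMEOUT_S:
                shutdown.set()
            time.sleep(0.5)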

Paul


Yes, but I wouldn't want to use a heartbeat to keep all my actors alive just in case I ever decide to use it over a network.

The heartbeat can be built into your reusable TCP component (I’ve considered adding it to my background processes running the TCP connection). You don’t have to make it part of the actor code itself.

  • 10 months later...

I thought I'd put feelers out again on this topic to see if anyone has stumbled on some new wisdom in the last year or so. Let me outline what I'm doing and ask for advice on how to make it better.

 

Currently I have my program set up so that I have what amounts to a message routing loop on an RT machine and the equivalent on a Windows machine. Each of these routing loops is, in effect, a master in control of a TCP slave. The RT machine runs the server, the Windows machine runs the client.

 

I use LapDog as my primary communication method. When a message needs to be sent from a process running on the RT machine to a corresponding UI running on Windows, or vice versa, the message is explicitly wrapped in a series of layers like an onion. The message is then passed up, and each layer that touches it unwraps it and redirects it. The encapsulating messages are called things like "RouteTo7813R" and "RouteToLVRT". So if I wanted to send "EnableMotor", I would wrap it in "RouteTo7813R" THEN "RouteToLVRT". When this message goes up to the Windows routing loop, the loop sees "RouteToLVRT", casts it, strips out the payload, and in turn wraps that payload in a "SendViaTCP" message.

 

The "SendViaTCP" message is sent to the TCP Client which once again, casts, and strips the payload, then flattens and sends. When it arrives at the TCP Server, it's reconstituted according to size in a fairly typical manner for an LV TCP process (I believe) as a generic Lapdog message. Which is immediately sent out of the TCP server to the RT routing loop. This reads "RouteTo7813R", casts appropriately, strips payload and sends this payload ("EnableMotor") directly to the appropriate process.

 

I know this is bad coupling, as which UI to send updates to is explicitly hard-coded into the "Notify" method, BUT I don't know how to break this dependency. Does anyone have any good advice on how to do this without adding significant complexity?


I'm not sure I'm following your message topology.  Here's a diagram showing what I think you're saying.  Is it close?

 

post-7603-0-03699700-1371070876_thumb.pn

 

I know this is bad coupling, as which UI to send updates to is explicitly hard-coded into the "Notify" method, BUT I don't know how to break this dependency. Does anyone have any good advice on how to do this without adding significant complexity?

 

It's only "bad" coupling if it prevents you from easily extending the application in ways that you want to.  If the app design fits your current and reasonably expected future needs, call it good and be happy.  However, there is value in knowing how to break the structural dependencies you've created by using onion routing.

 

Instead of encoding the entire message route when the message is created as you have done, I prefer to have each loop only know the next step in the message's route:

 

post-7603-0-87399800-1371070840_thumb.pn

 

Here, the UI sends an EnableMotorBtnPressed message to the Windows Message Routing loop.  The WMR loop knows that when it receives an EnableMotorBtnPressed message, it should send an EnableMotor message to the TCP Send loop.  It has no idea (nor cares) what the TCP Send loop does with the message.  Using this pattern with each loop along the route eliminates the structural dependencies.
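As a sketch, each loop's routing knowledge shrinks to a single local lookup (Python; the names follow the diagram, everything else is invented):

    from queue import Queue

    def windows_message_routing_loop(in_q: Queue, tcp_send_q: Queue):
        # This loop's entire knowledge of the route: one local mapping.
        # It neither knows nor cares what the TCP Send loop does next.
        routes = {"EnableMotorBtnPressed": "EnableMotor"}
        while True:
            name = in_q.get()
            if name in routes:
                tcp_send_q.put(routes[name])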

 

In addition to eliminating the structural dependencies, in my opinion stepwise routing is easier to understand and maintain compared to onion routing.  Here are a few examples:

 

1. I have an app that uses a cRIO 9237 load cell bridge module.  This module uses a 30+ element typedeffed enum to set the sample rate, which is something users need to be able to do.  With onion routing the UI component will be statically coupled to that enum.  With stepwise routing it is easier to send SetSampleRate messages as DoubleMessage objects and convert to the enum in the final loop.  In diagram 2 above, even though the messages have the same name, the different colors indicate they can be unique message objects instantiated from different message classes.

 

2. Sometimes UI events trigger actions that need data from various other components in order to execute.  With onion routing, the tendency is to get all that data to the UI loop so the message can be created and wrapped up.  It doesn't matter that the data doesn't have anything to do with the UI; the UI needs it before it can send the message.  With stepwise routing, each loop along the way can add the data it is responsible for to the message before sending it along to the next loop.

 

As always, there are tradeoffs.  The disadvantage of stepwise routing is that it is usually a bit harder to trace a message from source to destination because you have to visit every step along the way.  (Search for text is your friend.)  With onion routing you can figure out the destination by unwrapping the layers where the original message is created.

