
Network Messaging and the root loop deadlock


Recommended Posts

I would cast another vote for a TCP version. VI Server isn't designed for messaging performance, so there are going to be trade-offs, and performance seems like the biggest problem. Yes, calling is simple, but you don't get anything for free; each message requires:

1. Connection made to LabVIEW application.

2. Remote VI found, loaded and called.

3. Reference opened to existing queue.

Each of these offers a new point of failure in the system.

What you could do, as the best of both worlds: you should be able to write some TCP code to accept requests and package it into a single VI that you can just drop into any application to add the capability. This could then maintain connections to clients for the best performance, but there's nothing to say you can't still make a calling VI that opens a reference, sends the message and closes the connection again.
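As a rough sketch of that idea (in Python rather than LabVIEW, purely for illustration): a single drop-in server that keeps client connections open and forwards each message to the application's existing queue. The port number, the 4-byte length-prefix framing and the queue name below are my own assumptions, not anything from the post.

```python
# Hypothetical sketch: a "drop-in" TCP message receiver that keeps client
# connections open and forwards each length-prefixed message into the
# application's internal queue.
import queue
import socketserver
import struct

incoming = queue.Queue()  # stands in for the application's existing message queue

class MessageHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # The connection stays open; each message is a 4-byte big-endian
        # length prefix followed by the payload (an assumed framing choice).
        while True:
            header = self.rfile.read(4)
            if len(header) < 4:
                break  # client closed the connection
            (length,) = struct.unpack(">I", header)
            payload = self.rfile.read(length)
            incoming.put((self.client_address, payload))

class MessageServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True
    daemon_threads = True

if __name__ == "__main__":
    with MessageServer(("0.0.0.0", 5000), MessageHandler) as server:
        server.serve_forever()
```

Because the connections persist, repeat messages skip the connect/lookup overhead listed above, while a simple "connect, send, disconnect" caller would still work against the same server.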

Link to comment

We use networked shared variables and also ActiveMQ (an implementation of Java Message Service) -- via a custom LabVIEW interface -- for messaging much as John describes. ... John, you might want to look at the publish-subscribe approach. Isn't publish-subscribe communication really what you are after? That seems to me to be more or less what you are describing.

I played around with RabbitMQ a bit yesterday. Getting a RabbitMQ broker installed and running was quite fast and easy. LabbitMQ needs polishing and is lacking in documentation, but seems pretty solid. RabbitMQ should do publish-subscribe very cleanly, especially with its “topic” exchange type, where one can specify the messages one wants by pattern. And at least according to the documentation, one can cluster multiple computers into a single message broker, providing redundancy against single-point failure (though with the possibility, on failure, of some messages being delivered twice).
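For anyone curious what the topic-exchange pattern looks like in code, here is a minimal sketch using the Python pika client rather than LabbitMQ; the exchange name and routing keys are made up for illustration.

```python
# Minimal RabbitMQ topic-exchange sketch using the pika client (Python).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="lab_events", exchange_type="topic")

# Subscriber: a private queue bound by pattern ("#" matches any remaining words).
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="lab_events", queue=result.method.queue,
                   routing_key="temperature.#")

def on_message(ch, method, properties, body):
    print(method.routing_key, body)

channel.basic_consume(queue=result.method.queue,
                      on_message_callback=on_message, auto_ack=True)

# Publisher: anyone can post to the exchange; subscribers receive only the
# routing keys their binding patterns match.
channel.basic_publish(exchange="lab_events",
                      routing_key="temperature.rig1.sensor3",
                      body=b"23.5")

channel.start_consuming()
```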

— James

Link to comment

My 2 cents, from experience:

Avoid developing your own TCP-connection-managing / messaging architecture!

...when I was talking about polling I didn't mean the server polling the clients, but rather... adding incoming connection requests to an array of connections that is then processed inside a loop continuously with a very small TCP Read timeout. Polling is probably the wrong name here.

Polling is definitely the right name here, and this kind of scheme proved to be quite messy for me as requirements evolved. I initially thought the only challenge would be dynamically adding and removing connections from the polling loop and efficiently servicing the existing connections. Before long, I had dozens of connections, some of which required servicing small amounts of data at rapid rates (streaming, essentially), while others carried large chunks published infrequently. While the polling loop was busy waiting-then-timing-out on non-critical items, some critical items would experience buffer overflows or wouldn't get replies fast enough (my fault for architecting a synchronous system). So I incorporated dynamically configured connection prioritization to scale the timeout value based on the assigned priority level. I also modified the algorithm to exclusively service, for brief periods, connections flagged as potential data streams whenever data first arrived from them.
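For readers who haven't built one, the shape of such a polling loop looks roughly like this. This is a Python sketch only, with an invented priority-to-timeout scaling, not the actual LabVIEW code described above.

```python
# Sketch of a single polling loop servicing many connections with a short,
# priority-scaled read timeout (illustrative only).
import select

BASE_TIMEOUT = 0.001  # seconds; lower-priority connections wait longer

connections = []  # list of dicts: {"sock": socket, "priority": 1..3, "handler": fn}

def poll_once():
    for conn in list(connections):
        # Higher priority -> smaller timeout -> serviced more responsively.
        timeout = BASE_TIMEOUT * (4 - conn["priority"])
        readable, _, _ = select.select([conn["sock"]], [], [], timeout)
        if not readable:
            continue  # timed out waiting on this connection, move on
        data = conn["sock"].recv(4096)
        if not data:
            connections.remove(conn)  # peer closed the connection
            conn["sock"].close()
        else:
            conn["handler"](data)

# Main loop: while running, call poll_once() repeatedly.
```

Even in this reduced form, the trouble is visible: every slow or idle connection delays every other connection by up to its timeout, which is exactly what pushed the original design toward priority scaling and special-cased streams.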

This quickly became the most complex single piece of software I had ever written.

Then I began using Shared Variables, and the DSC Module for shared variable value-change event handling. It was a major burden lifted. I realized I had spent weeks developing and tweaking a re-invented wheel and hadn't even come close to having the feature set and flexibility Shared Variables offer.

[whatever]MQ is a great solution if you need to open communications with another programming language. But why take your messages out of and back into the LabVIEW environment if you don't need to? Sure, RabbitMQ was easy to install and configure for you... but what about the end user? Complex deployment = more difficult maintenance.

I would only recommend TCP messaging if you need high-speed point-to-point communications; for publish-subscribe you ought to seriously consider Shared Variables + the DSC module. If you do go the route of DIY TCP message handling, I recommend lots of up-front design work to take into account the non-homogeneous nature of messaging.

Link to comment
It still bugs me that you only get the value change event with the DSC Module... Seems like a basic feature of shared variables.

I couldn't agree more. Publish-subscribe messaging is an essential feature of many types of today's applications, and to do publish-subscribe messaging without events (or the equivalent) is pointless.

Link to comment
[whatever]MQ is a great solution if you need to open communications with another programming language. But why take your messages out of and back into the LabVIEW environment if you don't need to? Sure, RabbitMQ was easy to install and configure for you... but what about the end user? Complex deployment = more difficult maintenance.

By “easy” I meant dead easy: Google the instructions, run two installers, run the LabbitMQ examples. Now, configuring a cross-machine, robust-against-failure message broker would be a whole other level of complexity, but then John’s N-client, M-server system with its requirement of robustness is going to be complex regardless.

I couldn't agree more. Publish-subscribe messaging is an essential feature of many types of today's applications, and to do publish-subscribe messaging without events (or the equivalent) is pointless.

Can one do it the poor man’s way: have a reentrant subVI that waits on a Shared Variable and forwards messages to a User Event?

Polling is definitely the right name here, and this kind of scheme proved to be quite messy for me as requirements evolved. I initially thought the only challenge would be dynamically adding and removing connections from the polling loop and efficiently servicing the existing connections. Before long, I had dozens of connections, some of which required servicing small amounts of data at rapid rates (streaming, essentially), while others carried large chunks published infrequently. While the polling loop was busy waiting-then-timing-out on non-critical items, some critical items would experience buffer overflows or wouldn't get replies fast enough (my fault for architecting a synchronous system). So I incorporated dynamically configured connection prioritization to scale the timeout value based on the assigned priority level. I also modified the algorithm to exclusively service, for brief periods, connections flagged as potential data streams whenever data first arrived from them.

This quickly became the most complex single piece of software I had ever written.

I wrote a TCP server the other way, using dynamically launched processes, and it actually came together quite well and seems scalable (though I have yet to have a use case that really tests it). There is a “TCP Listener Actor” that waits for connections and launches a “TCP Connection Actor” to handle each one. The Connection Actors forward incoming messages to a local queue (or User Event). As each actor only has one thing to deal with, they are conceptually simple and don’t need to poll anything (this is in my “Messenging” package in the CR if you’re interested).
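Translated into a rough Python sketch (threads standing in for dynamically launched LabVIEW actors, with invented names throughout), the listener/connection split looks something like this:

```python
# Sketch of the listener-actor / connection-actor shape: one thread waits for
# connections, one thread per connection forwards messages to a shared inbox.
import queue
import socket
import threading

inbox = queue.Queue()  # stands in for the local queue / User Event

def connection_actor(conn, addr):
    """One actor per connection: read data and forward it to the inbox."""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            inbox.put((addr, data))

def listener_actor(port):
    """Waits for connections and launches a connection actor for each one."""
    with socket.create_server(("0.0.0.0", port)) as listener:
        while True:
            conn, addr = listener.accept()
            threading.Thread(target=connection_actor, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    threading.Thread(target=listener_actor, args=(5001,), daemon=True).start()
    while True:
        addr, msg = inbox.get()  # no polling: blocks until a message arrives
        print(addr, msg)
```

The key difference from the polling design above is that nothing times out and retries; each actor simply blocks on the one thing it cares about.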

An advantage of making your own TCP server is that you can customize things; my server is designed to seamlessly support my messages, which carry reply addresses (callbacks) and have a publish-subscribe mechanism. Supporting both with Shared Variables would (I suspect) be just as complex in the end as going straight to TCP.
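As a hedged illustration of what "messages that carry reply addresses" means, here is a tiny sketch; the field names are invented and are not the actual format used by the CR package.

```python
# Illustrative message envelope with a reply address and a simple
# publish-subscribe registry (field and variable names are assumptions).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    label: str                      # message name / topic
    data: bytes = b""               # payload
    reply_to: Optional[str] = None  # where the handler should send its response

subscribers = {}  # topic -> list of send functions

def publish(msg: Message):
    # Deliver the message to every send function registered for its label.
    for send in subscribers.get(msg.label, []):
        send(msg)
```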

— James

Link to comment

Avoid developing your own TCP-connection-managing / messaging architecture!

Polling is definitely the right name here, and this kind of scheme proved to be quite messy for me as requirements evolved. I initially thought the only challenge would be dynamically adding and removing connections from the polling loop and efficiently servicing the existing connections. Before long, I had dozens of connections, some of which required servicing small amounts of data at rapid rates (streaming, essentially), while others carried large chunks published infrequently. While the polling loop was busy waiting-then-timing-out on non-critical items, some critical items would experience buffer overflows or wouldn't get replies fast enough (my fault for architecting a synchronous system). So I incorporated dynamically configured connection prioritization to scale the timeout value based on the assigned priority level. I also modified the algorithm to exclusively service, for brief periods, connections flagged as potential data streams whenever data first arrived from them.

This quickly became the most complex single piece of software I had ever written.

At the time I started with this (LabVIEW 5) there were no shared variables (and when they arrived much later they had a lot of initial trouble working as advertised), so the only options were to integrate some external code or DIY.

An additional feature of my own simple TCP protocol is that I could fairly easily add support for other environments, like Java on Android, without having to wait for NI to come up with this, if they ever do.

Of course I agree that it is probably a very bad idea to mix large, high-latency messages with low-latency messages of any sort in the same server, be it LabVIEW or something else. They tend to have diametrically opposed requirements that are almost impossible to satisfy cleanly in a single design. Using clone handler VIs does help alleviate this limitation, but at serious cost, as each VI clone takes up considerable resources, and it still complicates the design quite a bit when you mix these two types in the same server. Also, using VI clone handlers involves VI Server and the potential to get locked up for extended times by the UI thread/root loop issue. And shared variables are not the counter-example to this; I wouldn't consider them really low latency, although they can work perfectly in many situations.

So yes, if I had to start again now I might go with shared variables instead, although without DSC support they still have some limitations, like true dynamic deployment and the aforementioned event support, but that last one is not something my own TCP-based protocol would offer out of the box either.

I couldn't agree more. Publish-subscribe messaging is an essential feature of many types of today's applications, and to do publish-subscribe messaging without events (or the equivalent) is pointless.

I wouldn't go as far as declaring that as pointless, but it is a limitation of course.

Link to comment

Can one do it the poor man’s way: have a reentrant subVI that waits on a Shared Variable and forwards messages to a User Event?

On RT, which the DSC module does not support, we have used an approach similar to what you suggest, yes (as I have previously mentioned in another thread). We use the "Read Variable with Timeout" VI in the shared variable API and then create a user event. In our case on the RT side all messages arrive via a single shared variable (since we are using the Command Pattern) so we haven't attempted to extend this to support n variables, but I would guess this is possible. We haven't explored this further since we use the logging features of DSC anyway, but from a technical standpoint it might in fact be a way to do event-driven programming with shared variables without purchasing the DSC module. (Disclaimer: From a business standpoint I'm not advocating either approach at this time. Lol)
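A sketch of that read-with-timeout-to-event bridge, in Python with queues standing in for both the shared variable and the user event; all names here are placeholders, not the shared variable API.

```python
# Sketch: a background loop reads a "variable" with a timeout and forwards
# every new value as an event, so the consumer can block on events instead.
import queue
import threading

_shared_variable = queue.Queue()  # simulated shared variable (placeholder)
user_event = queue.Queue()        # stands in for the User Event being generated

def read_variable_with_timeout(timeout_s):
    """Return the next value written to the variable, or None on timeout."""
    try:
        return _shared_variable.get(timeout=timeout_s)
    except queue.Empty:
        return None

def bridge_loop(stop: threading.Event):
    # The "reentrant subVI": poll with a short timeout, fire an event per value.
    while not stop.is_set():
        value = read_variable_with_timeout(0.1)
        if value is not None:
            user_event.put(value)

stop = threading.Event()
threading.Thread(target=bridge_loop, args=(stop,), daemon=True).start()
_shared_variable.put("new value")  # someone writes the variable...
print(user_event.get())            # ...and the consumer sees it as an event
stop.set()
```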

Link to comment
