
Re-Designing Multi-Instrument, Multi-UI Executable



I am working on a complete redesign of our instrument control software. I'll be using LVOOP and some form of messaging/queue system. The program must control 10+ different types of instruments, each having variations in I/O (pressure control, temp control, motors, different types of controllers, etc...). Most of our instruments run from a touchscreen attached to an embedded PC. A few run from a desktop and we have some that need the desktop version to control multiple instruments at the same time (using a tab control in my old program).

So far I have a top-level vi that decides if this is a touchscreen, desktop, or simple test-viewer and launches the proper set of user interfaces. There are UI's for IDLE state, Test Setup, Config, Calibrate, Run Test, and so on... I have been studying discussions here from the super-megadudes :worshippy: on frameworks, lvoop, messaging, and how NOT to do things. After giving my best shot to AQ's AF I'm now using LapDog messaging from Daklu & it is time to ask some questions!

1. Would you recommend using a single top-level UI and then plugging in different pages (Test Setup, Idle, Run Test, etc...) via sub-panels? In the past I have found that complex sub-panels can really slow things down. However, without them I end up seeing the desktop when switching between UIs. Not a big deal, but not very professional looking.

2. On the framework subject, is it better to have my I/O channels messaging a mediator who then messages the current UI or should they message the UI directly?

3. What about updating indicators? It seems that passing UI indicator references to CHx is faster than CHx sending messages to the UI (or to mediator then to UI). I need good response time: Example - I want an LED to light up on the front panel when a heating relay is turned on and off. The relay may only be on for 50ms every second. Can I really send a LED ON msg and then an LED OFF msg and expect it to work smoothly? For 2-8 channels at once?

4. Should I be re-opening the channels every time I switch UI pages, or should I initialize them once and leave them running even when they are not doing anything? If the latter, what happens to the queues when the UI closes and another opens? I could pass the callee queue, but what about the caller queue?

5. I have a CHx parent class with children for each I/O "type". At runtime, some stored config information would tell CHx what child to use, which channel # this is, what to label the UI controls and indicators, and, according to the type of UI (also using classes), which controls to show/hide for appropriate functionality. There was a thought of giving each I/O class a UI and then plugging them into sub-panels on the bigger UI, but I thought that may be too confusing for the poor sucker that inherits my code. It already seems that using LVOOP and a messaging framework drastically distorts what I think of as "dataflow". Any quick thoughts on this or pointers to similar discussions?

Here are some UI screenshots so you can get an idea of what I am doing:

[Attachment thumbnails: two UI screenshots]

I truly attempted using the Actor Framework from the bottom up but I just cannot wrap my head around implementing it on this scale. Everything is so deeply encapsulated that I cannot figure out how to actually DO anything! LapDog allows me the freedom to implement a moderate amount of LVOOP without having to wrap every aspect of the program into classes and messages.

I know that's a mess of vague questions for a single post, sorry! I'm new to all this. :frusty:


Hi.

Posting updates from many channels into a GUI is not a problem if you are careful about the design.

One recent large application of mine (3,000-4,000 VIs) runs on RT but provides a UI on a host PC that can be connected to and disconnected from the running RT application. The UI is used for channel value and status display, as well as giving the operator an opportunity to override each channel with a user-specified value or, in some cases, a programmed waveform. The RT system generates something like 100,000 events/s (equivalent to channel updates) and the UI just plugs in and out of this event stream as a module by dynamically registering and unregistering the wanted events. There is a module on RT that receives each event it knows the GUI wants; the data is then transmitted over TCP/IP to the host PC, where new events are generated to be received by the GUI module. All data is timestamped, so propagation delay is not a problem. The same goes for override data flowing the other way.

It is no problem receiving 100,000 updates/s in the UI module without any backlogging, but I do filter the events such that each event gets its own local register in the UI module, which it may update at any rate; then 10 times/s all changed registers are written to the GUI front panel. At almost 1,000 channels (and thus 1,000 controls on the GUI, spread across tabs) this still works really smoothly. So 8 channels of mechanical relays is no problem.

The key is to filter the updates so you don't update the GUI thousands of times per second, but remember to always maintain the most current value for when you actually do update. And if you run the UI on the same machine as the actual business logic, then you need to be careful not to block that business logic: updating the UI will most often switch to the user interface thread, or may even demand root-loop access. Heed that.
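Steen's register-and-flush scheme translates readily to other languages. Here is a minimal Python sketch of the idea (the class name and API are my own invention, not Steen's code): events update a per-channel register at any rate, and only the channels that actually changed are written to the GUI on a slow timer.

```python
import time

class UpdateCoalescer:
    """Keep only the most recent value per channel; flush the changed
    channels to the GUI at a fixed, slow rate (Steen's approach)."""

    def __init__(self, flush_interval_s=0.1):
        self.latest = {}        # channel -> most recent value
        self.dirty = set()      # channels changed since last flush
        self.flush_interval_s = flush_interval_s
        self._last_flush = time.monotonic()

    def on_event(self, channel, value):
        # Called at any rate (e.g. 100,000 events/s); just a dict write.
        self.latest[channel] = value
        self.dirty.add(channel)

    def flush_due(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self._last_flush >= self.flush_interval_s

    def flush(self):
        # Return only the channels that changed; the GUI writes these.
        self._last_flush = time.monotonic()
        changed = {ch: self.latest[ch] for ch in self.dirty}
        self.dirty.clear()
        return changed
```

However fast `on_event` is hammered, each flush hands the display at most one value per channel, which is why 100,000 updates/s need not mean 100,000 property writes/s.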

Cheers,

Steen

  On 2/3/2012 at 12:28 AM, jbjorlie said:

3. What about updating indicators? It seems that passing UI indicator references to CHx is faster than CHx sending messages to the UI (or to mediator then to UI). I need good response time: Example - I want an LED to light up on the front panel when a heating relay is turned on and off. The relay may only be on for 50ms every second. Can I really send a LED ON msg and then an LED OFF msg and expect it to work smoothly? For 2-8 channels at once?

Actually, I’d expect sending a message via a queue and updating a terminal to be faster than using an indicator reference and a property node. Property nodes are quite slow.

I did a quick time test with the messaging system I use (which is very similar to LapDog); a round-trip command-response (i.e. send a message via one queue and get a response back via another) was about 250 microseconds, or about 130 microseconds per leg. A single Value property node on a numeric indicator was between 300 and 500 microseconds.
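James's round-trip measurement is easy to reproduce in spirit. Below is a rough Python analogue (absolute numbers will of course differ from LabVIEW's queues; this only illustrates the measurement method): a worker thread echoes each command back on a response queue, and we time many round trips.

```python
import queue
import threading
import time

def echo_server(cmd_q, resp_q):
    # Echo each command back, like a minimal command-response actor.
    while True:
        msg = cmd_q.get()
        if msg is None:          # sentinel: shut down
            break
        resp_q.put(msg)

cmd_q, resp_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=echo_server, args=(cmd_q, resp_q))
t.start()

n = 10_000
start = time.perf_counter()
for i in range(n):
    cmd_q.put(i)                 # one leg out...
    resp_q.get()                 # ...one leg back
elapsed = time.perf_counter() - start

cmd_q.put(None)
t.join()
print(f"round trip: {elapsed / n * 1e6:.1f} us")
```

The point is methodological: average over thousands of round trips rather than timing a single message, since per-message cost is near the timer's resolution.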

Also, note that a delay is unimportant if it’s equal at each end; the relay indicator will still be on for 50ms. One could also switch to indicating heater duty cycle (x%) rather than flash an LED.

— James


Nice, 2 of the super-megadudes I've been learning a lot from. Thanks fellas.

I'm glad to hear the messaging system can be so fast if done properly. Steen, that module control toolset looks pretty interesting. I was very interested in your VIRegister idea until I read the whole thread...still seems like a brilliant way to share data. 1,000 channels, eh? Are you controlling those or just monitoring them? Do you place each data value into a waveform to attach the timestamp? I was using waveforms in the last revision but they seemed to use up too much space. A file with only 30-40 data points per minute would be megabytes in size after a couple of days. I don't know why; it was in .tdms format, and every time I took x readings I would average them, convert the result to waveform data, and append it to the file. Since it was not streaming data I didn't know of any better way. Now I just store a double value in the .tdms file, which takes much less space than the waveform. I would love to store a timestamp for each value, so if you have a good compact method please let me know. I need my data file to be tens of kilobytes, not megabytes.

I am running the UI on the control machine, but the operator can also access a GUI from a remote machine, which will plug in much like Steen's system noted above. I do keep the "business" separate by making each channel the commander of its own routines. It runs by itself, only receiving change commands through a queue from the mediator. My education from this forum led me to believe that was the best way to do it. I was going to have each channel write its data to the tdms file as well, but maybe I should message the data to a central "Storage" VI and have it write the data?

Can either of you comment on the sub-panel vs. opening/closing GUIs question? If sub-panels are OK for a 500 MHz XPe machine, what's the most efficient way to open and close the GUIs? I was going to use the new asynchronous call node through the mediator, but maybe I need a GUI mediator with the sub-panel control and a separate "business" mediator to handle the channel I/O?

I have a very hard time finding good examples of applications using multiple UI screens while controlling multiple I/O channels with different sample rates and different hardware! Everything NI puts out seems to rely on DAQmx or some unrealistic $$package$$$$.

Also, any suggestions on a better way to customize controls/indicators on the GUI at runtime? Right now I have to store all the captions, limits, marker spacing, etc. in a file/cluster and load them in using property nodes for each control.

Thanks again for the help. I have no mentor (or even co-creators) here, so aside from my Bloomy book & NI's outdated propaganda there's no way to learn proper architecture/style. LabVIEW seems to be experiencing a renaissance since 8.5 and I want to be the best architect in the state :shifty:


Which state? :-)

On subpanels:

We are using subpanels more and more.

Here are some reasons why:

Encapsulation: A set of controls or indicators handles some closely related functionality. We can focus on that one thing without worrying about (or worse, breaking) other functionality.

Reusability: We can use this same VI in a subpanel in another view in our system (e.g., for a higher-level component) as appropriate.

Grouping: We can make an entire subpanel visible or invisible in certain states. (Certainly this means we need to encapsulate the proper information together!)

We have not observed any performance issues with such an approach.

In general, we don't see performance issues with updating numeric displays on modern computers at the rates we use (up to 62.5 Hz). Of course, users won't be able to see things that fast. (Your 50 ms update rate is barely observable, if at all.) When we do want to limit the updates (e.g., complex graphs or charts that we don't want to scroll at ridiculous rates) we either update at specific intervals (making sure we always display valid data, as mentioned above) or we apply a value deadband.
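A value deadband is a one-liner in any language; here is a small illustrative sketch (the function name is mine, not from any toolkit): an indicator update passes only if the new value has moved by more than the deadband from the value actually shown.

```python
def deadband_filter(last_shown, new_value, deadband):
    """Return the value the display should show: the new value if it
    moved by more than the deadband from what is on screen, otherwise
    the old value (i.e. suppress the update)."""
    if last_shown is None or abs(new_value - last_shown) > deadband:
        return new_value      # worth redrawing the indicator
    return last_shown         # suppress; keep showing the old value
```

Note the comparison is against the last *displayed* value, not the last *received* one; comparing against the last received value lets a slow drift sneak past the deadband without the display ever updating.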

For the record, we use shared variables (a button event writes to a shared variable; a shared variable value change event triggers an update to one or more display elements; shared variable binding for the simplest cases). This works quite well. The event handling deals with initial reads cleanly. We do use the DSC module for the shared variable events.

By the way, you are asking a lot of good questions! :-)

On communication:

As I said, we use shared variables (networked shared variables). We use this approach because we communicate over Ethernet (between various computers and locations and between controllers--often on cRIOs--and views) and because we want to use a publish-subscribe paradigm--that is, multiple subscribers will get the same information (or a demand may come from a view or a higher level component), and shared variables handle both of these needs well. (We also don't need to create or maintain a messaging system.)

  On 2/3/2012 at 4:31 PM, jbjorlie said:

I'm glad to hear the messaging system can be so fast if done properly.

Who you calling fast? That’s the first time I measured it and I thought it was slow. Something built for speed would be much faster. But Property Nodes, and anything that runs in the UI thread, are REALLY slow.

  Quote
Do you place each data value into a waveform to attach the timestamp? I was using waveforms in the last revision but it seemed to use up too much space. A file with only 30-40 data points per minute would be megabytes in size after a couple of days. I don't know why, it was in .tdms format and every time I took x readings I would average them, convert the result to waveform data, and append it to the file. Since it was not streaming data I didn't know of any better way. Now I just store a double value in the .tdms file which takes much less space than the waveform. I would love to store a timestamp for each value so if you have a good compact method let me know please. I need my data file to be tens of kilobytes, not megabytes.

Using a waveform for individual readings does NOT seem like a good idea. Why not just store two channels: “Measurement” and “Measurement_Time”? That’s what I do.
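The cost of James's two-channel scheme is easy to reason about: one float64 per sample per channel. A hedged sketch (the `pack_channels` helper is hypothetical, just for the size arithmetic; real TDMS writing would go through the TDMS API):

```python
import struct

def pack_channels(values, timestamps):
    """Pack paired Measurement / Measurement_Time channels as raw
    little-endian float64 arrays: 8 bytes per sample per channel,
    so 16 bytes per timestamped sample in total."""
    assert len(values) == len(timestamps)
    meas = struct.pack(f"<{len(values)}d", *values)
    times = struct.pack(f"<{len(timestamps)}d", *timestamps)
    return meas, times

# Rough sizing: 40 samples/min for 2 days is ~115,200 samples, or
# ~0.9 MB per float64 channel. Adding a time channel doubles that,
# which is still far cheaper than per-sample waveform clusters.
```

So a per-sample timestamp costs exactly one extra channel; if even that is too much, storing the sample rate as a channel attribute (as discussed below in the thread) drops the time channel entirely.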

  Quote
Can either of you comment on the sub-panel vs. opening/closing GUI's question?

I’ve never noticed a problem with subpanels, but I’ve never looked into the speed.

  Quote
Also, any suggestions on a better way to customize controls/indicators on the GUI at runtime. Right now I have to store all the captions, limits, marker spacing, etc...in a file/cluster and load them in using property nodes for each control.

I’ve never done anything like that. Usually, I have most of the detail of an instrument in its own front panel, with only the key summary on a main screen of the whole application. So, for example, if I have a temperature controller, the main app might just show the temperature, the set point and some other summary info (like “ramping”). Clicking on the temperature indicator (or selecting from a menu or whatever) would bring up the “Temperature Controller Window” (could be in a subpanel) with a full set of controls (PID settings, maximum temperature, alarms) and indicators (a chart of the last ten minutes of temperature and heating). The secondary window would be specific to that controller, so it doesn’t need adjusting, while the main app window would have only the general info that doesn’t need to be changed if you use a different controller.

Another technique is to realize that most simple indicators are really just text, or can easily be represented by text, and thus one can do a lot with a multicolumn listbox or tree control to display the state of your instruments.

— James


OK, I think you have all helped me quite a bit. I now see no reason to avoid shared variables for sharing data between VIs when only one VI is ever going to write to them. That also gets me out of filtering old data out of queues, and I can update the indicators only when I really need to. I can use messaging for all the commands and other requests. I do hate to force the inclusion of the Shared Variable Engine on my builds though.

  Quote
Using a waveform for individual reading does NOT seem like a good idea. Why not just store two channels: “Measurement” and “Measurement_Time”? That’s what I do.

That's what I do now, just thought there might be some slick way to stick them both together and keep it compact. Sometimes we have processes on the same test that would benefit from recording at different sample rates and I don't like to store time info as a separate value for every channel. One x scale is enough. However, I think just storing the sample rate in the attributes will be best.

  Quote
Another technique is to realize that most simple indicators are really just text, or can easily be represented by text, and thus one can do a lot with a multicolumn listbox or tree control to display the state of your instruments.

That's a great idea, but most of my GUI labels are for booleans, rings, and other controls. Things like: CH3 on the GUI has a Menu Ring, 2 Booleans, an indicator, and a Set Point control. Sometimes CH3 is controlling a Hydraulic Pump, pressurizing a test cell. Another time it may be controlling a Motor or a PID Heat/Cool process. I need to label everything accordingly. My objective with this software has always been to have a single executable that can be configured to run many instruments while giving the user a similar experience. Training is a huge problem in our field as operators of our equipment come and go all the time. I want it to be easy, intuitive, and well labeled so they know what buttons to press and don't blow anything up.

Oh, and % output is great if you know what it means, but try explaining it & why it jumps around so much to an Uzbek lab tech without spending 22 hours summarizing PID control. MUCH easier to just ask them: You say it's not heating? Do you see the top LED blinking? You do? OK, that means power is going to the relay controlling the heater...I promise it is not a software bug, check your relays, etc...

  On 2/3/2012 at 8:40 PM, Paul_at_Lowell said:
The event handling deals with initial reads cleanly. We do use the DSC module for the shared variable events.

Not gonna happen on the budget in this office! :lol: DSC module does look quite nice for many things.

After all your great feedback thus far:

I feel the future is in each I/O "class" having its own set of GUI plugins instead of customizing a set of controls/indicators at run time. Then I can just plug them into appropriately sized sub-panels on the main displays. That may be another generation away due to the work involved, but I will perform this revision with that objective in mind. Queues seem the safest for sending commands and non-recordable data between different VI levels, but I need to find some sort of shared variable, functional global, or VIRegister solution for the process value updates and turning PID lights on/off.

Still not sure how to best switch between UI modes (keeping channels running through mediator or closing them and re-initializing once new screen is up) and whether or not to have the mediator for channel-GUI communication also be the mediator for GUI-GUI communication.

PAUL, I am in Tulsa, OKLAHOMA! (always add the ! after Oklahoma! for dramatic effect). Thank you for the well thought response and encouragement. Of all my responsibilities at this small company, none is more enjoyable than learning from you guys and writing LabView code. I suspect it would be great Full-Time work.

Lastly, JAMES, I have enjoyed your hassling with Stephen on the Actor Framework forum :thumbup1: I love the idea of the AF, and he must be quite a scientist to have created it. However, it seems on the verge of too much "safety" at the cost of usability, especially when introducing it into existing code. I hope the discussions there result in some small modifications to lower the entry barrier before it gets plopped into <vi.lib> for eternity. LabVIEW will always be a coding toolbox for the engineer, and the tool sets need to be somewhat flexible. IMNO (newbie).

  On 2/3/2012 at 8:40 PM, Paul_at_Lowell said:

For the record, we use shared variables (a button event writes to a shared variable; a shared variable value change events triggers an update to one or more display elements; shared variable binding for the simplest cases). This works quite well. The event handling deals with initial reads cleanly. We do use the DSC module for the shared variable events.

...

On communication:

As I said, we use shared variables (networked shared variables). We use this approach because we communicate over Ethernet (between various computers and locations and between controllers--often on cRIOs--and views) and because we want to use a publish-subscribe paradigm--that is, multiple subscribers will get the same information (or a demand may come from a view or a higher level component), and shared variables handle both of these needs well. (We also don't need to create or maintain a messaging system.)

Well, I really hate shared variables due to their weight (the SVE among other things), their black-box nature (when do updates happen? Not at 10 ms/8 kB as the white paper says, that's for sure) combined with known bugs (for instance, a buffer overflow warning in one node will flow out of the error out terminals of all the other SV instances as well, even though their buffers didn't overflow), and due to their inflexibility (you can't programmatically deploy SVs on LabVIEW Real-Time, for instance, and you need to be able to, since we sometimes experience SVs undeploying themselves spontaneously when the SVE is heavily taxed).

There are many better alternatives for intra-process communication (queues, events, and even locals). For network communication we have developed our own sticky client/multi-server TCPIP-Link solution, which is much better (faster, more stable, more features, slimmer) than SVs and Network Streams. Granted, TCPIP-Link wasn't trivial to make; it has currently clocked up in excess of 1,000 man-hours. The only killer feature of SVs is how simple they are to bind to controls. But one or two wrapper VIs around TCPIP-Link will get you almost the same...

And it's a pity you need the DSC module for enabling events on SVs.

  On 2/4/2012 at 12:14 AM, drjdpowell said:

Using a waveform for individual reading does NOT seem like a good idea. Why not just store two channels: “Measurement” and “Measurement_Time”? That’s what I do.

We transmit waveforms on the network without problems for data rates up to maybe 10 kS/s - a cRIO can generate data at that rate as waveforms for 40 channels without any backlogging due to either network or other system resources. For higher data rates we usually select a slimmer approach, but that also means more work to make data stay together. On PXI we can easily output data as clusters, each with a timestamp, some parameters like data origin, data flow settings (usually a couple of Booleans), and the data itself as an array of DBL, for instance. This works fine for several MB/s of payload. For Gb/s we must pack data into an efficient stream, usually peppered with codes to pick out control data (time/sync/channel info) that flows in a slimmer connection next to it. The latter approach can be rather difficult to keep contiguous if network dropouts can occur (which they will). Every application demands its own pros/cons decision making.

  On 2/4/2012 at 12:14 AM, drjdpowell said:

Another technique is to realize that most simple indicators are really just text, or can easily be represented by text, and thus one can do a lot with a multicolumn listbox or tree control to display the state of your instruments.

That's a rather heavy approach, though, encoding numeric data into strings and stuffing it into a multicolumn listbox or a tree. But it'll work for really slow data rates of course (a few Hz).

Cheers,

Steen

  On 2/6/2012 at 9:49 AM, Steen Schmidt said:

Well, I really hate Shared Variables due to their weight (the SVE among other things), their black-box nature (when do updates happen? Not at 10 ms/8 kB as the white paper says, that's for sure) combined with known bugs (for instance will a buffer overflow warning in one node flow out of error out of all the other SV instances as well, even though their buffer didn't overflow), and due to their inflexibility (you can't programmatically deploy SVs on LV Real-Time for instance, and you need to be able to, since we sometimes experience SVs undeploying themselves spontaneously when the SVE is heavily taxed).

There are many better alternatives for intra-process communication (queues, events, and even locals). For network we have developed our own sticky client/multi-server TCPIP-Link solution which is much better (faster, more stable, more features, slimmer) than SVs and Network Streams for network communication. Granted, TCPIP-Link wasn't trivial to make, it currently has clocked up in excess of 1000 man-hours. The only killer feature of SVs is how simple they are to bind to controls. But one or two wrapper-VIs around TCPIP-Link will get you almost the same...

And it's a pity you need the DSC module for enabling events on SVs.

While I certainly agree there are some areas where SVs can improve (and a few years ago I would have agreed with your assessment when they had much more serious problems) I think SVs now deserve more credit. (I will state upfront that we still see occasional issues with the SVE--mostly having to do with logging, but we have workarounds for these, and I think if there are more users NI will fix these issues more quickly.)

1) Weight. We've never had an issue with an SVE deployed on Windows. We never deploy the SVE on RT. (We don't have any reason to do so.) Our RT applications (on cRIOs) read and write to SVs hosted on Windows without any problem.

2) Black box: It is true that we don't see the internal code of the SVE, but we don't have to maintain it either. At some level everything is a black box (a truism, I know)--including TCP/IP, obviously--it just depends on how much you trust the technology. I understand your concerns. By the way, in my tests I haven't noted a departure from the 10 ms or buffer-full rule (plus variations, since we are deploying on a non-RT system), but there may be issues I haven't seen. The performance has met our requirements.

3) Buffer overflow: I'm not sure I understand this one. SV reads and writes do have standard error in functionality, so they won't execute if there is an error on the error input. We don't wire the error input terminal for this reason but merge the errors after the node. (I don't think this is what you are saying, though. I think you are saying you get the same warning out of multiple parallel nodes? We haven't encountered this ourselves so I can't help here.)

4) While we don't programmatically deploy SVs on RT, as I mentioned, I agree it would be good (and most appropriate!) if LabVIEW RT supported this functionality. For the record, we do programmatically connect to SVs hosted on Windows in our RT applications, and that works fine.

5) Performance: SVs are pretty fast now, both according to the NI page on SVs, and according to our tests and experience. I'm sure there are applications where greater performance is required, and for these they would not be suitable, but for many, many applications I think their performance is sufficient.

6) Queues, events, and local variables are suitable for many purposes but not, as you note, for networked applications. (We do, of course, use events with SVs.) TCP/IP is a very good approach for networking, but in the absence of a wrapper does not provide a publish-subscribe system. Networked shared variables are one approach (and the only serious contender packaged by NI) to wrapping TCP/IP with a publish-subscribe framework. If someone wants to write such a wrapper, that is admirable (and it may be necessary and a good idea!), but I think for most users it is a much shorter path to use the SV than to work the kinks out of their own implementation. I haven't tried making my own, though, and the basics of the Observer Pattern do seem straightforward enough that it could be worth the attempt--it's just not for everybody. (I'd also prefer that we as a community converge on a single robust implementation of a publish-subscribe system, whether this be an NI option or the best offered by and supported by the community. At the present time I think SVs are the best option readily available and supported.)
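The Observer Pattern basics Paul mentions really are compact. A minimal in-process sketch in Python (class and method names are mine; a networked version would put TCP/IP under `publish`): each subscriber gets its own bounded queue, so every subscriber sees every message independently.

```python
import queue

class Publisher:
    """Minimal publish-subscribe hub: one bounded queue per
    subscriber per topic, so each subscriber receives its own
    copy of every published value."""

    def __init__(self):
        self._subscribers = {}   # topic -> list of subscriber queues

    def subscribe(self, topic, maxsize=100):
        q = queue.Queue(maxsize=maxsize)
        self._subscribers.setdefault(topic, []).append(q)
        return q                 # the caller reads its own queue

    def publish(self, topic, value):
        for q in self._subscribers.get(topic, []):
            try:
                q.put_nowait(value)
            except queue.Full:
                pass   # policy choice: drop for this slow subscriber only
```

The drop-on-full policy is one of the design decisions a wrapper author must make explicitly (drop, block, or overwrite oldest); note it is applied per subscriber, so a slow reader never stalls a fast one.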

7) Binding to controls: We do use that feature sometimes, but honestly we tend to do most of the updates ourselves using events.

8) SV events: Yes, I wholeheartedly agree that SV events should be part of the LabVIEW core, and I have said as much many times. By the way, we do generate SV user events on RT without the DSC Module by using the "Read Variable with Timeout" function and generating a user event accordingly. This is straightforward to do and works fine. We have only done this with a single variable (which supports many possible message types via the Command Pattern), but I am guessing it would not be too terribly difficult to extend this to n variables and avoid the use of the DSC Module altogether. (We haven't attempted this to date because we also use the logging capabilities of the DSC Module.)
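Paul's DSC-free workaround - poll with a timed read and fire a user event on each fresh value - has the following general shape. This is only an illustrative Python sketch of that bridge (the function and parameter names are hypothetical, not NI API):

```python
import queue
import threading

def poll_to_events(read_with_timeout, fire_event, stop, timeout_s=0.1):
    """Bridge a polled source into an event stream: call a
    'read with timeout' function in a loop and fire an event for
    each fresh value. read_with_timeout(timeout_s) returns a value,
    or None on timeout; `stop` is a threading.Event used to exit."""
    while not stop.is_set():
        value = read_with_timeout(timeout_s)
        if value is not None:
            fire_event(value)
```

The timeout matters twice: it bounds how long shutdown takes (the loop re-checks `stop` at least that often) and it keeps the poller from spinning when no data arrives.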

Perhaps the best evidence I can provide is that we have deployed functioning, robust (and quite complex) systems that use networked shared variables effectively for interprocess communication. Hence, at least for the features we use, in the manner in which we use them (which is in the end quite straightforward and simple to implement), we know networked shared variables offer a valid and powerful option.

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

While I certainly agree there are some areas where SVs can improve (and a few years ago I would have agreed with your assessment when they had much more serious problems) I think SVs now deserve more credit. (I will state upfront that we still see occasional issues with the SVE--mostly having to do with logging, but we have workarounds for these, and I think if there are more users NI will fix these issues more quickly.)

We started using SVs really heavily 3-4 years ago for streaming. We ran into all sorts of trouble with them, and were thrown around by NI a fair bit while they tried to find the spot where we were using those SVs wrong. Finally, last year, we got the verdict: SVs were not designed for streaming, and they can break down when heavily loaded - please use Network Streams for this instead. That NI didn't let this info surface before Network Streams were ready as a replacement is quite distasteful; it has cost us and our customers millions chasing SVs with the wrong end of the stick. Had NI told us with a straight face that our implementation wasn't going to work, but that they were working on a replacement, we'd have been in a much better position. As it is now, I never want to touch anything that has to do with SVs. I was burned too badly, and now we have our own much better solution.

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

1) Weight. We've never had an issue with an SVE deployed on Windows. We never deploy the SVE on RT. (We don't have any reason to do so.) Our RT applications (on cRIOs) read and write to SVs hosted on Windows without any problem.

We need to deploy the SVE on RT since the RT system is usually the always-on part in our applications - our Windows hosts are usually optional connect/disconnect type controllers, while the RTs run some stuff forever (or for a long while): simulation, DAQ, control/regulation, etc. It would seem SVs fare better on Windows, both due to the (often) higher availability of system resources there, and due to the more relaxed approach to determinism (obviously).

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

2) Black box: It is true that we don't see the internal code of the SVE, but we don't have to maintain it either. At some level everything is a black box (a truism, I know)--including TCP/IP, obviously--it just depends on how much you trust the technology. I understand your concerns. By the way, in my tests I haven't noted a departure from the 10 ms or buffer full rule (plus variations since we are deploying on a nonRT system), but there may be issues I haven't seen. The performance has met our requirements.

That's a good thing, and I sense that SVs work for most people. It may just be because SVs were never designed for the load we put on them. It's only when we have problems that it's bad that the toolset is a black box; otherwise it's just good encapsulation :). But with TCPIP-Link we can just flip open the bonnet if anything's acting up.

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

3) Buffer overflow: I'm not sure I understand this one. SV reads and writes do have standard error in functionality, so they won't execute if there is an error on the error input. We don't wire the error input terminal for this reason but merge the errors after the node. (I don't think this is what you are saying, though. I think you are saying you get the same warning out of multiple parallel nodes? We haven't encountered this ourselves so I can't help here.)

If you have a network-enabled, buffered SV, all subscribers (readers) have their own copy of a receive buffer. This means a fast reader A (a fast loop executing the read SV node) may keep up with the written data and never experience a buffer overrun, while a slow reader B of the same SV may experience backlogging, or buffer overwrite, and start issuing the buffer overflow error code (−1950678981, "The shared variable client-side read buffer overflowed"). A bug in the SVE backend means that read node A will start issuing this error code as well. In fact all read nodes of this SV will output this error code, even though only one of the read buffers overflowed. That is really bad, since it 1) makes it really hard to pinpoint where the failure occurred in the code, and 2) makes it next to impossible to implement proper error handling, since some read nodes may be fine with filtering out this error code while other read nodes may need to cause an application shutdown if their buffer runs full. If the neighbor's buffer runs full but you get the blame... how will you handle this? NI just says "That's acknowledged, but it's too hard to fix".
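To make the failure mode concrete, here is a small sketch of how per-subscriber receive buffers *should* behave (Python standing in for the LabVIEW nodes; all names are hypothetical, and this is an illustration of the concept, not NI's actual SVE code): each reader owns its own bounded buffer, so only the slow reader's buffer overflows, and only it should report the error.

```python
from collections import deque

class Subscriber:
    """One reader's private receive buffer for a buffered, network-enabled SV."""
    def __init__(self, capacity):
        self.buffer = deque()
        self.capacity = capacity
        self.overflowed = False

    def receive(self, value):
        if len(self.buffer) >= self.capacity:
            self.overflowed = True   # only THIS reader should see the error
            self.buffer.popleft()    # oldest sample is lost (buffer overwrite)
        self.buffer.append(value)

    def read_all(self):
        items = list(self.buffer)
        self.buffer.clear()
        return items

class SharedVariable:
    """Writer side: fans each written value out to every subscriber's buffer."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, capacity):
        sub = Subscriber(capacity)
        self.subscribers.append(sub)
        return sub

    def write(self, value):
        for sub in self.subscribers:
            sub.receive(value)

sv = SharedVariable()
fast = sv.subscribe(capacity=100)   # fast loop: keeps up, large enough buffer
slow = sv.subscribe(capacity=2)     # slow loop: tiny buffer, will overflow

for i in range(10):
    sv.write(i)
fast.read_all()                      # fast reader drains its buffer in time

# Correct behaviour: only the slow reader reports the overflow.
print(fast.overflowed, slow.overflowed)   # False True
```

The bug Steen describes is the opposite of this: in the SVE, `fast` would start reporting the overflow error too, even though its own buffer never filled.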

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

4) While we don't programmatically deploy SVs on RT, as I mentioned, I agree it would be good (and most appropriate!) if LabVIEW RT supported this functionality. For the record, we do programmatically connect to SVs hosted on Windows in our RT applications, and that works fine.

You connect programmatically through DataSocket, right? With a PSP URL? Unfortunately DataSocket needs root loop access, so it blocks all other processes while reading and writing. I don't know if SVs do the same, but I know raw TCP/IP (and thus TCPIP-Link) doesn't. Anyway, for many applications this might not be an issue; for us it often is. But the main reason I'd like to be able to programmatically deploy SVs on RT is for when variables undeploy themselves for some reason. If we could deploy them again programmatically we could implement an automatic recovery system for this failure mode. As it is now we sometimes need to connect the RT system to a Windows host with the LabVIEW dev environment on it to deploy the variables again manually. I know this process can be automated some more, but I'd really like the Windows host out of the equation - the RT systems should be able to recover all by themselves.

  On 2/6/2012 at 6:28 PM, Paul_at_Lowell said:

6) Queues, events, and local variables are suitable for many purposes but not, as you note, for networked applications. (We do, of course, use events with SVs.) TCP/IP is a very good approach for networking, but in the absence of a wrapper does not provide a publish-subscribe system. Networked shared variables are one approach (and the only serious contender packaged by NI) to wrapping TCP/IP with a publish-subscribe framework. If someone wants to write such a wrapper, that is admirable (and that may be necessary and a good idea!), but I think for most users it is a much shorter path to use the SV than to work the kinks out of their own implementation. I haven't tried making my own, though, and the basics of the Observer Pattern do seem straightforward enough that it could be worth the attempt--it's just not for everybody. (I'd also prefer that we as a community converge on a single robust implementation of a publish-subscribe system, whether this be an NI option or the best offered by and supported by the community. At the present time I think SVs are the best option readily available and supported.)

It was very hard to write TCPIP-Link. But for us NI simply doesn't deliver an alternative. I cannot sell an application that uses Shared Variables - no matter the level of mission criticality. I can't have my name associated with potential bombs like that.

Single-process (intra-process) SVs seem to work fine, but I haven't used them that much. For these purposes I usually use something along the lines of VIRegisters (queues basically), but the possibility to enable RT FIFOs on the single-process SV could be handy. But again I'm loath to put too many hours into experimenting with them, knowing how much time we've wasted on their network-enabled evil cousins :lol:.

Cheers,

Steen

Link to comment

OK, just a couple comments:

SVE on RT: Our model is as follows: Views run on Windows. They can open and close independently of the controllers running on the cRIOs (RT). The controllers can run indefinitely, but they do require a connection to the SVE hosted on Windows in order to operate normally. (If the Windows machine shuts down or the Ethernet connection breaks the controllers enter a [safe] Fault state.) If you can't count on the Windows machine being up, then I agree you need to host the SVs on RT. I don't think there is any reason to do so otherwise (even though the page on SVs tells us we should). (I can explain that further if anyone is curious.)

We haven't seen the issue you see with buffering, but in our applications we don't let listeners fall behind (probably we don't buffer more than 10 items, if that). Yes, what you describe sounds like a serious issue for that use case.

For the record, we use NI-PSP and the Shared Variable API for programmatic access to networked shared variables.

Link to comment
  On 2/5/2012 at 11:39 PM, jbjorlie said:
I am in Tulsa, OKLAHOMA! (always add the ! after Oklahoma! for dramatic effect).
That's my hometown. Spent 18 years there before going to university. Good place -- couldn't find a job there when I graduated, and my parents moved away, so I haven't been back in years. I'm glad to hear that someone's able to make a living in software there nowadays.
  Quote
I love the idea of the AF and he must be quite a scientist to have created it. However, it seems on the verge of too much "safety" at the cost of usability, especially when introducing it into existing code. I hope the discussions there result in some small modifications to lower the entry barrier before it gets plopped into the <vi.lib> for eternity.
Thanks for the critique. I mean that -- the AF was designed to go after real world code, so hearing that there are real world scenarios it cannot handle is useful feedback.

Your point has had some discussion, both on the forums and elsewhere. My position at this point is that there's a difference between "for the newbie" and "for new projects". We've had a couple of newbie LV users pick up the AF just fine -- I'm not worried about its usability in that respect. The existing-projects aspect is a different story. Plopping the AF into existing code frameworks was not a goal we had in mind when building it. I'm not averse to adding features to make such adaptations easier, but not if they come at the cost of the provable correctness of apps built with the AF from the ground up. Powell had a useful suggestion the other day for a new async reply class that might help... I don't think I'll put it in in time for LV 2012 (we're already *way* late in the dev cycle, what with the beta releasing already, and although there's still some window for feature adjustment based on that feedback, I try to reserve that coding window for "it really is unusable" sorts of adjustments), but I like the idea going forward.

Also, being in vi.lib does not prevent us from making adjustments, it just means those adjustments have more overhead if it requires mutating existing user code.

Link to comment
  On 2/6/2012 at 8:55 PM, Paul_at_Lowell said:

For the record, we use NI-PSP and the Shared Variable API for programmatic access to networked shared variables.

OK, then you operate with static IP addresses, since the SV API needs a static SV refnum? In our network-enabled SV apps we use DataSocket at one end to allow for dynamic IPs.

/Steen

Link to comment
  On 2/6/2012 at 9:00 PM, Steen Schmidt said:
OK, then you operate with static IP addresses, since the SV API needs a static SV refnum? In our network-enabled SV apps we use DataSocket at one end to allow for dynamic IPs. /Steen

On the Windows side we use the name of the computer (e.g., 'DCS') and update the hosts file appropriately. (I think eventually IT will configure the network so we won't have to use the hosts file, but for the moment that is how we do it.)

On RT I think we could do more or less the same thing, but in practice we store the IP address in a configuration file, and construct the URL programmatically. This just seems simpler and, yes, removes the mysterious element.

To answer your specific question: Currently the control computers do have static IP addresses, yes.

Link to comment
  On 2/5/2012 at 11:39 PM, jbjorlie said:

Lastly, JAMES, I have enjoyed your hassling with Stephen on the Actor Framework forum :thumbup1: I love the idea of the AF and he must be quite a scientist to have created it. However, it seems on the verge of too much "safety" at the cost of usability, especially when introducing it into existing code. I hope the discussions there result in some small modifications to lower the entry barrier before it gets plopped into the <vi.lib> for eternity. LabView will always be a coding toolbox for the engineer and the tool sets need to be somewhat flexible. IMNO (newbie).

Not trying to hassle AQ. OK, a hopefully constructive hassle. One problem with complex designs is that people rarely take the effort to dig into them enough to make constructive criticism of them. I was lamenting that fact one day (since I have a complex design posted here myself) and I thought I really should make an effort. I’d be happy for AQ to point out the security flaws in my own designs, made by a lone developer with an eye only for flexibility. Or in other worthy ActorFramework-like targets, like mje’s “MessagePump".

— James

Link to comment

This thread is starting to resemble a holiday dinner at my parents' house...I do have more questions though and I'll post them later today if anybody is still paying attention. Here's some for AQ or anybody who wants to take a shot at them. I hope they are clear enough.

  On 2/6/2012 at 8:58 PM, Aristos Queue said:

That's my hometown. Spent 18 years there before going to university. Good place -- couldn't find a job there when I graduated... I'm glad to hear that someone's able to make a living in software there nowadays.

I knew that great mind had to come from around here! :lightbulb: However, I had to adapt from BioEngineering to Oilfield Instrumentation to stay in Tulsa, so I wouldn't exactly say it's a good spot for a software guy. Without LabVIEW we would be running instruments from a rheostat, so I'm glad you're off making it modern. If you ever come through, let me know :beer_mug:

  Quote
Plopping the AF into existing code frameworks was not a goal we had in mind when building it.

Yeah, I realized that after trying my best for a few days. After reading the paper and watching your webcast I knew that framework was the answer to all my troubles! However, when you try to mix it into an existing project it gets messy real quick. You cannot easily follow the Actor Framework by looking at the code; I follow it by looking at the project explorer. When you try to mix the AF in with other LVOOP code & non-OOP code, the project looks like alphabet soup. I would think it is desirable to have a framework that can mesh more easily, even with older, uglier code.

I must need an AF-Lite...something that allows you to launch an actor, get the queue from it, and then send it messages to tell it what to do. That's it, something a Hard Drive, Air Conditioner, and Fire Suppression can all do: message = start fan, message = increase speed...

I understand having a dynamic dispatch for different types of fans but why does Fan need one for different types of fan callers? I'm missing something fundamental here aren't I?

Here's my problem:

I want to click an .exe, some initialization routine reads a file which tells it to launch, say, a certain type of motor controller as CH1, a thermocouple input and 2 DO's as CH2, an ultrasonic pulsing PCI board as CH3, and then send their queues to a TouchscreenIdle UI, which will send and receive messages from them. The UI has no idea what they are and CH1, 2, and 3 have no idea what the other channels are or what is controlling them.

The GUI sends messages asking what controls/indicators to show/hide, labels, etc... to the queues it received from the channels through the initialization routine. Then when somebody presses a button it sends that message through the right queue and the motor turns on. If they want to set up a test, the IDLE UI passes all the queues to the Setup UI and so on.

It seems like I should be able to use the AF for just the I/O, or for just the GUIs, or for just the Motor Controller, since they are all lightly coupled. There is some coupling due to the set of POSSIBLE MESSAGES being non-infinite, but what else? How can I use the AF in this type of program incrementally?

Please help a fool understand how to connect to your fool-proof machine!

Link to comment
  On 2/7/2012 at 5:43 PM, jbjorlie said:

This thread is starting to resemble a holiday dinner at my parents' house...I do have more questions though and I'll post them later today if anybody is still paying attention.

I've been pretty scarce for the last 3-4 months so I just found this thread.

  On 2/7/2012 at 5:43 PM, jbjorlie said:

I must need an AF-Lite...something that allows you to launch an actor, get the queue from it, and then send it messages to tell it what to do.

If you're already using LapDog for messaging why not roll your own actors from scratch? They're not terribly difficult to write. LapDog doesn't care about the nature of message senders/receivers, nor does it care what kind of message is sent. If you want to use the command pattern and actors, create your own "Command" class as a child of "Message," give it an "Execute" method, and subclass that for each unique command. (I've been thinking about creating a LapDog.Actor package, but to be honest I haven't needed to use actors much.)

In fact, your code may very well be simpler in the long run if you create your own actors. The downside of using prebuilt frameworks is that the framework needs to be flexible enough to handle a wide range of user requirements. Adding flexibility requires indirection, which in turn adds complexity. If you have a lot of actors to write then you could create your own Actor superclass with a Launch method and subclass it for your concrete actors.
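A minimal roll-your-own-actor sketch along these lines (Python standing in for the LVOOP classes; the class and method names here are hypothetical illustrations, not LapDog's actual API): an Actor superclass with a Launch method that returns the inbox queue, a Command base message with an Execute method, and one subclass per unique command.

```python
import queue
import threading

class Command:
    """Base message; subclass per unique command (the command pattern)."""
    def execute(self, actor):
        raise NotImplementedError

class StopCommand(Command):
    """Built-in command every actor understands: shut the loop down."""
    def execute(self, actor):
        actor.running = False

class Actor:
    """Superclass: owns an inbox queue and a message-handling loop."""
    def launch(self):
        self.inbox = queue.Queue()
        self.running = True
        self.thread = threading.Thread(target=self._run)
        self.thread.start()
        return self.inbox        # caller keeps the queue to send messages

    def _run(self):
        while self.running:
            msg = self.inbox.get()   # block until a Command arrives
            msg.execute(self)        # dynamic dispatch on the message type

# A concrete actor and a concrete command for it:
class StartFan(Command):
    def execute(self, actor):
        actor.log.append("fan started")

class FanActor(Actor):
    def __init__(self):
        self.log = []

fan = FanActor()
q = fan.launch()           # launch the actor, get the queue from it...
q.put(StartFan())          # ...and send it messages telling it what to do
q.put(StopCommand())
fan.thread.join()
print(fan.log)             # ['fan started']
```

This is roughly the "AF-Lite" shape jbjorlie asks for above: launch, get queue, send messages; the Hard Drive, Air Conditioner, and Fire Suppression variants would just be further Actor subclasses with their own command set.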

  On 2/7/2012 at 5:43 PM, jbjorlie said:

The GUI sends messages asking what controls/indicators to show/hide, labels, etc...to the queues it received from the channels through the initialization routine.

I wouldn't do this. It's giving the business components (motor controller, etc.) responsibility for deciding how the data is displayed. Display is the responsibility of the UI code. Since the UI doesn't know what business component (bc) is on the other end of the queues it receives, you could have the UI send a Request_ID message to each component so it can map each bc's messages to the proper UI display. Then the UI decides what controls/indicators to show based on the response from the bc.

Or, what I might do is create a BC_Token class containing a MessageQueue class for sending messages to the bc and a BC_Info class containing information about the bc at the other end of the queue. Your "initialization" code is going to be figuring out which concrete classes should be associated with each channel and creating the appropriate objects. During that process the initialization code would also create an appropriate info object for each bc it's sending to the UI. It packages the two things together in a BC_Token object and sends it to the UI as a message.
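A rough sketch of the BC_Token idea (Python standing in for the LVOOP classes; the field names are hypothetical): the initialization code packages a send queue together with an info object describing the business component, and the UI decides what to display from the info, not from the component itself.

```python
import queue
from dataclasses import dataclass, field

@dataclass
class BCInfo:
    """Describes the business component so the UI can decide what to display."""
    name: str
    kind: str            # e.g. "motor", "thermocouple"
    units: str = ""

@dataclass
class BCToken:
    """What initialization hands the UI: a queue for sending messages to the
    bc, plus information about what sits at the other end of that queue."""
    info: BCInfo
    send_queue: queue.Queue = field(default_factory=queue.Queue)

# Initialization code figures out the concrete class for each channel
# and builds a token per channel...
ch1 = BCToken(info=BCInfo(name="CH1", kind="motor", units="rpm"))

# ...and the UI maps display decisions off the info object:
show_rpm_indicator = ch1.info.kind == "motor"
ch1.send_queue.put("start")   # commands still go through the token's queue
print(show_rpm_indicator)     # True
```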

Link to comment

Daklu wrote:

> I wouldn't do this. It's giving the business components (motor controller, etc.) responsibility for deciding how the data is displayed.
Totally agree. This is the basis of the separation of

  • "model" -- the hardware/business logic/process/software simulation/whatever you're actually doing

from

  • "view" -- the UI layer or data logger system that records what's going on

from

  • "controller" -- the section of code that decides what new instructions to send to the model

Model generally knows *nothing* about view or controller. Any view can connect to it. Any controller can send it orders. The model publishes a list of (events for the view to listen to OR messages sent from the model) and a list of (events for the controller to fire OR messages that can be sent to the model). The controller knows little or nothing about the view (binding these two is sometimes more acceptable). It publishes a list of events/methods for the view which tells the view what commands are available right now and lets the view command the controller.
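The separation above can be sketched in text form (Python standing in for the LabVIEW actors; all names are hypothetical): the model owns an inbox for commands and publishes state changes to whichever queues have subscribed, knowing nothing about whether a view, a controller, or a logger is on the other end.

```python
import queue

class Model:
    """Knows *nothing* about view or controller: it consumes commands from
    its inbox and publishes state changes to any subscribed queue."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.subscribers = []
        self.temp = 20.0

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, event):
        for q in self.subscribers:
            q.put(event)

    def step(self):
        # consume one command (from whatever controller sent it),
        # then publish the resulting state
        cmd = self.inbox.get()
        if cmd == "heat on":
            self.temp += 1.0
        self.publish(("temp", self.temp))

model = Model()
view_q = model.subscribe()        # any view can connect and listen
controller_q = model.subscribe()  # the controller may listen to the same events

model.inbox.put("heat on")        # any controller can send it orders
model.step()

view_event = view_q.get()
controller_event = controller_q.get()
print(view_event, controller_event)
```

Note that this also answers the "how does the controller get data" question further down: it subscribes to the same published events the view does.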

Edited by Aristos Queue
Link to comment

How does Controller get data from Model if Model only sends messages to View? If my controller is running a PID loop commanding Heat & Cool Models how should it get the Temp from the Thermocouple Model? Does thermocouple send value messages to both View and Controller? Does it send a third value message to the data logger? I've gone from one to two too many Qs! ARGH! :frusty:

Link to comment

The controller can also listen in on those same messages... as I said, there can be a bit of bleed over between the responsibilities of the view and the controller. Keeping them firmly separated is a goal needed by some applications but far fewer than the number of applications that need an absolute wall between the model and everything else.

I'm probably not describing this in helpful terms. I have an example of exactly this, but it isn't something I can post at the moment... if you join the LV 2012 beta program (the 2012 beta went live today), there's a shipping example in there of exactly this with the AF.

Edited by Aristos Queue
Link to comment
  On 2/7/2012 at 5:43 PM, jbjorlie said:

I must need an AF-Lite...something that allows you to launch an actor, get the queue from it, and then send it messages to tell it what to do. That's it, something a Hard Drive, Air Conditioner, and Fire Suppression can all do: message = start fan, message = increase speed...

I understand having a dynamic dispatch for different types of fans but why does Fan need one for different types of fan callers? I'm missing something fundamental here aren't I?

I too didn’t like that “Low Coupling Solution”. I spent some idle time trying to think of a way to make the “Zero Coupling Solution” simple enough that one wouldn’t ever need to consider the “Low Coupling Solution”. After a while I realized: it already is simpler! It's just not trivial to see at first. So I think it would be better if the documentation never mentioned the “Low Coupling Solution”, and instead concentrated on demonstrating how to do zero coupling, presenting it as the standard way to use the Actor Framework. In fact, with a simple guideline, one could write “High Coupled” actors in such a way that they could later be easily upgraded to zero-coupled actors. I’ll try and get this suggestion, with more detail, into the 2012 beta forum (once I’ve figured out how to download the beta software :rolleyes: ).

— James

Edited by drjdpowell
Link to comment

There are a few different ways to implement M-V-C. In http://www.amazon.co...28725276&sr=8-1, for instance, the authors show a flavor where the Controller subscribes to the Model's published state, and another where the View subscribes to the Model state directly.

In our components the Model publishes its state (in both senses: the current parameter values, and particularly a value describing which State it is currently in, e.g., DisabledState). Each of our Views has the responsibility of deciding what to display based on the value of the State parameter. It is pretty simple to implement this sort of thing.

(There is another flavor of M-V-C where the controller takes a much more active role in determining what the View should display. We have decided to date not to implement that flavor for our purposes.)

Link to comment
  On 2/3/2012 at 4:31 PM, jbjorlie said:

I am running the UI on the control machine but the operator can also access a GUI from a remote machine, which will plug in much like Steen's system noted above. I do keep the "business" separate by making each channel the commander of its own routines. It runs by itself, only receiving change commands through a queue from the mediator. My education from this forum led me to believe that was the best way to do it. I was going to have each channel write its data to the TDMS file as well, but maybe I should message the data to a central "Storage" vi and have it write the data??

I am leaning towards messaging the data to a central "Storage" vi and having it write the data. The reason is that in one of my projects I did NOT do this, and I am now realizing that I probably should have. I simply passed the TDMS reference to my parallel processes, and as the processes needed to write to disk, I just used the Write to TDMS VIs. This worked because my file never really needed to change. Now I want to implement a new feature that lets the user interrupt this constant stream of data and redirect it to another file, so I need to close the TDMS reference and open a new file. The problem is that since I passed all the references down from the top level, and I am constantly streaming to disk, I cannot easily change files. If I had used a messaging system back to a storage VI, I could buffer up the incoming data while the process of closing and opening a new file completed, and I would have only one place to change the TDMS reference.

I just thought of another way to do this, without using a messaging system back to the "storage" vi. I could go ahead and open another file, then pass the new TDMS reference to all of my parallel processes, with a message that the TDMS reference has changed.

The advantage with using a single storage VI is that one could parse the data flowing in to determine the correct "point" at which to cut the flow, start buffering, and wait for the new file to open. I feel that it would be much harder to "line up" the TDMS channel streams if a new TDMS reference was simply passed to the parallel loops.
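A text-language sketch of the single-storage-actor idea (Python instead of LabVIEW, with lists standing in for TDMS files; all names are hypothetical): every producer writes to one queue, so the storage actor is the only owner of the file reference, the switch happens in exactly one place, and incoming data simply queues up while the switch completes.

```python
import queue
import threading

class StorageActor:
    """Single owner of the 'file' reference. Producers never see the file;
    they just put samples on the inbox, so redirecting the stream to a new
    file is a change in exactly one place."""
    def __init__(self):
        self.inbox = queue.Queue()
        self.files = {"run1.tdms": []}   # lists stand in for TDMS files
        self.current = "run1.tdms"

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg == ("stop",):
                break
            kind, payload = msg
            if kind == "switch":
                # "close" the old file and "open" the new one; any data
                # arriving meanwhile just waits in the inbox queue
                self.files[payload] = []
                self.current = payload
            elif kind == "data":
                self.files[self.current].append(payload)

store = StorageActor()
t = threading.Thread(target=store.run)
t.start()

store.inbox.put(("data", 1.0))
store.inbox.put(("data", 2.0))
store.inbox.put(("switch", "run2.tdms"))   # user redirects the stream
store.inbox.put(("data", 3.0))
store.inbox.put(("stop",))
t.join()

print(store.files)   # {'run1.tdms': [1.0, 2.0], 'run2.tdms': [3.0]}
```

Because all samples pass through one ordered queue, the actor can also pick the exact "point" at which to cut the flow, which is much harder when a new reference is broadcast to independent parallel loops.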

Link to comment
  On 2/8/2012 at 12:35 AM, jbjorlie said:

How does Controller get data from Model if Model only sends messages to View?

Good question. I agree with AQ. The line between View and Controller is a lot fuzzier than the one between the Model and the other two. If you're looking at diagrams like the one on Wikipedia, my advice is to ignore it and do what makes sense to you. There are lots of ways to skin a cat but they all result in the same thing... tasty soup.

My preference is more along the lines of Paul's "other flavor" MVC implementation. There is no direct communication between the model and view at all; it all goes through the controller. (Instead of a triangle my three components are in a line with the controller in the middle.) The controller is where I put the glue code that translates output messages from one into input messages for the other. I actually don't like calling it MVC because I don't think it's an accurate description. It's closer to a Model-View-Mediator architecture, but that's not exactly right either.

  On 2/8/2012 at 12:35 AM, jbjorlie said:

If my controller is running a PID loop commanding Heat & Cool Models how should it get the Temp from the Thermocouple Model? Does thermocouple send value messages to both View and Controller? Does it send a third value message to the data logger?

Depends on your messaging topology. Applications using the observer pattern lean towards a lot of direct point-to-point communication. In these the message source usually sends a copy of the message to each receiver.

I prefer a hierarchical messaging topology. In that system the thermocouple sends out a single Temp message to its owning mediator, which then forwards it to the next mediator, and so on down the line. Copies are only made if a mediator has to forward the message to multiple destinations.

Or you could always do a hybrid of the two...
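A minimal sketch of the hierarchical topology described above (Python instead of LabVIEW; class and route names are hypothetical): the thermocouple emits one Temp message to its owning mediator, and a copy is made only at the mediator that actually fans out to multiple destinations.

```python
import queue

class Mediator:
    """Forwards each incoming message along its routes; a copy is made only
    when a message name has more than one destination."""
    def __init__(self):
        self.routes = {}     # message name -> list of destination queues

    def add_route(self, name, dest):
        self.routes.setdefault(name, []).append(dest)

    def forward(self, msg):
        name, value = msg
        for dest in self.routes.get(name, []):
            dest.put((name, value))

owning = Mediator()          # the thermocouple's owning mediator
top = Mediator()             # the next mediator down the line
link = queue.Queue()
owning.add_route("temp", link)       # single forward, no copying here

# Only the top mediator fans the message out to its two consumers
# (the PID controller and the display):
pid_q, display_q = queue.Queue(), queue.Queue()
top.add_route("temp", pid_q)
top.add_route("temp", display_q)

owning.forward(("temp", 98.6))       # thermocouple sends ONE Temp message
top.forward(link.get())              # forwarded down the chain, then fanned out

pid_msg = pid_q.get()
display_msg = display_q.get()
print(pid_msg, display_msg)
```

A data logger would just be a third route on whichever mediator owns it; the thermocouple itself never learns how many consumers exist.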

Link to comment
  On 2/8/2012 at 10:21 PM, Daklu said:

If you're looking at diagrams like the one one wikipedia, my advice is to ignore it and do what makes sense to you.

Yup, looking at that while you were writing this...now looking for eraser...

  Quote
...tasty soup

You've been to Kazakhstan too 'eh?

  On 2/8/2012 at 4:32 PM, drjdpowell said:

I’ll try and get this suggestion, with more detail, in the 2012 beta forum (once I’ve figured out how to download the beta software :rolleyes: ).

Looking forward to it. Does NI only allow beta downloads from USA? Texans never could play nice with foreigners. :P

  On 2/8/2012 at 3:48 PM, Aristos Queue said:

The controller can also listen in on those same messages...

I think I'm getting how to do that but now I'm looking forward to the example in 2012. It's only 638 MB away...630MB!

Link to comment
