
Techniques for componentizing code



One slightly annoying thing at first about LVOOP child classes is that they cannot directly access their own “parent” data via “unbundle”.

Another battle in the never-ending war between safety and flexibility. :) I agree it is a bit annoying at first, but I also believe it is the right decision for LabVIEW.

Anyway, I was saying that I've figured out that new slave loops should be children of your parent class with different execution loops (blindingly obvious in retrospect).

Actually, I use it as a template, not a parent class. When creating a new slave loop class I copy the template and customize it for the specific task. That's why there are no accessor methods for children.

Why not use it as a parent class? Primarily because the parent-child relationship creates a dependency, and I think it's more important to manage/limit dependencies between various application components (lvlibs). If the parent is in one library and the child is in another, then the library with the child class is dependent on the parent's library. If I want to reuse the child's library in another app I have to drag the parent's library along with it.

Also, there are really only 3 methods in the class: Create, Execute, and Destroy. My convention is that Creators are not overridable, since they have to accept all the data that is unique to a specific class. Execute must be overridden by each child class, so there's no opportunity for reuse there either. Destroy can be reused, but there's not much savings in doing it. It takes maybe a minute to write it. In short, there's no benefit to making a SlaveLoop parent class and it requires more effort to manage.

I can envision scenarios where it would make sense, but I haven't encountered them in the real world.

[still need to respond to your earlier post...]

  • 1 month later...

Hi Daklu,

I imagine you're busy but I was wondering if you could weigh in on the following question (from an earlier post).

What would you do if you wanted to be able to close and reopen your slave loop without locking your code up? What I'm getting at here is, what is the OOP equivalent to launching a sub-vi using a Run VI method with "Wait Until Done" set false? Or, as an even more abstract question, how does one implement a plug-in type code structure where the functionality is determined at run-time?

Also, if your slave process is supposed to be continuous except when handling messages, do you utilise a timeout method, or do you separate the behaviours within the slave by adding another layer via a separate message handler?

Thanks for your insight!


Sorry Alex, I completely forgot about this!

What would you do if you wanted to be able to close and reopen your slave loop without locking your code up? What I'm getting at here is, what is the OOP equivalent to launching a sub-vi using a Run VI method with "Wait Until Done" set false? Or, as an even more abstract question, how does one implement a plug-in type code structure where the functionality is determined at run-time?

Three questions, three answers.

Close & Reopen

The simplest implementation doesn't allow exiting and restarting a slave loop. Since most slaves are idle unless they are handling a message, this design is adequate for many situations. If you have a continuous-process slave that consumes a lot of resources and want to be able to shut it down when it's not being used, I'd build an internal state machine with "Active" and "Standby" states. In the Standby state the slave simply monitors messages. In the Active state it monitors messages and does the resource-consuming work.
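
To sketch the idea outside of a diagram, here is roughly what that state machine looks like in Python-flavoured pseudo-code; the message names, the 50 ms poll, and do_resource_intensive_step are all made up for illustration:

    import queue
    import time

    def do_resource_intensive_step():
        time.sleep(0.01)   # stand-in for the real continuous work

    def slave_loop(msg_queue):
        """Slave with an internal Active/Standby state machine."""
        state = "Standby"
        while True:
            try:
                # Standby: block until a message arrives.
                # Active: poll briefly so the continuous work keeps running.
                timeout = None if state == "Standby" else 0.05
                msg = msg_queue.get(timeout=timeout)
            except queue.Empty:
                msg = None

            if msg == "Activate":
                state = "Active"      # spin up the resource-hungry work
            elif msg == "Standby":
                state = "Standby"     # release resources, just watch messages
            elif msg == "Exit":
                break

            if state == "Active":
                do_resource_intensive_step()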

OOP Run VI

There is no "OOP equivalent." If you want to dynamically launch a slave loop you still need to use that function. (Or the Call Async VI function introduced in 2011.) When I wrap a slave loop in a class, the class usually has three methods: Create MySlave, ExecutionLoop, and Destroy. If I need dynamic launching I'll add a fourth method, Launch. Launch simply loads the ExecutionLoop vi and launches it using the mechanism of your choice. (Note: Dynamic launching adds complexity to the code, so I'll only use it if I need a large or unknown number of identical slave loops.)
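
As a very rough text-language analogy (Python threads standing in for Run VI / Call Async; the method names follow my convention, everything else is invented for the sketch):

    import queue
    import threading

    class MySlave:
        def __init__(self):                    # Create MySlave
            self.msg_queue = queue.Queue()

        def execution_loop(self):              # ExecutionLoop
            while True:
                msg = self.msg_queue.get()
                if msg == "Exit":
                    break
                # ...handle other messages here...

        def launch(self):                      # Launch (only when needed)
            # Fire-and-forget start, akin in spirit to Run VI with
            # "Wait Until Done" = False: the caller is not blocked.
            t = threading.Thread(target=self.execution_loop, daemon=True)
            t.start()
            return t

        def destroy(self):                     # Destroy
            self.msg_queue.put("Exit")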

Plug-ins

To me, a "plug-in" is the ability to add functionality to an application without recompiling any of the original source code. When the application starts, it searches the plug-in directory, finds the installed plug-ins, and hooks into them. If that's what you're looking for, then you'd probably need to launch each plug-in slave loop dynamically unless there's a known upper limit on the number of plug-ins that will be running at any one time. I've never done plug-ins with slave loops so I'm not sure what issues you'll run into, but it seems like it'd be fairly straightforward.
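
In text-language terms the discovery step might look something like this Python sketch; the directory layout and the run() entry point are assumptions on my part, since I haven't actually built this:

    import importlib.util
    import pathlib
    import threading

    def load_plugins(plugin_dir):
        """Find plug-in modules in a directory and launch each one's slave loop."""
        threads = []
        for path in pathlib.Path(plugin_dir).glob("*.py"):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            # Assumed contract: each plug-in exposes run(), its "slave loop".
            t = threading.Thread(target=module.run, daemon=True)
            t.start()
            threads.append(t)
        return threads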

Also, if your slave process is supposed to be continuous except when handling messages, do you utilise a timeout method, or do you separate the behaviours within the slave by adding another layer via a separate message handler?

I occasionally use a queue timeout case for very simple situations, but I consider it code debt because there's no way to guarantee the timeout case will ever be called. I'll usually refactor it into something more robust the next time I'm adding functionality to that loop. There are two ways I've dealt with "continuous process" slaves in the past: Heartbeats and DVRs.

Using Heartbeats for Continuous Slave Loops

A heartbeat is a simple timer that sends out a single message at specific intervals. (I've also called them "watchdogs," "timers," and in a fit of lyrical excessiveness, "single task producers.") In the example below, every 20 ms the heartbeat loop sends a RefreshDisplay message to the Image Display Loop, ensuring the display will be refreshed regularly regardless of the timing of other messages it might receive. However, it's still possible for the timing to get out of whack if the queue gets backed up.

[attached image: heartbeat loop sending RefreshDisplay messages to the Image Display Loop]

[in this example the image display slave loop is not wrapped in an ExecutionLoop vi--it's on the main UI block diagram with several other slave loops and a mediator loop to handle message routing. A heartbeat can be put on a slave's ExecutionLoop block diagram if it is an inherent part of the slave's functionality, but usually setting up the heartbeat on the calling vi is more flexible. Either way the heartbeat is set up to automatically exit when the slave loop exits.]
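
Stripped of the LabVIEW specifics, the heartbeat amounts to this (Python sketch; the 20 ms period and RefreshDisplay name come from the example above, the rest is illustrative):

    import queue
    import threading
    import time

    def heartbeat(target_queue, stop_event, period_s=0.020, message="RefreshDisplay"):
        """Send one message to the target loop at a fixed interval until stopped."""
        while not stop_event.is_set():
            target_queue.put(message)
            time.sleep(period_s)

    # Usage: the display loop owns the queue; it sets stop_event when it exits,
    # so the heartbeat shuts down automatically along with the slave.
    display_queue = queue.Queue()
    stop = threading.Event()
    threading.Thread(target=heartbeat, args=(display_queue, stop), daemon=True).start()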

Using DVRs for Continuous Slave Loops

When a simple heartbeat isn't an adequate solution, I'll refactor to use a DVR instead. This is the ExecutionLoop of an ImageComposerSlave class. In this particular case the SetOverlay message comes in bursts--a few seconds of high volume messages followed by relatively long periods of no messages, so there was a risk of the message queue getting backed up and throwing the timing off.

The ImgCmp object (containing all the relevant information needed to compose an image) is unbundled from the ImgComposerSlave object and immediately put into a DVR. The DVR is branched, with one branch going to the message handling loop and the other going to the image rendering loop. When the message handling loop receives a message that changes a value related to image rendering, it locks the DVR, changes the value in the ImgCmp object, and unlocks it again.

The image rendering loop executes at regular intervals, ensuring the rendered image gets produced on time. In principle the rendering loop can block for an excessive time waiting for the message handling loop to release the DVR. However, because the DVR never leaves this block diagram it is easy for me to verify there are no lengthy processes locking the DVR.

[attached image: ImageComposerSlave ExecutionLoop block diagram]
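
The closest text-language analogy is a lock-protected shared object: the message loop holds the lock only long enough to change a value, and the render loop holds it only long enough to copy what it needs. The class names, message names, and 50 ms period below are invented for the sketch:

    import queue
    import threading
    import time

    class ImgSettings:
        """Stand-in for the ImgCmp data the renderer needs."""
        def __init__(self):
            self.overlay = None

    def render(overlay):
        pass   # stand-in for the actual image rendering

    def message_loop(msg_queue, settings, lock, stop_event):
        while not stop_event.is_set():
            name, data = msg_queue.get()       # messages are (name, data) pairs here
            if name == "SetOverlay":
                with lock:                     # "lock the DVR", modify, release
                    settings.overlay = data
            elif name == "Exit":
                stop_event.set()

    def render_loop(settings, lock, stop_event, period_s=0.05):
        while not stop_event.is_set():
            with lock:                         # held only long enough to copy
                snapshot = settings.overlay
            render(snapshot)
            time.sleep(period_s)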

Does that help?

-Dave


Also, if your slave process is supposed to be continuous except when handling messages, do you utilise a timeout method, or do you separate the behaviours within the slave by adding another layer via a separate message handler?

There are two ways I've dealt with "continuous process" slaves in the past: Heartbeats and DVRs.

I had a similar issue recently that I tried a different way with. A fourth option to consider with Timeouts, Heartbeats and DVRs (never thought of the last one).

I was writing software to log to an SQLite database; each individual SQLite transaction to disk takes a large amount of time, so it is best to store up log messages and save them as a batch periodically. I solved it with a “Scheduled Tasks” VI shown below (in a background process in the “Command Pattern” OOP style):

[attached image: logging loop with the “Scheduled Tasks” VI]

“Scheduled Tasks” is called after each message and outputs a timeout that feeds back into the dequeue. Internally, “Scheduled Tasks” checks to see if it is time to write the accumulated log messages to disk, and if not, calculates the remaining milliseconds, which is output. Thus, the task always gets done on time, regardless of how many messages are incoming. A disadvantage is that the timeout calculation has to be done after each message, but it isn’t a big calculation. An advantage is that “Scheduled Tasks” outputs −1 (no timeout) after it flushes all waiting messages to disk; thus if log messages arrive very rarely, this loop is spending most of its time just waiting.
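
In text form the loop is roughly this (Python sketch; the one-second batching interval and db_write are stand-ins, not my actual code):

    import queue
    import time

    FLUSH_INTERVAL_S = 1.0        # assumed minimum time between disk writes

    def logger_loop(msg_queue, db_write):
        pending = []
        next_flush = None         # None = the "-1 timeout": nothing scheduled, wait forever
        while True:
            timeout = None if next_flush is None else max(next_flush - time.monotonic(), 0)
            try:
                msg = msg_queue.get(timeout=timeout)
            except queue.Empty:
                msg = None        # timeout elapsed: the scheduled task is due

            if msg == "Exit":
                break
            if msg is not None:
                pending.append(msg)
                if next_flush is None:
                    next_flush = time.monotonic() + FLUSH_INTERVAL_S

            # "Scheduled Tasks": flush if due, then the next timeout is
            # recalculated at the top of the loop.
            if next_flush is not None and time.monotonic() >= next_flush:
                db_write(pending)             # one batched SQLite transaction
                pending = []
                next_flush = None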

It worked out quite well in this application, so I thought I’d mention it.

— James


I've seen a few implementations over the years that keep the dequeue timeout on a shift register, but I've never quite trusted them--probably because I didn't spend enough time investigating them. That implementation looks pretty clean. I like it better than putting a message at the front of the queue, which was the other thing I thought about. Do you keep the timestamp of the last flush in an internal shift register? I assume Do.vi is a no-op if the queue timed out? What should we call this implementation? Timeout shifting? Variable timeout?

Heartbeat and Timeout Shifting are two ways to get an actor (i.e. slave loop) to perform a relatively short process at regular intervals. Intuitively I think the DVR is more suitable when the process is more or less continuous. I noticed a couple of other things about Timeout Shifting compared to the Heartbeat. (Not to be down on your solution...)

-I believe the temporal separation between the moment the dequeue timeout occurs and the moment the next timeout is calculated will cause the absolute error to grow over time with Timeout Shifting. If you have something that has to execute every 5 seconds, it may start by executing at 0, 5, 10, etc., but after 10 minutes it may be executing at 1, 6, 11, etc.

-I don't think Timeout Shifting will scale as well as a Heartbeat. To add multiple periodic tasks on different intervals to a slave using a Heartbeat, just drop another loop with the message and interval. To do it with Timeout Shifting requires more internal shift registers and logic to make sure only the task to execute next passes its time to the timeout terminal. More importantly, with each additional periodic task to perform, not only does that sub vi get called more frequently, but the percentage of wasted processing time with each execution increases, making the scheme less and less efficient. For practical purposes this may not matter.

But as long as the developer understands these consequences I don't think there's anything wrong with them. Thanks for sharing!


Do you keep the timestamp of the last flush in an internal shift register? I assume Do.vi is a no-op if the queue timed out?

I keep the last timestamp in the object data. Yes, “Do.vi” is a no-op for the parent message class.

-I believe the temporal separation between the moment the dequeue timeout occurs and the moment the next timeout is calculated will cause the absolute error to grow over time with Timeout Shifting. If you have something that has to execute every 5 seconds, it may start by executing at 0, 5, 10, etc., but after 10 minutes it may be executing at 1, 6, 11, etc.

One can get around that by making the calculation work like a metronome: use the scheduled time of the last execution rather than the actual time it ran. Though in this application I didn’t, as true periodicity is not desired. Instead, I just need a minimal time between writes to disk, and doing it this way works better than a periodic heartbeat.
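
In pseudo-code (Python here, purely illustrative) the difference is just:

    import time

    def metronome_next(previous_due, period):
        # Advance from the *scheduled* time, so lateness in one cycle
        # doesn't push every later execution back as well.
        return previous_due + period

    def drifting_next(period):
        # The naive version: each late execution adds its error to the next.
        return time.monotonic() + period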

-I don't think Timeout Shifting will scale as well as a Heartbeat. To add multiple periodic tasks on different intervals to a slave using a Heartbeat, just drop another loop with the message and interval. To do it with Timeout Shifting requires more internal shift registers and logic to make sure only the task to execute next passes its time to the timeout terminal. More importantly, with each additional periodic task to perform, not only does that sub vi get called more frequently, but the percentage of wasted processing time with each execution increases, making the scheme less and less efficient. For practical purposes this may not matter.

True. I’ve been meaning to think about how to generalize this to multiple tasks. Perhaps an array of “tasks” that each output a timeout, with the minimum timeout being used, or an array of tasks sorted by the next one scheduled, with only the first element actually checked.

— James


Hey Daklu and James,

Very interesting discussion, lots of insightful stuff. Thanks Daklu for your reply, yes in answer to your question, it was extremely helpful!

The following is tangentially related to the discussion at hand, just musings on my part with any insight very welcome:

I've spent a few hours trying to figure out how best to build a database of different queue types, and have come up with the idea of using a Class (Object Queue Class) whose private data is a Queue Reference to a queue of LV Objects. This Class has two methods at the moment, "Init Queue" and "Get Queue Ref"; Init Queue takes an Object as input, where the object is whatever Data Class I want the queue to hold.

Get Queue Ref gives me the reference; then I can enqueue an element of type Data Class, dequeue it, cast it to the more specific class, and read it.

This works for my test case. I was musing on whether it makes more sense to create an individual Object Queue class for each data type; I haven't tested it, but I imagine it would allow me to see mis-wirings at edit time rather than at run time, as the current approach does.

Anyway, the whole point of the exercise is that encapsulating the queue references allows me to store them in an array and, I guess, search it for a specific reference by trying to cast each element and handling errors, though there may be an even better way to do this that takes advantage of inheritance.

More play to come.


I actually have a class lying around somewhere that I prototyped a couple of years ago specifically to send periodic messages to slaves. It's one of those things I've been meaning to add to LapDog....

I have a reusable “Actor” that does the same thing:

[attached image: reusable periodic-message “Actor”]

It’s one of the more useful things to have in the toolkit.

Anyway, the whole point of the exercise is that encapsulating the queue references allows me to store them in an array and, I guess, search it for a specific reference by trying to cast each element and handling errors, though there may be an even better way to do this that takes advantage of inheritance.

Doesn’t sound that great to me. Personally, all my “Queues” (actually anything to which I can send a message, via queue, User Event, TCP server, notifier, etc.; my parent class is called “Send”) use the same base “MSG” class, and I keep track of what is what by holding them in a cluster in a shift register, unbundling by name when I want to send something to one. I never put them in an array unless I mean to send the same message to everybody.

So I never search through an array of “Send” objects. In fact, the container I use for multiple “Send”s, called “Observer”, deliberately doesn’t provide the ability to access individual elements.
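
Very roughly, in Python terms, the shape of it is something like the sketch below; the class and field names are my own stand-ins, not the real code:

    import queue
    from dataclasses import dataclass, field

    @dataclass
    class MSG:
        """Base message class; everything sent is (or extends) one of these."""
        name: str
        data: object = None

    class Send:
        """Base 'something I can send a MSG to': queue, User Event, TCP, etc."""
        def send(self, msg):
            raise NotImplementedError

    class QueueSend(Send):
        def __init__(self):
            self.q = queue.Queue()
        def send(self, msg):
            self.q.put(msg)

    class Observer(Send):
        """Container of Sends with no per-element access: broadcast only."""
        def __init__(self, sends):
            self._sends = list(sends)
        def send(self, msg):
            for s in self._sends:
                s.send(msg)

    @dataclass
    class Registry:
        """The 'cluster in a shift register': named fields, not an array."""
        display: Send = field(default_factory=QueueSend)
        logger: Send = field(default_factory=QueueSend)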

— James


This works for my test case. I was musing on whether it makes more sense to create an individual Object Queue class for each data type; I haven't tested it, but I imagine it would allow me to see mis-wirings at edit time rather than at run time, as the current approach does.

My initial reaction is no, it doesn't make much sense, but if anyone proposes a valid use case then I could be persuaded. Off the top of my head there are a couple issues with this idea:

1. The receiver has to monitor several queues simultaneously. This is possible (the Priority Queue in Mercer's Actor Framework is the first implementation that does it well), but it adds unnecessary complexity; in most cases each actor/loop should only be listening to a single queue.

2. Type safety on the sending side doesn't gain you anything. All the different queue types will be going to the same receiver, so having to pick the correct queue from the collection is just a hassle.

3. Type safety on the receiving side doesn't gain you anything either. After you pass the message out of your dequeue sub vi you'll still need to downcast to a specific class to get any information from the message itself.

Type safety is helpful in detecting programming errors, but sometimes maintaining it requires an inordinate amount of extra effort, while a non-type-safe solution would be faster to implement and easier to understand.

I have a reusable “Actor” that does the same thing

Tell me again why you're developing all your own stuff instead of joining LapDog? :lol:


Yeah Daklu, after reading James' post and letting a few things percolate that had already bothered me (the fact that a "cast to more specific" call feels very similar to a "variant to data" call), I've come to the conclusion that I was wasting my time.

Thankfully though, I've largely gotten over my irrational aversion to passing data via the messaging architecture. Contingent on the architecture being able to keep up with my data update rates!


Contingent on the architecture being able to keep up with my data update rates!

Sometimes for high throughput data streams I'll create a direct data pipe from the source to the destination to avoid loading down the messaging system. The pipe is just a native queue (or notifier depending on your needs) typed for the kind of data that is being sent. You can send the queue refnum to the data source and/or destination actors as a message. The data pipe is *only* for streaming data; all control messages (StartCapture, FlushToDisk, etc.) are still sent through the regular messaging system.
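
A bare-bones sketch of the idea (Python queues standing in for LabVIEW queue refnums; the message names are just examples):

    import queue

    data_pipe = queue.Queue()          # typed for the streamed data; carries *only* data

    source_msgs = queue.Queue()        # regular messaging system
    sink_msgs = queue.Queue()

    # Hand the pipe to the source and/or destination actors as a message...
    source_msgs.put(("UseDataPipe", data_pipe))
    sink_msgs.put(("UseDataPipe", data_pipe))

    # ...then control messages keep going through the normal system:
    source_msgs.put(("StartCapture", None))
    sink_msgs.put(("FlushToDisk", None))

    # ...while the high-rate samples bypass it entirely:
    data_pipe.put([1.2, 3.4, 5.6])     # source side
    samples = data_pipe.get()          # destination side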


Yeah, that was what I was doing, with the added difficulty that different processes could register for the same data queue (not simultaneously). Looking at my code now, I think I had a problem of too much separation of functionality. For example, almost sequential functionality, such as reading in a data set from the FPGA and calculating its average, was split across processes: the reader read the data and then passed it to a whole other process that basically existed to calculate the average and subsequently pass it to the UI. Needless complexity.

I'm busily trying to create a logical construct, but it's very much a learning process for me; I'm still struggling with the fundamental question of this thread, the problem of componentising code. I wish I could wrap my head around the whole "write a story where the nouns become objects and the actions become methods" approach, but it just doesn't gel with me at the moment.


Tell me again why you're developing all your own stuff instead of joining LapDog? :lol:

Short answer: started (and had the first app using it) before I knew LapDog existed. Keep meaning to see if I can reformat the message part into a LapDog extension. Or make the messenger part interoperate with your "Message Queue”.

  • 4 months later...

I'm curious if someone can speak to the benefits and drawbacks of having a case structure that takes a string from the message (as is shown in this example), versus something like the Actor Framework, which has a Do.vi that must be overridden for every message. I feel the Actor Framework method of message handling is safer because the programmer is required to define a Do method for each message, which defines how it is handled. In this example, the programmer still has the possibility of a typo in his message name or case structure, which would not be found until run time. The Actor Framework forces you to implement what to do when a specific message is received, and if this isn't defined it is caught at compile time.


Command pattern messages (Do.vi) are arguably a more OOPish design, are more efficient at runtime, and, yes, they eliminate the risk of typos in the case structure.

Name/Data messages (whether they are LVOOP or not) centralize the message handling code, are more familiar to most LV developers, support better natural decoupling, and (imo) require less work to refactor.

Theoretically command pattern messages are "safer." However, in practice I've found typos to be a non-issue. I write my message handlers so the default case handles any unexpected messages (such as from a typo) and generates an "Unhandled Message" notice for me. On rare occasions when I do make a mistake it is quickly discovered.

Neither is inherently better than the other. Name/data messages support my workflow and design requirements better. I'll use the command pattern only when I need it. Others swear by command pattern messages.
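
To make the tradeoff concrete, here's a rough side-by-side sketch in Python; the message names and the "Unhandled Message" default case are illustrative:

    import queue

    # Name/data style: one case structure; a default case catches anything
    # unexpected (including typos) at run time.
    def handle_name_data(msg_queue):
        while True:
            name, data = msg_queue.get()
            if name == "Exit":
                break
            elif name == "SetGain":
                pass                                  # ...apply data...
            else:
                print("Unhandled Message: " + name)   # the typo safety net

    # Command pattern style: every message is a class with its own do()
    # override, so a missing handler is a missing class/override and shows
    # up at edit/compile time instead.
    class Message:
        def do(self, state):
            pass                                      # parent do() is a no-op

    class SetGain(Message):
        def __init__(self, gain):
            self.gain = gain
        def do(self, state):
            state["gain"] = self.gain

    class Exit(Message):
        def do(self, state):
            state["running"] = False

    def handle_command(msg_queue):
        state = {"running": True, "gain": 1.0}
        while state["running"]:
            msg_queue.get().do(state)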


I'm curious if someone can speak to the benefits and drawbacks of having a case structure that takes a string from the message (as is shown in this example), versus something like the Actor Framework, which has a Do.vi that must be overridden for every message. I feel the Actor Framework method of message handling is safer because the programmer is required to define a Do method for each message, which defines how it is handled. In this example, the programmer still has the possibility of a typo in his message name or case structure, which would not be found until run time. The Actor Framework forces you to implement what to do when a specific message is received, and if this isn't defined it is caught at compile time.

I agree with everything Daklu has said (with the exception of efficiency ;) but wholeheartedly with typos). Since I am firmly in the case structure camp for this stuff I would also add (in addition to my earlier posts which detail other aspects):

Pros:

  • Much smaller code base.
  • Single point of maintenance.
  • Better genericism (what do I mean? Discuss :P ).
  • Looser coupling between messages and code (there is no code for messages).
  • Less replication (you don't need wizards or tools to save copying and pasting).
  • Better portability (messages can easily be transmitted via Ethernet, serial, etc. and interface to non-LabVIEW languages).

Cons

  • Typos (trivial)
  • State difficult to handle (e.g. timed/timeout responses).

Dynamic dispatch is a stealth case statement with handcuffs :D


Command pattern messages (Do.vi) are arguably a more OOPish design, are more efficient at runtime, and, yes, they eliminate the risk of typos in the case structure.

Name/Data messages (whether they are LVOOP or not) centralize the message handling code, are more familiar to most LV developers, support better natural decoupling, and (imo) require less work to refactor.

Theoretically command pattern messages are "safer." However, in practice I've found typos to be a non-issue. I write my message handlers so the default case handles any unexpected messages (such as from a typo) and generates an "Unhandled Message" notice for me. On rare occasions when I do make a mistake it is quickly discovered.

Neither is inherently better than the other. Name/data messages support my workflow and design requirements better. I'll use the command pattern only when I need it. Others swear by command pattern messages.

Thanks for the responses, guys. Hopefully this will never turn into the OOP version of the string vs. enum debate! I try to keep in mind that this is LabVIEW, not C++ or Java, so sometimes things will not conform perfectly to the generally accepted practices in those languages.


As I've studied this more and adapted it to meet my general needs, I've come up with another question. You mentioned providing type safety by passing the slave loop class to the event loop, which forces developers to write methods for the messages. But it seems that this same type safety doesn't hold true for messages sent back to the "master" because the slave loop can get the queue reference directly. Would it be safer to have a "master" class as well which holds the queue ref in its private data and then pass that class to the slave loop instead? Then it would force messages sent to the master to have methods defined.


Let me start by saying my development practices have changed a lot in the 1.5 years since I started this thread. I still use slave loops (although I don't call them that any more) and my apps are still based on hierarchical messaging, but I don't put nearly as much effort into enforcing type safety. I can easily add type safety if I think I need it (like if I'm exposing a public API to other developers or building an app that allows plug-ins), but most of the time it's just extra work that doesn't provide me with any benefit.

Would it be safer to have a "master" class as well which holds the queue ref in its private data and then pass that class to the slave loop instead? Then it would force messages sent to the master to have methods defined.

Yeah, it probably would be safer. If you want to spend the time wrapping all the master messages in methods you certainly can--I won't tell you you're wrong for doing so.

However, there are significant consequences of doing that. Having the slave loop call methods defined by the master loop makes the slave loop statically dependent on the master loop. Since the master already depends on the slave, you now have a circular dependency in your design. That's usually bad. Managing dependencies is the single most important thing I need to do to keep my apps sustainable. Unfortunately it's rarely discussed. Probably because it's not as sexy as actors and design patterns.

I am curious why you're so interested in safety. Type safety is fine, but it costs development time to implement. Furthermore, the more safety you build into your app the more time it will take you to change the app when requirements change. Too much type safety will soon have you pulling your hair out every time your customer says, "I was thinking it would be cool if we could..."

I agree with everything Daklu has said (with the exception of efficiency ;) )

I've never benchmarked examples. I got the runtime efficiency information from AQ and he's in a better position to know than I am.


I am curious why you're so interested in safety. Type safety is fine, but it costs development time to implement. Furthermore, the more safety you build into your app the more time it will take you to change the app when requirements change. Too much type safety will soon have you pulling your hair out every time your customer says, "I was thinking it would be cool if we could..."

It's not that I'm interested only in safety, although I see why it came across that way. As I transition to using and fully understanding LVOOP, I just want to make sure I cover all of my bases, and there are certain points I understand more than others. So, some I may hammer home until I understand the best way, then move on to the next thing that I'll repeatedly ask questions about until I understand it fully :P . I assume it will be like learning LabVIEW; I'll just do it, ask questions, refactor it, ask more questions, refactor it, and then one day I'll realize I no longer have to ask questions and instead can answer them.


As I transition to using and fully understanding LVOOP, I just want to make sure I cover all of my bases...

Fair enough. That's very similar to how I learned it too... start by learning the "academically correct" way to implement it, then once I understand that I can selectively implement only those aspects that are important for a given project.

