
Data allocation under OOP design patterns



Hi everyone,

I come from the Java/C# world, with some experience with older versions of LV.

In a redesign of an old, humongous LV legacy program I plan to use some design patterns, keeping in mind agility, minimal coupling, testability, upgradability and a simulation mode.

Yet, coming to implement it in LV, I wonder what the differences are between LV and a regular OOP language and what I should watch out for.

So, here are my questions:

1. After I create and close some instances of a child class, is the memory they held still allocated if their parent is still alive? When does LV decide to duplicate an object in memory, and how do I release it?

2. If I allocate the initial empty memory of an object and try to keep it on a single wire, where do I need to use the In Place Element structure? When should I use an SQL database? I use single-element queues to save the data from the hardware, but once it is in the logic, how should I handle it?

3. What is the best way to implement the design patterns in LV (the command design pattern in particular), and what are their pitfalls in LV? Working in a closed loop (live control feedback on the hardware), is there anything I should watch out for?

The design patterns I use are:

Singleton (main app and hardware)

Observer (main app for cross element communication)

Abstract factory (initialization of elements and even of different versions of an element)

Factory method (implementation of each element)

Proxy (hardware elements)

Command (requests and operations between elements)

Chain of responsibility (command organizing with high/low priority for hardware requests)

Strategy (common algorithms with slightly different implementations)

Template method (implementation of algorithms)

Flyweight (passing partial data to avoid data and CPU performance issues).

Thanks in advance,

Dror.


Yet, coming to implement it in LV, I wonder what the differences are between LV and a regular OOP language and what I should watch out for.

First you need to understand the basic by-val model. It behaves like a normal cluster in LV.

Using a by-val implementation, you don't need to take care of creating and destroying objects at all.

There is also the possibility to implement a class by-ref (using a Data Value Reference plus an In Place Element structure), in which case you need to write a Create and a Destroy method (and call them).

1. After I create and close some instances of a child class, is the memory they held still allocated if their parent is still alive? When does LV decide to duplicate an object in memory, and how do I release it?

Using by-val, everything is done by LV.

Using by-ref, use the DVR functions in the Create and Destroy methods for memory management. Because a by-ref class is passed by reference, branching the wire just creates a copy of the refnum. Calling the Destroy method on any branch invalidates the refnum, so the other branches no longer reference the object.
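Since you come from Java, here is a rough text-language sketch of that behaviour (my own analogy with made-up names, not an NI API): an object with explicit create/destroy, where "branching the wire" only copies the reference, so destroying through one branch invalidates the others too.

```java
// Hypothetical sketch only: a "by-ref" style object with explicit create/destroy,
// loosely analogous to a DVR-based LVOOP class. All names are made up.
import java.util.concurrent.atomic.AtomicReference;

public class ByRefDemo {
    static class DeviceState {                         // the "private data cluster"
        final AtomicReference<double[]> data = new AtomicReference<>();

        static DeviceState create() {                  // analogous to a Create method
            DeviceState s = new DeviceState();
            s.data.set(new double[16]);
            return s;
        }

        void destroy() {                               // analogous to a Destroy method
            data.set(null);                            // invalidates every "branch"
        }

        boolean isValid() { return data.get() != null; }
    }

    public static void main(String[] args) {
        DeviceState a = DeviceState.create();
        DeviceState b = a;                 // "branching the wire": copies the reference only
        a.destroy();                       // destroy through one branch...
        System.out.println(b.isValid());   // ...and the other branch sees it too: false
    }
}
```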

2. If I allocate the initial empty memory of an object and try to keep it on a single wire, where do I need to use the In Place Element structure? When should I use an SQL database? I use single-element queues to save the data from the hardware, but once it is in the logic, how should I handle it?

The IPE is to be used with a DVR. With a single-element queue (SEQ), locking is done via the empty state of the queue: you can't dequeue the object again before it's checked back in.
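A minimal text-language sketch of that SEQ-style locking idea, using Java's ArrayBlockingQueue with capacity 1 as a stand-in (an assumed analogy, not how LabVIEW implements queues): while one worker holds the element, the queue is empty, so any other worker blocks until the element is put back.

```java
// Sketch of single-element-queue style locking, assuming ArrayBlockingQueue(1)
// as a stand-in for a LabVIEW SEQ.
import java.util.concurrent.ArrayBlockingQueue;

public class SeqLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<int[]> seq = new ArrayBlockingQueue<>(1);
        seq.put(new int[100]);            // "check in" the shared data once

        Runnable worker = () -> {
            try {
                int[] data = seq.take();  // dequeue: queue is now empty, others block
                data[0]++;                // exclusive access while we hold the element
                seq.put(data);            // check it back in so the next taker proceeds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(seq.take()[0]); // prints 2: both updates were serialized
    }
}
```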

3. What is the best way to implement the design patterns in LV (the command design pattern in particular), and what are their pitfalls in LV? Working in a closed loop (live control feedback on the hardware), is there anything I should watch out for?

For the command design pattern, search LAVA for a post by Paul at Lowell with a larger document about that pattern.
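In case a text-language refresher helps while you track down that document, here is a bare-bones command pattern in Java (the generic textbook form, not Paul's LabVIEW implementation); in LVOOP the Command interface becomes a parent class with a dynamic-dispatch Execute method, and the concrete command names below are purely illustrative.

```java
// Generic command pattern sketch (textbook form, hypothetical command names).
import java.util.ArrayDeque;
import java.util.Queue;

interface Command {
    void execute();
}

class MoveAxisCommand implements Command {          // hypothetical concrete command
    private final double target;
    MoveAxisCommand(double target) { this.target = target; }
    public void execute() { System.out.println("moving axis to " + target); }
}

class ReadSensorCommand implements Command {        // another hypothetical command
    public void execute() { System.out.println("reading sensor"); }
}

public class CommandDemo {
    public static void main(String[] args) {
        Queue<Command> queue = new ArrayDeque<>();  // the "invoker" only sees Command
        queue.add(new MoveAxisCommand(10.5));
        queue.add(new ReadSensorCommand());
        while (!queue.isEmpty()) {
            queue.poll().execute();                 // receiver details stay hidden
        }
    }
}
```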

For all other design patterns, do a search on LAVA and NI.com. Not every one of them has been demonstrated yet, and some don't make sense in LVOOP. A big issue when translating the design patterns is the lack of abstract classes and interfaces in LVOOP.

Felix


...For all other design patterns, do a search on LAVA and NI.com. Not every one of them has been demonstrated yet, and some don't make sense in LVOOP. A big issue when translating the design patterns is the lack of abstract classes and interfaces in LVOOP.

Felix

As of LabVIEW 2009 (at least, that's the version where I use this feature), you can declare methods that must be overridden by the child, and whether or not the parent must be called in the child's implementation. So there's no "abstract" or "interface" keyword, but the effect is the same. You still have the limitation that a class can't implement multiple interfaces, since the "interface" is still a LabVIEW class and there's no multiple inheritance.
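For comparison, here's roughly what that maps to in a text language (my own illustration, not from LabVIEW's documentation): "must override" corresponds to an abstract method, and "must call parent" has no single Java keyword but is often approximated with a final template-style wrapper.

```java
// Rough Java equivalent of "must override" / "must call parent" (illustrative only).
abstract class Instrument {
    abstract void initialize();              // "must override": children are forced to implement

    final void startUp() {                   // wrapper guarantees the parent logic always runs
        System.out.println("common setup");
        initialize();
    }
}

class Oscilloscope extends Instrument {
    @Override
    void initialize() { System.out.println("configuring scope channels"); }
}

public class AbstractDemo {
    public static void main(String[] args) {
        new Oscilloscope().startUp();        // prints the common setup, then the child's part
    }
}
```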

Mark


Thanks for the quick reply!

Using by-val implementation, you don't need to take care about creating and destroying objects at all.

As I understand it, the by-val implementation duplicates the allocated memory every time the wire is split (including when just a property node is created), inside loops when using shift registers, and when I pass it to another VI (are those all the cases?).

A. When I create a child in OOP, and it is a by-value design, is the memory connected to the parent, so that the memory stays allocated as long as the parent is alive? (Do you think I should use by-reference or by-value when I use design patterns?)

B. After allocating the memory for an array, for example, and then using a node like Replace Array Subset, is the memory allocated again to contain the indices?

C. If, for example, I add 1 to a variable, does it lock the variable's memory? Does the IPE lock the memory? When I pass messages between threads, is the memory locked while it is being synchronized?

I already know the LV basics; I need to know more details about what goes on behind the scenes, since letting LV automatically handle a bad design will cause memory problems in the first two cases and a CPU slowdown in the third.

Regarding Paul at Lowell's command PDF: instead of giving each task its own thread (which would waste a lot of CPU time, since some threads won't be working all the time), I want to create the same number of threads but let each of them work on a different task each time, so all the threads are working all the time. Is that a logical line of thought in LV? How should I implement such a design pattern?

P.S. - is there a way to enter variables into an enum instead of constants?


As I understand it, the by-val implementation duplicates the allocated memory every time the wire is split (including when just a property node is created), inside loops when using shift registers, and when I pass it to another VI (are those all the cases?).

This is a complicated topic, not only in LVOOP but in LV in general. The compiler performs a lot of tricks so that it only copies the data when necessary. I'd guess you get even better performance using by-val instead of by-ref (each time you branch your data out of the IPE structure, it's copied as well).

Don't worry about this too much. LV is fast.

A. When I create a child in OOP, and it is a by-value design, is the memory connected to the parent, so that the memory stays allocated as long as the parent is alive? (Do you think I should use by-reference or by-value when I use design patterns?)

You always need the parent's private data cluster in memory; any OOP language must do this, I guess. You can always access the properties of a parent from its child (indirectly at least).

The choice between by-val and by-ref for the design patterns is: both. Conventional OOP is always by-ref, so it translates more directly into by-ref LVOOP. But things can simplify greatly when using by-val. That's the great fun of LVOOP: you have both choices and can even mix them. Which is 'better' could lead to lengthy debates with no clear outcome.

B. After allocating the memory for an array, for example, and then using a node like Replace Array Subset, is the memory allocated again to contain the indices?

C. If, for example, I add 1 to a variable, does it lock the variable's memory? Does the IPE lock the memory? When I pass messages between threads, is the memory locked while it is being synchronized?

Show us code. Benchmark it yourself.

In most cases we never think about problems with memory allocation or multi-threading when coding in LV.

I already know the LV basics; I need to know more details about what goes on behind the scenes, since letting LV automatically handle a bad design will cause memory problems in the first two cases and a CPU slowdown in the third.

LV does clever things behind the scenes, so most of the time we don't care. On the downside, you probably need to read 10k posts on the forums to get a glimpse of which performance optimizations are happening.

Regarding Paul at Lowell's command PDF: instead of giving each task its own thread (which would waste a lot of CPU time, since some threads won't be working all the time), I want to create the same number of threads but let each of them work on a different task each time, so all the threads are working all the time. Is that a logical line of thought in LV? How should I implement such a design pattern?

Doesn't make sense. An inactive thread isn't consuming any CPU time. That's what the guys who wrote your OS were paid for.

P.S. - is there a way to enter variables into an enum instead of constants?

Now you lost me completely.

Post a screenshot of what you want to achieve. Or just write it in your favorite text language; I'll be able to translate.

I fear you are coming far too much from the text-based paradigm. I'll do my best to re-educate you toward dataflow thinking.

The only advice I can really give: post a lot in the forums (LAVA and NI) as you go, to get a decent coding standard from the beginning. Asking for code reviews will always bring you some harsh critiques.

Felix


Just to add my 2 cents' worth (and that's about what it's worth).

Thanks for the quick reply!

As I understand it, the by-val implementation duplicates the allocated memory every time the wire is split (including when just a property node is created), inside loops when using shift registers, and when I pass it to another VI (are those all the cases?).

As Felix pointed out, this gets complicated: LabVIEW doesn't copy data any more often than it thinks it has to. For instance, the standard LVOOP VI (a LabVIEW method) has input and output terminals, but whether or not LabVIEW creates a copy of the data depends on what happens inside the VI. For example, if the VI indexes into a class member that is an array, gets an element, operates on that element and does not write back to the array, no copy is made. It just looks like dereferencing a pointer. So no copy is made of the class instance and no copy is made inside the array; everything happens "in place" without any extra effort on the part of the programmer. There's lots more to know, but the simple answer is that until you start operating on really large data sets, LabVIEW's automatic memory management works well without any help. If you do use large data sets, search for the white papers online about managing large data sets in LabVIEW.

A. When I create a child in OOP, and it is a by-value design, is the memory connected to the parent, so that the memory stays allocated as long as the parent is alive? (Do you think I should use by-reference or by-value when I use design patterns?)

I'm guessing here since I'm not sure I understand the question, but I think the answer is that any particular wire at any point in time will effectively allocate enough memory for the instance of the current type. The wire type determines what type of object instance can be carried. That instance will include the parent data (if there is any) and when that wire has no more data sinks, LabVIEW will know it can deallocate all of that memory for that instance.

B. After allocating the memory for an array, for example, and then using a node like Replace Array Subset, is the memory allocated again to contain the indices?

Nope - a simple LabVIEW array doesn't contain indices. It just contains the size of each array dimension and the data. So if you do something that could be implemented as pointer manipulation (like replacing array elements) no reallocation is required.

See http://zone.ni.com/r...data_in_memory/

C. If, for example, I add 1 to a variable, does it lock the variable's memory? Does the IPE lock the memory? When I pass messages between threads, is the memory locked while it is being synchronized?

I already know the LV basics; I need to know more details about what goes on behind the scenes, since letting LV automatically handle a bad design will cause memory problems in the first two cases and a CPU slowdown in the third.

You mean something like "x++"? To do something similar in LabVIEW you'd use a shift register and an increment function. I can't imagine this operation not being atomic.

Regarding Paul at Lowell's command PDF: instead of giving each task its own thread (which would waste a lot of CPU time, since some threads won't be working all the time), I want to create the same number of threads but let each of them work on a different task each time, so all the threads are working all the time. Is that a logical line of thought in LV? How should I implement such a design pattern?

I've never seen anything about thread pools in LabVIEW. Also, LabVIEW threads are part of the LabVIEW execution engine and the LabVIEW scheduler does all the heavy lifting. This is another complicated subject that the developer typically can ignore as the execution system will allocate resources.

http://www-w2k.gsi.d...tithreading.htm

http://forums.ni.com...-work/m-p/73733

P.S. - is there a way to enter variables into an enum instead of constants?

I'm with Felix - I don't get this

Mark


A. When I create a child in OOP, and it is a by-value design, is the memory connected to the parent, so that the memory stays allocated as long as the parent is alive?

To expand a bit on what Felix and Mark said,

The question doesn't quite make sense in the LabVIEW world. When a child object is instantiated at run time, it is an independent object with internal memory space for all the data associated with the parent class. It is not "connected" to any other objects, nor does its lifespan depend on other objects. The only way to deallocate the memory associated with the parent data is to destroy the child object. This is true regardless of whether you're using by-val or by-ref classes.


OK guys, I'll try to close my eyes and trust the one true all-knowing power :)

I. Regarding threads, when is a thread released? Usually a thread acquires resources, so when it is inactive those resources are wasted, and that might slow down the design.

II. As for the enum conundrum: I want to reduce the coupling in the design and increase upgradability. In an enum I enter a list of strings, for example, and decide which one to use later. It is a bit like a case structure. Yet if I want to add an additional case or change a string in the enum, I have to change it manually throughout my entire design.

There is the option to turn it into a typedef, which I prefer to avoid.

So, I wonder whether you can enter a string variable, for example, into the enum instead of a constant string. That way I could replace the keyword automatically and even create a dynamic program with a context-dependent enum instead of endless lists. The same goes for entering a variable as the name of a case in a case structure. I guess the answer is that I can't do that.

III. To sum it up: you guys are saying that if I run a program with a loop containing a VI that uses an instance of a child, then after a million iterations I'll still have as much free memory as in the first iteration, under both by-ref and by-val implementations?


I. Regarding threads, when is a thread released? Usually a thread acquires resources, so when it is inactive those resources are wasted, and that might slow down the design.

In most implementations you will write, you aren't directly interfering with your OS's threads. LabVIEW acquires a decent set of threads from the OS; the compiler then schedules your code onto those threads with its exact knowledge of your architecture.

This is possible because of the unique internal memory mapping of a graphical programming language.

II. As for the enum conundrum: I want to reduce the coupling in the design and increase upgradability. In an enum I enter a list of strings, for example, and decide which one to use later. It is a bit like a case structure. Yet if I want to add an additional case or change a string in the enum, I have to change it manually throughout my entire design.

There is the option to turn it into a typedef, which I prefer to avoid.

Why? I know of no reason not to typedef an enum.

So, I wonder whether you can enter a string variable, for example, into the enum instead of a constant string. That way I could replace the keyword automatically and even create a dynamic program with a context-dependent enum instead of endless lists. The same goes for entering a variable as the name of a case in a case structure. I guess the answer is that I can't do that.

An enum is a compile-time type. This gives you the advantage of compile-time errors instead of run-time errors if something is wrong. On the other hand, it means no dynamic enums.

But when using LVOOP, instead of an enum you can use dynamic dispatch. As the name says, it's dynamic.
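To make that concrete in text-language terms (my own sketch with hypothetical class names): instead of an enum driving a case structure, each behaviour becomes a child class, and the "case selection" happens through dynamic dispatch, so adding a case means adding a class instead of editing every case structure in the design.

```java
// Sketch: replacing an enum + case structure with dynamic dispatch
// (hypothetical class names, for illustration only).
abstract class Filter {
    abstract double[] apply(double[] samples);   // dynamic-dispatch method
}

class LowPassFilter extends Filter {
    double[] apply(double[] samples) { return samples; }   // real filtering omitted
}

class MedianFilter extends Filter {
    double[] apply(double[] samples) { return samples; }   // real filtering omitted
}

public class DispatchDemo {
    // The caller never switches on an enum; the object's runtime type decides.
    static double[] process(Filter f, double[] samples) {
        return f.apply(samples);
    }

    public static void main(String[] args) {
        double[] data = {1.0, 2.0, 3.0};
        process(new LowPassFilter(), data);   // adding a new case = adding a new class
        process(new MedianFilter(), data);
    }
}
```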

III. To sum it up: you guys are saying that if I run a program with a loop containing a VI that uses an instance of a child, then after a million iterations I'll still have as much free memory as in the first iteration, under both by-ref and by-val implementations?

Yes, because the number of wires in your code didn't change (it can't change during execution). In a simplified view, each wire segment is a single instance of your class data in memory.

Your questions just seem to reflect the worries of a C programmer. All these threading and memory issues are handled by LabVIEW (which has done a brilliant job in this respect for ages), and in most situations we never have to take care of them.

Felix


I. Regarding threads, when is a thread released? Usually a thread acquires resources, so when it is inactive those resources are wasted, and that might slow down the design.

Just to clarify this a bit; it might make things clearer for you.

LabVIEW allocates a fixed number of threads when it starts the environment. As such, you (the programmer) neither create nor destroy threads; LabVIEW uses this "pool" to schedule tasks from your program. Again, which tasks are scheduled is up to LabVIEW, although you have a small amount of control via the execution subsystems.

If you are really worried about the mechanics of threading in LV, then I would suggest reading Multi-threading in LabVIEW. But for most LabVIEW programmers, it's not a consideration, any more than indexing past the end of an array is (you can't).
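If a text-language analogy helps (my sketch, not LabVIEW's actual internals): the behaviour is roughly like a fixed-size Java thread pool that you never resize, onto which many tasks are scheduled; idle workers simply wait and cost essentially nothing.

```java
// Rough analogy: a fixed pool of worker threads with tasks scheduled onto it.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);   // fixed, pre-allocated pool
        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.submit(() -> System.out.println("task " + task +
                    " on " + Thread.currentThread().getName()));  // 20 tasks share 4 threads
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```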

  • 4 months later...

Threadconfig will let you specify the number of threads allocated to each execution system. See this link.

http://forums.ni.com.../416234#M205616

Ben

Hey Ben,

I stopped following this thread that I myself opened and missed your great post.

Since I posted it, I came across a related "bug". For some unknown reason my app hangs for a few seconds from time to time, and I guess it is because some big while loops run simultaneously and prevent the user event from being handled instantaneously.

Setting each loop to peak at a different time might work, yet giving each a different thread won't help me much, I guess, especially since there are more than a few of those big while loops.

After searching around a bit, I came to think that the best way to handle user events is to pause any other background action until this first-priority event is dealt with (I'm using the UI Framework).

I'll leave the thread handling to LV in most cases.

Thanks again, everyone; I've finally started enjoying working in LV.

D.

