
My clever design...is NOT clever


Recommended Posts

Hoping to get some insight from others; LAVA-ers have repeatedly saved the day for me.

BACKGROUND:

I spent time switching over a working program to a new architecture as it was becoming difficult to manage its scalability in its current state.

However, with my new, more scalable architecture I must be missing a fundamental LV programming philosophy -->

LabVIEW's memory usage keeps increasing until it crashes (memory climbs, and then CPU usage goes to ~100%).

I have used Tools -> Profile -> Performance and Memory and Tools -> Profile -> Show Buffer Allocations. I can see the VIs that are taking up more and more memory, but I am stumped on how to fix this problem. There was significant effort in my first redesign, and I was hoping I wouldn't have to rework this architecture.

To get right to the point, there are 2 basic designs that I thought were "clever" but must not be.

The first is my basic SubVI structure. I created an array of cluster data where each array element is an object for each UUT I am testing (running parallel UUTs). I am using FOR loops to read/write the data of each "object".

Note: I am not using LV's OOP features, just calling this cluster an object since it ties to a UUT.

1) Basic SubVI.

post-15321-087544400%201286230047.png
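(For anyone who prefers text, here is a very rough Python sketch of that data model -- not my actual LabVIEW code, and all the names are invented -- just the idea of an array of per-UUT clusters that gets read/written element-by-element in a FOR loop.)

# Rough sketch only: an "array of UUT clusters", updated in a loop.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UUT:                        # one "cluster" per unit under test
    serial_buffer: str = ""
    readings: List[float] = field(default_factory=list)
    state: str = "idle"

def update_all(uuts: List[UUT]) -> List[UUT]:
    # FOR-loop style pass over every "object"
    for uut in uuts:
        uut.state = "measuring"
    return uuts

uuts = [UUT() for _ in range(4)]  # e.g. four parallel UUTs
uuts = update_all(uuts)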

2) The example subVI above would be called in a "sub-state machine" <below>. I used state machines with functional globals, so that after each substate is executed the subVI is exited (to return to the main State Machine), and execution later comes back to the sub-state that is held in the functional global.

post-15321-025487400%201286230933.png

I am passing a lot of data (multiple clusters with arrays).

I am guessing I have designed something that is fundamentally flawed. Can someone break my heart and tell me why?

***Right before posting this I noticed that the "sub-state machine" main data cluster doesn't need a shift-register terminal since this data is being passed in and out each time this subVI is run. Does this have an impact on memory?***

Thanks!


Link to comment

I am passing a lot of data (multiple clusters with arrays).

That could be an issue: if you resize the array, the complete super-cluster must be reallocated/copied.

But I would suggest digging further. Try to get hold of the code section where LV crashes. If your project is big, you could implement a simple logger VI that just writes its call chain to a file. Then you know up to which point the code ran fine.
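Roughly, the logger idea looks like this in a text language (a Python sketch of the concept only, not LabVIEW; the file name and function names are placeholders):

# Breadcrumb logger: append the current call chain to a file each time it
# is called, so the log shows how far execution got before the crash.
import inspect

LOG_PATH = "call_chain.log"       # placeholder path

def log_call_chain():
    # Innermost caller first, similar in spirit to LabVIEW's Call Chain.
    chain = [frame.function for frame in inspect.stack()[1:]]
    with open(LOG_PATH, "a") as f:
        f.write(" <- ".join(chain) + "\n")

def gather_serial_data():
    log_call_chain()              # breadcrumb at the start of the suspect step
    # ... real work would go here ...

gather_serial_data()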

Felix

Link to comment

Felix,

Thanks for the recommendation.

Because of how my program logic operates, I believe I know what code is running when LV gets stuck. Basically it is one (sub-)state machine that is run pretty continuously (gathering serial data). Other subVIs are run only periodically (change mux channel, increment counter, etc).

What is strange is that there seems to be a tipping point in this code. Memory slowly increases (I'm not too concerned about this), but at one point in my program's execution the memory usage will almost double or triple and cause LV to stop responding. However, I am not changing what code I am executing at this point (that is, the subVI running during the crash has run many times before without this memory expansion).

:frusty:

That could be an issue: if you resize the array, the complete super-cluster must be reallocated/copied.

But I would suggest digging further. Try to get hold of the code section where LV crashes. If your project is big, you could implement a simple logger VI that just writes its call chain to a file. Then you know up to which point the code ran fine.

Felix

Link to comment

What is strange is that there seems to be a tipping point in this code. Memory slowly increases (I'm not too concerned about this), but at one point in my program's execution the memory usage will almost double or triple and cause LV to stop responding. However, I am not changing what code I am executing at this point (that is, the subVI running during the crash has run many times before without this memory expansion).

:frusty:

To me that sounds like an array that is growing and needs to allocate more memory. LV allocates a chunk of memory for the array when it is created; if that memory space is about to run out, it allocates another chunk of memory. Have you tried using Initialize Array to define the array size from the start?
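As a loose text-language analogy (Python here, not LabVIEW), the difference looks like this:

N = 1_000_000

# Growing: the underlying buffer gets reallocated and copied as it fills up.
grown = []
for i in range(N):
    grown.append(i)

# Preallocated (the Initialize Array idea): one allocation up front,
# elements are then written in place.
prealloc = [0] * N
for i in range(N):
    prealloc[i] = i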

//Martin

Link to comment

Because of how my program logic operates, I believe I know what code is running when LV gets stuck. Basically it is one (sub-)state machine that is run pretty continuously (gathering serial data). Other subVIs are run only periodically (change mux channel, increment counter, etc).

Something to try might be to replace that one VI that is gathering serial data with a dummy VI that just outputs a string of the same size you are expecting. See if that still produces a memory problem.
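As a hedged sketch of what I mean (a Python stand-in, not LabVIEW; the size is just an example):

EXPECTED_CHARS = 64               # whatever reply length you normally see

def read_serial_dummy() -> str:
    # Same-size payload as the real instrument, but no driver involved.
    return "x" * EXPECTED_CHARS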

My most confusing memory-loss issue in the past was with a driver that wasn't playing well with a new computer, not a code problem. If there's no other smoking gun, maybe you have something similar.

Link to comment

Hoping to get some insight from others; LAVA-ers have repeatedly saved the day for me.

BACKGROUND:

I spent time switching over a working program to a new architecture as it was becoming difficult to manage its scalability in its current state.

However, with my new, more scalable architecture I must be missing a fundamental LV programming philosophy -->

LabVIEW's memory usage keeps increasing until it crashes (memory climbs, and then CPU usage goes to ~100%).

I have used Tools -> Profile -> Performance and Memory and Tools -> Profile -> Show Buffer Allocations. I can see the VIs that are taking up more and more memory, but I am stumped on how to fix this problem. There was significant effort in my first redesign, and I was hoping I wouldn't have to rework this architecture.

To get right to the point, there are 2 basic designs that I thought were "clever" but must not be.

The first is my basic SubVI structure. I created an array of cluster data where each array element is an object for each UUT I am testing (running parallel UUTs). I am using FOR loops to read/write the data of each "object".

Note: I am not using LV's OOP features, just calling this cluster an object since it ties to a UUT.

1) Basic SubVI.

post-15321-087544400%201286230047.png

2) The example subVI above would be called in a "sub-state machine" <below>. I used state machines with functional globals, so that after each substate is executed the subVI is exited (to return to the main State Machine), and execution later comes back to the sub-state that is held in the functional global.

post-15321-025487400%201286230933.png

I am passing a lot of data (multiple clusters with arrays).

I am guessing I have designed something that is fundamentally flawed. Can someone break my heart and tell me why?

***Right before posting this I noticed that the "sub-state machine" main data cluster doesn't need a shift-register terminal since this data is being passed in and out each time this subVI is run. Does this have an impact on memory?***

Thanks!

That code construct resembles something that Shane and I looked into a while back.

Check your buffer allocations.

In some cases a strategic use of sequence structures will help LV figure out that it can re-use the buffers.

Q:

Is the code you show the whole sub-VI, or is that code inside a case structure?

Ben

Link to comment

That code construct resembles something that Shane and I looked into a while back.

Check your buffer allocations.

In some cases a strategic use of sequence structures will help LV figure out that it can re-use the buffers.

Q:

Is the code you show the whole sub-VI, or is that code inside a case structure?

Ben

Ben,

Thanks for the reply. The original .png of the code was a sample VI that I made to simplify the question.

But actually the subVI in question is three case structures deep (inserted below).

Do you think that is the issue?

I am not sure how to change my code for fewer buffer allocations without some major re-design (which may need to happen anyway).

post-15321-095417100%201286297385.png

I should also note, in addition to the original post, that the PC this code runs on has 512 MB of RAM. I plan to test the same code with more RAM installed.

Will update if this fixes my memory problem.

I hate to use hardware to forgive a design flaw (if that is what I have). But it appears it is a balancing act --> paying some memory overhead for a scalable design, since my previous working program was much harder to modify/understand (but didn't crash because of memory issues).


Link to comment

That code construct resembles something that Shane and I looked into a while back.

Check your buffer allocations.

In some cases a strategic use of sequence structures will help LV figure out that it can re-use the buffers.

Q:

Is the code you show the whole sub-VI, or is that code inside a case structure?

Ben

URL for Dark-side discussion:

http://forums.ni.com/t5/LabVIEW/cluster-array-performance-penalty/m-p/481919#M231529

Ben

Link to comment

AND...

Per these replies --> it appears that LV 8.5 and higher addressed this issue with the in-place structure.

Now I need to convince my employer to leave 2007 (8.2.1) behind!

Hi,

Even if you convert to a newer LabVIEW version, I think you will still have to do some major redesign.

In the picture posted in response to Ben's question, you are entering the loop with both the complete cluster as well as an individual cluster element.

Doing so should force LabVIEW into copying data.

/J

Link to comment

Hi,

Even if you convert to a newer LabVIEW version, I think you will still have to do some major redesign.

In the picture posted in response to Ben's question, you are entering the loop with both the complete cluster as well as an individual cluster element.

Doing so should force LabVIEW into copying data.

/J

Depends...

on what is in that case structure.

With a helper sequence structure we can help LV "see" that the data buffer can be used in place, so if the code is scheduled to do the operations in the correct order, the copy is not required.
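As a loose analogy in a text language (NumPy here, not LabVIEW, and not the sequence structure itself -- just the reuse-versus-copy idea):

import numpy as np

data = np.zeros(1_000_000)

copied = data * 2   # copy: a brand-new buffer is allocated for the result
data *= 2           # in place: the existing buffer is reused, no new allocation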

Ben

Link to comment

With a helper sequence structure we can help LV "see" that the data buffer can be used in place, so if the code is scheduled to do the operations in the correct order, the copy is not required.

I'm not picking up what you're putting down about this helper sequence structure. Care to demonstrate?

Link to comment

Depends...

on what is in that case structure.

With a helper sequence structure we can help LV "see" that the data buffer can be used in place, so if the code is scheduled to do the operations in the correct order, the copy is not required.

Ben

I'm not picking up what you're putting down about this helper sequence structure. Care to demonstrate?

Yes, I am still stuck on this.

An upgraded PC and an upgrade to LabVIEW 8.5 haven't helped. LabVIEW just allocates more memory before prompting with an error message --> an error message in 8.5.1 versus an actual crash in 8.2.1.

But I am struggling with where and how to implement the in-place structure to fix this.

Also, I am not sure how to benefit from the Show Buffer Allocations tool.

Hi,

Even if you convert to a newer LabVIEW version, I think you will still have to do some major redesign.

In the picture posted in response to Ben's question, you are entering the loop with both the complete cluster as well as an individual cluster element.

Doing so should force LabVIEW into copying data.

/J

If this forces LabVIEW into copying data I am :frusty:, because it seems like this is an intuitive way to implement test "objects".

Without sending the complete <main data> cluster, along with the individual <object> cluster elements (in certain subVIs), and the individual <object> array nested within the <main data> cluster, there doesn't seem to be a way to keep my design neat. By neat I mean sending one cluster wire <main data> to my most top-level subVIs. Another, better design approach doesn't jump out at me. I felt like I built this design using LabVIEW "best practices" and programming techniques, but clearly I must still be missing something.

Does the code presented clearly depict a memory problem?

Is it possible I am looking at the wrong section that is causing my program to crash?

Other parts of my code use several VI Server references, which I know isn't best practice or desired. I checked to make sure I am closing all of these references, but could this be an issue? Or is the consensus that my nested object architecture is killing this program?

Thanks for everyone's input and insight. This problem is frustrating, but has been a great learning experience so far.

-pete

Link to comment

Now might be a good time to download an eval copy of the Execution Trace Toolkit - it's not without its own issues, but it really might be helpful in this instance to see where the memory is actually getting allocated at run time.

IIRC, you need at least 8.6 for that toolkit, but I could be mis-remembering. Either way, I'll vouch that it has a learning curve of its own, but once you get the hang of it, it can be a very valuable asset when you push the limits of LV.

Link to comment

Sorry about the hit-and-miss replies... many distractions.

The Show Buffer Allocations view will let you compare various code constructs to find one that minimizes the number of buffer copies.

What is in the Other case in the original image?

That thread I linked from the dark-side included annotations about the buffers. The same approach could/should help.

Re: seq structure

Somewhere in the old LAVA I responded to a thread from a user with a pig icon (?) about memory. In that case the array size was forcing a "buffer copy on wire branch", and a sequence structure helped LV figure out that it could check the size THEN later do something in the same buffer.

I hope that helps,

Ben

Link to comment

Sorry about the hit-and-miss replies... many distractions.

The Show Buffer Allocations view will let you compare various code constructs to find one that minimizes the number of buffer copies.

What is in the Other case in the original image?

That thread I linked from the dark-side included annotations about the buffers. The same approach could/should help.

Re: seq structure

Somewhere in the old LAVA I responded to a thread from a user with a pig icon (?) about memory. In that case the array size was forcing a "buffer copy on wire branch", and a sequence structure helped LV figure out that it could check the size THEN later do something in the same buffer.

I hope that helps,

Ben

Found that old thread!

http://lavag.org/topic/4656-memory-allocation/page__p__25647__hl__%22ben%22+inplace+%22array+size%22__fromsearch__1#entry25647

Ben

Link to comment
