
How to make LVOOP faster?



I found the example located in

C:\Program Files\National Instruments\LabVIEW 8.5\examples\lvoop\BoardTesting

This example claims the work being done using the traditional Task approach is the same as in the OO version. While studying these examples I sensed the OO version was taking longer, so I ran my standard benchmark code (note: I added an output terminal for the results to the icon of both versions), and it turns out that YES, the LVOOP version is slower!

[Attached image: benchmark results (post-29-1210878876.png)]

Watching the Task Manager shows the LVOOP version is spending much more time idle (not using all of the CPU; waiting on a mutex, maybe during dynamic dispatch?).
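To make the dynamic dispatch hypothesis concrete, here is a minimal C++ micro-benchmark (an analogy only; LVOOP dispatch is not literally a C++ vtable, and all of the names below are invented for illustration) comparing a statically bound call against a dynamically dispatched one:

```cpp
#include <chrono>
#include <cstdio>
#include <memory>

// Two region checkers doing comparable work.
struct Region {
    virtual ~Region() = default;
    virtual bool check(int sample) const = 0;
};
struct GlueRegion : Region {
    bool check(int sample) const override { return sample % 7 == 0; }
};
struct SolderRegion : Region {
    bool check(int sample) const override { return sample % 5 == 0; }
};

static bool checkDirect(int sample) { return sample % 7 == 0; }

int main(int argc, char**) {
    const int kIterations = 50'000'000;
    // Pick the concrete type at run time so the compiler
    // cannot devirtualize the calls below.
    std::unique_ptr<Region> r;
    if (argc > 1) r = std::make_unique<SolderRegion>();
    else          r = std::make_unique<GlueRegion>();

    long hits = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; ++i) hits += checkDirect(i);  // static call
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; ++i) hits += r->check(i);     // dynamic dispatch
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("direct : %lld ms\n",
        (long long)std::chrono::duration_cast<ms>(t1 - t0).count());
    std::printf("virtual: %lld ms\n",
        (long long)std::chrono::duration_cast<ms>(t2 - t1).count());
    std::printf("(hits=%ld, printed so the loops aren't optimized away)\n", hits);
}
```

On most machines the per-call overhead of the indirect call is small; if the OO version is 40% slower, dispatch alone is unlikely to be the whole story.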

So, are there any ideas on how to make the LVOOP version as fast as the Task version?

Thank you!

Ben

PS: The example I showed above will demonstrate a memory leak (I think). Is this a BUG in LVOOP that I don't know about?


QUOTE (neB @ May 15 2008, 02:18 PM)

This example claims the work being done using the traditional Task approach is the same as in the OO version.

Actually, there isn't any claim that these two implementations are doing the same work. They are doing the same job. That example is a highlight of two different architectures. There are several performance accelerations that are possible in both implementations, but those optimizations would obscure the behavior of the code. The efficiency of the implementations is not something I've ever dug into.

I'll take a look at the memory leak claim sometime next week.

[LATER] Ok. I couldn't resist digging into the memory leak tonight.

I'm not seeing it. I tried both LV8.2 and 8.5. But I am seeing something that might make you think you're seeing a memory leak.

Run the VI once. It allocates memory. When it finishes running, it deallocates some memory, but does not return to its initial amount.

Run the VI again. It allocates more memory than it did the first time. Then it deallocates some.

Repeat a few times.

After a few runs, you'll reach an equilibrium state where the amount it starts with is the same as the amount it finishes with.

So, no memory leak. But the curious behavior deserves some explanation. Here's what I'm fairly certain is happening.

LV classes save memory by having only one copy of the default value of the class in memory. Any instance of the class that is at the default value just shares that one copy. So when a terminal gets a non-default value, we allocate new space in memory to hold that value. We don't bother deallocating the terminal once we have gone to the trouble of allocating it (if we paid for the allocation, we might need it again the next time the subVI is called).

Since this VI has random input, not every code path is exercised on every execution, and so on successive executions there will be some terminals that get allocated for the first time.

Eventually, all the code paths are allocated, and we reach equilibrium.
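As a rough illustration of that lazy-allocation scheme, here is a hypothetical C++ sketch (an analogy, not LabVIEW's actual implementation): every instance shares one default buffer, a private buffer is allocated on the first non-default write, and the allocation is deliberately kept for later runs.

```cpp
#include <array>
#include <cstdio>
#include <cstdlib>

struct ClassValue {
    // One shared copy of the default value for the whole class.
    static const std::array<double, 1024> kDefault;
    std::array<double, 1024>* data = nullptr;   // private copy, made lazily

    void write(std::size_t i, double v) {
        if (!data)                                          // first non-default value?
            data = new std::array<double, 1024>(kDefault);  // ...allocate once
        (*data)[i] = v;
    }
    double read(std::size_t i) const { return data ? (*data)[i] : kDefault[i]; }
    // The buffer is deliberately kept between runs: like a subVI terminal,
    // it may be needed again, so memory never drops back to the pre-run
    // level -- but it also stops growing once every path has allocated.
};
const std::array<double, 1024> ClassValue::kDefault{};

int main() {
    ClassValue terminals[8];    // stand-ins for subVI terminals
    std::srand(42);
    for (int run = 1; run <= 6; ++run) {
        for (int i = 0; i < 4; ++i)          // random code paths exercised
            terminals[std::rand() % 8].write(0, 1.0);
        int allocated = 0;
        for (auto& t : terminals) allocated += (t.data != nullptr);
        std::printf("after run %d: %d of 8 terminals allocated\n", run, allocated);
    }   // the count climbs for a few runs, then plateaus -- the equilibrium
}
```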

Link to comment

QUOTE (Aristos Queue @ May 15 2008, 08:51 PM)

...

Since this VI has random input, not every code path is exercised on every execution, and so on successive executions there will be some terminals that get allocated for the first time.

Eventually, all the code paths are allocated, and we reach equilibrium.

Thanks Aristos!

With a loop count of "50" the code was terminating with an out-of-memory error. NI Support was able to see the increase in memory usage.

So if I had an initial step that forced non-default data into "everything" (I still have to learn more to figure out how to do that!), the memory usage should not climb.

[back at work and trying some more]

I got rid of the random number stuff and just wired a TRUE to make all regions bad, then cranked up the benchmark to 100 iterations. I got a message saying it was out of memory, but it also ID'd the offending VI as the code that generates the images and puts them in the queue.

So I hacked the code to eliminate the queue altogether, and what was looking like a memory leak is now gone. I'll say that this makes sense, since with a loop count of 100 I am creating and then re-creating the queues 100 times. On top of that, I am stuffing 150 elements of image data into each queue. I already understand that the resources allocated for queues are only freed up when LV terminates, so.....
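Here is a hypothetical sketch of that queue lifetime (the names `obtainQueue` and `releaseQueue` are invented C++ stand-ins for LabVIEW's Obtain Queue and Release Queue primitives, and the registry is only an analogy for LabVIEW's internal queue table):

```cpp
#include <deque>
#include <map>
#include <string>
#include <vector>

using Image = std::vector<unsigned char>;

// Registry standing in for LabVIEW's internal table of named queues.
static std::map<std::string, std::deque<Image>> gQueues;

std::deque<Image>& obtainQueue(const std::string& name) {
    return gQueues[name];    // created on first obtain, then persists
}
void releaseQueue(const std::string& name) {
    gQueues.erase(name);     // the analog of Release Queue
}

int main() {
    for (int run = 0; run < 100; ++run) {
        auto& q = obtainQueue("images_" + std::to_string(run));
        for (int i = 0; i < 150; ++i)
            q.push_back(Image(640 * 480));   // ~300 KB per "image"
        // Comment out the next line and all 100 queues (15,000 images)
        // stay resident: memory climbs run after run, looking like a leak.
        releaseQueue("images_" + std::to_string(run));
    }
}
```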

I don't think there is a memory leak, but rather that my example is simply not managing memory properly.

RE: Speed

The task version is still running faster (by about 40%). I am coming to understand that this is due to how the two methods are implemented. In the task version, the individual sub-sections of the widgets are checked using explicit code for each region. This code construct (an attempt to obfuscate?) lends itself well to multithreading, as indicated by the CPU usage of both cores being higher during the "task" part of the benchmark. In the LVOOP implementation (help me out with the terminology), all regions are handled by the same code (decomposition?), so parallel execution is just not possible.
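A hypothetical C++ analogy of the two structures (assuming the shared code path serializes the way a non-reentrant subVI would; the mutex below models that serialization, and all names are invented):

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Stand-in for the real per-region inspection work.
bool checkRegion(int region) {
    long acc = 0;
    for (int i = 0; i < 1'000'000; ++i) acc += (i ^ region);
    return (acc & 1) != 0;
}

// "Task" style: explicit code per region, so each region can be checked
// on its own thread. (vector<char>, not vector<bool>, so the threads
// write to distinct bytes.)
void taskStyle(int nRegions, std::vector<char>& bad) {
    std::vector<std::thread> workers;
    for (int r = 0; r < nRegions; ++r)
        workers.emplace_back([r, &bad] { bad[r] = checkRegion(r); });
    for (auto& w : workers) w.join();
}

// Shared-body style: every region funnels through one guarded body
// (the mutex models a non-reentrant subVI), so the work serializes.
std::mutex gBodyLock;
void sharedStyle(int nRegions, std::vector<char>& bad) {
    for (int r = 0; r < nRegions; ++r) {
        std::lock_guard<std::mutex> lock(gBodyLock);
        bad[r] = checkRegion(r);
    }
}

int main() {
    std::vector<char> bad(8, 0);
    taskStyle(8, bad);    // scales across cores
    sharedStyle(8, bad);  // runs one region at a time
}
```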

[And after some further thought...]

The decomposition (?) observation I made above seems to parallel "normalizing" a database, where a "fully-normalized" DB can perform better if it is slightly "de-normalized". ("Just because we CAN do something does not mean we SHOULD do something." (Paraphrasing Jeff Goldblum's character in Jurassic Park.))

Thanks for your comments and reading!

Ben

