
Profile Performance and Memory - Inaccurate Numbers


mje

Recommended Posts

Any of you ever see something like this?

[Attached: screenshot of the Profile Performance and Memory window showing the suspect numbers]

I'm trying to pinpoint a bottleneck in a new process I wrote, but I'm seriously questioning the results here...

The numbers for those four VIs do not change over the lifetime of an execution, or if I execute the application multiple times within the same instance of the IDE. If the IDE is restarted, some of the VIs will return the same numbers, others not so much.

Also, some of the more reasonable-looking large numbers are definitely wrong: one VI has reportedly been executing for 179 seconds, yet the application had been running for less than a minute.

Some clues as to what might be going on:

  • These VIs with astronomically large metrics fall into one of two categories: VIs responsible for spawning asynchronous tasks, or VIs containing the main loop of an asynchronous task. Note the latter VIs are reentrant.
  • I can start a new async task, and the new main loop VI will show up in the list with a very similar large number.
  • Not all of these async tasks show up with huge numbers, however. Some which are always spawned as the application starts up do not show this behavior (but others do).
  • These async tasks are launched via the async call by reference primitive.

Maybe this is old news when it comes to timing reentrant/async methods, or is there something else afoot?

-m

Link to comment

Hi there. Could you post a link to the VIs that are giving you trouble here? If you don't feel comfortable posting the VIs publicly, you can always privately email them to me at doug.tucker@ni.com

I would like to do some further testing, as we've recently addressed and fixed an issue with astronomical numbers in the profiler (hopefully the fix will cover this case as well). If not, we will look into it!

Link to comment

Unfortunately I have seen this a lot as well. I think I first saw it around LabVIEW 8.0 (I definitely never saw it in LV 7.1 or earlier). I don't have a fix, but I have a workaround that sometimes helps get better numbers.

  • Before your application runs, start the profiler.
  • Click the Snapshot button a couple of times (this is the trick that sometimes helps).
  • Start your app.

Hopefully this will be useful in your situation.

PJM

Link to comment

Thanks both of you.

Doug, I'll assemble something for you soon. It's a fairly complex application (last count put it over 1800 VIs), so without a little direction I fear you will not get anywhere. I will contact you directly via email as the source code for this application can't be released.

Link to comment

Wonderful. When including directions, please be specific! I just started on the LabVIEW team in July and have had very little exposure to LV prior to that. Thanks!

Link to comment

Here's an update/resolution for everyone else following this topic: I received the VIs from user mje and tested his code. As expected, the error reproduced in the retail release version of LabVIEW 2011.

Great news: the fix for the previously mentioned issue also covers this code! Here is a screenshot of the profiler running the code mje submitted:

[Attached: screenshot of the profiler running the submitted code, now showing sensible numbers]

As I told mje via email: I will talk to the powers that be and see if I can get the fix added to LabVIEW 2011 SP1. If for some reason it cannot be added to SP1, it will most likely be integrated into the release after that.

As for whether that's actually the code that mje gave me or not... you'll just have to ask him ;)

Link to comment

Yes, thanks Doug!

No harm in showing the VI names, I suppose; many of them are library VIs I've published here before:

[Attached: screenshot of the profiler results with the VI names visible]

As most of their names imply, the ones with any real VI time fall into one of two categories: either they're involved with launching a new async process, or they involve some aspect of the user interface. So all in all, the numbers are exactly as expected!

Good to see a fix is coming, hopefully we won't have to wait too long for it!

-m

Link to comment
  • 10 years later...

Hello,

It's been a long time since this discussion, but I have the same issue (LabVIEW 2015 SP1). The "outrageous" numbers in a given run are similar, e.g.

  • 1844674406060158.2
  • 1844674406060345.8
  • 1844674406374439.5

etc., close to each other but not necessarily equal. These seem to pop up under either "Sub VIs Time" or "Total Time".

I tried hitting "Snapshot" after starting the profiler and before running the VI, but this does not seem to eliminate these bizarre numbers.

Thanks for any insights,

Lyle

Link to comment

Back in 2015 the LV profiler on Windows used an older, lower-resolution time API that, I believe, could return timestamps resulting in negative deltas. With unsigned arithmetic, a negative delta turns into a gigantic time value like this.
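To illustrate the arithmetic involved, here is a minimal C sketch (not the actual profiler code; the tick values are made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Suppose the low-resolution timer steps backwards between two samples. */
        uint64_t start = 5000200;  /* hypothetical tick count at VI entry */
        uint64_t end   = 5000100;  /* hypothetical tick count at VI exit  */

        /* end < start, but the subtraction is unsigned, so it wraps modulo 2^64. */
        uint64_t delta = end - start;  /* ~1.8e19 instead of -100 */

        printf("delta = %llu ticks\n", (unsigned long long)delta);
        return 0;
    }

Scaled into the profiler's display units, a wrapped delta like that shows up as one of the astronomically large "times" reported above.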

I believe it was LV 2017 or 2018 that was upgraded to use the QueryPerformanceCounter API for these timestamps, which has much better monotonicity and higher resolution.
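For reference, here is a minimal Windows C sketch of the kind of monotonic elapsed-time measurement QueryPerformanceCounter provides (again, an illustration rather than the profiler's actual code):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;

        QueryPerformanceFrequency(&freq);  /* counts per second, fixed at boot */
        QueryPerformanceCounter(&t0);

        Sleep(50);                         /* stand-in for the code being timed */

        QueryPerformanceCounter(&t1);

        /* QPC is monotonic, so t1 >= t0 and the delta never goes negative. */
        double elapsed_ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0
                            / (double)freq.QuadPart;
        printf("elapsed: %.3f ms\n", elapsed_ms);
        return 0;
    }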

Rob Dye, NI LabVIEW team.

Link to comment
