
Milox

Members
  • Posts: 3

LabVIEW Information

  • Version: LabVIEW 2020
  • Since: 2008

  1. Wow, thank you very much for your insights. It definitely looks like memory allocation in the RTE is at least "different" from the IDE, which is strange for a compiled language, I think. I have an open ticket with NI support, but no answer yet. I will get back when I have some more info. For now the workaround is to not dynamically call IMAQ Create in RTE apps. Thanks a lot!
  2. Thank you guys for the replies. @dadreamer, you are correct: placing the IMAQ Create outside of the loop improves it, but as I said, this is just a simplified example. My real app has to extract 30,000 burls from a wafer clamp surface, analyze them and save them (yes, save the extracts, don't ask me why, it is a requirement). These extracts are spread all over the image. The reason for the two loops is that you "can't" really parallelize the first loop, since it depends on the same raw image input. But parallelizing the second loop does give performance gains in the IDE, and I was just wondering why the EXE is so much slower. You mentioned that LabVIEW goes crazy trying to create IMAQs in the for loop. I can't see that in the IDE; could you expand on that? I will try out your suggestions:
     • Creating N IMAQs beforehand
     • Doing it all in series: raw img -> extract -> threshold (same memory location)
     Thanks, Milox
     Edit: I just created all the IMAQs beforehand, and it is indeed the IMAQ Create function that takes so long during Run-Time. The actual extracting and thresholding is just a couple of ms slower than in the IDE, but creating the IMAQs takes a combined 14 seconds during Run-Time; in the IDE it's just 0.6 s. So the question changed to "Why is IMAQ Create so much slower during Run-Time?". A workaround is to create all IMAQs once, then keep and reuse them (a sketch of this pattern appears after this list) -- just one massive slowdown during initialization.
  3. Hi Community, I am working on an app that analyzes an image for multiple features. While testing the executable, I found a massive performance difference between running the code in the IDE and running the built executable. I made an example that shows the problem clearly: basically, I extract small areas out of a much larger image. The example extracts the same 50x50 px area, and then some local thresholding is applied to those extracts. With 10,000 extracts, 8 parallel loops, and 1 core for Vision (a text sketch of this pattern appears after this list):
     • IDE: 0.6 s for the extracts, 0.4 s for the thresholding
     • EXE: 4.5 s for the extracts, 11 s for the thresholding
     Why does it take so much longer in the EXE? My actual algorithms are much more complex, which amplifies the problem massively. Playing with the parameters influences the numbers slightly, but the big difference in time between IDE and EXE remains. I tried the code on multiple machines: same problem. Example saved as 2012. System info: Win 10 64-bit, LabVIEW 2020 64-bit, Vision 2020 64-bit (I have tried the code in 32-bit and observed the same problem). I hope you can help me out. Thanks in advance! performance_testing_lv_2012.zip
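LabVIEW block diagrams can't be quoted as text, so below is a minimal NumPy sketch of the benchmark pattern described in the question. The image, the fixed 50x50 region, the extract count, and the mean-based threshold are placeholders, not the contents of performance_testing_lv_2012.zip; allocating a fresh copy on every iteration stands in for calling IMAQ Create inside the loop.

```python
import numpy as np

RAW = np.random.randint(0, 256, size=(4096, 4096), dtype=np.uint8)  # stand-in for the wafer image
N_EXTRACTS = 10_000   # number of extracts, as in the benchmark
SIZE = 50             # 50x50 px area

def extract_and_threshold_naive(raw):
    """Allocate a fresh buffer for every extract (like IMAQ Create inside the loop)."""
    masks = []
    for _ in range(N_EXTRACTS):
        extract = raw[:SIZE, :SIZE].copy()       # new allocation on every pass
        masks.append(extract > extract.mean())   # crude stand-in for local thresholding
    return masks
```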
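And a sketch of the workaround from the follow-up post, reusing the names defined above: pay the allocation cost once by creating every buffer up front, then reuse the buffers on each pass. The pre-allocated NumPy arrays are, again, only stand-ins for pre-created IMAQ images.

```python
def extract_and_threshold_preallocated(raw):
    """Create all buffers once up front (one slow init under the RTE), then reuse them."""
    extracts = np.empty((N_EXTRACTS, SIZE, SIZE), dtype=np.uint8)
    masks = np.empty((N_EXTRACTS, SIZE, SIZE), dtype=bool)
    for i in range(N_EXTRACTS):
        np.copyto(extracts[i], raw[:SIZE, :SIZE])    # reuse buffer i instead of reallocating
        masks[i] = extracts[i] > extracts[i].mean()
    return masks
```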