
Milox

Members
  • Content Count: 3
  • Joined
  • Last visited

Community Reputation: 0

About Milox

  • Rank: LAVA groupie

LabVIEW Information

  • Version: LabVIEW 2020
  • Since: 2008
  1. Wow, thank you very much for your insights. It definitely looks like memory allocation in the RTE is at least "different" from the IDE, which I find strange for a compiled language. I have an open ticket with NI support, but no answer yet. I will get back when I have some more info. For now the workaround is to not dynamically call IMAQ Create in RTE apps (an analogous allocation pattern is sketched after these posts). Thanks a lot!
  2. Thank you guys for the replies. @dadreamer you are correct, placing IMAQ Create outside of the loop improves it, but as I said this is just a simplified example. My real app has to extract 30.000 burls from a wafer clamp surface, analyze them, and save them (yes, save the extracts, don't ask me why, it is a requirement). These extracts are spread all over the image. The reason for the 2 loops is that you "can't" really parallelize the 1st loop, since it depends on the same raw image input. But parallelizing the 2nd loop has performance gains in the IDE. And I was just wondering why the EX
  3. Hi Community, I am working on an app that analyzes an image for multiple features. While testing the executable I found that there is a massive performance difference between running the code in the IDE and running the executable. I made an example which shows this problem clearly. Basically, I extract small areas out of a much larger image; the example extracts the same 50*50 px area. After that, some local thresholding is applied to those extracts. 10.000 extracts, 8 parallel loops, 1 core for Vision: IDE: 0,6 s for the extracts, 0,4 s for the thresholding EXE
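
The core point of the posts above is buffer allocation: calling IMAQ Create inside the extraction loop allocates a new image buffer on every iteration, while creating the buffers once up front avoids that cost. Since LabVIEW is graphical, the following is only a rough Python/NumPy sketch of that same pattern; the image size, offsets, and counts are made-up placeholders, not values from the original application.

    import time
    import numpy as np

    raw = np.random.randint(0, 256, (4000, 4000), dtype=np.uint8)  # stand-in for the large raw image
    n_extracts = 10_000
    size = 50

    def extract_allocating():
        # Fresh buffer per iteration, analogous to calling IMAQ Create inside the loop.
        out = []
        for _ in range(n_extracts):
            buf = np.empty((size, size), dtype=np.uint8)
            buf[:] = raw[100:100 + size, 200:200 + size]   # same 50x50 area, as in the example
            out.append(buf)
        return out

    def extract_preallocated():
        # One allocation before the loop, analogous to creating the IMAQ buffers up front.
        bufs = np.empty((n_extracts, size, size), dtype=np.uint8)
        for i in range(n_extracts):
            bufs[i] = raw[100:100 + size, 200:200 + size]
        return bufs

    for fn in (extract_allocating, extract_preallocated):
        t0 = time.perf_counter()
        fn()
        print(f"{fn.__name__}: {time.perf_counter() - t0:.3f} s")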
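The second loop described in the posts, applying a local threshold to each extract, works on independent inputs and is therefore the part worth parallelizing. Again a hedged Python sketch only: the mean-based threshold and the 8 workers are placeholders standing in for whatever IMAQ thresholding and loop parallelism the real application uses.

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def local_threshold(extract: np.ndarray) -> np.ndarray:
        # Placeholder local threshold: binarize each extract against its own mean intensity.
        return (extract > extract.mean()).astype(np.uint8)

    extracts = [np.random.randint(0, 256, (50, 50), dtype=np.uint8) for _ in range(10_000)]

    # 8 workers, mirroring the 8 parallel loop instances mentioned in the post.
    with ThreadPoolExecutor(max_workers=8) as pool:
        binarized = list(pool.map(local_threshold, extracts))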