Jacemdom, Posted January 15, 2017

Why is an indexing conditional tunnel 3x faster than shift registers? It must have something to do with some kind of optimization managing the array allocation, but why can't LabVIEW do it in both cases? Run the attached example: ConditionalIndexingTunnel.vi
drjdpowell, Posted January 15, 2017

Indexing tunnels (conditional or not) follow a preallocation strategy: they fill an array that is larger than initially needed and trim the unused elements afterwards, while the "Build Array" primitive allocates a new array of exactly the right size on each iteration. So there are far fewer calls to the memory manager with an indexing tunnel.
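To make the difference concrete, here is a minimal C sketch (not LabVIEW's actual allocator, and the chunk-growth factor is an assumption chosen for illustration) contrasting the two strategies: resizing to the exact length on every append versus growing capacity in chunks and trimming once at the end.

/* Exact-size strategy ("Build Array"-like): one memory-manager call per append. */
#include <stdlib.h>

double *append_exact(double *arr, size_t *len, double value)
{
    arr = realloc(arr, (*len + 1) * sizeof *arr);   /* resize to exactly len+1 */
    arr[(*len)++] = value;
    return arr;
}

/* Chunked strategy (indexing-tunnel-like): grow capacity in chunks, trim after the loop. */
typedef struct { double *data; size_t len, cap; } Vec;

void append_chunked(Vec *v, double value)
{
    if (v->len == v->cap) {                          /* out of room: grow by a chunk */
        v->cap = v->cap ? v->cap * 2 : 1024;         /* doubling is an illustrative choice */
        v->data = realloc(v->data, v->cap * sizeof *v->data);
    }
    v->data[v->len++] = value;
}

void trim(Vec *v)                                    /* cut the unneeded elements once */
{
    v->data = realloc(v->data, v->len * sizeof *v->data);
    v->cap = v->len;
}

The chunked version touches the memory manager only when capacity runs out plus one final trim, while the exact-size version touches it on every single iteration.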
MikaelH, Posted January 16, 2017

I would use For Loops in your examples. And if you do, it's kind of impressive that you can run the loop with Parallelism activated, but then it suddenly takes 3 times longer.
Jacemdom (Author), Posted January 16, 2017

21 hours ago, drjdpowell said: "Indexing tunnels (conditional or not) follow a preallocation strategy..."

I was aware of this, but I was not aware that it is also the case in initialize/replace scenarios. More info on the subject here: http://forums.ni.com/t5/LabVIEW/Why-is-indexing-conditional-tunnel-3x-faster-than-shift/m-p/3570173#M999382

Thanks
smithd, Posted January 16, 2017

On 1/15/2017 at 2:30 PM, drjdpowell said: "Indexing tunnels (conditional or not) follow a preallocation strategy..."

I thought Build Array did some preallocation, but not very much. I rearranged the code with manual preallocation, and you had to allocate thousands and thousands of elements for it to be faster. I had a stray thought about allocating a percentage of the existing array on top of a fixed baseline and tried it out: for elements 0-10000 it allocates 1000 more when it runs out, and for elements 10000+ it grows by 10% of the current array size. That made it about 30% faster than the indexing terminals. But, on the other hand, who cares in this situation? This use case doesn't seem like a big performance bottleneck.
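For readers who want to see the policy spelled out, here is a rough C sketch of the growth rule smithd describes, purely for illustration (his actual experiment was done in LabVIEW with manual preallocation; the struct and function names here are hypothetical):

#include <stdlib.h>

typedef struct { double *data; size_t len, cap; } GrowArray;

/* Growth rule from the post: below 10000 elements, add a fixed 1000-element
 * chunk; at 10000 and above, add 10% of the current capacity. */
static size_t next_capacity(size_t cap)
{
    if (cap < 10000)
        return cap + 1000;      /* fixed baseline chunk for small arrays */
    return cap + cap / 10;      /* 10% of current size for large arrays */
}

void grow_append(GrowArray *a, double value)
{
    if (a->len == a->cap) {
        a->cap = (a->cap == 0) ? 1000 : next_capacity(a->cap);
        a->data = realloc(a->data, a->cap * sizeof *a->data);
    }
    a->data[a->len++] = value;
}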