Computing array intersections


mje


Warning! This is purely academic. My current implementations work fine; I'm just going into excessive detail here...

My lack of a real CS background is going to show here, but I'm wondering if there's a way to compute array intersections that doesn't operate in linear time (that is, I'd like the operation's time not to scale linearly with array size)?

I have two methods I hammered out:

[attachment: block diagram of the first method]

[attachment: block diagram of the second method]

The first one is probably a little less obvious, but I actually consider it more elegant, as it operates in place on the two input arrays and avoids N^2-time by not blindly doing a nested for loop. The big caveat is that the input arrays need to be pre-sorted. That works for me, though, since the arrays I'll be feeding it will always be sorted by virtue of how they were created.
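Since LabVIEW diagrams don't translate to text, here's a rough Python sketch of what that first method amounts to (my reading of it, assuming both inputs are sorted ascending and free of internal duplicates):

```python
def intersect_sorted(a, b):
    """Merge-style intersection of two pre-sorted arrays.

    Walks both arrays with one pointer each, so it runs in
    O(len(a) + len(b)) rather than the O(n*m) of a nested loop.
    """
    out = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1            # a is behind: advance it
        elif a[i] > b[j]:
            j += 1            # b is behind: advance it
        else:
            out.append(a[i])  # present in both arrays
            i += 1
            j += 1
    return out
```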

The second one is slower by a factor of about 5 on my system, mostly, I suspect, due to the extra memory allocation done by concatenating the two input arrays; I'm sure the sort doesn't help either.
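For comparison, the second method boils down to something like this sketch (again assuming neither input contains internal duplicates, so an adjacent equal pair in the sorted concatenation means the value was in both arrays):

```python
def intersect_by_sort(a, b):
    """Concatenate, sort, then scan for adjacent equal values.

    The concatenation copies both inputs and the sort costs
    O((n+m) log(n+m)), which is why this loses to the merge version.
    """
    merged = sorted(a + b)  # the extra allocation happens here
    return [x for x, y in zip(merged, merged[1:]) if x == y]
```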

I'll be honest, both of these run in about a millisecond on the data I'm throwing at them, so either one is fine. However, they're called in series after a few binary searches I've implemented, and after powering through multiple arrays in microseconds (yay log(N)!), I hate to bottleneck afterwards with serial calls to a function that operates in N-time. I will be iterating over this a few thousand times, though, so getting to log-time can mean the difference between waiting a few seconds and being instantaneous. I like instantaneous.

-m


The top approach looks pretty good to me. You could improve it slightly by pre-allocating the final array, then resizing it after the loop finishes. Also, this is purely aesthetic, but there's a Swap primitive available in the memory management palette that could replace the case structure at the top left.


Array indexing inside a for loop is a killer. Actually, concatenating isn't too bad due to optimisations (LV is very efficient with For-loops). I vaguely remember a challenge from Jim Kring to find a better method than concatenating for selecting arbitrary values from an array. I don't think there were any takers.

So assuming that we have to concatenate in some form or another, the best we can hope for is pre-processing. Along this line, what about finding turning points? Maybe faster, maybe not, but it would use built-in functions (generally faster than native LV code) and eliminate the array indexing.
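If I'm reading the turning-point idea correctly, it amounts to differentiating the sorted concatenation and treating zeros in the difference as values present in both arrays. A speculative Python sketch of that reading (not the actual diagram, and again assuming no duplicates within either input):

```python
def intersect_by_difference(a, b):
    """Sort the concatenation, take the first difference, and report
    the values where the difference is zero (i.e. the flat spots)."""
    merged = sorted(a + b)
    return [merged[k] for k in range(len(merged) - 1)
            if merged[k + 1] - merged[k] == 0]
```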


I thought I'd seen this problem before :)

Here is another method that's similar to your first one rather than the second. This VI (I wrote it a while ago) returns the array differences, but the theory is the same.

It's slightly different in that it actually does a search and eliminates data, thereby reducing the data set as it goes. It's very quick, especially if the duplicates are sparse, and isn't explicitly linked to the data-set size, but rather to the number of duplicates.
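The attachment hasn't survived, but the mechanism described is roughly this (a Python sketch of the idea, not the actual VI):

```python
def intersect_and_eliminate(a, b):
    """For each element of one array, search the other; on a hit,
    record the value and delete it from the search pool, so the pool
    shrinks as duplicates are found."""
    pool = list(b)               # copy so the caller's array is untouched
    out = []
    for x in a:
        try:
            k = pool.index(x)    # linear search, like Search 1D Array
        except ValueError:
            continue             # x isn't in the remaining pool
        out.append(x)
        del pool[k]              # eliminate the hit: smaller pool next time
    return out
```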


> Array indexing inside a for loop is a killer. [...] Along this line, what about finding turning points?

Isn't the array intersection the set of terms present in both arrays? You're showing the terms present in at least one of them, not both.


> So assuming that we have to concatenate in some form or another, the best we can hope for is pre-processing. [...]

There's no need to concatenate in a loop at all here. You know in advance that the intersection array cannot be larger than the smaller of the two input arrays, so you can preallocate an array of that size. Track the current array index in a shift register. Each time there's a match, use Replace Array Subset instead of Build Array, and increment the array index. When the loop finishes, resize the array to the final value of the index shift register. This is much faster than concatenating in a loop.
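In text form, that suggestion looks something like this (a sketch; the set membership test stands in for whatever search the real diagram uses):

```python
def intersect_prealloc(a, b):
    """Preallocate the worst-case result (the smaller input's size),
    fill it in place while counting matches, then trim once at the end."""
    small, big = (a, b) if len(a) <= len(b) else (b, a)
    lookup = set(big)            # stand-in for the actual search step
    out = [0] * len(small)       # preallocated, worst-case size
    n = 0                        # plays the role of the index shift register
    for x in small:
        if x in lookup:
            out[n] = x           # replace, don't build
            n += 1
    return out[:n]               # single trim after the loop
```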


> Isn't the array intersection the set of terms present in both arrays? You're showing the terms present in at least one of them, not both.

Indeed. It was a quick demonstration of a theory rather than a solution. If you change it like so...

Then you get unique intersection values, for example.

> There's no need to concatenate in a loop at all here. [...] This is much faster than concatenating in a loop.

Sweet, that'd be even faster :D

Edited by ShaunR

First, yes: the intersection is the set of elements present in both arrays.

Of course, there's no need to do the concatenation. However, my use case has the input arrays potentially being orders of magnitude larger than the intersections, and I really don't want to do an allocation like that if I don't have to. Plus, NI has some really good optimizations for array sizes. Tests I've done earlier implied that when arrays are allocated, a chunk of memory larger than the array size is reserved, allowing the array to grow without reallocations. It seemed that reallocations only happened each time the array roughly doubled in size (within reason; I'm sure larger arrays work off fixed-size padding, possibly small ones too). With those considerations, I left the concatenation operation in the loop.

For the record, I think there are three things that can be done with the arrays:

  1. Operate in place on one of the input arrays (that is, overwriting it as you go). Trim it when finished. This implies the VI will always have to copy its inputs so as not to alter data on the calling VI's wires.
  2. Pre-allocate an intersection array sized to the smaller of the two inputs. Trim when finished.
  3. Concatenate within a loop.

Again, given compiler optimizations and my use case, I'm banking on #3 being fastest.

I'll have to look into ShaunR's idea; I'm not sure how fast it will run (I don't know much about that function). Also, I completely forgot about possibly parallelizing. Good idea!


> For the record, I think there are three things that can be done with the arrays: (1) operate in place on one of the input arrays, (2) pre-allocate an intersection array sized to the smaller of the two inputs, (3) concatenate within a loop.

If you want to be really efficient about this, and you don't need to reuse the arrays after computing the intersection, you can reuse the non-auto-indexed array to store the intersection, and trim that when the loop exits. This gets you the benefit of both 1 and 2.
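A sketch of that buffer reuse (Python mutating the list stands in for LabVIEW operating in place on the wire):

```python
def intersect_reusing_input(a, b):
    """Write matches back into b's own storage at a write index,
    then trim b to that index: preallocation and in-place reuse in one."""
    lookup = set(a)
    n = 0                        # write index into b's buffer
    for x in b:
        if x in lookup:
            b[n] = x             # overwrite b as we go
            n += 1
    del b[n:]                    # trim to the final write index
    return b
```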


Yes, good point, ned.

Now I think I might be a little obsessive. But here's some data.

Method A:

This is a slightly modified version of the first method I put up in my original post. Concatenation in loop.

[attachment: Method A block diagram]

(Method B, the second one in my original post, is so slow it's not worth mentioning.)

Method C:

Modified to pre-allocate the intersection array, then trim it as necessary.

[attachment: Method C block diagram]

Method D:

Re-uses one of the input arrays, then trims it as necessary. I'd expect this has an implicit allocation, though, if the compiler decides the VI can't operate in place on the caller's data (not verified).

[attachment: Method D block diagram]

The results show that for sets where the intersection is very small compared to the input (say 2 elements out of 10 000), method A is fastest, but not by much. Note this is my typical use case.

A: 561, C: 655, D: 624

When the intersection is larger, say 2500 out of 10 000 elements, method A starts to show its limits due to in-loop concatenation. Method D works out to be fastest.

A: 1326, C: 780, D: 608

I'm surprised A doesn't show more of an improvement on small intersections. Given that finding, I think D is the clear winner.

I'm still trying to wrap my brain around whether this could be done more intelligently by applying binary-type searching rather than indexing, but I think I'm satisfied enough for now.

I'll add that given the poor performance of B, I didn't even explore parallelism. I have yet to look into the derivative methods, but I suspect they'll be quite slow, if for no other reason than that the data needs to be cast to arrays of doubles; I expect the differential function to be the real bottleneck, never mind that a linear iteration still needs to be done to build the intersection array. I might explore after getting some sleep.


> I'm still trying to wrap my brain around whether this could be done more intelligently by applying binary-type searching rather than indexing, but I think I'm satisfied enough for now.

I think you can get to O(m log n) with this using a binary search. (NI's binary search VIs are here.)

[image: Binary Search Intersection block diagram]

Each binary search is O(log n) and the for loop runs m times, hence O(m log n) for the searches; the up-front sort is O(n log n). Of course, this doesn't catch duplicates, and NaNs can make it a little cross-eyed.

Joe Z.

Edit: Come to think of it, you don't actually need to sort your smaller array.
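For reference, the diagram above works out to roughly this (a sketch using Python's bisect in place of NI's binary search VIs; no duplicate handling, as noted):

```python
from bisect import bisect_left

def intersect_binary(small, big_sorted):
    """O(m log n): one binary search into the sorted larger array
    per element of the smaller array."""
    out = []
    for x in small:                      # m iterations
        k = bisect_left(big_sorted, x)   # O(log n) lookup
        if k < len(big_sorted) and big_sorted[k] == x:
            out.append(x)
    return out
```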

Edited by jzoller
  • 6 years later...

I'm not an expert at this type of function, but your version doesn't produce the same output as the others. In your VI I gave Array 1 = [4] and Array 2 = [5], and the output was [4, 5]; the first VI posted here returns an empty array for that input.

I haven't done any speed tests, but with the advent of the conditional indexing terminal I'd suspect the simplest solution to be the modified version of what SuperS posted.

[image: Example_VI_BD.png]

Edit: Oh, but that does include duplicates multiple times. Maybe remove duplicates from Array B first, or remove duplicates from both Array A and Array B, since that output will never be larger than Array B.
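With conditional indexing, the whole thing collapses to something like this sketch (the `in` test here is a linear search, much like Search 1D Array, and the second line is the dedup step mentioned in the edit above):

```python
def intersect_conditional(a, b):
    """Linear-search a for each element of b and keep only the hits;
    then drop repeat hits, equivalent to deduplicating Array B first."""
    hits = [x for x in b if x in a]      # conditional indexing, in effect
    return list(dict.fromkeys(hits))     # remove duplicate values, keep order
```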

  • 2 weeks later...

Going off Hooovahh's code, since you say the inputs are sorted, you could use the found index as the initial search location for the next iteration. I know you lose the parallel-for-loop benefits, but it may make up for that by having a decreasing search pool for each subsequently found element. You'd need to catch when the B element wasn't found so you don't reset your starting point.
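A sketch of that resume-where-you-left-off search (assuming both inputs are sorted ascending):

```python
def intersect_resume(a_sorted, b_sorted):
    """Each linear search starts at the index of the previous hit, so
    the remaining search pool shrinks with every element found."""
    out = []
    start = 0                            # position of the last hit
    for x in b_sorted:
        try:
            k = a_sorted.index(x, start) # search from 'start' onward only
        except ValueError:
            continue                     # not found: don't reset start
        out.append(x)
        start = k + 1                    # next search begins past the hit
    return out
```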

