mje Posted October 1, 2010

Warning! This is purely academic. My current implementations work fine, I'm just going into excessive detail here...

My lack of a real CS background is going to show here, but I'm wondering if there's a way to compute array intersections that doesn't take linear time (that is, I'd like the operation time not to scale linearly with array size)? I have two methods I hammered out:

The first one is probably a little less obvious, but I actually consider it more elegant since it operates in place on the two input arrays and avoids N^2 time by not blindly doing a nested for loop. The big caveat is that the input arrays need to be pre-sorted. That works for me, though, since the arrays I'll be feeding it will always be sorted by virtue of how they were created.

The second one is slower by a factor of about 5 on my system, I suspect mostly due to the extra memory allocation from concatenating the two input arrays; I'm sure the sort doesn't help either.

I'll be honest, both of these finish in about a millisecond for the data I'm throwing at them, so either one is fine. However, they're called in series after a few binary searches I've implemented, and after powering through multiple arrays in microseconds (yay log(N)!), I hate to bottleneck afterwards with serial calls to a function that runs in linear time. I will be iterating over this a few thousand times, though, so getting into log time can mean the difference between waiting a few seconds and being instantaneous. I like instantaneous.

-m
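The post doesn't include the actual code, so here is a minimal Python sketch of the two methods as I read them. The function names (`sorted_intersection`, `concat_sort_intersection`) and the assumption that neither input array contains internal duplicates are mine, not the author's:

```python
def sorted_intersection(a, b):
    """Method 1 as described: walk both pre-sorted arrays with two
    index pointers. Each comparison advances at least one pointer,
    so this runs in O(len(a) + len(b)) -- linear, not N^2."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        elif a[i] > b[j]:
            j += 1
        else:            # match: keep it, advance both pointers
            out.append(a[i])
            i += 1
            j += 1
    return out


def concat_sort_intersection(a, b):
    """Method 2 as described: concatenate, sort, then scan for
    adjacent equal elements. Assumes neither input has internal
    duplicates, so an adjacent pair can only come from one element
    of each array. The extra allocation and the O(N log N) sort are
    the likely source of the ~5x slowdown mentioned above."""
    merged = sorted(a + b)
    return [x for x, y in zip(merged, merged[1:]) if x == y]
```

Both still touch every element at least once, which illustrates the bottleneck in question: intersection of arbitrary arrays can't generally beat linear time, since any element not examined could change the result.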