
Faster spline interpolation - C++ DLL implementation?


Recommended Posts

Hi,

In my code I'm using "Spline Interpolant.vi" and "Spline Interpolation.vi" (the latter in a For Loop) to interpolate an array of 11 elements into a final array of 501 elements. The time required (on my PC) is ~40 microseconds.

I have to do the same operation thousands of times for every measurement and, each time, I also need to execute "Spline Interpolant.vi" because the original array changes. I have tried the other interpolation methods available in LabVIEW, but the time required is about the same or more, and I don't know what else I can try to increase the speed of the interpolation. At the moment, the time budget between two consecutive measurements is very limited.

I have looked at the code of the two VIs and, from my understanding, they are not calling a C/C++ primitive to calculate the 2nd derivatives and interpolate; instead it looks like pure LabVIEW all the way down.

I found this C++ implementation, https://kluge.in-chemnitz.de/opensource/spline/, but it's a header-only library and I don't know how to call it from LabVIEW without a DLL file. Can anybody here help me? Can I expect a reduction in the required time? The linked library is just an example; I'm open to any other solution/library/DLL.

Thank you.

Marco.

splineInterpolation.png


You need to compile it into a DLL to be able to call it.

But LabVIEW code IS compiled too, and fairly performant. 40 us is not a lot of time for that kind of mathematical operation. Even if you use a highly optimizing C compiler like the Intel C compiler, you are most likely not going to see huge differences when using that code as a DLL. You can of course try, but you will need a C compiler of some sort for this.

And it is C++, using the standard template library classes. You will also need to write a small C wrapper around it in order to call it from the LabVIEW Call Library Node. As the code is GCC specific, I can't help you. If it were compilable with Visual C as is, I might try to create the DLL, but as already said, my hopes that you will see significant performance improvements from the C++ code are not great.
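For readers who do want to try the DLL route: below is a minimal, self-contained sketch of a natural cubic spline split into the same two steps as the LabVIEW VIs (compute the second derivatives once, then evaluate many times). The `extern "C"` functions give the flat signatures a Call Library Function Node needs; the function names are illustrative, not from any real library.

```cpp
// spline_wrapper.cpp -- hypothetical sketch; function names are illustrative.
// Natural cubic spline in the same two steps as the LabVIEW VIs:
// compute the second derivatives once, then evaluate as often as needed.
#include <vector>

// Step 1 ("Spline Interpolant"): second derivatives y2[] of the natural
// cubic spline through (x[i], y[i]); x strictly increasing, n >= 3.
extern "C" void spline_interpolant(const double* x, const double* y,
                                   int n, double* y2)
{
    std::vector<double> u(n, 0.0);
    y2[0] = y2[n - 1] = 0.0;               // natural boundary conditions
    for (int i = 1; i < n - 1; ++i) {      // forward sweep (tridiagonal solve)
        double sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1]);
        double p   = sig * y2[i - 1] + 2.0;
        y2[i] = (sig - 1.0) / p;
        u[i]  = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
              - (y[i] - y[i - 1]) / (x[i] - x[i - 1]);
        u[i]  = (6.0 * u[i] / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p;
    }
    for (int i = n - 2; i >= 1; --i)       // back substitution
        y2[i] = y2[i] * y2[i + 1] + u[i];
}

// Step 2 ("Spline Interpolation"): evaluate at xi, x[0] <= xi <= x[n-1].
extern "C" double spline_eval(const double* x, const double* y,
                              const double* y2, int n, double xi)
{
    int lo = 0, hi = n - 1;
    while (hi - lo > 1) {                  // binary search for the interval
        int mid = (lo + hi) / 2;
        if (x[mid] > xi) hi = mid; else lo = mid;
    }
    double h = x[hi] - x[lo];
    double a = (x[hi] - xi) / h, b = (xi - x[lo]) / h;
    return a * y[lo] + b * y[hi]
         + ((a * a * a - a) * y2[lo] + (b * b * b - b) * y2[hi]) * h * h / 6.0;
}
```

Compiled with e.g. `cl /LD spline_wrapper.cpp` (Visual C) or `g++ -shared -fPIC`, both functions can be wired to a Call Library Function Node with array-of-DBL and I32 parameters.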

Edited by Rolf Kalbermatter

Is the output of the previous interpolation used in the setup of the next measurement, or could you process the incoming readings in a separate loop to avoid having to wait for the interpolation before doing the next measurement?

15 minutes ago, Rolf Kalbermatter said:

You need to compile it into a DLL to be able to call it.

[...] Even if you use a highly optimizing C compiler like the Intel C compiler you are most likely not going to see huge differences when using that code as a DLL.

Thank you. I realized I made a big mistake: "Spline Interpolant.vi" and "Spline Interpolation.vi" are indeed calling a DLL primitive. I got confused with some other VI that I'm using after the spline interpolation. Hence I think your comment is right: there is very little room to reduce the time required, at least with the same hardware. (Sorry for wasting your time.)

Just now, Mads said:

Is the output of the previous interpolation used in the setup of the next measurement, or could you process the incoming readings in a separate loop to avoid having to wait for the interpolation before doing the next measurement?

Unfortunately I cannot use any information from the previous measurement.

1 hour ago, Bruniii said:

Thank you. I realized I made a big mistake: "Spline Interpolant.vi" and "Spline Interpolation.vi" are indeed calling a dll primitive. [...]

No, I am asking why you need the spline to run between the measurements, instead of handing that part off to parallel or post-processing code. If the result of the spline is not needed for the next measurement, the two things might not need to be handled sequentially. In that case the time spent on the interpolation would not be an issue... (it might present a memory issue instead, but that is less likely and easier to deal with).

Edited by Mads
10 minutes ago, Mads said:

No, I am asking why you need the spline to run between the measurements, instead of handing that part off to a parallel or post-processing code. [...]

Ah, sorry... The "analysis" code is already running in a dedicated thread of the QMH framework. The time to acquire the 2D array of 10000×100 elements is 360 ms, while the total time to:

  • take one row (a 1D array of 100 elements)
  • take a small subset of 11 elements centered on the index of the maximum
  • spline the subset from 11 elements to 501
  • correlate with a reference array and find the index of the maximum

executed 10,000 times, is consistently longer than the acquisition time, and the CPU usage is close to 80%. At the moment the only solution is to decimate the 2D array to 2000×100 or 5000×100 elements.

I have already timed all the other steps of the "analysis", and the interpolation is responsible for >90% of the total time.
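The four per-row steps above can be sketched in C++ to show the data flow. Piecewise-linear interpolation stands in for the spline step purely to keep the sketch short, and all names (`upsample`, `process_row`) are illustrative, not from the actual code:

```cpp
// Hypothetical sketch of the per-row analysis: locate the raw peak, cut an
// 11-sample window around it, upsample the window to 501 points, correlate
// with a reference, and return the index of the correlation maximum.
#include <vector>
#include <algorithm>
#include <cstddef>

// Upsample w (11 samples) to m points by piecewise-linear interpolation
// (standing in for the spline step).
std::vector<double> upsample(const std::vector<double>& w, std::size_t m)
{
    std::vector<double> out(m);
    const double step = double(w.size() - 1) / double(m - 1);
    for (std::size_t i = 0; i < m; ++i) {
        double t = i * step;
        std::size_t k = std::min<std::size_t>(std::size_t(t), w.size() - 2);
        double f = t - double(k);
        out[i] = (1.0 - f) * w[k] + f * w[k + 1];
    }
    return out;
}

// One row (>= 11 samples) of the 2D array -> index of the correlation peak.
std::size_t process_row(const std::vector<double>& row,
                        const std::vector<double>& ref)  // ref.size() <= 501
{
    // 1) index of the raw maximum
    std::size_t c = std::max_element(row.begin(), row.end()) - row.begin();
    // 2) 11-sample window centred on it, clamped at the edges
    std::size_t lo = (c < 5) ? 0 : std::min(c - 5, row.size() - 11);
    std::vector<double> win(row.begin() + lo, row.begin() + lo + 11);
    // 3) upsample 11 -> 501
    std::vector<double> fine = upsample(win, 501);
    // 4) sliding-dot-product correlation, then argmax
    std::size_t best = 0;
    double bestv = -1e300;
    for (std::size_t s = 0; s + ref.size() <= fine.size(); ++s) {
        double v = 0.0;
        for (std::size_t j = 0; j < ref.size(); ++j) v += fine[s + j] * ref[j];
        if (v > bestv) { bestv = v; best = s; }
    }
    return best;
}
```

Timing each stage of a loop like this separately (as done above in LabVIEW) is the right way to confirm that the upsampling step dominates.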

Edited by Bruniii

Ok, so you need the processing to return the results faster for other reasons than the sampling itself.

Just as a comparison, I set up a test on my computer (I used NI_Gmath:Interpolate 1D.vi with spline as the chosen method) and, as on your computer, it took 400 ms to run through an array of 10000×100. Adjusting the parallelism of the loop running the interpolation VI, it dropped to 220 ms with 2 loop instances. With 16 instances it got down to 74 ms (I ran it on an i7-9700 with 8 cores).
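The chunking that LabVIEW's parallel For Loop performs can be sketched with plain threads. Here `process` is only a placeholder for the real spline-and-correlate work, and the names are illustrative:

```cpp
// Hypothetical sketch of row-wise parallelism: split the rows into chunks
// and process each chunk on its own worker thread, as a parallel For Loop
// does. Each output element is written by exactly one thread, so no
// locking is needed.
#include <vector>
#include <thread>
#include <algorithm>
#include <cmath>
#include <cstddef>

double process(double v) { return std::sqrt(v) * 2.0; }  // placeholder work

void process_rows(const std::vector<double>& in, std::vector<double>& out,
                  unsigned nthreads)
{
    if (nthreads == 0) nthreads = 1;
    std::vector<std::thread> pool;
    std::size_t chunk = (in.size() + nthreads - 1) / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = std::min(in.size(), lo + chunk);
        pool.emplace_back([&, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i) out[i] = process(in[i]);
        });
    }
    for (auto& th : pool) th.join();  // wait for all chunks to finish
}
```

Because the per-row results are independent, the speedup tracks the core count until memory bandwidth becomes the limit, which matches the 400 ms → 74 ms scaling observed above.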

