PA-Paul Posted August 30, 2017

Hi All,

This may be a maths question, or a computer science question... or both. We have a device frequency response data set, measured at discrete frequency intervals, typically 1 MHz. For one particular application we need to interpolate that down to smaller discrete intervals, e.g. 50 kHz. We've found that cubic Hermite interpolation works pretty well for us in this application.

Whilst testing our application, I came across an issue which is arguably negligible, but I'd like to understand its origins and the ideal solution if possible. If I interpolate my data set with my X values in MHz and create my "xi" array in MHz with a spacing of 0.05, I get a different result from the interpolation VI than I do if I scale my X data to Hz and create my xi array with a spacing of 50,000. The difference is small (very small), but why is it there in the first place? I assume it comes from some kind of floating-point precision issue in the interpolation algorithm... but is there a way to identify which of the two options is "better"? (i.e. should I keep my x data in Hz and just scale to MHz for display purposes when needed, or should I keep it in MHz?) In principle there should be no difference - in both cases I'm asking the interpolation algorithm to interpolate "by the same amount" (can't think of the right terminology for going to 1/20th of the original increment in both cases).

Attached is a representative example of the issue (in LV 2016)... Thanks in advance for any thoughts or comments on this!

Paul

Interpolation with Hz and MHz demo.vi
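For readers following along without LabVIEW to hand, here is a minimal sketch of the same comparison in Python. SciPy's PchipInterpolator is assumed as a stand-in for the cubic Hermite interpolation VI (the two are not guaranteed to compute identical interpolants), and the response values are just placeholders:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(0)
f_mhz = np.arange(1.0, 21.0)                 # measured grid: 1 MHz steps, in MHz
response = rng.normal(0.0, 1.0, f_mhz.size)  # placeholder response values

n_pts = 19 * 20 + 1                          # 1..20 MHz in 50 kHz steps
xi_mhz = np.linspace(1.0, 20.0, n_pts)       # target grid built in MHz (0.05 spacing)
xi_hz = np.linspace(1.0e6, 20.0e6, n_pts)    # same grid built in Hz (50,000 spacing)

yi_mhz = PchipInterpolator(f_mhz, response)(xi_mhz)
yi_hz = PchipInterpolator(f_mhz * 1.0e6, response)(xi_hz)

# Algebraically identical, but the floating-point results differ slightly.
print(np.max(np.abs(yi_mhz - yi_hz)))        # typically of order 1e-16 to 1e-13
```

Both calls describe the same physical grid; only the scale of the X axis changes, yet the rounded intermediate quantities (divided differences, slopes) come out slightly different, which is where the tiny discrepancy appears.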
crossrulz Posted August 30, 2017

Sounds like the classic floating point accuracy issue to me.
JamesMc86 Posted August 30, 2017

I agree with crossrulz, given the errors involved (E-12 and E-13). I put a snippet in a blog post which lets you calculate the error for a particular range: https://devs.wiresmithtech.com/blog/floating-point-precision/ (Note the values it gives will be smaller than what you see, since they are only the representation errors; rounding errors in the maths will increase them.)

I expect neither is better - since the dynamic range of both is the same, no option is more or less appropriate for floating-point numbers. I would just pick your favourite, safe in the knowledge that your hardware will probably generate errors much higher than 1 pHz!

(p.s. I attached the code since I couldn't get the snippet in the post to work)

Double Precision Calculator.vi
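The same check is easy to reproduce in text form (math.ulp needs Python 3.9 or later); the printed spacings make the point concrete:

```python
import math

for value in (20.0, 20.0e6):   # e.g. 20 MHz, expressed in MHz and then in Hz
    print(f"ULP near {value:g}: {math.ulp(value):.3e}")

# Near 20.0 (MHz) the spacing between adjacent doubles is ~3.6e-15 MHz, i.e. ~3.6e-9 Hz;
# near 2.0e7 (Hz) it is ~3.7e-9 Hz - essentially the same frequency resolution,
# which is why neither scaling is inherently better.
```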
PA-Paul Posted September 1, 2017 (Author)

Thought I posted this the other day, but apparently didn't. I agree it's likely a precision/floating point thing. The issue I have is how it tracks through everything.

The interpolation is used to make the frequency interval in our frequency response data (for our receiver) match the frequency interval in the spectrum (FFT) of the time domain waveform acquired with the receiver. We do that so we can deconvolve the measured signal for the response of the device. We're writing a new, improved (and tested) version of our original algorithm and wanted to compare the outputs of each. In the new one we keep frequency in Hz, in the old one MHz. When you run the deconvolution from each version on the same waveform and frequency response you get the data below: the top graph is the deconvolved frequency response from the new code, the middle from the old code, and the bottom is the difference between the two.

It's the structure in the difference data that concerns me most - it's not huge, but it's not small, and it appears to grow with increasing frequency. It took me a while to track down the source, but it is the interpolation. If we convert our frequency (x data) to MHz in the new version of the code, the structure vanishes and the average difference between the two is orders of magnitude smaller. And it's there that I'd like to know: which is the more correct spectrum? The old or the new?

Any thoughts?

Paul
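To make the discussion concrete, here is a heavily simplified sketch of a frequency-domain deconvolution of the kind described. The function name, the eps regularisation term, and the use of SciPy's PchipInterpolator are illustrative assumptions, not the actual code behind the graphs:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def deconvolve(waveform, dt, f_resp_hz, response):
    """Divide the waveform spectrum by the device response interpolated
    onto the FFT frequency grid (frequencies kept in Hz here)."""
    spectrum = np.fft.rfft(waveform)
    f_fft = np.fft.rfftfreq(waveform.size, d=dt)             # FFT bin frequencies, Hz
    resp_i = PchipInterpolator(f_resp_hz, response)(f_fft)   # the interpolation under test
    # (PchipInterpolator extrapolates outside the measured band by default;
    #  a real implementation would window or limit the usable range.)
    eps = 1e-12 * np.max(np.abs(resp_i))                     # guard against tiny denominators
    return spectrum / (resp_i + eps)
```

Any scale-dependent wiggle in resp_i is divided into every spectrum bin and is amplified wherever the response is small, which would be consistent with difference structure that grows with frequency.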
ensegre Posted September 1, 2017

deconvolution = inverse problem = sensitivity to small denominators
difference of large, almost equal numbers = large truncation errors

Without seeing the code (maybe even seeing it), it's difficult to say which is more right. Given the kind of problem, anyway, I'd not be surprised if simply rescaling one set of data points (you say Hz or MHz - that already shifts 6 orders of magnitude) gives a better or worse result. If you've ever had the experience, even with linear regression you may run into the problem - fitting (y-<y>)/<y> against (x-<x>)/<x> may do wonders. In the code, I'd look first for places where you sum and subtract numbers of very different magnitudes, and see if a proper rescaling reduces the spread in magnitudes while keeping the computation algebraically equivalent.

On 8/30/2017 at 11:57 AM, PA-Paul said:
This may be a maths question, or a computer science question... or both.

"Numerical analysis", properly speaking.
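A tiny, self-contained illustration of that last point (it has nothing to do with the real code, it only shows the mechanism): the same sum of squared deviations computed by subtracting two huge, nearly equal numbers versus by centring the data first.

```python
import numpy as np

x = 1.0e8 + np.linspace(0.0, 1.0, 101)   # values with a large offset and a small spread

# Two algebraically equivalent ways to get the sum of squared deviations:
naive = np.sum(x * x) - x.size * np.mean(x) ** 2   # subtracts two huge, nearly equal numbers
centered = np.sum((x - np.mean(x)) ** 2)           # rescale (centre) first, then square

print(naive, centered)   # naive is far off (it can even come out negative); centred ~ 8.585
```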
PA-Paul Posted September 3, 2017 (Author)

So, does this sound like a sensible way to evaluate which option (expressing the X values for my interpolation in Hz or MHz) provides the "best" interpolation?

1. Generate two arrays containing e.g. 3 cycles of a sine wave: one data set with N samples, the other with 200xN samples.
2. Generate two arrays of X data, N samples long, one running from 0 to N and the other from 0 to N E6.
3. Generate two arrays of Xi data, 200xN samples long, again with one running from 0 to N and the other from 0 to N E6.
4. Perform the interpolation once for each X/Xi pair, using the N-sample sine wave data as the Y data in each case.
5. Calculate the average difference between each interpolated data set and the 200xN-sample sine wave.

If one option is better than the other, it should have a lower average (absolute) difference, right? I'll code it up and post it shortly, but thought I'd see if anyone thinks that's a valid approach to evaluating this!

Thanks
Paul
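For what it's worth, a rough sketch of that test, again assuming SciPy's PchipInterpolator as a stand-in for the LabVIEW VI and scaling a common 0-to-1 axis rather than generating the X ranges independently, so it approximates rather than reproduces the procedure above:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

N, factor, cycles = 100, 200, 3
t_coarse = np.linspace(0.0, 1.0, N)
t_dense = np.linspace(0.0, 1.0, factor * N)
y_coarse = np.sin(2 * np.pi * cycles * t_coarse)
y_dense = np.sin(2 * np.pi * cycles * t_dense)      # reference to compare against

for scale in (1.0, 1.0e6):                          # "MHz-like" vs "Hz-like" X axis
    x, xi = t_coarse * scale, t_dense * scale
    yi = PchipInterpolator(x, y_coarse)(xi)
    print(f"scale {scale:g}: mean |difference| = {np.mean(np.abs(yi - y_dense)):.3e}")
```

Note that both results will be dominated by the interpolation's own truncation error against the true sine, so any scale-dependent difference will only show up in the trailing digits.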