
Generate 3rd Order Polynomial from 2D Data?



How would I go about deriving a 3rd order polynomial from data like the two columns below, and do it inside LabVIEW?

I want to do this in LabVIEW directly, as opposed to doing it in Excel and then transferring the polynomial into LabVIEW.

Yes, I know that there are pairs of rows, and even one quad of rows, which are redundant. But this is how we get the calibration data from the source vendor.

20.00150000    0.00071310
20.00270000    0.00071310
27.48430000    0.00106420
27.48550000    0.00106420
34.97700000    0.00135657
34.97860000    0.00135660
42.46360000    0.00164990
42.46370000    0.00164989
49.95510000    0.00194370
49.95600000    0.00194366
50.00030000    0.00194537
50.00110000    0.00194537
67.48960000    0.00263146
67.49260000    0.00263158
84.98750000    0.00331974
85.00500000    0.00331813
102.49060000    0.00400498
102.51360000    0.00400515
120.00760000    0.00469242
120.01640000    0.00469238
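
For reference, the fit being asked for is an ordinary least-squares 3rd order polynomial, which LabVIEW's General Polynomial Fit VI computes directly. Below is a minimal sketch of the same math in Python/NumPy, shown only to illustrate what that fit does with the data above; it is not LabVIEW code.

```python
# Minimal sketch of a 3rd-order least-squares polynomial fit, shown in
# Python/NumPy purely to illustrate the math that a LabVIEW polynomial
# fit VI (or an Excel trendline) performs on this calibration data.
import numpy as np

x = np.array([20.0015, 20.0027, 27.4843, 27.4855, 34.9770, 34.9786,
              42.4636, 42.4637, 49.9551, 49.9560, 50.0003, 50.0011,
              67.4896, 67.4926, 84.9875, 85.0050, 102.4906, 102.5136,
              120.0076, 120.0164])
y = np.array([0.00071310, 0.00071310, 0.00106420, 0.00106420, 0.00135657,
              0.00135660, 0.00164990, 0.00164989, 0.00194370, 0.00194366,
              0.00194537, 0.00194537, 0.00263146, 0.00263158, 0.00331974,
              0.00331813, 0.00400498, 0.00400515, 0.00469242, 0.00469238])

coeffs = np.polyfit(x, y, 3)             # coefficients, highest order first
residuals = y - np.polyval(coeffs, x)    # how far each point sits from the fit
print(coeffs, np.abs(residuals).max())
```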
 


Looking more closely at your data snippet: since the left column has 2 data points very close together (certainly within measurement noise), you could have 2 lines/curves which may be max/min. You might get that sort of file, for example, if you want to test the linearity of an analogue output device with a 110 V supply and a 240 V supply. You may set an output, measure at 110 V, then change the supply to 240 V and take another measurement before moving on to the next analogue output level.

Anyway, the analysis method is the same; you just need to de-interleave to get the two lines/curves, and maybe subtract one from the other, before doing the poly fit.
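
A rough text-language sketch of that de-interleave-then-fit idea (Python/NumPy used here only because LabVIEW block diagrams can't be shown inline; the function name is illustrative and not a transcription of the attached VI):

```python
# Sketch: split alternating rows into two curves, compare them, and fit each
# with a 3rd-order polynomial. This mirrors the de-interleave idea described
# above; it is not taken from the attached Polyfit2.vi.
import numpy as np

def deinterleave_and_fit(x, y, order=3):
    x, y = np.asarray(x, float), np.asarray(y, float)
    x_a, y_a = x[0::2], y[0::2]      # first reading of each pair
    x_b, y_b = x[1::2], y[1::2]      # repeated reading of each pair
    fit_a = np.polyfit(x_a, y_a, order)
    fit_b = np.polyfit(x_b, y_b, order)
    spread = y_a - y_b               # difference between the two curves
    return fit_a, fit_b, spread
```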

 

Later....

Ah yes. Makes more sense now when you said "pairs of rows" and why you want a poly.

image.png

Polyfit2.vi

For some reason the decimate-array function crashes LV, so I had to use a while loop.

Edited by ShaunR
15 hours ago, ShaunR said:

Looking more closely at your data snippet: since the left column has 2 data points very close together (certainly within measurement noise), you could have 2 lines/curves which may be max/min. You might get that sort of file [...]

Later....

Ah yes. Makes more sense now when you said "pairs of rows" and why you want a poly.

The two columns are excerpted from a calibration sheet for a sonic nozzle (a device to measure mass flow of gas in relation to pressure). We have a spec that requires Excel to generate a 3rd order polynomial from said calibration sheet, for use in scaling an instrument's output. So historically, we'd take that 3rd order polynomial and employ it as the scaling factor in the instrument. I want to get away from that by putting the columnar scaling data itself into LabVIEW as an array.

Generally, I prefer to use a spline curve instead. That way I can enjoy many pairs of calibration points to define a curve. But spline curves REALLY hate those redundant points within the noise level, so I wrote a routine to average them out, wheresoever they might occur and however many there might be. I have always used spline curves in my original test stands, which we calibrate in-house from first-principles standards (dead weight, etc.).

But this is a legacy test stand which I'm converting over to LabVIEW, so for any changes I must first run the gauntlet of "Tradition" from an eon before. And in any case, we lack any first-principles standards for mass flow of gas, so these sonic nozzles get sent out for that, and I must deal with their data format as supplied.

Attached are my two VIs (one a sub-VI of the other) for auto-averaging any series of N redundant rows in a messy table of data like that supplied at top of topic.
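
For readers without LabVIEW handy, here is a minimal sketch of the same auto-averaging idea (Python; the tolerance parameter is an assumption, and the attached VIs remain the actual implementation):

```python
# Sketch: collapse each run of consecutive rows whose x values agree within a
# tolerance into one averaged row, so a spline fit never sees near-duplicate
# knots. The tolerance value here is assumed, not taken from the attached VIs.
import numpy as np

def average_redundant_rows(x, y, tol=0.05):
    x, y = np.asarray(x, float), np.asarray(y, float)
    out_x, out_y = [], []
    run_x, run_y = [x[0]], [y[0]]
    for xi, yi in zip(x[1:], y[1:]):
        if abs(xi - run_x[-1]) <= tol:    # still inside the same redundant group
            run_x.append(xi); run_y.append(yi)
        else:                             # group ended: emit its average
            out_x.append(np.mean(run_x)); out_y.append(np.mean(run_y))
            run_x, run_y = [xi], [yi]
    out_x.append(np.mean(run_x)); out_y.append(np.mean(run_y))
    return np.array(out_x), np.array(out_y)
```

The averaged points can then feed whatever spline routine is in use (LabVIEW's spline interpolation VIs, or something like scipy.interpolate.CubicSpline in Python) without the kinks that near-duplicate knots cause.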

 

Kink-Free_Bezier_Calib.vi

Kink_Free.png

Array_Avg_Last_N.vi

Array_Avg.png

 


21 hours ago, Gan Uesli Starling said:

redundant points within the noise level

Are you sure they are redundant?

Looks to me like you have two lines - high pressure and low pressure. It's highly unusual to have a cal cert with "redundant" information in it, and it would explain why a polynomial fit would be used instead of a linear fit, which would also get rid of your "glitches". I would be contacting the manufacturer for clarification at this point.

But you know your devices, so maybe I'm wrong. You have two examples of a poly fit, so I guess the first VI I posted is where you want to start; incorporate it into all your data fiddling.

Edited by ShaunR
On 8/13/2021 at 9:21 PM, ShaunR said:

Looks to me like you have two lines - high pressure and low pressure. It's highly unusual to have a cal cert with "redundant" information in it, and it would explain why a polynomial fit would be used instead of a linear fit, which would also get rid of your "glitches". I would be contacting the manufacturer for clarification at this point.

Your point is cogent. And I can now see why you are persuaded: there being doubles up to halfway, and doubles again for the half above that, with the quad group at center belonging half to the lower and half to the upper. However... I believe there was a typo in the two columns I presented at the top of the thread. Seeing a glitch, I found and corrected that one number, after which the anomaly disappeared.

The item is a critical flow venturi (aka "sonic nozzle"). The calibration sheet is a single table of twenty points, with columns for pressure, temperature, Cd, Reynolds No, LBMS, and C*. This comes together with a table of uncertainties and a single graph plotting a 4th order polynomial curve of Reynolds No versus Cd, with overlapping near-duplicate points. Nowhere in the calibration sheet is there a separation into two flows, low and high. Nor is there any separation of ranges in the history of cross-calibrations done in-house from said sheets prior to now. Plus, the apparent two lines, end of low and start of high, meet very cleanly.

So I'm pretty sure that the table is as I assume. Nevertheless, I shall inquire, just to make absolutely certain.

 

Sonic_Nozzle_Bezier_and_Polynomial_Calibration_Example.PNG

Edited by Gan Uesli Starling
Addition of info.

All is now clear. I telephoned an engineer at the calibrating service.

These calibrations are performed on an automatic machine that employs two sources of pressure, one low and one high, but with the same recording instrumentation. Thus, on switch-over from the low source to the high source, they repeat the same point. Hence the double repetitions at each point on each source of pressure, with the same measuring instruments taking the data throughout.

Edited by Gan Uesli Starling
Fixed punctuation.
2 hours ago, Gan Uesli Starling said:

These calibrations are performed on an automatic machine that employs two sources of pressure, one low and one high.

On 8/14/2021 at 2:21 AM, ShaunR said:

Looks to me like you have two lines - high pressure and low pressure.


Two lines or one, it is no different than if I were using a dead weight calibrator and adding on a different set of weights. The recording instruments are unchanged. There has only just been a pause.

I have myself done this very thing, but with fluid pressure as opposed to dry gas mass flow. I have employed a pneumatic-lift dead weight source to collect data points at the low end of a sensor's pressure range. Those points collected, I would then switch to a hydraulic-lift dead weight source for pressures at the high end. When doing so, I'd start out below the high end of the prior data, as proof to myself that there is no bubble trapped in the hydraulic line and, once sure of that, to make certain that the second set of dead weights sits at the same column height.

So it is both: in one sense, two lines; in another sense, a single line having two segments. The same as if I were to draw a line on paper, lift my pencil for a moment, and afterwards continue on from where I'd left off.

That is how the results must be treated by the instrument under cross-calibration. It can't call two functions, one after the other. The published requirements for this test stand historically call for a 3rd order polynomial as computed in Excel from data like the above, and the fitted line must plot within a given (very small) percentage of the furthest outlying datum.
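
Assuming that acceptance check means every calibration point must agree with the fitted line to within the allowed percentage, a minimal sketch of the verification step might look like this (Python/NumPy, names illustrative):

```python
# Sketch: fit the 3rd-order polynomial and report the worst-case deviation of
# the calibration data from the fitted line as a percentage, for comparison
# against the spec's allowed tolerance. The percentage definition here is an
# assumed reading of the spec, not quoted from it.
import numpy as np

def worst_case_deviation_pct(x, y, order=3):
    coeffs = np.polyfit(x, y, order)
    fitted = np.polyval(coeffs, x)
    pct_err = 100.0 * np.abs(fitted - y) / np.abs(y)  # percent error at each point
    return coeffs, pct_err.max()
```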

I will be proposing an update to this (very ancient) spec to allow a spline curve as an alternative to the 3rd order polynomial ... this with some constraint upon linearity.

I thank you (ShaunR) most kindly indeed for your very timely assistance.

 

6 hours ago, Gan Uesli Starling said:

So it is both: in one sense, two lines; in another sense a single line having two segments.

We agree to disagree.

Both VIs I supplied answer the question. The first VI you can apply to your existing calibration fiddling, whilst the second VI doesn't require any calibration fiddling. It's up to you.

 

