
"Propagating Calibration Changes" or "Difference Based Configurations"


dterry

Recommended Posts

Hello again LAVAG,

I'm currently feeling the pain of propagating changes to multiple, slightly different configuration files, and am searching for a way to make things a bit more palatable.

To give some background, my application is configuration driven in that it exists to control a machine which has many subsystems, each of which can be configured in different ways to produce different results.  Some of these subsystems include: DAQ, Actuator Control, Safety Limit Monitoring, CAN communication, and Calculation/Calibration.  The current configuration scheme is that I have one main configuration file, and several sub-system configuration files.  The main file is essentially an array of classes flattened to binary, while the sub-system files are human readable (INI) files that can be loaded/saved from the main file editor UI.  It is important to note that this scheme is not dynamic; or to put it another way, the main file does not update automatically from the sub-files, so any changes to sub-files must be manually reloaded in the main file editor UI.

The problem comes from the fact that we periodically update calibration values in one sub-config file, and we maintain safety limits for each DUT (device under test) in another sub-file.  This means that we have many configurations, all of which must be updated when a calibration changes.

I am currently brainstorming ways to ease this burden, while making sure that the latest calibration values get propagated to each configuration, and was hoping that someone on LAVAG had experience with this type of calibration management.  My current idea has several steps:

  1. Rework the main configuration file to be human readable.
  2. Store file paths to sub-files in the main file instead of storing the sub-file data.  Load the sub-file data when the main file is loaded.
  3. Develop a set of default sub-files which contain basic configurations and calibration data.  
  4. Set up the main file loading routine to pull from the default sub-files unless a unique sub-file is specified.
  5. Store only the parameters that differ from the default values in the unique sub-file. Load the default values first, then overwrite only the unique values.  This would work similarly to the way that LabVIEW.ini works: if you do not specify a key, LabVIEW uses its internal default.  This has two advantages:
    • Allows calibration and other base configuration changes to easily propagate through to other configs.
    • Allows the user to quickly identify configuration differences.

Steps 4 and 5 are really the crux of making life easier, since they allow global changes to all configurations.  One thing to note here is that these configurations are stored in an SVN repository to allow versioning and recovery if something breaks.
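A minimal sketch of the default-plus-differences idea in steps 4 and 5, using Python's configparser (the section and key names here are invented for illustration): later reads overlay earlier ones, so the unique file only needs to hold the values that differ.

```python
from configparser import ConfigParser

# Hypothetical contents standing in for a default sub-file and a unique sub-file.
default_ini = """
[Load Cell]
Gain = 2.5
Offset = 0.0
"""
unique_ini = """
[Load Cell]
Offset = 0.12
"""

cfg = ConfigParser()
cfg.read_string(default_ini)  # load the shared defaults first...
cfg.read_string(unique_ini)   # ...then overlay only the per-config differences

print(cfg["Load Cell"]["Gain"], cfg["Load Cell"]["Offset"])  # -> 2.5 0.12
```

A calibration change edited into the default file then propagates to every configuration that doesn't explicitly override it.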

So my questions to LAVAG are:

  • Has anyone ever encountered a need to propagate configuration changes like this?  
  • How did you handle it?  
  • Does the proposal above seem feasible?  
  • What gotchas have I missed that will make my life miserable in the future?

Thanks in advance everyone!

Drew

Link to comment

You really need to migrate to a database so that you can maintain different configurations and calibration data just by viewing the database information in certain ways (usually a single SQL query). The benefits far outweigh losing the ability to use a text editor, and you can easily manage multiple configurations and calibrations on the fly.

Link to comment
2 hours ago, dterry said:

I'm certainly open to that as an option.  I really wouldn't know where to start, do you have any recommendations or resources to look into?

Do you have any INI files you can post? We can do an import to a DB and I'll knock up a quick example to get you started. You can then figure out a better schema to match your use case once you are more familiar with DBs.

Link to comment

I would say the easiest way to think about sql databases is as an excel workbook. If you can represent it in a workbook you are very likely to be able to represent it, similarly, in a db. This is a nice intro along the same theme: http://schoolofdata.org/2013/11/07/sql-databases-vs-excel/
If you're looking to try things out, you'll likely go along one of three free routes:
postgres/mysql: server-oriented databases, everything is stored on a central server (or network of servers) and you ask it questions as the single source of truth.
sqlite: file database, still uses sql but the engine is implemented as a library that runs on your local machine. To share the db across computers you must share the file.
I can't say this with certainty, but if you need to use a vxworks cRIO and don't like suffering, mysql is the only option.

One recommendation I have is to use a schema that allows for history. A nice writeup of the concept is here: https://martinfowler.com/eaaDev/timeNarrative.html

The implementation should be pretty simple...basically add a "valid from" and "valid to" field to any table that needs history. If "valid to" is empty, the row is still valid. You can use triggers to create the timestamps so you never have to even think about it. You can also skip a step if you always have a valid field, as then you just have a "valid from" field, and select the newest row in your query. An alternative but more complex implementation would be to have a table which only represents the current state, and use a trigger to copy the previous value into an append-only log. The first option is more valid if you regularly need to ask "what was the value last week" or specify "the new value will be Y starting on monday" while the log is more appropriate just for tracking -- something got fat fingered and you want to see the previous value.
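As a sketch of that first option in SQLite (table and column names invented for illustration): a trigger stamps "valid to" on the previously current row whenever a new value is inserted, so the application only ever inserts.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE calibration (
    channel    TEXT,
    gain       REAL,
    valid_from TEXT DEFAULT CURRENT_TIMESTAMP,
    valid_to   TEXT                -- NULL means "still the current value"
);
-- Close out the previously-current row whenever a new value arrives.
CREATE TRIGGER close_previous AFTER INSERT ON calibration
BEGIN
    UPDATE calibration
       SET valid_to = CURRENT_TIMESTAMP
     WHERE valid_to IS NULL
       AND channel = NEW.channel
       AND rowid <> NEW.rowid;
END;
""")
con.execute("INSERT INTO calibration (channel, gain) VALUES ('LoadCell', 2.5)")
con.execute("INSERT INTO calibration (channel, gain) VALUES ('LoadCell', 2.7)")
current = con.execute(
    "SELECT gain FROM calibration "
    "WHERE channel = 'LoadCell' AND valid_to IS NULL").fetchone()
print(current)  # (2.7,)
```

The old row keeps its gain of 2.5 with a closed validity window, so history is never lost.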

Edited by smithd
Link to comment

Either way, if you are using SQL or files for configuration there is usually some sort of trigger chain.  By this I mean that there is an order in which the files (or SQL tables) must be re-loaded if any file in the chain is changed.  Thus if your config dependency is sort of hierarchical as you suggest, then my usual approach to this problem is to brute force all of the loading.  For example, if you have files (or tables) A->B->C->D then no matter where in the chain a file is changed, the whole chain is refreshed.  So for example if file C is changed and it has no impact on A, I don't really care; I just re-load everything (A, B, C and D).

An advantage with databases is that these triggers can be attributed to fields within tables.  This can often require some careful thinking and is worth doing with larger databases.  However if it takes a minimal amount of time to reload everything then at least you know all of your tables / config data has been refreshed and you're not using old fields.

On the other hand, if you are constantly changing values within your files (which I don't think you will be for configuration) then you may well want a finer amount of control as to which properties / fields you want to refresh within your application.  If this is the case, I would argue that these changes should not really be held in configuration files, and a database or memory structure which is written periodically back to file should be used instead.

Edited by CraigC
Link to comment
5 hours ago, smithd said:

I can't say this with certainty, but if you need to use a vxworks cRIO and don't like suffering, mysql is the only option.

From the SQLite API for LabVIEW readme:

Quote

Applicable NI VxWorks Platforms:
cFP-22xx
cRIO-901x
cRIO-902x
cRIO-907x
sbRIO-96xx

 

1 hour ago, CraigC said:

By this I mean that there is an order in which the files (or SQL tables) must be re-loaded if any file in the chain is changed.

Why would you need to reload DB tables? (And from where?)

Link to comment

First off, THANK YOU all for contributing to the discussion!  This is very helpful!  Please see my responses and thoughts below.

13 hours ago, ShaunR said:

Do you have any INI files you can post? We can do an import to a DB and I'll knock up a quick example to get you started. You can then figure out a better schema to match your use case once you are more familiar with DBs.

I attached a truncated INI file below (file extension is .calc, but it's a text file in INI format).  I'm interested to see what you mean by import to DB.  I have some ideas for a basic schema which I'll outline below.

 

11 hours ago, smithd said:

I would say the easiest way to think about sql databases is as an excel workbook. If you can represent it in a workbook you are very likely to be able to represent it, similarly, in a db. This is a nice intro along the same theme: http://schoolofdata.org/2013/11/07/sql-databases-vs-excel/
If you're looking to try things out, you'll likely go along one of three free routes:
postgres/mysql: server-oriented databases, everything is stored on a central server (or network of servers) and you ask it questions as the single source of truth.
sqlite: file database, still uses sql but the engine is implemented as a library that runs on your local machine. To share the db across computers you must share the file.
I can't say this with certainty, but if you need to use a vxworks cRIO and don't like suffering, mysql is the only option.

One recommendation I have is to use a schema that allows for history. A nice writeup of the concept is here: https://martinfowler.com/eaaDev/timeNarrative.html

The implementation should be pretty simple...basically add a "valid from" and "valid to" field to any table that needs history. If "valid to" is empty, the row is still valid. You can use triggers to create the timestamps so you never have to even think about it. You can also skip a step if you always have a valid field, as then you just have a "valid from" field, and select the newest row in your query. An alternative but more complex implementation would be to have a table which only represents the current state, and use a trigger to copy the previous value into an append-only log. The first option is more valid if you regularly need to ask "what was the value last week" or specify "the new value will be Y starting on monday" while the log is more appropriate just for tracking -- something got fat fingered and you want to see the previous value.

I'm pretty familiar with SQL databases, so I have that going for me!  Using them for configuration/calibration data will be a new one though.  I'll probably opt for mysql since I have the server already.  Luckily we use a Linux based cRIO (and the config is currently loaded on a PC before deployment, though this could change in the future).

I like the history based schema!  Great idea!  Do you have any examples of queries you have used to pull current/previous data?

 

7 hours ago, CraigC said:

Either way, if you are using SQL or files for configuration there is usually some sort of trigger chain.  By this I mean that there is an order in which the files (or SQL tables) must be re-loaded if any file in the chain is changed.  Thus if your config dependency is sort of hierarchical as you suggest, then my usual approach to this problem is to brute force all of the loading.  For example, if you have files (or tables) A->B->C->D then no matter where in the chain a file is changed, the whole chain is refreshed.  So for example if file C is changed and it has no impact on A, I don't really care; I just re-load everything (A, B, C and D).

An advantage with databases is that these triggers can be attributed to fields within tables.  This can often require some careful thinking and is worth doing with larger databases.  However if it takes a minimal amount of time to reload everything then at least you know all of your tables / config data has been refreshed and you're not using old fields.

On the other hand, if you are constantly changing values within your files (which I don't think you will be for configuration) then you may well want a finer amount of control as to which properties / fields you want to refresh within your application.  If this is the case, I would argue that these changes should not really be held in configuration files, and a database or memory structure which is written periodically back to file should be used instead.

Great things to keep in mind!  My current operation is that a config file is loaded on the PC and deployed to the target on startup (target runs a simple EXE which receives and loads the config at runtime).  I'm thinking this means I can just rely on pulling the whole configuration at launch.  I'm lucky in that the configs take very little time to read in, so I don't think that will be an issue.  Also, the values change often now, as we are still in buildout/validation, but should slow down in a month or two.

-------------------------------------------------------------------------------------------------------

Overall, my first guess at a database structure would be something like this:

jDjTBHE.png

But this presents the question of how to handle different sets of parameters.  Perhaps you just have n Parameter columns and interpret the data once loaded? If you ever need more parameters, add another column?  Or would you consider something like below?  To me the schema below seems hard to maintain, since you have a different table each time you need a different type of calc.  Is there such a thing as OO Database Design?  Haha

9Iq7fjp.png

Other questions that come to mind:

  • To pull these data, you would still need a config file which says "I need Channel ID X", correct?
  • Some of my configs are fairly lengthy.  Do you have a typical limit on the number of columns in your tables?

 

Drew

 

Example.calc

Link to comment
55 minutes ago, dterry said:

First off, THANK YOU all for contributing to the discussion!  This is very helpful!  Please see my responses and thoughts below.

I attached a truncated INI file below (file extension is .calc, but it's a text file in INI format).  I'm interested to see what you mean by import to DB.  I have some ideas for a basic schema which I'll outline below.

Example.calc

You have more? One isn't really enough to demonstrate how multiple devices/configs can work.

What I mean by "Import" is that the SQLite API that I use has an Import from INI File function.

Link to comment
7 hours ago, dterry said:

Ah good point, I just stripped one down to show several types of Calculations.  Are you asking for more than one file, or more channels of each type?

Depends how much of a match the example will be to your real system. I could just copy and paste the same INI file and pretend that they are different devices, but it wouldn't be much of an example as opposed to, say, a DVM and a spectrum analyser and a power supply - you'd be letting me off lightly :D.

Link to comment
9 hours ago, dterry said:

I like the history based schema!  Great idea!  Do you have any examples of queries you have used to pull current/previous data?

What we've got on my current project is nothing too special; it's just that if your original table has columns (a, b, c), you add (t0, tF) to the end. t0 would default to current_timestamp. A trigger would be run on every insert that says
update table set tF=current_timestamp where tF is null and a=NEW.a and b=NEW.b and c=NEW.c.
Another trigger would run if you want to update a row, which replaces the update with an insert.
Another trigger would replace a delete with a call to update tF on a row.

Then your query would be either:

  • (for most recent) select a,b,c from table where (filter a,b,c) and tF is null
  • (for selected time) select a,b,c from table where (filter a,b,c) and (tF is null or tF > selectedTime) and t0 <= selectedTime
    • for both, you can add "order by t0 desc limit 1", but from my recent experience this leads to sloppy table maintenance -- for example, we have tons of configurations with tF null, but we just pick the one with the most recent t0. It works, but it's annoying to read and make queries for, and it makes me feel dirty.

I may have some details wrong but thats the gist of it.

I couldn't find a nice article with a lot of actual SQL in it, but it looks like 'temporal database' might be the right phrase to hunt for.
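To make the two queries above concrete, here is a small SQLite run with invented rows (columns a, b, c standing in for section, key, value). Note that rows whose tF is still open need an explicit IS NULL branch in the point-in-time query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cfg (a TEXT, b TEXT, c TEXT, t0 TEXT, tF TEXT)")
con.executemany("INSERT INTO cfg VALUES (?,?,?,?,?)", [
    ("daq", "gain", "2.5", "2017-01-01", "2017-02-01"),  # superseded row
    ("daq", "gain", "2.7", "2017-02-01", None),          # current row
])

# Most recent value for a given section/key:
latest = con.execute("SELECT c FROM cfg WHERE a='daq' AND b='gain' "
                     "AND tF IS NULL").fetchone()[0]

# Value as it stood at a selected time (IS NULL covers still-open rows):
asof = con.execute("SELECT c FROM cfg WHERE a='daq' AND b='gain' "
                   "AND (tF IS NULL OR tF > '2017-01-15') "
                   "AND t0 <= '2017-01-15'").fetchone()[0]

print(latest, asof)  # 2.7 2.5
```

ISO-formatted timestamp strings compare correctly as text, which is why the plain `>` and `<=` comparisons work here.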

Link to comment
On 1/27/2017 at 8:21 PM, ShaunR said:

Depends how much of a match the example will be to your real system. I could just copy and paste the same INI file and pretend that they are different devices, but it wouldn't be much of an example as opposed to, say, a DVM and a spectrum analyser and a power supply - you'd be letting me off lightly :D.

Alright, let's try this.  I included two DAQ config files, another CALC file with different calcs/values, a CRACT (actuator) file, a CRSTEER (steer profile), and a limits file (CRSAFE).  That enough variety? :lol:

 

HS AI.daq

Barcode.daq

Generic.crsteer

Limits.crsafe

Sled.calc

Sled.cract

Link to comment
On 1/27/2017 at 8:58 PM, smithd said:

What we've got on my current project is nothing too special; it's just that if your original table has columns (a, b, c), you add (t0, tF) to the end. t0 would default to current_timestamp. A trigger would be run on every insert that says
update table set tF=current_timestamp where tF is null and a=NEW.a and b=NEW.b and c=NEW.c.
Another trigger would run if you want to update a row, which replaces the update with an insert.
Another trigger would replace a delete with a call to update tF on a row.

Then your query would be either:

  • (for most recent) select a,b,c from table where (filter a,b,c) and tF is null
  • (for selected time) select a,b,c from table where (filter a,b,c) and (tF is null or tF > selectedTime) and t0 <= selectedTime
    • for both, you can add "order by t0 desc limit 1", but from my recent experience this leads to sloppy table maintenance -- for example, we have tons of configurations with tF null, but we just pick the one with the most recent t0. It works, but it's annoying to read and make queries for, and it makes me feel dirty.

Ah good point on the triggers.  Basically, prevent a user from ever doing an update or delete, and then add an action to update the old record's tF.

One question regarding the INSERT trigger.  Your where statement looks for an identical record with a null tF.  Would that just be a record with the same identifier instead?  Like you said, the gist is there, just curious about that line.

Thanks!

Link to comment
10 hours ago, dterry said:

Your where statement looks for an identical record with a null tF.  Would that just be a record with the same identifier instead?

Ah yes, that's right. What we do is more or less a migration from an INI file, so a, b, c are section, key, and value, with section.key being the unique identifier.
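That INI-to-rows migration can be sketched in a few lines (file contents and names are hypothetical). One wrinkle worth knowing: configparser lower-cases option names by default, so "Gain" is stored as "gain".

```python
import sqlite3
from configparser import ConfigParser

# Hypothetical stand-in for one of the .calc files.
ini_text = """
[Load Cell]
Gain = 2.5
Offset = 0.12
"""

cfg = ConfigParser()
cfg.read_string(ini_text)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE config ("
            "section TEXT, key TEXT, value TEXT, "
            "PRIMARY KEY (section, key))")
# Flatten every section/key pair into one row each.
rows = [(s, k, v) for s in cfg.sections() for k, v in cfg[s].items()]
con.executemany("INSERT INTO config VALUES (?,?,?)", rows)

gain = con.execute("SELECT value FROM config "
                   "WHERE section='Load Cell' AND key='gain'").fetchone()[0]
print(gain)  # 2.5
```

The composite primary key (section, key) is what enforces the section.key uniqueness mentioned above.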

Edited by smithd
Link to comment
23 hours ago, dterry said:

Alright, let's try this.  I included two DAQ config files, another CALC file with different calcs/values, a CRACT (actuator) file, a CRSTEER (steer profile), and a limits file (CRSAFE).  That enough variety? :lol:

 

HS AI.daq

Barcode.daq

Generic.crsteer

Limits.crsafe

Sled.calc

Sled.cract

Well, they are all config files, so I'll have to make up some tests and test limits.:D

I think I might add it to the examples in the API-without your data of course.

I'll have a play tomorrow.

Edited by ShaunR
Link to comment
1 hour ago, ShaunR said:

Well, they are all config files, so I'll have to make up some tests and test limits.:D

I think I might add it to the examples in the API-without your data of course.

I'll have a play tomorrow.

Sorry ShaunR, still not really understanding what you are asking for.  Here are a few test files.

Is there a place I can get a trial of the API you keep referencing?

Slip Sweep x 2.crtest

Ky Test.crtest

Data Sample 1650.crtest

Link to comment

Overall, my first guess at a database structure would be something like this:

jDjTBHE.png

But this presents the question of how to handle different sets of parameters.  Perhaps you just have n Parameter columns and interpret the data once loaded? If you ever need more parameters, add another column?  Or would you consider something like below?  To me the schema below seems hard to maintain, since you have a different table each time you need a different type of calc.  Is there such a thing as OO Database Design?  Haha

9Iq7fjp.png

Other questions that come to mind:

  • To pull these data, you would still need a config file which says "I need Channel ID X", correct?
  • Some of my configs are fairly lengthy.  Do you have a typical limit on the number of columns in your tables?

Still scratching my head on these; does anyone have any feedback?

From my own research, storing classes is kind of a cluster.  It looks like the two main options could be:

  • Some combination of "Table for each class", "Table for each concrete implementation", or "One table with all possible child columns".  This feels super un-extensible.
  • One Table for Abstract or Parent Class, with another table for parameters in TEXT or BLOB format.

nnkBkmq.png OR JnNeedH.png

Anybody have any experience or warnings about doing this?
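For what it's worth, the second option (a parent table plus a generic parameter table, often called an entity-attribute-value layout) might look like this in SQLite. All table and column names here are invented to mirror the diagrams above, not taken from any real schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- One row per calculation, whatever its concrete type...
CREATE TABLE calc (
    calc_id   INTEGER PRIMARY KEY,
    name      TEXT,
    calc_type TEXT              -- e.g. 'Polynomial', 'Thermocouple'
);
-- ...and one row per parameter, so each type can carry a different set.
CREATE TABLE calc_param (
    calc_id INTEGER REFERENCES calc(calc_id),
    name    TEXT,
    value   TEXT,
    PRIMARY KEY (calc_id, name)
);
""")
con.execute("INSERT INTO calc VALUES (1, 'Load Cell', 'Polynomial')")
con.executemany("INSERT INTO calc_param VALUES (?,?,?)",
                [(1, 'a0', '0.0'), (1, 'a1', '2.5')])

# Reassemble one calc's parameter set as a dictionary for the application side.
params = dict(con.execute(
    "SELECT name, value FROM calc_param WHERE calc_id = 1"))
print(params)
```

The trade-off is exactly the one noted above: the database no longer enforces which parameter names or value types a given calc_type requires, so that validation moves into the application.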

Link to comment
2 hours ago, dterry said:

Sorry ShaunR, still not really understanding what you are asking for.  Here are a few test files.

Is there a place I can get a trial of the API you keep referencing?

Slip Sweep x 2.crtest

Ky Test.crtest

Data Sample 1650.crtest

You can download the API here. (You will need to wait for the example)

It's not that important to have real files - it's just useful context for you. But the power of using a DB is that you can have multiple configurations of devices and tests, so what I have so far is this:


This is just showing some device info, which is the beginnings of asset management.

1.png

This is showing you a filtered list (CVT Tags showing)  of the configuration parameters (your old ini files)  for Test1 that you can edit.

2.png

....and the Test1 limit list

3.png

You can go on to add UUTs, which is just a variation on the devices from a programmer's point of view, but I don't think I'll go that far, for now.

The hard part is getting the LabVIEW UI to function properly...lol.

Edited by ShaunR
Link to comment
5 minutes ago, dterry said:

I like the UI for sure.  Still not sure on the schema (as referenced in the last post) but I'd like to see it in person to understand better.  Trying to check it out, but the installer is throwing an error.

2017-01-31_1239.png

You will design your own schema; it's an example.

Email support@lvs-tools.co.uk and we'll get your error looked at. At a glance, it looks like it can't extract the files to the vi.lib directory.

Link to comment
  • 2 weeks later...

Sorry it's a bit later than expected - I was called out of country for a week or so.

Here it is, though. You'll need the SQlite API for LabVIEW but once you have that you should be good to go.

 

There's still a couple of bits and pieces before it's production ready and I'm still "umming" and "ahing" about some things, but most of it's there.

 

 

Edited by ShaunR
Link to comment
On 12/02/2017 at 10:58 AM, ShaunR said:

Sorry it's a bit later than expected - I was called out of country for a week or so.

Here it is, though. You'll need the SQlite API for LabVIEW but once you have that you should be good to go.

 

 

There's still a couple of bits and pieces before it's production ready and I'm still "umming" and "ahing" about some things, but most of it's there.

Oops.  A bit embarrassing.:wacko: Seems LabVIEW linking insists that VIs are in vi.lib instead of user.lib when opening the project, and causes linking hell! :frusty:

Attached is a new version that fixes that. However. If it asks for  "picktime.vi", it is a part of the LV distribution and located in "<labview version>\resource\dialog" which isn't on the LabVIEW search path by default, it seems.

I'll delete the other file in the previous post to avoid confusion.

 

Test Manager0101.zip

Edited by ShaunR
  • Like 1
Link to comment

ShaunR,

It seems like the database you sent is encrypted.  Is there a password?  Right now, I get errors because the file path ("TM.3db-> 12345" + AppDir)  seems to resolve to <Not a Path>.  I replaced it with a hardcoded path, and got the "Enter Password" dialog.  Taking a SWAG, I entered "12345", but it threw error 26 [file is encrypted or is not a database].  I found that the password dialog was being bypassed (see below), but it worked fine once I rewired it.

2017-02-15_1129.png

 

From what I can tell, the schema you put together looks somewhat like my E-R diagram above.  It helped to be able to see it actually implemented, and I think I may end up going this route and dealing with the consequences (enforcing names/types/values, complex queries, application-side logic, etc.).

On 1/31/2017 at 9:56 AM, dterry said:
  • One Table for Abstract or Parent Class, with another table for parameters in TEXT or BLOB format.

nnkBkmq.png OR JnNeedH.png

 

Thanks a ton for your help with this!  It has been very enlightening and helpful in narrowing my focus for configuration management!

 

Link to comment
