The test executive I've created calls test modules by reference. First, each test module sets up benchtop, custom, and PCI instrumentation. Next, a series of sub-tests is performed. Finally, the 'Save Data' and then 'Post Run' VIs are called.
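Roughly, in Python pseudocode (method names invented; the real thing is all G, with the modules called by VI reference):

```python
# Hypothetical analogue of the module contract the executive expects.
# The real modules are LabVIEW VIs called by reference; these method
# names are made up for illustration.
class TestModule:
    def setup_instruments(self):      # benchtop, custom, PCI setup
        ...
    def run_subtests(self, results):  # the series of sub-tests
        ...
    def save_data(self, results):     # the module's 'Save Data'
        ...
    def post_run(self):               # the module's 'Post Run'
        ...

def executive(modules):
    """The executive: call each module's phases in order."""
    for m in modules:
        m.setup_instruments()
        results = []
        m.run_subtests(results)
        m.save_data(results)
        m.post_run()
```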
First version:
Each sub-test measures interesting things, then updates a test-module-specific global (which has clusters of doubles, longs, 1D and 2D arrays of doubles, and other types). The module-specific 'Save Data' VI pulls the data out of the global and stores it in flat files or a database, depending on switches.
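In Python terms, the first version looks roughly like this (field names invented, but the shape matches):

```python
from dataclasses import dataclass, field

# Stand-in for the module-specific global: doubles, longs,
# and 1D/2D arrays of doubles (field names are hypothetical).
@dataclass
class ModuleResults:
    gain_error: float = 0.0
    settle_count: int = 0
    gain_curve: list = field(default_factory=list)    # 1D array of doubles
    sweep_matrix: list = field(default_factory=list)  # 2D array of doubles

RESULTS = ModuleResults()  # the global each sub-test updates

def save_data(use_database: bool) -> None:
    """Module-specific save: hard-wired to this global's layout."""
    if use_database:
        ...  # INSERT each field into its own column/table
    else:
        with open("module_results.txt", "w") as f:
            f.write(f"gain_error\t{RESULTS.gain_error}\n")
            f.write(f"settle_count\t{RESULTS.settle_count}\n")
```

The shortcoming shows up right away: every module needs its own save routine that knows the global's exact layout.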
Second version:
Each sub-test measures interesting things and shows each result in an indicator (same variety of data types as the first version). Each indicator's value is converted to a variant. Attributes are added to indicate the data level (debug or production) and what to do with the data (1: create a graph showing, for example, gain setting vs. gain error; 2: just log the data). These variants are concatenated into a 1D array, which is then appended to the 1D variant array passed into the sub-test.
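In Python terms (attribute names invented; a dict stands in for a variant with attributes):

```python
from typing import Any

def tag(value: Any, name: str, level: str, action: str) -> dict:
    """A dict standing in for a LabVIEW variant with attributes.
    level: 'debug' or 'production'; action: 'graph' or 'log'."""
    return {"name": name, "value": value, "level": level, "action": action}

def gain_subtest(results: list) -> None:
    """A sub-test appends its tagged values onto the array passed in."""
    gain_error = 0.012  # placeholder measurement
    gain_vs_setting = [[1, 0.010], [2, 0.012], [4, 0.015]]
    results += [
        tag(gain_error, "gain error", "production", "log"),
        tag(gain_vs_setting, "gain setting vs gain error", "production", "graph"),
    ]
```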
Now a generic 'Save Data' VI iterates through the variant array, doing what the attributes call for. OpenG's get variant names is elegant for creating flat files or for filling a database (I saw Jim's SQL insert example).
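Something like this loop, in Python terms (sqlite and the table layout are stand-ins for our actual database; the graphing path is just a print here):

```python
import json
import sqlite3

def save_data(results: list, use_database: bool) -> None:
    """Generic save: one loop, no per-module knowledge; the element's
    name doubles as the file key or database column."""
    db = sqlite3.connect("results.db") if use_database else None
    if db:
        db.execute("CREATE TABLE IF NOT EXISTS results (name TEXT, value TEXT)")
    for item in results:
        if item["action"] == "graph":
            print("graph:", item["name"])  # stand-in for the graphing path
            continue
        if db:
            db.execute("INSERT INTO results (name, value) VALUES (?, ?)",
                       (item["name"], json.dumps(item["value"])))
        else:
            with open("results.txt", "a") as f:
                f.write(f"{item['name']}\t{item['value']}\n")
    if db:
        db.commit()
        db.close()
```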
Now the question. The second version was brought about by obvious shortcomings of the first. Does anyone care to comment on pitfalls that would force a third version? (I've left off things such as throwing data into a queue for graphing and for offline processing like histograms.)
Also, we're up to about twenty test modules (our IC is an on-the-fly-reconfigurable mixed-signal part). We end up with lots of data that needs to be presented against data-sheet limits, sigmas, etc. What is the best way to dynamically generate graphs? The Excel Report VIs fall short because some graphs have 13k points.
--todd