todd Posted May 31, 2003

The test executive I've created calls test modules by reference. First, each test module sets up benchtop, custom and PCI instrumentation. Next, a series of sub-tests are performed. Finally, 'Save Data' then 'Post Run' vi's are called.

First version: Each sub-test measures interesting things then updates a test-module-specific global (which has clusters of: doubles, longs, 1D and 2D arrays of doubles, and other types). The module-specific 'Save Data' vi pulls data out of the global then stores it in flat files or a database, depending on switches.

Second version: Each sub-test measures interesting things and shows each in an indicator (same variety of data types as the first version). Each indicator's value is converted to a variant. Attributes are added to indicate the data level (debug, production) and what to do with the data (1> create a graph showing, for example, gain setting vs gain error; 2> just log the data), then the variants are concatenated into a 1D array. This array is concatenated onto the 1D variant array passed into the sub-test. Now a generic 'Save Data' vi iterates through the variant array doing what the attributes call for. OpenG's Get Variant Names is elegant for creating flat files or for filling a database (saw Jim's SQL insert example).

Now the question. The second version was brought about by obvious shortcomings of the first. Does anyone care to comment on pitfalls that would cause a third version? (I've left off things such as throwing data into a queue for graphing and for offline processing like histograms.)

Also, we're up to about twenty test modules (our IC is an on-the-fly reconfigurable mixed-signal part). We end up with lots of data that needs to be presented against data sheet limits, sigmas, etc. What is the best way to dynamically generate graphs? The Excel Report vi's fall short because some graphs have 13k points.

--todd
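The "second version" pattern above can be sketched in text form. This is a hypothetical Python analogue, not todd's actual VIs: a small record stands in for a LabVIEW variant with attributes (the `level` and `action` fields are the attributes), and a generic save routine dispatches on those attributes exactly the way the generic 'Save Data' vi is described. All names here are illustrative.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TaggedResult:
    """Stand-in for a LabVIEW variant with attributes."""
    name: str
    value: Any
    level: str = "production"   # attribute: "debug" or "production"
    action: str = "log"         # attribute: "graph" or just "log"

def run_subtest(results: list[TaggedResult]) -> list[TaggedResult]:
    # A sub-test appends its tagged measurements onto the array
    # passed in, as the post describes. Values are made up.
    return results + [
        TaggedResult("gain_error_db", 0.12, action="graph"),
        TaggedResult("supply_current_ma", 41.7),
    ]

def save_data(results: list[TaggedResult]) -> dict:
    # Generic 'Save Data': iterate the array, do what the attributes say.
    logged, graphed = [], []
    for r in results:
        if r.action == "graph":
            graphed.append((r.name, r.value))
        logged.append((r.name, r.value))  # everything gets logged
    return {"logged": logged, "graphed": graphed}
```

The point of the pattern is that the save routine never needs to know which module produced the data; adding a new measurement only means tagging one more record.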
Michael Aivaliotis Posted June 1, 2003

Are you interested in working on an open-source test executive? There has been some interest expressed by myself and others on the OpenG site but it hasn't progressed much. I have built my own test executive as well, but my approach is different. I use the OpenG Variant Data Tools vi's from the OpenG site. Instead of linking my data to the front panel controls, I define it in a datatype that is built into a Variant Cluster and passed along a data queue. Each test module has its own VCluster that is added to this data queue. Also, a test module does not necessarily mean one VI. The test module can consist of a group of vi's, such as a configuration vi, a test vi, a data presentation vi and a data save vi. Each of these sections will run in a parallel engine and accept requests via a queue. This data queue will also contain the data which will be passed from the test engine. Now with LV7 this becomes easier because of embedded panels. :idea:
todd Posted June 2, 2003 (Author)

An open-source test executive would need to be extremely modular and flexible, no? I am willing to contribute.

The data queue is a great idea. I will try to apply that model to hardware configuration data (mainly relay settings) and to the test sequencer. The VCluster could contain elements that describe what kind of processing to perform. Other elements are probably parameter name, data and units.

My test module vi's each have the following, strung together with error wires:

prerun.vi: (recently genericized from project-specific) programs the DUT (unless programmed during last run), lightly configures stimulus and measurement instrumentation (unless configured during last run), and closes relays between DUT and instrumentation (unless ... last run)

dut param config.vi: sends a command to the DUT to configure its global parameters

<project name> <param name>.vi: twiddles stimuli and takes measurements to get parameter data (plans to add selective parameter testing using a global bitmask)

|<project name>|save data.vi: (not completely genericized) puts data into flat files or a database

postrun.vi: clears hardware settings, unless needed by the next run

So, the test executive gets ready then calls a test module by reference based on the sequence file(s). How do the parameter-measuring vi's running in parallel know when to execute? It could be quite handy to be able to drop a new measurement vi into a folder and have it run without changing the test module.
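The "drop a vi into a folder and it runs" idea amounts to plugin discovery by naming convention. A minimal Python sketch, assuming a folder scan plus a filename convention; in LabVIEW the equivalent would be building VI paths and calling each one with Open VI Reference / Call By Reference. The `.vi.txt` suffix and folder layout are stand-ins, not anything from the thread.

```python
import os
import tempfile

def discover_measurements(folder: str) -> list[str]:
    # Any file matching the naming convention becomes a sub-test to run.
    # Sorted so the execution order is deterministic.
    return sorted(
        f for f in os.listdir(folder)
        if f.endswith(".vi.txt")  # stand-in for *.vi
    )

def run_module(folder: str) -> list[str]:
    executed = []
    for vi in discover_measurements(folder):
        # A real sequencer would open a VI reference here and call it;
        # this sketch just records what would have run.
        executed.append(vi)
    return executed
```

With this approach, a new measurement only has to land in the folder with the right name; the test module's sequencing loop picks it up on the next run without being edited.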
TomWP Posted February 10, 2004

Hi guys, I am a LabVIEW user based in the UK. I would be interested in helping with the development of any open-source test-stand-type product. Please email me if I can help in any way. Is there any representation of LAVA here in the UK?

Regards, Tom.
Michael Aivaliotis Posted February 10, 2004

"Is there any representation of LAVA here in the UK? Regards, Tom."

No, there is not. You can start one if you like!
TomWP Posted February 10, 2004

Sounds like a good idea! If you hear of anyone else in the UK looking for LabVIEW help, I would be glad to assist. I'm just in the process of setting up a LabVIEW consultancy here; once that is up and running I will devote some effort to getting things started.