Is this an appropriate use of Data Value Reference and LVOOP?

I work in an environment where we really don't ever want to lose test data once we've taken it.  Many of the tests we do are either destructive (i.e. one time only, then the DUT is dead) or require very long test times that we don't have the production bandwidth to spare on a retest if we can avoid it.


We already have a 3-step system to ensure test data isn't dropped due to network or database problems.  Instead of just issuing a command to write to the main database when the test completes, we first write the data to a local file on the hard disk.  A compact database running on the local machine then imports the data from that file, and the local database syncs its successful imports with the main database every few minutes.


We're in the process of trying to standardize our company's code instead of having a bunch of us rewriting the same procedures multiple times and in different ways, with too much duplication of effort and too many chances to introduce new bugs.  So one of my tasks at the moment is to come up with a common method for everyone to use that does the best job possible of ensuring test data is never dropped.


The LVOOP white paper says that "less than 5%" of cases have a need for by-reference objects.  I think this is one of them.  We want to use classes for data saving so we can add functionality to child classes without having to redo all other tests that use the parent class.


The (extremely simplified) process is something like this:

1. Create one set of test data for each unit about to be processed, indexed by serial number, available globally.

2. Do any manner of tests, possibly with many tests running in parallel.  Abort on error or operator cancel.  As soon as test data is acquired, it is added to the data set corresponding to its serial number.  In some cases test data depends on previous test data.

3. Handle then clear any errors.  Then save all test data, including partial data sets caused by cancellation or errors.


As I see it, my options are:

  • Standard VI globals: Bad for many reasons well known everywhere.
  • FGV: The sheer variety of data types and operations performed on them makes this unattractive, given the complexity we'd have to fit into one VI.
  • More "standard" methods of LV by-reference (e.g. single-element queues or other blocking methods): These require extremely diligent coding to ensure the reference isn't left "checked out" and never "put back" in its holding area, which would make it unavailable for the final data save.
  • LVOOP class data converted to a DVR and stored in an FGV that has only 4 functions (clear/new/get/delete), using serial numbers as indexes and returning a DVR of the top-level test data class when called.  One wrapper VI for each child class downcasts it to a DVR of that child class.  Operations are performed on the data using dynamic dispatch VIs inside In Place Element structures.  The in-place structures both block other in-place structures using the same reference while they run, and require at edit time that a matching object be wired back into the DVR, so the "put back" step can't be skipped at runtime.
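Since LabVIEW code is graphical, here is a rough Python analogue of what that last option amounts to, just to make the semantics concrete.  All class and method names are made up for illustration; the per-record lock plays the role of the DVR, and the context manager plays the role of the In Place Element structure (it blocks concurrent users of the same record and guarantees the record is "put back" when the block exits).

```python
import threading
from contextlib import contextmanager

class TestData:
    """Stand-in for the top-level LVOOP test-data class."""
    def __init__(self, serial):
        self.serial = serial
        self.measurements = {}

class DataStore:
    """Analogue of the 4-function FGV: clear/new/get/delete keyed by serial number."""
    def __init__(self):
        self._registry_lock = threading.Lock()   # guards the registry itself
        self._records = {}                       # serial -> (record, record_lock)

    def new(self, serial, cls=TestData):
        with self._registry_lock:
            self._records[serial] = (cls(serial), threading.Lock())

    @contextmanager
    def get(self, serial):
        """Like an In Place Element structure on the DVR: blocks other users of
        the same record while the `with` block runs, and the record can never
        be left 'checked out' because it never actually leaves the store."""
        record, record_lock = self._records[serial]
        with record_lock:
            yield record

    def delete(self, serial):
        with self._registry_lock:
            del self._records[serial]

    def clear(self):
        with self._registry_lock:
            self._records.clear()

# Parallel tests each modify the shared record inside the locked region.
store = DataStore()
store.new("SN-001")
with store.get("SN-001") as rec:
    rec.measurements["leak_rate"] = 1.2e-9
```

The key property, in both languages, is that "check out" and "put back" are structural (the scope of the `with` block / the IPE border) rather than something each caller has to remember to do.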

Obviously I am leaning towards the last option, but before I start a big project I'd like to make sure I'm not choosing an architecture that will cause problems I can't see right now, or overlooking an easier solution with the same level of data integrity.


I'd like to hear input from anyone who has done something similar and had it work well, or who thinks I'm missing something.




To be honest, that statement in the LVOOP white paper is fairly controversial.


Your simplified process doesn't necessarily require by-reference behaviour, although it certainly makes things easier. If you want to pursue inheritance hierarchies using DVRs then I suggest looking at GOOP / OpenGDS. It has a proven track record of sorting out the nitty-gritty and letting you focus on the more generic problems associated with reference-based designs.


I think you may be missing a trick here (and probably overthinking the mechanics). Queues, DVRs and all that sort of thing are for moving data around. You don't want to be doing that at all unless it is really, really necessary - and it isn't. You have a local DB. :yes:


If it's in memory you can lose it easily, so by-value, by-ref, DVR or globals are kind of irrelevant to that requirement. If it's on disk you can still lose it, but it is much, much less likely - especially if you have an ACID-compliant database.  You know that. That's why you have two databases. ;)


You have a remote DB. That's good - all the managers can run their reports :P. You have a local DB - that's even better. That's your DB to abuse and misuse :D .


So treat your local DB like memory storage. Performance is the only issue, and it's not much of one unless you are streaming. It's not as if it costs $ per write. Design your tables and, as soon as you have a measurement, stick it in the local DB - don't store any measurement data in memory. Even configuration can be queried from the DB just before you need it. If the operator or anyone else wants to see pretty pictures and numbers on screen, query the DB and show them.  It has the effect of completely separating the acquisition/test system from the user interface.


If you take a look at the SQLite API for LabVIEW it has an example of data logging (picture on the right with the sine wave). It saves data to a DB while at the same time showing the operator a plot of the data. It does it pretty fast (about every 30ms), but you wouldn't need that performance. You'd just need more complex DB relationships and tables, but the idea is the same. Data is acquired straight to the DB and never sits in arrays or shift registers or queues or DVRs or anything else. The UI queries the DB to show plot data about every 100 ms to prove it's actually doing something ;)
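The LabVIEW SQLite API is graphical, but the acquire-straight-to-DB pattern can be sketched in text with Python's standard `sqlite3` module.  The table and column names here are invented for illustration; the point is that each measurement is durable on disk the instant it is taken, and the UI reads the DB rather than holding data in memory.

```python
import os
import sqlite3
import tempfile

# Open (or create) the station's local results database. WAL journal mode
# lets the UI read while the acquisition loop writes.
db_path = os.path.join(tempfile.mkdtemp(), "test_results.db")
db = sqlite3.connect(db_path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("""CREATE TABLE IF NOT EXISTS measurements (
                  serial   TEXT,
                  name     TEXT,
                  value    REAL,
                  taken_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def record(serial, name, value):
    """Write each measurement the instant it is acquired; nothing sits in arrays."""
    with db:  # implicit transaction, committed (durable on disk) on exit
        db.execute("INSERT INTO measurements (serial, name, value) VALUES (?, ?, ?)",
                   (serial, name, value))

def display_data(serial):
    """The UI queries the DB for plotting instead of keeping data in memory."""
    return db.execute("SELECT name, value FROM measurements WHERE serial = ?",
                      (serial,)).fetchall()

record("SN-001", "leak_rate", 1.2e-9)
print(display_data("SN-001"))  # → [('leak_rate', 1.2e-9)]
```

Because every `record()` call commits its own transaction, a crash or abort loses at most the measurement currently in flight, which is exactly the data-integrity property the original poster is after.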


Use any architecture you like for configuring devices, keeping the operator amused and running housekeeping; it doesn't matter for the data storage. But you've already decided how you're going to do that - #4, right?

Edited by ShaunR

I do think it's more than 5%, but from what you've described I don't think you're in that X%.


So...from a high level it sounds like what you want to do is spin up N workers, send them a UUT to work on, and then have the results recorded to disk. To be honest, I'm not sure why you need DVRs or any data sharing at all. Could you clarify?


To me you'd have a QMH of some kind dealing out the work to your N workers (also QMHs) and then a QMH for logging which collects results from any of the N workers. You could do that with manually written QMHs, an API like AMC, a package like DQMH, or Actor Framework. Separating worker functionality from logger functionality means (a) for those long-running tests you could write intermediate results to a file without worrying about interfering with the test, and (b) you can really, really, really easily switch it out for a database once you've decided you need to.
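The worker/logger split described above can be sketched in Python with `queue` and `threading` standing in for LabVIEW queues and parallel loops.  The step names and serial-number format are made up; the structure - N producer "QMHs" posting each result immediately to a single logging consumer - is the point.

```python
import queue
import threading

results = queue.Queue()  # the logger's inbox; any worker can post to it

def run_step(step):
    """Hypothetical test step; stands in for real instrument I/O."""
    return 42.0

def worker(uut_serial):
    """One 'QMH' per unit under test: run steps, post each result immediately,
    so intermediate data is out of the worker's hands as soon as it exists."""
    for step in ("power_on", "leak_test", "burn_in"):
        results.put((uut_serial, step, run_step(step)))
    results.put((uut_serial, "DONE", None))   # sentinel: this worker finished

def logger(n_workers):
    """Single consumer: serialises all persistence, so workers never block on
    I/O. Swap the append for a file write or DB insert without touching the
    workers."""
    done, log = 0, []
    while done < n_workers:
        serial, step, value = results.get()
        if step == "DONE":
            done += 1
        else:
            log.append((serial, step, value))
    return log

threads = [threading.Thread(target=worker, args=(f"SN-{i:03}",)) for i in range(3)]
for t in threads:
    t.start()
collected = logger(3)       # 3 workers x 3 steps = 9 results
for t in threads:
    t.join()
```

Note that only the logger touches storage; that is what makes swapping the storage backend (file today, database tomorrow) a change to one component.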


As a side thought, it sounds like you are re-writing significant parts of TestStand (parallel test sequencing, automatic database logging and report gen, etc.). The DQMH library mentioned above seems to be written to work pretty well with TestStand (nice presentation at NI Week, and I believe it's also posted on their community page). Just a thought.


If I'm mistaken, or you can't use TestStand for whatever reason, the Tools Network does have a free sequencer which I think is pretty cool (although I do RT most of the time and never have a good chance to use this guy): http://sine.ni.com/nips/cds/view/p/lang/en/nid/212277. It looks like it could be a good fit for your workers.


For reasons of which I am not 100% aware, the decision was made before I was brought on that direct write access to the local database would be disallowed; instead it only allows stored procedures to be run that target the local file.  I think it was done for a combination of reasons (encapsulation of the table structure, a security level flag we have to attach to each piece of data in the DB for reasons I won't go into, etc.).  I admit it would simplify things to have direct DB writing as the data storage.


I am at this very moment only attempting to design a universal data storage method and not worrying about the sequencing aspect.  We do have TestStand on some of our stations and are trying to roll it out to more of them, but I know that we'll never be able to switch 100% to TestStand for some of our processes, so when designing a universal solution it needs to be able to work both from a full LabVIEW application and from VIs called by TestStand as it runs.  


However I do see that instantiating a worker for each DUT (or one worker per class that handles an array of DUTs of that same class), addressable by serial number and sent a message whenever a piece of data is available, may be a better solution.  I was so fixated on converting the global clusters the old versions of the applications use (where they constantly unpack a cluster from a global VI, read/modify it, then put it back in the same global VI) to a by-reference method that I didn't consider that sending messages to a data aggregator of sorts might be a better idea.  I think I'll pursue that idea further and see what I come up with.


  I admit it would simplify things to have direct DB writing as the data storage.


Then use SQLite. It's a file-based, fully ACID-compliant SQL database - no TCP connections, no servers, no IT, no hassle. You can abuse that instead and update the other local DB as you see fit. You could even use the same schema and just export the SQL to the other local database.
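The "export the SQL" step can be illustrated with Python's `sqlite3` module, whose `iterdump()` renders a whole database as portable SQL text.  The table name and data here are invented, and the database is in-memory just to keep the sketch self-contained; in practice it would be the station's local file.

```python
import sqlite3

# The local SQLite database stands in for the "DB to abuse and misuse".
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE results (serial TEXT, value REAL)")
with local:
    local.execute("INSERT INTO results VALUES ('SN-001', 3.14)")

# iterdump() yields the schema and data as SQL statements; assuming a
# compatible schema, the same statements could be replayed against the
# other local database to sync it.
dump = "\n".join(local.iterdump())
print(dump)
```

Whether the target database will accept SQLite's SQL dialect verbatim depends on the engine, so a translation step may be needed in practice.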
