
Question about DAQmx


Daklu

Recommended Posts

So... having used LabVIEW heavily over the past 4 years, I am now, for the first time, writing an app that makes significant use of DAQmx. Yep, I'm a total noob when it comes to doing the things that have historically been LabVIEW's bread and butter.

Background:

Right now I have a state machine running in a parallel loop, continuously reading analog signals from an accelerometer and gyroscope (using a PCI-6143). This data collection loop posts the data on a queue as an array of waveforms. A data processing loop gets the data, updates the front panel, and streams the data to a TDMS file.

I have a task set up with all 6 channels and a scale for each channel that converts the analog signal to g's and deg/sec. Since the sensors have multiple sensitivity settings, the user will (I think) be able to create new scales and apply them to the channels via NI MAX when they change a setting. All good so far.

Each point of the waveform data gets sent to my data processing loop as a DBL, using 8 bytes of space. The PCI-6143 has 16-bit ADCs. I can cut the storage requirements by 75% by converting the DBL to an I16. If I knew the possible range of the waveform I could divide each point by the range, multiply by 2^15, and round it off. As near as I can tell, the waveform data comes to me post-scaled and without any scaling information.
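(For illustration, here is roughly what that quantization would look like in Python/numpy rather than G; the ±5 g full-scale value is made up, and the point is exactly the one above: the full-scale value has to be stored alongside the I16 data or the codes can't be converted back.)

```python
import numpy as np

FULL_SCALE = 5.0  # hypothetical full-scale range in engineering units (e.g. +/-5 g)

def to_i16(scaled_dbl):
    """Quantize post-scaled DBL samples to I16 (2 bytes instead of 8)."""
    codes = np.round(np.asarray(scaled_dbl) / FULL_SCALE * 2**15)
    return np.clip(codes, -2**15, 2**15 - 1).astype(np.int16)

def from_i16(codes):
    """Undo the quantization for display; precision below FULL_SCALE/2^15 is lost."""
    return codes.astype(np.float64) * FULL_SCALE / 2**15
```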

Question:

How do I go about showing the post-scaled data to the user while saving unscaled data and scale information to disk?

Link to comment

Question:

How do I go about showing the post-scaled data to the user while saving unscaled data and scale information to disk?

I haven't looked at it in detail yet (I only had a quick play when it came out, as I had to demo it), but in LabVIEW 2009 (or rather DAQmx 9.0) the new TDMS API (2.0) is integrated with DAQmx.

The increased speed is attributed to not having to go through the TDMS, LabVIEW or OS buffers; essentially the data goes straight from the hardware to the HDD.

See here, as it mentions logging raw data to reduce file size footprint.

Reading out the data should still come out scaled.
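(If it helps to see the idea in text form, here is a rough sketch using the nidaqmx Python API rather than G; the device name, channels, rate and file path are all placeholders, and the enum names are from memory, so treat it as a sketch of the DAQmx logging feature rather than a drop-in snippet.)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:5")          # placeholder device/channels
    task.timing.cfg_samp_clk_timing(25000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)

    # DAQmx logs the raw data (plus scaling info) straight to TDMS while
    # still returning scaled samples to the application for display.
    task.in_stream.configure_logging("C:/data/run001.tdms",
                                     logging_mode=LoggingMode.LOG_AND_READ,
                                     operation=LoggingOperation.CREATE_OR_REPLACE)

    task.start()
    scaled = task.read(number_of_samples_per_channel=1000)      # scaled DBLs for the UI
```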

Worth checking out methinks?

Cheers

-JG

Link to comment


Question:

How do I go about showing the post-scaled data to the user while saving unscaled data and scale information to disk?

If you save it as a single-precision float you will save 50%, you won't have to do integer scaling, and you will keep a much better approximation. It depends on how important disk space is. Apart from that, I'm not really sure what you are asking here. The answer "show it on a graph" seems too simplistic.

Edited by ShaunR
Link to comment

Worth checking out methinks?

Thanks Jon... <dreamy eyed adulation> you're my hero. :wub:

I had hoped to separate the data collecting code from the data saving code, but this appears to be the easiest way to accomplish the main goal.

Couple other questions:

How do people manage tasks/channels/scales between the project and the executable on the target computer? My idea was to provide a known good measurement and let the engineers develop and use new tasks, channels, and scales in NI MAX as needed. When testing it out on the target computer, MAX gave me a licensing notice saying that, since the target computer doesn't have a LabVIEW license, you can use MAX to modify those things but not create them.

Link to comment

I use DAQmx quite a lot in my day-to-day work. The only way you can get at the 16-bit data from the acquisition device is to use the Analog>Unscaled option on the DAQmx Read VI. When you read the waveform datatype, the unscaled data is converted to DBL values by the DAQmx driver using a polynomial evaluation. You can get the polynomial coefficients for converting to voltage from the driver by using a DAQmx Property Node: DAQmx Channel>>AI.DevScalingCoeff.

What you would want to do, then, is set up your read as unscaled, use the unscaled data for your TDMS file, and convert to volts (or whatever scaling) at that point. Stay away from the Raw read options: Unscaled still returns your data as a 2D array of [channels x samples], while Raw returns a 1D array that you have to parse manually.

One other note: don't assume the nominal voltage ranges on your device are acceptable for rescaling. Just because you have a 16-bit ADC with a ±10 V range, it doesn't mean you can use 20 / 2^16 as your dV. The actual output from the device probably goes from -10.214 V to +10.173 V, which will make your data very messy. Always use the device scaling coefficients from the driver.
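(To make the arithmetic concrete, here is a minimal sketch of that conversion in Python/numpy rather than G; the coefficient values in the comment are invented, and in the real thing they come from the driver via AI.DevScalingCoeff, lowest-order term first.)

```python
import numpy as np

def unscaled_to_volts(codes, dev_scaling_coeff):
    """Apply the per-channel device scaling polynomial to unscaled I16 data.

    dev_scaling_coeff comes from the DAQmx driver (AI.DevScalingCoeff) and is
    ordered constant term first: volts = c0 + c1*x + c2*x^2 + ...
    """
    x = np.asarray(codes, dtype=np.float64)
    volts = np.zeros_like(x)
    for power, c in enumerate(dev_scaling_coeff):
        volts += c * x**power
    return volts

# Example with made-up coefficients (the real ones come from the driver):
# coeffs = [-1.2e-3, 3.05e-4, 0.0, 0.0]
# volts = unscaled_to_volts(i16_block, coeffs)
```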

I use a polynomial composition to combine my unscaled-to-voltage conversion and my voltage-to-engineering-units conversion. This reduces my CPU load quite a bit. If you are using a custom scale in MAX, I don't know how that works, so you may not see a difference.
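(For a linear engineering-units scale the composition is just a rescaling of the same coefficients; a rough sketch, continuing the Python example above and assuming a linear scale of the form eu = slope*volts + intercept:)

```python
def compose_linear(dev_scaling_coeff, slope, intercept):
    """Fold a linear EU scale into the device scaling polynomial so each
    sample needs only one polynomial evaluation instead of two conversions."""
    composed = [slope * c for c in dev_scaling_coeff]
    composed[0] += intercept
    return composed

# e.g. an accelerometer at 100 mV/g: slope = 10 g per volt, no offset
# eu_coeffs = compose_linear(coeffs, slope=10.0, intercept=0.0)
# gees = unscaled_to_volts(i16_block, eu_coeffs)   # same evaluator, new coefficients
```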

Another thing I do is build my tasks programmatically from config files (you could use XML). I have found that for situations where people are changing acquisition settings frequently, MAX can be a pain. Also, you can't (as far as I know) dynamically retarget a MAX task to new hardware so that you can run multiple instances of it. Another good thing about not using MAX is that you can control which configuration settings your users do and don't have access to. You don't have to give them AC/DC coupling options if they are only reading DC levels, for example.
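(A rough sketch of what config-file-driven task creation can look like, again using the nidaqmx Python API for brevity; the JSON schema, channel names and device names are all invented for illustration.)

```python
import json
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Hypothetical config file, e.g. config.json:
# {
#   "sample_rate": 25000,
#   "channels": [
#     {"physical": "Dev1/ai0", "name": "accel_x", "min": -5.0, "max": 5.0},
#     {"physical": "Dev1/ai1", "name": "accel_y", "min": -5.0, "max": 5.0}
#   ]
# }

def build_task_from_config(path):
    """Create and configure a DAQmx task entirely from a config file."""
    with open(path) as f:
        cfg = json.load(f)

    task = nidaqmx.Task()
    for ch in cfg["channels"]:
        task.ai_channels.add_ai_voltage_chan(ch["physical"],
                                             name_to_assign_to_channel=ch["name"],
                                             min_val=ch["min"],
                                             max_val=ch["max"])
    task.timing.cfg_samp_clk_timing(cfg["sample_rate"],
                                    sample_mode=AcquisitionType.CONTINUOUS)
    return task
```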

One nice thing about MAX is, well, um, oh, it already has custom scaling options. Though they can be a little cumbersome. I usually avoid MAX as much as I can since its portability is practically nonexistent and the interface is straight out of the early 90s.

Link to comment

Thanks Jon... <dreamy eyed adulation> you're my hero. :wub:

I had hoped to separate the data collecting code from the data saving code, but this appears to be the easiest way to accomplish the main goal.

lol

I thought you might mention that ;)

But in this case it seems best, as you mentioned.

Couple other questions:

How do people manage tasks/channels/scales between the project and the executable on the target computer? My idea was to provide a known good measurement and let the engineers develop and use new tasks, channels, and scales in NI MAX as needed. When testing it out on the target computer, MAX gave me a licensing notice saying that, since the target computer doesn't have a LabVIEW license, you can use MAX to modify those things but not create them.

OK - are you ready for this...

You can create Tasks, Scales, etc. in your LabVIEW Project! [... the crowd goes wild...]

[screenshot: DAQmx Tasks and Scales created under the LabVIEW Project]

I love this; now I can associate hardware configuration with the Project, meaning it's easy for anyone (or me in the future) to quickly get access to the configuration. It's the first thing I do for development.

When it comes time to test/build etc... I use the Import/Export features integrated into the Project.

[screenshot: the Project's DAQmx Import/Export options]

The only downside with this is that if I want to include DAQmx configuration in an installer, it has to be as an .nce file.

The Project exports as an .ini only.

So it means importing the .ini into MAX and then exporting it back out as an .nce so I can point the installer to it.

Don't know if this has changed in LV2010, or if there is an easier way.

[License] I have seen this licensing notice but never been affected by it. What was the result of actually trying to create a Scale?

I mean, if the customer buys NI DAQ hardware, which ships with the DAQmx drivers, then they are going to need to create Scales etc., so this should not be an issue.

I normally get the prompts, but I just acknowledge them and it all (seems to) work.

Link to comment

Though they can be a little cumbersome. I usually avoid MAX as much as I can since its portability is practically nonexistent and the interface is straight out of the early 90s.

Agree with most of that. But especially the above.

We usually create the channel associations at run-time in a similar manner to this:

Link to comment

We usually create the channel associations at run-time in a similar manner to this:

I have tried both approaches; I particularly like integrating with MAX in certain situations.

Here is a brief list of my FORs:

  • MAX is usually installed (or it's not a hassle to just install the Full Driver, which the client gets on disc with the hardware anyway).
  • MAX already has an interface, so there is no need to create one (saves budget).
  • A GUI is normally easier for a customer to navigate than a config-file API (depending on the client, of course).
  • MAX includes the ability to handle Scales of different types (linear, non-linear, etc.), so again, I don't have to account for this in my code.
  • Communicating with MAX is really easy using the Task-based API (through property nodes, etc.).
  • (So far) clients seem to like using MAX.
  • Especially if they have used it before; then you can maintain a consistent interface for hardware configuration.
  • It's easy to back up your configuration and port it over to another PC, etc.

Some of my AGAINSTs:
  • It separates your (custom) application
  • Your application has a dependency on MAX

Link to comment
I have tried both approaches; I particularly like integrating with MAX in certain situations.


OK. Here are some of my FOR (nots) using MAX.

  • MAX is never installed; it just bloats the installation, and if it crashes it will take your whole measurement system down, and you will get the telephone call, not NI.
  • MAX already has an interface, which doesn't fit with either our or our customers' "corporate" style requirements for software (logos etc.).
  • And having a GUI is normally easier for a customer to navigate, and that's the last thing we want, since they are neither trained nor qualified to do these operations and we cannot poka-yoke it.
  • MAX includes the ability to handle Scales of different types (linear, non-linear, etc.), but this cannot be updated from integrated databases and other 3rd-party storage.
  • Communicating with MAX is really easy using the Task-based API (through property nodes, etc.) because MAX sits on top of DAQmx, so what we are really doing is configuring DAQmx.
  • (So far) clients seem to like using MAX - do they have an alternative?
  • It's easy to back up your configuration and port it over to another PC, as it is with any other file-based storage, except that text-based files you can also track in SVN.

And some more...
  • You have to support more 3rd-party software for which there is no source, and you have no opportunity to add defensive code for known issues.
  • It requires a MAX installation to make trivial changes, as opposed to software available on most office machines (such as Excel, Notepad, etc.).
  • It does not have the ability to easily switch measurement types, scaling, etc. to do multiple measurements with the same hardware.
  • MAX requires firewall access (I think), and this can be an issue with some anally retentive IT departments that decide to push their policies on your system.
  • As mentioned above, it cannot integrate 3rd-party storage such as SQL, Access or SQLite databases (mentioned again because it is a biggie), or indeed automated outputs from other definitions (like specs).
  • MAX assumes you have a mouse and keyboard. It's very difficult to use with touch screens operated by gorillas with hands like feet.

But I think our customers are probably a bit different. They don't want to "play"; they just want it to work! And work 7 days a week, 24 hours a day. We even go to great lengths to replace the Explorer shell and start-up logo so operators aren't aware that it's even Windows.

Our system is quite sophisticated now, though. It can configure hardware on different platforms using various databases, text files, specification documents, etc., and it can be invoked at any time to reconfigure for different tests if there are different batches/parts. It's probably the single most re-used piece of code across projects (apart from perhaps the Force Directory VI). I tend to view MAX in a similar vein to Express VIs. But that's not to say I never use it.

Edited by ShaunR
Link to comment

I had hoped to separate the data collecting code from the data saving code, but this appears to be the easiest way to accomplish the main goal.

It's interesting that you mention this. I was just working on a DAQmx application today (I don't actually do this very often, since we've been using mostly cRIOs), and originally I tried to put the acquisition, resampling, and logging in the main loop. After several minutes the application started lagging way behind (I'm not quite sure why). I then started dividing the pieces into loops, and before long into separate VIs and therefore threads (using networked shared variables, of course, to handle the signaling), and the timing of the application now works perfectly (the CPU usage also dropped dramatically). I was quite surprised that doing the same things in separate threads would make that much of a performance difference, but it surely did for this application! (It's a bit easier to debug, too.)
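(The same separation, sketched as a queue-based producer/consumer in Python rather than as parallel loops with shared variables; the read and write callables are stand-ins for the acquisition and logging pieces.)

```python
import queue
import threading

data_q = queue.Queue(maxsize=100)    # hand-off between acquisition and logging
stop = threading.Event()

def acquire(read_daq_block):
    """Producer: read blocks from the DAQ and post them to the queue."""
    while not stop.is_set():
        data_q.put(read_daq_block())

def log(write_to_tdms):
    """Consumer: pull blocks off the queue and stream them to disk."""
    while not stop.is_set() or not data_q.empty():
        try:
            block = data_q.get(timeout=0.5)
        except queue.Empty:
            continue
        write_to_tdms(block)

# threading.Thread(target=acquire, args=(my_read_fn,), daemon=True).start()
# threading.Thread(target=log, args=(my_write_fn,), daemon=True).start()
```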

Link to comment

Great list.

...But I think our customers are probably a bit different. They don't want to "play"; they just want it to work! And work 7 days a week, 24 hours a day...

...Our system is quite sophisticated now though...

Well, I am pretty sure all customers want their software to work. But I agree, we are most likely talking about a different project scope, as I definitely don't have access to such a system/IP as you described. So if it makes sense for the project, I do go with MAX.

Link to comment

I appreciate all the responses. Kudos for everyone! :)

Apart from that, I'm not really sure what you are asking here. The answer "show it on a graph" seems too simplistic.

Well yeah, but show what on the graph? Users want to see scaled data; I want to store raw data. In a nutshell the question was about manually transforming between raw, unscaled, and scaled data.

I use DAQmx quite a lot in my day-to-day work...

Thank you for the detailed response. That's exactly what I was hoping for. I'll probably go with Jon's solution for now given time constraints, but I'd like to understand how to do it manually in case I need more control in the future.

What you would want to do then...

That got me pointed in the right direction. If I'm understanding DAQmx correctly, I'll use the channel and scale coefficients to do the transformations like this:

[screenshot: block diagram using the channel and scale coefficients to transform between unscaled and scaled data]

You can create Tasks , Scales etc... in your LabVIEW Project!

Whoa.... no way! <jaw drops in amazement>

(I did discover that... after I had already created everything in MAX. I didn't see any way to transfer the MAX stuff over to the project, so I had to redo it all manually.)

When it comes time to test/build etc...

Do you export them so the user can edit them in MAX? It looks like the DAQmx items are available to the app without exporting the config, but then they are not available to MAX. Yes?

Does not have the ability to easily switch measurement types, scaling etc to do multiple measurements with the same hardware.

Does this mean I can't create separate tasks in MAX that all use the same physical channels, even though I'll only be using one task at a time?

Hmm... this could be a problem over the long term. We're going to be using a single data collection computer to measure signals from different sensors, depending on the test being done. I had planned on having the test engineers use MAX to set up new tasks, channels, scales, etc. and select the correct task in the test app. But if that's not possible I'll have to create my own interface for that. (Ugh...)

Link to comment

Does this mean I can't create separate tasks in MAX that all use the same physical channels, even though I'll only be using one task at a time?

Of course you can, otherwise the Task-based API would be pretty useless.

As long as you don't run two tasks at the same time that share the same physical resources, you are fine.

Link to comment

I appreciate all the responses. Kudos for everyone! :)

Wrong site. It's rep points here.

Well yeah, but show what on the graph? Users want to see scaled data; I want to store raw data. In a nutshell the question was about manually transforming between raw, unscaled, and scaled data.

That's what I mean. These are mutually exclusive?

Does this mean I can't create separate tasks in MAX that all use the same physical channels, even though I'll only be using one task at a time?

Hmm... this could be a problem over the long term. We're going to be using a single data collection computer to measure signals from different sensors, depending on the test being done. I had planned on having the test engineers use MAX to set up new tasks, channels, scales, etc. and select the correct task in the test app. But if that's not possible I'll have to create my own interface for that. (Ugh...)

Yes, of course you can. But it depends if it's the horse driving the cart or the other way round.

As soon as you start putting code in that needs to read MAX's config so you know how to interpret the results, you might as well just make it a text file that they can edit in Notepad or a spreadsheet program; when you load it you already have all the information you need, without having to read it all from MAX. Otherwise you have to first find out what tasks there are and, depending on what has been defined (digital, AI, AO?), put switches in your code to handle the properties of the channels. However, if you create the channels on the fly, you don't need to do all that. It also has the beneficial side effect that you can do things like switch from a "read file" VI to a "read database" VI (oops, I meant Read Config Class) with little effort.

However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.

Edited by ShaunR
Link to comment

Wrong site. It's rep points here.

You're lucky I can't take them back. :P

That's what I mean. These are mutually exclusive?

That diagram doesn't do what I need. I need to keep every bit of data from the 16-bit ADC; your diagram is going to lose precision. I could avoid that by having them enter all the scaling information in the ini file... except I'm collecting data from m sensors simultaneously and there are n possible sensors to choose from to connect. On top of that any sensor can potentially be hooked up to any terminal. And they'll need to be able to add new sensors whenever they get one, without me changing the code. Oh yeah, it has to be easy to use.

Can all this be done with an ini? Sure, but the bookkeeping is likely to get a bit messy, and editing an ini file directly to control the terminal-channel-scale-sensor mapping is somewhat more error prone than setting them in MAX. Implementing a UI that allows them to do that is going to take dev time I don't have right now, and since MAX already does it I'm not too keen on reinventing the wheel.

I don't think your technique is bad--heck I'm ALL for making modular and portable code whenever I can. This is one bit of functionality where I need to give up the "right" way in favor of the "right now" way.

However, if they are just "playing" then you are better off telling them to use the "Panels" in MAX.

Heh... they're test engineers in a product development group. They play with everything you can imagine related to the product. (Read: they want infinitely flexible test apps.) But they don't play with the data. That better be rock solid.

Link to comment

You're lucky I can't take them back. :P

Touché.

That diagram doesn't do what I need. I need to keep every bit of data from the 16-bit ADC; your diagram is going to lose precision. I could avoid that by having them enter all the scaling information in the ini file... except I'm collecting data from m sensors simultaneously and there are n possible sensors to choose from to connect. On top of that any sensor can potentially be hooked up to any terminal. And they'll need to be able to add new sensors whenever they get one, without me changing the code. Oh yeah, it has to be easy to use.


It was just to show branching; what the numbers are is irrelevant. That's why I don't understand your difficulty with reading one value and showing another. I could just as easily have read an int and displayed a DBL.

But anyway.......

Just saving the ADC codes won't give you more precision. In fact, the last bit (or more) is probably noise. It's the post-processing that gives a more accurate reading. You usually gain 1/2 a bit of precision, and with post-processing like interpolation and averaging, significant improvements can be made (this is quite a good primer). What's the obsession with saving the ADC codes?

Now. From your n and m descriptions, I'm assuming you're thinking of n x m configurations (is that right?). But you don't care what the sensor is, only that it has an analogue output which you can measure. You can't log data from n x m devices simultaneously because you only have m channels, so you only have to configure m channels (or the engineers do, at least). If you allow them to make a new task every time they change something, the list of tasks in MAX very quickly becomes unmanageable. We use 192 digital IOs, for example. Can you imagine going through MAX and creating a Task for each one?

What you are describing is a similar problem to the one we have with part numbers. It's a management issue rather than a programming one. We (for example) may have 50 different part numbers, all with different test criteria (different voltage/current measurements, excitation voltages, pass-fail criteria, etc.). But they all use the same hardware, of course, otherwise we couldn't measure it.

So the issue becomes: how can we manage lots of different settings for the same hardware? Well, one way is a directory structure where each directory is named with the part number and contains any files required by the software (camera settings, OCR training files, DAQ settings, ini files, pass/fail criteria... maybe 1 file, maybe many). The software only needs to read the directory names and hey presto! Drop-down list of supported devices. New device? New directory. You can either copy the files from another directory and modify them, or create a fancy UI that basically does the same thing. Need back-ups? Zip the lot. Need change tracking? SVN!
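(A tiny Python sketch of that idea; the root path is a made-up placeholder, and each sub-directory name doubles as a part number for the drop-down list.)

```python
from pathlib import Path

CONFIG_ROOT = Path("C:/TestConfigs")   # hypothetical root; one folder per part number

def supported_parts():
    """Each sub-directory name is a part number; listing them builds the drop-down."""
    return sorted(p.name for p in CONFIG_ROOT.iterdir() if p.is_dir())

def config_files(part_number):
    """All the files the software needs for that part (DAQ settings, limits, ...)."""
    return list((CONFIG_ROOT / part_number).glob("*"))
```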

Another is a database, which takes a bit more effort to interface to (some think it's worth it), but the back-end for actually applying the settings is identical. And once you've implemented it, you can do either just by using a case statement.

What you will find with the NI products is that there really aren't that many settings to change. Maybe switch between current loop and voltage, and maybe the max/min, and you will be able to measure probably 99% of analogue devices. Do they really need to change from a measurement of 0-1 V when 0-5 V will give near enough the same figures (do they need µV accuracy, or will mV do? Don't ask them, you know what the answer will be). Do we really need to set up a 4-20 mA current loop when we can use 0-20 mA (it's only an offset start point, after all)?

Heh... they're test engineers in a product development group. They play with everything you can imagine related to the product. (Read: they want infinitely flexible test apps.) But they don't play with the data. That better be rock solid.

Indeed. And I would much rather spend my programming time making sure they can play with as little as possible, because when they bugger it up, your software will be at fault. You'll then spend the next week defending it before they finally admit that maybe they did select the wrong task.

Edited by ShaunR
Link to comment

Quick responses... gotta run soon.

It was just to show branching; what the numbers are is irrelevant. That's why I don't understand your difficulty with reading one value and showing another.

We're crossing signals somewhere in here. If I do a Read that returns a waveform, the data is post-scaled, which I don't want to save due to size and I don't want to truncate due to lost precision. If I do a Read that returns the unscaled I16 data, then I have to manually apply the arbitrary scaling factors before displaying it.

Indeed. And I would much rather spend my programming time making sure they can play with as little as possible

Me too. Except that it's their job to play with all those things to try and break the products. I can't restrict what I allow them to do, or they come back every other day and say, "now I want to be able to do..." My tool becomes the blocking issue, and that's bad mojo for me. (My tools are used by test engineers in a product development group. They need to be able to handle whatever screwball changes the design engineers conjured up last night at the bar. These are not manufacturing test tools with well defined sequential test processes.)

Link to comment

Quick responses... gotta run soon.

We're crossing signals somewhere in here. If I do a Read that returns a waveform, the data is post-scaled, which I don't want to save due to size and I don't want to truncate due to lost precision. If I do a Read that returns the unscaled I16 data, then I have to manually apply the arbitrary scaling factors before displaying it.

Yes. Welcome to the real world.

Me too. Except that it's their job to play with all those things to try and break the products. I can't restrict what I allow them to do, or they come back every other day and say, "now I want to be able to do..." My tool becomes the blocking issue, and that's bad mojo for me. (My tools are used by test engineers in a product development group. They need to be able to handle whatever screwball changes the design engineers conjured up last night at the bar. These are not manufacturing test tools with well defined sequential test processes.)

But OOP makes that easy, right? Sorry, couldn't resist.

They probably need 5-minute tools (as I call them): discardable software that doesn't come under formal control, is quick to implement (5-30 mins) and is usually provided by one of the members of the department who "likes" programming. Do you have anyone like that?

As an example: one of our machines was playing up. We thought it was temperature related, so we decided to monitor the temperature. So I took one of the graph examples, replaced the sig-gen VI with a DAQ one and added a save-to-file. It took 5 mins max. I then compiled it, copied it to the machine and pressed the run arrow (no fancy user interfaces, hard-coded DAQ channel), and we all went home. The next day we came in and analysed the file, found the fault, ran the logger again to check, and once everything was fine, removed it. It wasn't part of the "real software". It wasn't meant to be re-used. It was just a quick knock-up tool to log data for that specific scenario.

Link to comment

Yes. Welcome to the real world.

Thank you. It's good to be wanted.

I knew some manual processing was going to be needed. The question was asking how other people dealt with the problem I was having. Jon suggested using the Configure Logging VI on the task. COsiecki suggested retrieving unscaled data and getting the scaling coefficients from the task. You suggested implementing everything as a configurable ini. All viable solutions. All with different strengths. All appreciated.

But OOP makes that easy, right? Sorry, couldn't resist.

Easier, no doubt about it. Regardless of whether the app is implemented using OOP or procedural code, if they have to come to me for any little change it causes delays in the entire product development process.

They probably need 5-minute tools (as I call them).

I call them "throwaways." Unfortunately that isn't acceptable in this situation. Here's a bit more of an explanation of my environment:

I write test tools for a product development group. Early in the product's life cycle the dev team is evaluating all sorts of alternatives to decide which route to take. The alternatives can be entirely different components, different types of components, different combinations, different algorithms, etc. Anything about the details of the design can change. This process can take months. The tool I give them for qualification testing needs to be as flexible as possible while providing solid data that is easily comparable. If I or the test engineer is constantly going in to tweak the tool's source code, config file, etc., then it allows project managers and design engineers to more easily question the validity of results they don't particularly like.

In this particular project, part of component qualification is comparing the ADC signals to signals received from other sources (I2C, wireless capture, vision system etc.) to cross check the data from the various components. It also requires coordinating the motion of complex external actuators with the data collection. Using the external actuators and parsing the other data streams is beyond the programming capabilities of the test engineers. (And they don't have time for it anyway.)

As the product's design becomes solidified, that kind of flexibility is deprecated. Design focus shifts from component qualification to product performance. Test efficiency and consistency become much more important as multiple samples are run through tests to obtain statistically significant data. Unfortunately the shift between component qualification and product testing is very blurry. There is no "we're done with component qualification, now let's build tools for product testing." The code written for qualification testing becomes the code used for product testing.

I do take shortcuts where I can during component qualification. UIs are very simple wrappers of the module's* underlying functional code and are often set up as single shots. (Press the run arrow to execute instead of a start and stop button.) There isn't a lot of integration between the various functional modules. Error handling probably isn't robust enough. But the functional code of each module shouldn't change once I've written it.

(*Some of the modules we have finished or have in the pipe are an I2C collection actor object, an analog signal collection state machine, a sequencer for the external actuators, a wireless collection module, a few analysis modules, etc. Currently each of these function independently. The intent is to combine them all into a single app later on in the project.)

Link to comment


I've been in the same boat many times. The real problem (as I saw it) was that I was at the end of the line, so once something was achieved it was then "how do we test it? Let's write some software!" It was reactive rather than proactive development. After all, "it's ONLY software and that takes 2 minutes... right?" Unfortunately that kind of thinking takes a long time to change, and it is really the domain of a "Test Engineer" rather than a "Software Programmer", since a test engineer has detailed knowledge of the products and how to test them from a very early stage and is privy to spec changes very early on.

Sounds like "departmental expansion" is the route. You are the bottleneck, so you need resources to overcome it. Are you the only programmer?

Link to comment

After all, "it's ONLY software and that takes 2 minutes... right?"

There are some engineers who think like that, but most understand that there are layers of complexity to what they're asking for. When someone tells me what they want and then slip in a, "that should only take a couple weeks," I'm very blunt in my response. "*You* define your feature list and priorities. *I* define the delivery date."

Sounds like "departmental expansion" is the route. You are the bottleneck, so you need resources to overcome it. Are you the only programmer?

There are two LV programmers here now and another one starting on Monday. Our group has been the bottleneck at times in the past, primarily because of poor planning and unrealistic expectations. We're learning how to get better.

These days I put off almost all UI development until the end of the project and focus their attention on the core functionality they need. We've spent too much time in the past rewriting user interfaces to accommodate the latest change requests. I also make the test engineers directly responsible for what features get implemented. How do I do that? I develop in two-week iterations. At the beginning of an iteration I sit down with the test engineers and their feature list and ask them what features they want me to implement over the next two weeks. If I think I can do it, I agree. If not, they have to reduce their request. (Usually I have one or two features that are the primary goals and several items that are secondary goals. Sometimes I get to the secondary goals, sometimes not.)

At the end of the iteration I deliver a usable, stand-alone functional component with a very simple UI that does exactly what was requested, no more. They can use it immediately as part of the chain of individual tools I'm giving them every two weeks. Hopefully at the end we have time to wrap it all up in a larger application and make it easier to use, but if we don't they still have the ability to collect the data they need. If we hit their deadline and something isn't implemented, it's because of their decisions, not mine.

So far this is working out very well, but it's still very new (I'm the only one doing it) so we'll see how it goes over time. (I talked about this a little more in this recent thread on the dark side.)

Link to comment


Well, that doesn't sound too bad. Three people should be able to support a number of production environments. You have a predictable time-scale for implementation that can be planned for, and you use an iterative life cycle. Which one of your team came from production?

Link to comment

Which one of your team came from production?

Need you ask? ;) I've held positions across nearly all the stages of a product's life cycle: Conception, development, transition to manufacturing, manufacturing sustaining, etc. The only stage I haven't been involved in is end-of-life.

Link to comment

Need you ask? ;) I've held positions across nearly all the stages of a product's life cycle: Conception, development, transition to manufacturing, manufacturing sustaining, etc. The only stage I haven't been involved in is end-of-life.

Well, you never know. It's a bit like mathematicians: there are pure mathematicians and applied mathematicians. Pure mathematicians are more interested in the elegance of arriving at a solution, whereas applied mathematicians are more interested in what the solution can provide.

Well, you've got the control and the expertise. But maybe not the tool-kit that comes from programming in those positions.

But back to MAX. I (and the engineers that use the file system) just find it much quicker and easier to maintain and modify. Like I said, we have lots of IO (analogue and digital) and find MAX tedious and time-consuming. A single Excel spreadsheet for the whole system is much easier. And when we move to another project we don't have to change any configuration code, just the spreadsheet, which can be done by anyone more or less straight from the design spec (if there is one).

But you know your processes. A man of your calibre, I'm sure, will look at the possible alternatives and choose one that not only fixes the problem now, but is scalable and will (with a small hammer) fit tomorrow.

Link to comment
