Sharon_ Posted January 10, 2011

Hi, for other software the measure is the number of non-defective modules / total number of modules. How do we measure the quality of a LabVIEW application? Is there a formula for this? I am a little bit confused. Thanks, Sharon
Mark Yedinak Posted January 11, 2011

You can use the same measure: the number of defect-free VIs / total number of VIs. However, there are many metrics you could use, and this is true even for traditional programming languages. You might want to look at the VI Analyzer Toolkit, which can give you various metrics for your code that can be used to measure its quality.
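As a rough illustration, here is what that ratio looks like in Python; the VI names and defect counts below are made-up placeholders, not output from the VI Analyzer Toolkit:

```python
# Hypothetical per-VI defect counts; in practice these would come from
# code review or VI Analyzer test results rather than being hard-coded.
defects_per_vi = {
    "Main.vi": 0,
    "Read Sensor.vi": 2,
    "Log Data.vi": 0,
    "Format Report.vi": 1,
}

defect_free = sum(1 for count in defects_per_vi.values() if count == 0)
total_vis = len(defects_per_vi)

quality = defect_free / total_vis
print(f"Quality = {defect_free}/{total_vis} = {quality:.0%}")  # Quality = 2/4 = 50%
```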
Daklu Posted January 12, 2011

How do we measure the quality of a LabVIEW application?

LabVIEW is a graphical language, so we have a unique opportunity to measure quality:

1. Print out all your block diagrams in full color.
2. Rent an art gallery.
3. Have a showing.

Quality = Gross Receipts - Expenses

(I'm curious though, are any business decisions made on the basis of the quality metric, or is it just something for managers to fuss over?)
crelf Posted January 12, 2011

I'm curious though, are any business decisions made on the basis of the quality metric, or is it just something for managers to fuss over?

Yes, and yes. The quality metric can be useful in tracking when a software product has achieved a steady state. It's common to think of a project as finished when two things occur: when all the requirements are met, and when the quality is acceptable. I'll leave it to another time to discuss "acceptable". So, by tracking the requirements coverage (NIRG is excellent for that - it's seriously one of NI's least appreciated offerings - see an example of one of our project templates below) and the quality, you don't have to rely on "I'm-almost-done" statements from engineers (hey, we all know how flaky they can be).
ShaunR Posted January 12, 2011

It's common to think of a project as finished when two things occur: when all the requirements are met, and when the quality is acceptable.

Or when the warranty period has expired.
Daklu Posted January 13, 2011

I'll leave it to another time to discuss "acceptable"

Is now a good time to discuss it? If you're using a calculated quality metric to determine when the quality is acceptable, how did you determine the cutoff between "good enough" and "not good enough"?

I'm also curious what you consider a "defect." Obviously a bug is a defect. I assume anything that doesn't adhere to your coding standards, such as missing documentation on the block diagram, is also a defect. Doesn't categorizing them both as "defects" essentially give them the same weight and importance in the quality metric? I assume we agree a bug causing customer data loss is far more important than a block diagram without documentation. If so, then what value is the metric providing?

I don't mean to be hammering on you about this--I'm genuinely curious. I did a lot of Six Sigma work from 1997 through ~2005 in manufacturing-related businesses, but I've never quite grasped how to apply the principles to software development.
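(To make the weighting question concrete, here is one possible severity-weighted version of the metric; the categories and weights are invented purely for illustration and aren't anything crelf described:)

```python
# Hypothetical severity weights -- a data-loss bug counts far more than
# a missing block-diagram comment. The weights themselves are arbitrary.
severity_weight = {"data_loss": 10.0, "functional_bug": 3.0, "style": 0.5}

# Made-up defect list for a 20-VI project.
defects = ["data_loss", "functional_bug", "style", "style", "style"]

total_vis = 20
weighted_defects = sum(severity_weight[d] for d in defects)

# One possible quality figure: weighted defects per VI (lower is better).
print(f"Weighted defects per VI: {weighted_defects / total_vis:.2f}")
```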
ShaunR Posted January 13, 2011

I don't mean to be hammering on you about this--I'm genuinely curious. I did a lot of Six Sigma work from 1997 through ~2005 in manufacturing-related businesses, but I've never quite grasped how to apply the principles to software development.

The Six Sigma part is fairly straightforward and pretty well documented, with templates you can apply. The hard part is the defect metrics (as applied to the software code rather than the project): defining what they are and finding methods of measuring them (GUIs especially). This is especially difficult for LabVIEW, as most knowledgeable sources talk about base code size, where every 100 lines is about 1 function point, and we already know it's hard to equate text lines to G. I prefer the number of VIs, with weightings for collections every 200K (directory size), since it's easy to measure and validate. But I'm sure there is a more formal way of doing it. The Wikilibrary has a good article with some useful views on the subject. It's much easier when you can get a ruler out and measure something.

Edited January 13, 2011 by ShaunR
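A rough sketch of that measurement in Python (the project path and the 200K weighting rule are only illustrating the informal scheme above, not a formal method):

```python
import os

# Hypothetical project folder -- substitute your own source tree.
PROJECT_DIR = "C:/Projects/MyApp"

vi_count = 0
total_bytes = 0
for root, _dirs, files in os.walk(PROJECT_DIR):
    for name in files:
        if name.lower().endswith((".vi", ".ctl")):
            vi_count += 1
            total_bytes += os.path.getsize(os.path.join(root, name))

# Crude size estimate: number of VIs, weighted up by one point for
# every 200 KB of source on disk, per the informal scheme above.
size_points = vi_count + total_bytes // (200 * 1024)
print(f"{vi_count} VIs, {total_bytes / 1024:.0f} KB -> {size_points} size points")
```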