
Best Practices in LabVIEW


Recommended Posts

I have an opportunity to take an existing large project and re-architect it from the ground up. All options are open to me, so I want to apply the latest and greatest tools and techniques to build an ecosystem for my code and my company that will make this project, and all others after it, easier to maintain. I would like your thoughts on what works and what is not worth the effort. Some ideas I am considering are:

Change the code to use LVOOP design patterns.

Organize all my reusable code into distributable packages (VIPM?) that are available in the palettes.

Add unit tests to all (or most) of my code.

Integrate the Diff and Merge tools into my SCC system (Perforce).

Use the GOOP Dev Suite to generate UML from my code (and vice versa) for doc purposes.

Use Packed Project Libraries to distribute plugin code instead of LLBs.

And on a more technical note specific to my project: decouple the UI from the code. What I mean is, instead of the UI being the front panel of a VI, implement it as a web page (using the UI Builder?) and have it interface to the code via web services. This might be too radical to do right now...

So, please let me know what tools and techniques you use that work well in large applications with multiple developers. I want to be as state of the art as possible, since I doubt I will get another chance to do this.

-John

Link to comment

Just on the two (four) topics I have experience with.

LVOOP & UML:

I would take any chance to use LVOOP. I didn't find it all that complicated once I thought of it with the simple equation: type-def cluster (+ bonus power-ups) = object.

Once you are using LVOOP, you will want to look into the common OOP design patterns, although they don't always translate directly into the by-value paradigm.
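LabVIEW itself is graphical, so there's no snippet to paste here, but purely as an illustration (a sketch in Python, with made-up names), the "type-def cluster + power-ups = object" idea looks roughly like this:

    from dataclasses import dataclass

    # The "type-def cluster": just a bundle of data, no behaviour attached.
    @dataclass
    class MotorData:
        address: int = 0
        max_rpm: float = 1000.0

    # The "object": the same private data plus the power-ups -
    # encapsulation, methods that travel with the data, and the
    # option of inheritance/dynamic dispatch later on.
    class Motor:
        def __init__(self, data: MotorData):
            self._data = data            # data stays private to the class
            self._current_rpm = 0.0

        def set_speed(self, rpm: float) -> None:
            # behaviour lives next to the data it operates on
            self._current_rpm = min(rpm, self._data.max_rpm)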

I think for any larger OOP app, UML is a must. Just reading about OOP design patterns will confront you with it, and if you are already a graphical programming freak using LV, why on earth skip UML?

On GOOP I'm not really sure. It's the tool that integrates with LV, but if I understood it right it favors by-ref classes, and I find by-value more native to LabVIEW. As for free alternatives, I found Eclipse/Papyrus to be a very powerful UML modelling tool (coming from the Java world).

Reuse, SCC and VIPM (+ Testing):

I had great fun getting a good process in place to distribute my reuse code using VIPM. It's paying off in the 2nd project.

I think it's important to design the reuse-code deployment with your SCC in mind as well. As I don't use Perforce, I can't really say much about the specifics.

One of the places I really would like to have automated testing is my reuse code. As it's used frequently, this should very much be worth the effort.

To that complex I'd also add a new tool: bug trackers. You can use them as feature trackers as well. Integrating them with SCC seems important. Sadly, I gave up my own use of Mantis (lack of time to maintain the IT infrastructure).

Felix

Link to comment

To that complex I'd also add a new tool: bug trackers. You can use them as feature trackers as well. Integrating them with SCC seems important.

I like this idea. Our company uses Bugzilla. I will have to look into how that can integrate with Perforce and LabVIEW. I am sure the text-based coders have already sorted this out, so I will have to learn from them.

Link to comment

Here's the first thing that crossed my mind as I began reading your post - Things You Should Never Do

Good point. But I don't plan to start from scratch - just refactor, reorganize and apply best practices. The original code works great for us, but only one person really understands it and can maintain it. We started with the best intentions, but deadlines forced the usual shortcuts, and fixes and features were added over time, though not in the most elegant way, since they were not architected in from the start. The goal here is to look at what we ended up with, convert it into a set of requirements and then alter the existing code (rewriting parts as needed) using best practices and a team of developers. The end goal is a code base everyone understands and can help maintain.

Oh, and since the system is working now, we have no deadline, so we can take the appropriate amount of time to get it done right.

Link to comment

Here's the first thing that crossed my mind as I began reading your post - Things You Should Never Do

Thought of that too!

Good point. But I don't plan to start from scratch - just refactor, reorganize and apply best practices.

Oh sure, that's what they all say...

Anyway, as long as you understand the potential for quagmire, you are slightly better prepared to avoid it. Good luck!

Jason

Link to comment

Lots of great ideas in this thread!

I would recommend using project libraries, but be very careful if you want to use packed project libraries. I tried them when they came out last fall and regret it. They don't deliver on performance, and they are a maintenance disaster because you can't replace them with a project library (the conversion is currently one direction only), which you will likely need to do at some point. For us this resulted in a very costly nightmare. Maybe packed project libraries will be good in a future version, but I think in LabVIEW 2010 they are best avoided. (On the other hand, if you want to help make them better by providing good feedback, that would be great.)

Of course, I highly recommend using objects (and I agree with using by-value objects generally) and design patterns. Also, I think it is important to learn how to create and use simple interfaces, since this can make sections of the code separable, helps to demarcate the limits of their design, and (because you can greatly reduce links between classes) makes building applications more robust and faster.

Good luck with the project!

Paul

Link to comment

Integrate the Diff and Merge tools into my SCC system (Perforce).

I really like these tools, especially Diff. Merge is a bit hard to use because it involves a lot of windows (at least 7 - block diagram and front panel for each of the VIs being merged and for the result of the merge, plus the list of differences), but it is nice to have. Every once in a while a Perforce command takes an excessively long time to complete - over 30 s - but in general it works well. When using Diff, I recommend launching it from within LabVIEW when possible, because LabVIEW sorts out the search paths for you and you don't get lots of warnings about subVIs not being found. That path resolution does not happen properly when launching Diff externally (from within P4V, for example).

Link to comment

Unit Testing - Contrary to common perception, unit testing is not free. In fact, it is quite expensive. Not only does it take time for initial development, but you have to go in and fix the unit tests when a design change breaks them. When a test run results in a bunch of failures, chances are at least some of those failures are due to errors in your test cases. Every minute you spend fixing your test cases so they pass is a minute you're not spending improving your code. Don't get me wrong; I think unit testing can be extremely helpful, and I'm still trying to figure out how best to use it as part of my dev process. But I think it's a mistake to try to create a comprehensive unit test suite.

I agree with you that unit testing is not free, though I am surprised that the idea that it is free would be a common perception. I would also agree that it's not valuable to create a comprehensive unit test suite (testing all VIs for all possible inputs). What's most valuable is to identify the core things you care about and test those things for the cases you care about. Unit tests can be a really useful investment if you plan on refactoring your existing code and want to ensure that you haven't broken any critical functionality (granted, the tests themselves will most likely need to be refactored during the refactoring process). But they do add time to your project, so make sure you're testing the stuff you really care about.
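To make "test the cases you care about" concrete - this is only a sketch, written in Python rather than as a VI Tester or UTF test case, and every name in it is invented - a targeted unit test pins down the one behaviour the rest of the system relies on instead of sweeping every input:

    import unittest

    def scale_reading(raw, gain, offset):
        """A small, heavily reused routine - the kind worth pinning down."""
        return raw * gain + offset

    class TestScaleReading(unittest.TestCase):
        def test_typical_calibration(self):
            # the case the rest of the application actually depends on
            self.assertAlmostEqual(scale_reading(2.0, gain=1.5, offset=0.5), 3.5)

        def test_zero_gain(self):
            # one edge case we decided we care about; everything else is skipped
            self.assertAlmostEqual(scale_reading(123.4, gain=0.0, offset=0.5), 0.5)

    if __name__ == "__main__":
        unittest.main()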

Link to comment

This is going to be painful. Not so much because you are refactoring code (many of us do that all the time), but because you are switching paradigms, so it's going to be a complete rewrite and you won't be able to keep the perfectly good, working and tested code modules (even the worst programs have some).

But the good news is: there will still be only one person who understands the code, only it won't be the other guy.

I usually find one of the hardest initial steps is deciding where to start. I strongly recommend you don't do it all in one go, but rather use an iterative approach. Identify encapsulated functionality (e.g. a driver) and rewrite that, but maintain the same interface to the rest of the code (to begin with). This way you will be able to leverage existing test harnesses and, once that module is complete, still be able to run the program for system tests. Then move on to the next.

At some point you will eventually run out of single nodal points and find that you need to modify the way the modules interact to realise your new architecture. But by that point you will have gotten over the initial learning curve and will be confident enough to make much riskier changes whilst still having a functioning application.

The big bonus of approaching it this way is that you can stop at virtually any point if you run into project constraints (you run out of time/budget, another project gets higher priority, you contract a serious girlfriend, etc.) and still have a functioning piece of software that meets the original requirements. You can put it on the shelf to complete later and still sell it, move it into production, or whatever you do with your software.

Edited by ShaunR
Link to comment

Thanks for all the feedback. Some very good points in there to consider.

Regarding PPLs, I currently package my plugins into an LLB using the OpenG builder. I do this to make them distributable. Essentially they become a 'dll' at this point, allowing me to install them on my target machines. The advantage of this is all the VIs used by the plugins are included and name-spaced at the time of the build so that I can never have a conflict with a similarly named file from another LLB or part of the code. The other advantage is I can pull from the same pool of reuse code for all plugins and I only get a snapshot of the VIs at the time of build.

The disadvantage is that I am using a non-standard build tool (OpenG Builder), and I want to separate the compiled object code from my source (a new LV 2010 feature), which I don't think will work with the OpenG Builder. I was hoping the PPLs would give me the same functionality that I get from these LLBs. For those of you who have had issues with PPLs, can you give me more details or reference some CARs so I can see if the bugs will affect me?

Regarding unit testing, my goal was to apply it to my reuse code. And I have a lackey I plan to utilize to write these tests. :-) The hope is this will make the reuse libraries more robust and ensure they continue to work as expected, since everyone will be using them in their projects.

As for rewrite vs. refactor, I plan to branch my code and develop the new version by editing the existing code. But I need to continue to maintain the existing code while this refactoring takes place, so I will be pulling existing code over from the branch and using it in the new version where it fits. This is not a total rewrite from scratch; all the functionality that exists will remain. Just the methods used to achieve it will be upgraded to more modern best practices. I don't plan to release any of the new code to production until the refactor is complete. And just so I am clear, this is not just about changing the code to use OOP, but rather about changing the dev process to produce better software in a team environment, instead of the lone LabVIEW ranger (me) cranking out code as fast as possible. I want to be a real CLA, not just a CLD on steroids...

And finally, I hope the end result is code that my whole team can understand and maintain, not just me. Yes, it hurts job security, but it lets me build an efficient software dev environment that I and my team will benefit from for years.

I think when I am done, this will make a great case study to present at NI Week 2012.

Link to comment

Any opinion on the NI Unit Test Framework vs the JKI unit test package?

(I have one licence for the NI version but need to pick one before getting everyone licensed)

Hey John, here are a few earlier threads on the subject, plus a couple of others on testing, that you may find interesting:

Unit Test Frameworks: NI vs JKI?

What unit test framework for LabVIEW do you use?

Unit Testing Strategies with xUnit?

Your favorite unit test solution for LabVIEW

Unit Testing dynamic dispatch VIs

Testing private class vis

Link to comment

Any opinion on the NI Unit Test Framework vs the JKI unit test package?

I've used both but am an expert in neither. Both appear to be capable of doing the job. My impressions...

NI's UTF is easier to start using right out of the box and doesn't require any coding. It's more of an application than a framework: select a VI to test, define the inputs, check the outputs, done. You'll probably need to write code for more advanced unit test scenarios, though. Unfortunately I have had some issues that pretty much make NI's UTF unusable. (Link) ("Unusable" is probably too strong a word; "not worth the effort" is better.)

JKI's UTF is a true framework with all the plusses and minuses you might expect. Based on LVOOP, it offers more flexibility in ways that are natural to an OO programmer. On the downside, it's taken me some time to figure out how to effectively design test cases and test suites and there aren't a lot of examples to learn from. (And I'm still not sure I'm doing it "right.") True to form, JKI (usually Omar) has always been quick to respond to questions and helpful.

One thing I've learned is that unit testing is not an "I'll just quickly bang out what we need" task. It's a separate development effort all on its own, requiring careful planning and forethought along with its own brand of expertise. I guess that's why software development houses have dedicated teams for testing. Give yourself (or your lackey) plenty of time to experiment with the framework, learn what kind of supporting code you need to write and when, how to best reuse test code, etc.

Final thought: In the past I've naively viewed unit testing a bit like a magic bullet. Turns out... not so much. It's good at catching certain kinds of bugs, such as when an interface is implemented incorrectly. Ultimately it will only catch bugs you're testing for, and if you've thought to test for them chances are you wrote the code correctly in the first place. Unit testing is only one part of a good test process. User-driven scripted tests (a written list of steps for the tester to do) and exploratory testing are valuable techniques too.

Link to comment

Final thought: In the past I've naively viewed unit testing a bit like a magic bullet. Turns out... not so much. It's good at catching certain kinds of bugs, such as when an interface is implemented incorrectly. Ultimately it will only catch bugs you're testing for, and if you've thought to test for them chances are you wrote the code correctly in the first place. Unit testing is only one part of a good test process. User-driven scripted tests (a written list of steps for the tester to do) and exploratory testing are valuable techniques too.

Indeed. It is more risk management than a no-bugs solution. The mere fact that you are writing more code (for the purpose of testing) means that even your test code will have bugs, so software that tests software actually introduces the risk that you will expend effort finding a solution to a non-existent bug in the main code.

Unit testing (white-box and black-box) has its place. But it is only one method of a number that should be employed, each to a greater or lesser extent. We mustn't forget system testing, which tests the interaction between modules and fitness for purpose, rather than whether an individual module actually does what it is designed to do.

The main issue for any testing, though, is that the programmer who created the code under test "should" never be the person who tests it, or who writes any code that tests it. The programmer will always design a test with an emphasis on what the module is supposed to achieve, to prove that it meets the design criteria - that's his/her remit. Therefore the testing becomes weighted towards proving the positive rather than the negative (relying on error guessing alone), whether a software testing solution is used or not. It's the negative (unanticipated) scenarios where the vast proportion of bugs lie, and expecting the programmer to reliably anticipate the exceptions, when he/she is inherently focused on the operational aspects, is unrealistic and (probably) the biggest mistake most companies make.

Edited by ShaunR
Link to comment

I was hoping the PPLs would give me the same functionality that I get from these LLBs. For those of you who have had issues with PPLs, can you give me more details or reference some CARs so I can see if the bugs will affect me?

See these threads for a start:

packed project libraries

Libraries, Packed Libraries, Source Code Distributions, and the End of the Universe

Link to comment

The main issue for any testing, though, is that the programmer who created the code under test "should" never be the person who tests it, or who writes any code that tests it.

I disagree with the statement as it is written, but I suspect we agree on the bigger picture.

I think the developer should write unit tests for the code he has developed. (And I know this is common in some software dev environments.) As you said, it helps verify the 'positive' requirements have been met. Well-written unit tests also help communicate the developer's intent to other programmers. The very act of writing tests puts me in a different mindset and helps me discover things I may have missed. Requiring at least a basic set of unit tests keeps the developer honest and can avoid wasting the test team's time on silly oversights.

However, that set of unit tests should not be blindly accepted as the complete set of unit tests that verifies all (unit testable) requirements have been met. When the component is complete and checked in, the test team takes ownership of the developer's unit tests and adds their own to create a complete unit test suite for that component. And of course, in a "good" software development process the developer never has the authority to approve the code for production. I'm pretty sure we agree on that.

Link to comment

For those of you who have had issues with PPLs, can you give me more details or reference some CARs so I can see if the bugs will affect me?

I am creating an application that uses packed project libraries for plugins. The only "issue" I ran into involved using classes. See this thread. It is causing me to consider either not using classes in my plugins unless they are used only within that plugin, or sticking with source distributions. Using "replace library with packed library" bit me since it is one-way. Is there a plan to add "replace packed library with library" functionality?

Link to comment

I disagree with the statement as it is written, but I suspect we agree on the bigger picture.

Possibly :)

I think the developer should write unit tests for the code he has developed. (And I know this is common in some software dev environments.) As you said, it helps verify the 'positive' requirements have been met. Well-written unit tests also help communicate the developer's intent to other programmers. The very act of writing tests puts me in a different mindset and helps me discover things I may have missed. Requiring at least a basic set of unit tests keeps the developer honest and can avoid wasting the test team's time on silly oversights.

Perhaps it was worded ambiguously, since I did not mean to imply that the developer should never write any code to verify his software, but rather that such code should not be used as the formal testing process. Most developers want to develop "bug-free" software, and it's useful for them to automate common checks. But I am arguing that this is for the developer to have confidence in his code before proffering it for formal acceptance. The formal acceptance (testing) should be instigated by a third party that designs the tests from the documentation; reliance on the developer's test harness for formal acceptance is erroneous for the previously stated reasons.

However, that set of unit tests should not be blindly accepted as the complete set of unit tests that verifies all (unit testable) requirements have been met. When the component is complete and checked in, the test team takes ownership of the developer's unit tests and adds their own to create a complete unit test suite for that component. And of course, in a "good" software development process the developer never has the authority to approve the code for production. I'm pretty sure we agree on that.

I think this is probably where we diverge.

My view is that "that" set of tests is irrelevant. It is always the "customer" that designs the test (by customer I mean the next person in the deliverables chain - in your case, I think, production). The tests are derived from the documentation, and the principle is that you have two separate and independent thought processes checking the software: one thought process at the development level and - after RFA (release for acceptance) - one at the acceptance level. I should point out that when I'm talking about acceptance in this context, I just mean that a module or identifiable piece of code is marked as completed and ready to proceed past the next gate.

If the test harness that the developer produced is absorbed into the next level after the gate, then you lose the independence and the cross-check. If it hadn't passed the developer's checks (whether he employs a test harness, visual inspection or whatever), it wouldn't have been proffered for acceptance - the developer already knows it passes his checks.

Link to comment
  • 1 year later...

Bringing this thread back from the dead...

So, I am still planning to do this; it just got put off a bit. In the meantime I have been collecting more ideas and information. One area I want to address first is coding standards. I want to get my coding guidelines down on paper, reviewed by the group and used for future coding. I have lots of ideas about what I want to include, but would welcome your thoughts. If you have a coding standard doc you want to share, that would be great! I am always happy to steal borrow ideas I find worthwhile.

Things I plan to address:

Style (mainly what is included in Peter Blume's book)

Commenting

Documentation

Unit Testing

Use of VI Analyzer

Organization of source code on disk and in projects

Code reuse

Code review process

What else should be included?

Thanks for the input.

-John

Link to comment

I'd also be interested in any coding standards that people want to share.

Some thoughts that I don't believe have been covered:

- using required inputs on subVIs where appropriate instead of leaving them as recommended

- use of typedef controls

- removing the default case from a case structure

Link to comment

I want to get my coding guidelines down on paper, reviewed by the group and used for future coding. I have lots of ideas about what I want to include, but would welcome your thoughts.

To be honest my first thought was, "I hope he doesn't over-specify the 'acceptable' way to write code." My second thought is some of the items you listed do fall under coding guidelines, some might be coding conventions, and some are more about your internal software development process.

More thoughts on specific topics posted for the sake of discussion...

Style - I define style fairly narrowly as the non-functional aspects of the VI--stuff the compiler doesn't care about. Things like FP/BD layout, using FP control terminals or icons on the BD, whether comments have a green background, etc. Blume's style book goes far beyond basic style. A lot of it is really about convention, processes, and good code.

The book is an excellent starting point and required reading for new LV developers. At the same time, I can't imagine working at a company where all those rules were enforced--or even just the high priority rules. I'd spend more time worrying about checking off the style requirements than the customer's requirements. Happy developers are productive developers. Overly specific style rules = unhappy developer.

The whole point of good style is to make it easier for other developers to read and understand your code. As experienced developers, I think it is our responsibility to adapt to reasonable style variations, not enforce one particular set of rules as the "correct" way to write code. There are many style rules commonly accepted as the "right" way to code I have happily chucked out the window, and I think my code is more readable as a result. (The most obvious one is "never route wires behind a structure.")

Commenting - Helpful comments are good. I've seen suggestions that every vi should have a comment block. That smells of over-specification. Do we really need to add a comment block to every class accessor method? Good commenting is a learned skill. If a block diagram is insufficiently commented that can be pointed out in a peer review or code review meeting.

One thing to be aware of... I know my tendency is to write comments about what the code does, because that's what I'm thinking about during development. In general it is more useful to comment on why the code does what it does.
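A tiny, made-up illustration of the difference (in Python, purely hypothetical):

    timeout_ms = 500

    # Comment on the "what" - adds nothing the code doesn't already say:
    timeout_ms *= 2   # double the timeout

    # Comment on the "why" - what the next maintainer actually needs:
    timeout_ms *= 2   # the first query after power-up is slow, so give
                      # the instrument extra settling time before giving up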

Unit Testing - Targeted unit testing is very useful. Comprehensive unit testing is burdensome. Omar hit the nail on the head with his earlier post,

...it's not valuable to create a comprehensive unit test suite (testing all VIs for all possible inputs). What's most valuable is to identify the core things you care about and test those things for the cases you care about.

If you're using LVOOP and want to unit test, JKI's VI Tester is your friend. So is dependency injection. So is the book xUnit Test Patterns.
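Dependency injection isn't LabVIEW-specific, so, with the caveat that this is just a text-language sketch (Python, invented names) of what in LVOOP would be "wire in a parent class and rely on dynamic dispatch", the idea is:

    class Instrument:
        """Abstract dependency - the 'parent class' in LVOOP terms."""
        def read_voltage(self) -> float:
            raise NotImplementedError

    class RealDMM(Instrument):
        def read_voltage(self) -> float:
            ...  # would talk to hardware here

    class FakeDMM(Instrument):
        """Test double injected by the unit tests."""
        def __init__(self, value: float):
            self._value = value
        def read_voltage(self) -> float:
            return self._value

    class Monitor:
        # the dependency is passed in, not created inside the class
        def __init__(self, instrument: Instrument):
            self._instrument = instrument
        def over_limit(self, limit: float) -> bool:
            return self._instrument.read_voltage() > limit

    # A unit test never needs the hardware present:
    assert Monitor(FakeDMM(5.0)).over_limit(3.0) is True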

Unit testing is not the same as integration testing, but you can use a unit test framework to do some integration testing if you want.

Organization of source code on disk and in projects - Traditional LV thought is that the disk hierarchy and project hierarchy should match. This is another "best practice" I've discarded as unhelpful. During active development I'm constantly rearranging code in my project; I don't want to have to move it on disk every time as well.

I use lvlibs to define each major component in my application, with each component consisting of several classes. On disk I'll have a directory for each lvlib and a subdirectory for each class. This lets me easily grab an entire component and copy it somewhere else if I want to. I don't partition them any further on disk, though I usually do in the project explorer. Occasionally I'll have to move stuff around on disk, but not too frequently.

I saw a post from AQ recently where he indicated he uses an even flatter disk structure than that. In his main project directory he'll have a subdirectory for each class. That's it. When the project is near completion he reorganizes the project on disk to reflect the final design. I may give that a go.

Code reuse - If you have reusable code, deploy it to your developers using vipm. Don't link to the source code.

Personally I don't worry about implementing reusable code at the VI level. It's usually not worth the effort for me. I focus on reusable components.

I've mentioned this several times before, but I prefer to let my reuse code evolve into existence rather than trying to design it up front. Trying to support poorly designed reusable code is worse than not having reusable code, and the initial design is never right.

using required inputs on subVIs where appropriate instead of leaving them as recommended - Yeah, but the question is: where is it appropriate? Usually I make an input required if the default value is not a valid input - like if I'm creating object A that requires a pre-configured object B. The CreateClassA method will have a required ClassB terminal because a default ClassB object is non-functional. Ultimately it's a decision for the API designer. I have a hard time imagining there is a universally applicable rule that should go on a style checklist.
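For what it's worth, here's a rough text-language parallel (Python, hypothetical names) of that rule - required where the default isn't usable, recommended where it is:

    class Channel:                       # stands in for the pre-configured "ClassB"
        def __init__(self, name: str):
            self.name = name

    class Logger:                        # stands in for "ClassA"
        def __init__(self, channel: Channel, prefix: str = ""):
            # 'channel' is the required terminal: there is no usable default,
            # so refuse to construct without a configured one.
            if channel is None:
                raise ValueError("Logger needs a configured Channel")
            self._channel = channel
            # 'prefix' is the recommended terminal: its default is perfectly valid.
            self._prefix = prefix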

removing the default case from a case structure - It depends on what's wired into the case selector. Often I will remove the default case if an enum is connected. On the other hand, if I'm casing out on a string, the default case is where I post debug messages telling me about an unhandled string. One thing I try to avoid is putting important functional code in the default case.
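Loosely translated into text code (Python, made-up names; the analogy is imperfect, since removing the default case on an enum in LabVIEW breaks the VI at edit time, whereas this only fails at run time):

    from enum import Enum, auto

    class Mode(Enum):
        IDLE = auto()
        RUN = auto()

    def handle_mode(mode: Mode) -> str:
        # enum-driven "case structure" with no default: an unhandled
        # value fails loudly instead of silently falling through
        if mode is Mode.IDLE:
            return "waiting"
        if mode is Mode.RUN:
            return "acquiring"
        raise ValueError(f"unhandled Mode: {mode}")

    def handle_command(cmd: str) -> str:
        # string-driven "case structure": keep a default, but use it only
        # to report the unhandled string, never for important functional code
        if cmd == "start":
            return "acquiring"
        if cmd == "stop":
            return "waiting"
        print(f"DEBUG: unhandled command '{cmd}'")
        return "waiting"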

Edited by Daklu
Link to comment
