
Unit Testing Strategies with xUnit?


Daklu


A couple of weeks ago I was getting ready to release a preview of the LapDog List Collection when I thought, "I really ought to have some unit tests for this and make them available to end users." So I reinstalled JKI's VI Tester (the best free unit test framework available for LV) and started playing around with it again. VI Tester is based on xUnit, so anyone who is familiar with that family of test frameworks can likely help me out.

I'm struggling a bit with figuring out the best way to organize the testing I want to do. The stuff I've read online about xUnit testing implies that each class being tested should have a single TestCase class for it. True? I'm finding I need different Setup methods (requiring different test cases) to test different aspects of my code. I'm also wondering if each TestCase should give valid results if executed directly? Several of my test cases require some sort of configuring before executing them. I do this in the TestSuite code, but it means the TestCase returns "incorrect" results if I run it independently.

I've included the project source and unit test code if anyone wants to take a look. Unfortunately it's dependent on an error handling library that I don't have access to right now, so you won't be able to execute the code. I'll try to fix that in the next couple days.

[Edit 03 Dec 10 - Added the missing library to the project.]

LapDog List Collection source.zip


I'm struggling a bit with figuring out the best way to organize the testing I want to do. The stuff I've read online about xUnit testing implies that each class being tested should have a single TestCase class for it. True? I'm finding I need different Setup methods (requiring different test cases) to test different aspects of my code. I'm also wondering if each TestCase should give valid results if executed directly? Several of my test cases require some sort of configuring before executing them. I do this in the TestSuite code, but it means the TestCase returns "incorrect" results if I run it independently.

I haven't had enough time to document VI Tester testing strategies properly yet, partly because the *best* patterns are still emerging/evolving. Here are some quick notes on the strategies I've found -- I intend to create more documentation or blog posts, but I've had no time to do this yet.

1) TestSuites are your friend. I haven't really documented these enough or given good examples of how to use them, but they are a powerful way to improve test reuse. TestSuites can do three things for you: (a) group tests of similar purpose; (b) let you configure a test 'environment' before executing tests; (c) let you set TestCase properties. The last one is something I only recently started to do -- you can create a TestCase property 'MyProperty' and set 'MyProperty = SomeValue' in the TestSuite Setup. Note that you'll need an accessor method in the TestCase, and it can't be prefixed with the word 'test' or it will be executed as a test when the TestCase executes. (See the sketch after this list.)

2) A TestCase class can test a class or a VI --> In a truly ideal world each TestCase would be for one test VI and each testMethod in the TestCase would exercise different functionality for the VI under test. LVOOP doesn't really scale in a way that supports this (as far as I've used it so far) and so I typically create a TestCase for each Class and I design my testMethods to test the public API for that class.

3) It doesn't matter if you can't execute a TestCase method without it running within a TestSuite. From the VI Tester GUI, you can select a test method at any level in the tree hierarchy and press the 'Run Test' button and it will execute all of the TestSuite setups that are needed to create the test harness. Similarly, if you execute a test from the API, as long as the test is called from the TestSuite, it will be executed correctly. For debugging these types of tests, I find 'Retain Wire Values' to be my best friend.
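Here is the sketch mentioned above. For anyone more at home in a text-based xUnit framework, this is a rough analogue of points 1 and 2 in Python's unittest: one TestCase per class under test, and a suite builder that configures a TestCase property before running. VI Tester itself is graphical, so this is only an analogy; the class, property, and function names below are made up for illustration.

```python
import unittest


class ListTestCase(unittest.TestCase):
    """One TestCase per class under test; each test method exercises part of
    its public API (a plain Python list stands in for the class here)."""

    # Property a TestSuite can set before running. In VI Tester this would be
    # a class-cluster element exposed through an accessor method whose name is
    # not prefixed with 'test'.
    initial_items = ('a', 'b')

    def test_append_increments_count(self):
        data = list(self.initial_items)   # environment injected by the suite
        data.append('x')
        self.assertEqual(len(data), len(self.initial_items) + 1)


def build_suite(initial_items):
    """Rough analogue of a TestSuite Setup: group related tests and configure
    the test 'environment' by setting a TestCase property."""
    suite = unittest.TestSuite()
    test = ListTestCase('test_append_increments_count')
    test.initial_items = initial_items    # 'MyProperty = SomeValue'
    suite.addTest(test)
    return suite


if __name__ == '__main__':
    runner = unittest.TextTestRunner(verbosity=2)
    runner.run(build_suite(()))               # empty-list environment
    runner.run(build_suite(('a', 'b', 'c')))  # populated-list environment
```

The same test method runs unchanged in both environments; only the property the suite injects differs.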

I hope this helps. I aim to look at your code at some point, but it won't happen until next week as I'm out of the office now.


Thanks for the reply, Omar. I take it you're the resident VI Tester expert? (You seem to be the one always responding to questions about it. :) ) BTW, I updated the original post with code that includes the missing library.

1) TestSuites are your friend.

That's what I figured, but without someone to bounce ideas off of I'm left fumbling around in the dark. It feels a lot like when I first started designing OO applications. :)

I have created several test cases with properties that can be set up by the test suite. That's what prompted the question about independently executing test cases. Since the algorithm that determines the test results is hard-coded into each test method, the test methods themselves will return failures if the properties are not set up correctly.

I haven't really documented these enough or given good examples of how to use them, but they are a powerful way to improve test reuse.

Visualization is reportedly a powerful way to make good things happen, but I haven't figured that one out either. :D

2) A TestCase class can test a class or a VI --> In a truly ideal world each TestCase would be for one test VI and each testMethod in the TestCase would exercise different functionality for the VI under test. LVOOP doesn't really scale in a way that supports this (as far as I've used it so far)

I tried that the last time I dug heavily into VI Tester. I didn't think it worked very well either.

and so I typically create a TestCase for each Class and I design my testMethods to test the public API for that class.

I haven't figured out a good way to do that yet without making the test case really complicated. Not every test method needs to be executed for every test environment. (I don't think there's a way to exclude *some* of a test case's test methods for a specific test suite, is there?) So I ended up putting a lot of checking code in each test case to determine what the input conditions were, so I'd know what to compare and whether the test passed or not. Ugh... my test methods got more complicated than the code they were testing.

In this project I'm testing a single class. I have five test cases for it right now:

ListTestCase-Initialized -- Happy path test cases. The object has been set up correctly before calling any methods.

ListTestCase-Uninitialized -- To test behavior with objects where the Create method hasn't been used.

ListTestCase-ErrorIn -- Test error propagation and make sure the object's data hasn't changed.

ListTestCase-CreateListMethod & ListTestCase-DestroyMethod -- I created independent test cases for the creator and destroyer in order to make sure they obtain and release run-time resources correctly. I do this by injecting a mock dependency object with a queue refnum I can access after the creator or destroyer is called in the test method. But there's no need to test all the other methods with the mock dependency, so they ended up with their own test cases. *shrug*
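Roughly, in text-based xUnit terms (Python's unittest as a stand-in, since VI Tester tests are graphical), the layout described above looks something like this: separate TestCase classes whose Setup establishes each environment. The list object and the simulated error-in condition are illustrative only.

```python
import unittest


class ListTestCaseInitialized(unittest.TestCase):
    """Happy-path environment: Setup establishes a correctly created object."""

    def setUp(self):
        self.data = ['a', 'b']   # stands in for a List created via Create

    def test_insert_index_zero(self):
        self.data.insert(0, 'x')
        self.assertEqual(self.data[0], 'x')


class ListTestCaseErrorIn(unittest.TestCase):
    """Error-in environment: Setup arranges an incoming error; the method must
    propagate it and leave the object's data untouched."""

    def setUp(self):
        self.data = ['a', 'b']
        self.error_in = True     # stands in for an error on the input terminal

    def test_insert_index_zero(self):
        before = list(self.data)
        if not self.error_in:    # the method is a no-op when an error comes in
            self.data.insert(0, 'x')
        self.assertEqual(self.data, before)


if __name__ == '__main__':
    unittest.main()
```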

From the VI Tester GUI, you can select a test method at any level in the tree hierarchy and press the 'Run Test' button and it will execute all of the TestSuite setups that are needed to create the test harness. Similarly, if you execute a test from the API, as long as the test is called from the TestSuite, it will be executed correctly.

I think I understand a little better now... the Test Hierarchy view is a virtual hierarchy. With the latest release (I think) each test case can show up under more than one test suite. The test suite setup code will be called depending on where in the hierarchy view you choose to start the test. Correct?

I hope this helps. I aim to look at your code at some point, but it won't happen until next week as I'm out of the office now.

Absolutely it helps. Thank you.

I realized that if you're going to look at my code I probably need to explain how the code is expected to be used so you understand what I'm trying to accomplish with these tests. I'll try and get something up this weekend. In the meantime the discussion has helped me identify a few places where I think I can improve my test cases.


I have been using VI Tester and I really like it.

!Warning! I don't use it to its full potential and I don't currently use an OO paradigm.

When I am coding along I make test VIs that exercise the code (usually a subVI) I am working on. The test VIs usually provide the inputs to the VI under test and handle other setup. Once I get past some initial testing I move it over to VI Tester. In VI Tester I may add some boundary conditions, move the setup to the test setup class, etc.

This has been working well for me. Using this method I have caught bugs that I wouldn't have caught otherwise. It has also decreased debug time when the program gets deployed.

So in short, I like doing initial proof-of-concept testing just in LabVIEW. When I get a better idea of how I want the VI to perform, I move it over to VI Tester.

Dan


I have been using VI Tester and I really like it.

Really cool! I'm happy to hear this!

!Warning! I don't use it to its full potential and I don't currently use an OO paradigm.

Actually, while VI Tester is built using OO, it is intended that users without OO experience can use it. I'm glad to see that this is the case.

This has been working well for me. Using this method I have caught bugs that I wouldn't have caught otherwise. It has also decreased debug time when the program gets deployed.

At JKI, this has saved us from deploying our products (including VIPM) with major bugs that would not have been caught through normal user testing. I am glad VI Tester has worked for you as well. I think of the tests as an investment against things breaking in the future without me knowing about it.

So in short, I like doing initial proof-of-concept testing just in LabVIEW. When I get a better idea of how I want the VI to perform, I move it over to VI Tester.

Awesome! That is how I tend to use VI Tester as well at the moment.


I haven't figured out a good way to do that yet without making the test case really complicated. Not every test method needs to be executed for every test environment. (I don't think there's a way to exclude *some* of a test case's test methods for a specific test suite, is there?) So I ended up putting a lot of checking code in each test case to determine what the input conditions were, so I'd know what to compare and whether the test passed or not. Ugh... my test methods got more complicated than the code they were testing.

Actually, this is not true. You can call the TestCase.skip method to skip a test. You can do this in two different ways ...

1) You can call skip within the test itself, which is how we show it being used in our shipping example, via the diagram disable structure.

2) You can use it from within a TestSuite -- basically invoking the skip method during TestSuite.New will cause the execution engine to skip that test when the TestSuite is run. This is an undocumented feature and is not obvious.

Here is a screenshot example of what I mean:

[attached screenshot]
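As a rough text-based analogy of those two approaches (again in Python's unittest, since VI Tester code is graphical): the first skip happens inside the test body, the second is decided when the suite is built. The property used to carry the suite's decision is made up for illustration.

```python
import unittest


class ListTestCase(unittest.TestCase):

    # Property a suite can clear to flag a test as not applicable in its
    # environment (loosely mirroring a skip call made while building the suite).
    uninitialized_tests_apply = True

    def test_insert_on_uninitialized_object(self):
        # 1) Skipping from inside the test itself; in VI Tester the diagram
        #    disable structure plays this role.
        if not self.uninitialized_tests_apply:
            self.skipTest('not applicable in this test environment')
        data = None                      # an object that was never created
        with self.assertRaises(AttributeError):
            data.insert(0, 'x')


def initialized_suite():
    # 2) 'Skipping' decided at suite-construction time: the suite marks the
    #    test as not applicable before the run starts.
    suite = unittest.TestSuite()
    test = ListTestCase('test_insert_on_uninitialized_object')
    test.uninitialized_tests_apply = False
    suite.addTest(test)
    return suite


if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(initialized_suite())
```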

In this project I'm testing a single class. I have five test cases for it right now:

ListTestCase-Initialized -- Happy path test cases. The object has been set up correctly before calling any methods.

ListTestCase-Uninitialized -- To test behavior with objects where the Create method hasn't been used.

ListTestCase-ErrorIn -- Test error propagation and make sure the object's data hasn't changed.

I think your implementation is interesting. It should let you scale well, in that you can test various input conditions pretty easily based on the accessor methods you use in TestSuite.New. I have typically done it differently --> namely, sticking all of the List methods in one TestCase and then using multiple TestSuites to test the different input environments. This causes my hierarchy to have more TestSuites than yours. However, I like what you are doing as it makes your tests very easy to read and you can easily reuse the test (by creating a new TestSuite for example) for a ListTest.Specialized child class in the future and your tests should all still work, which is nice.

ListTestCase-CreateListMethod & ListTestCase-DestroyMethod -- I created independent test cases for the creator and destroyer in order to make sure they obtain and release run-time resources correctly. I do this by injecting a mock dependency object with a queue refnum I can access after the creator or destroyer is called in the test method. But there's no need to test all the other methods with the mock dependency, so they ended up with their own test cases. *shrug*

I think this is just a personal choice. You can code it this way so that you don't 'accidentally' misuse the mock object. Or you could have included a Create/Destroy test in each of the other TestCases, made a mock object available for all of those cases, and let each test choose whether or not to use it. (Side note - I know this code was in progress, but I just want to point out that I think you forgot to set the mockObject in the TestSuite.New method for this test case --> it looks like the test will just run with default data right now where I think you intended to inject a mock object using the accessor methods.)

I think I understand a little better now... the Test Hierarchy view is a virtual hierarchy. With the latest release (I think) each test case can show up under more than one test suite. The test suite setup code will be called depending on where in the hierarchy view you choose to start the test. Correct?

Yes, this is correct. And TestSuites can contain other TestSuites, so it is possible for different parts of the test environment to be configured in different 'stages', and the test will run in the combined test harness.


I really appreciate you taking the time to look this over Omar. Obviously I didn't get to post more details about what I was trying to do. I hope you didn't have too much trouble deciphering my intent.

Actually, this is not true. You can call the TestCase.skip method to skip a test... Here is a screenshot example of what I mean:

My initial reaction to your example is that the test method "testInvalid" is going to be skipped in all of the test environments, whereas what I'd need to be able to do is skip it only in certain test environments. Then the pieces started coming together...

Whereas I have multiple test cases for slightly different initial conditions that I establish (to some extent) in the testCase.setup method, you would have one test case for the List class with unique test methods for different initial conditions. For example,

Where I have

ListTestCase-Initialized:testInsertIndexZero, and

ListTestCase-Uninitialized:testInsertIndexZero,

You would have

ListTestCase:testInsertIndexZeroInitialized, and

ListTestCase:testInsertIndexZeroUninitialized

Which is followed by

ListTestSuite-Initialized, and

ListTestSuite-Uninitialized

Each test suite then skips the test methods that don't apply to the initial conditions it established. Essentially, what I am doing with the different test cases you push up to the lowest test suite level. Correct? This naturally leads to the question: if all your setup is done in the test suite, do you ever use the test case's Setup or Teardown methods for anything?

I think your implementation is interesting.

Heh heh... that's what one developer says to another when they think the implementation is wrong. :lol: (j/k)

I like what you are doing as it makes your tests very easy to read and you can easily reuse the test (by creating a new TestSuite for example) for a ListTest.Specialized child class in the future and your tests should all still work, which is nice.

Being able to easily test List and ListImp subclasses was a major goal. Is this something you've had difficulty achieving? (I can't quite get my head around all the ramifications of your implementation.)

I think you forgot to set the mockObject in the TestSuite.New method for this test case --> it looks like the test will just run with default data right now where I think you intended to inject a mock object using the accessor methods.

I believe you are correct. Nice catch!


Whereas I have multiple test cases for slightly different initial conditions that I establish (to some extent) in the testCase.setup method, you would have one test case for the List class with unique test methods for different initial conditions. For example,

Where I have

ListTestCase-Initialized:testInsertIndexZero, and

ListTestCase-Uninitialized:testInsertIndexZero,

You would have

ListTestCase:testInsertIndexZeroInitialized, and

ListTestCase:testInsertIndexZeroUninitialized

Which is followed by

ListTestSuite-Initialized, and

ListTestSuite-Uninitialized

Each test suite then skips the test methods that don't apply to the initial conditions it established. Essentially, what I am doing with the different test cases you push up to the lowest test suite level. Correct? This naturally leads to the question: if all your setup is done in the test suite, do you ever use the test case's Setup or Teardown methods for anything?

Yes, this is typically what I have done. My TestCase Setup methods are still useful for things that MUST be initialized for that specific test. For example, if I need to create a reference I can do it in the TestCase.Setup. It's really more of an art than a science for me right now, but my end goals are simple:

  1. Easy to write tests
  2. Easy to debug tests (alternatively, easy to maintain tests)
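Going back to the Setup point above, the division of labour can be sketched in the same Python unittest analogy: per-test Setup/Teardown creates and releases a reference that only this test needs, while anything environment-wide stays in the suite. The queue below simply stands in for a run-time resource and all names are illustrative.

```python
import unittest
import queue


class CreateDestroyTestCase(unittest.TestCase):
    """Per-test Setup/Teardown handles things that MUST exist for this
    specific test, such as a reference; suites handle the wider environment."""

    def setUp(self):
        # Stand-in for creating a run-time reference (e.g. a queue refnum).
        self.ref = queue.Queue()

    def tearDown(self):
        # Release whatever the test created so tests stay independent.
        self.ref = None

    def test_reference_is_usable(self):
        self.ref.put('item')
        self.assertEqual(self.ref.get(), 'item')


if __name__ == '__main__':
    unittest.main()
```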

Heh heh... that's what one developer says to another when they think the implementation is wrong. :lol: (j/k)

Usually :P But not in this case, at least for me. Until I started working on the last round of VI Tester changes, where I added support for the same test in multiple TestSuites, I wasn't really using TestCase properties with accessor methods to configure a test. But I find it is a powerful tool and intend to use it now that it is supported better. I also haven't really had a use case for creating a bunch of 'Setup' methods for each test case that are called in TestSuite.New --> so I really do think this is an interesting way to set up tests.

Being able to easily test List and ListImp subclasses was a major goal. Is this something you've had difficulty achieving? (I can't quite get my head around all the ramifications of your implementation.)

I like your approach for this, and right now I can't think of a better way to do it. Just thinking out loud ... Since its inception, I've wanted to create a way for tests to inherit from other tests within VI Tester (which is not supported right now), and maybe that would make it easier to run the same tests against your child cases --> I can't recall where I got stuck trying to make this happen within VI Tester development, but I think it may really be the best ultimate solution.

I believe you are correct. Nice catch!

Glad that I could help!


Okay... this weekend I spent some time on this project and ended up rewriting all my unit tests. However, I do think I am getting a much better handle on how to develop unit tests. As I was working on it Saturday morning it dawned on me that my concept of what a "test" is was still too broad.

After fiddling around for a while I stumbled upon the following process Sunday evening that seems to be pointing me in the right direction:

1. Create a single ListTestCase for testing the List class.

2. Go through all the List methods and add copies of all the input terminal controls to the ListTestCase class cluster. (No need to duplicate terminal controls unless a VI has more than one input of the same type.) This lets me unbundle all of the tested VI's inputs from the test case class.

3. Add public setter methods for each of those cluster elements. This lets me completely set up the testing environment in the test suite.

4. Now, looking at each List method, figure out comparisons that cover what all the possible output terminal values are for any set of inputs. For example, for the List:Insert method, if there's an error on the input terminal the List out object should be identical to the List in object. If there isn't an error, then List out should be different from List in. Bingo, there are two test cases right there:

testInsert_ListObjectIsChanged

testInsert_ListObjectIsUnchanged

A few minutes of considering that quickly led to more test cases:

testInsert_CountIsIncremented

testInsert_CountIsUnchanged

testInsert_ErrorIsUnchanged

testInsert_ErrorOutEqualsRefErrorCode (This verifies Insert raises the correct error when appropriate.)

Using the correct combination of 3 of those 6 test methods allows me to test the Insert method under any known set of conditions. Understanding step 4 feels like a significant advance in my overall comprehension of unit testing.

After going through that with all the methods (which in fact turned out to be very straightforward and pretty quick) I set to work creating test suites to define test environments. I've only got two so far... ListTestSuite-ErrorIn, which checks the methods for correct behavior when an error is on the input terminal, and ListTestSuite-EmptyList, which checks behavior of the methods when the list is empty.

I chose to use the array of strings in my test suites to define which test methods are used, instead of auto-populating the list. When I'm setting up the test suite I just open the ListTestCase and look down the list of available test methods. If the condition named by the test method should hold true in the environment I'm setting up, I add it to the list. This part goes pretty quickly too.
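Pulling the whole process together in the same Python unittest analogy: the inputs are properties the suite sets, each test method checks a single output comparison, and a list of test-method names selects which comparisons apply to a given environment. The List:Insert stand-in and all names below are illustrative only.

```python
import unittest


class ListTestCase(unittest.TestCase):
    """Every input of the method under test is a property the suite can set
    (steps 2-3 above); each test method checks exactly one output comparison."""

    initial_items = ('a', 'b')   # contents of 'List in'
    insert_index = 0
    insert_item = 'x'
    error_in = False             # stands in for the error-in terminal

    def _run_insert(self):
        # Hypothetical stand-in for List:Insert built on a Python list; on an
        # incoming error the list must pass through unchanged.
        before = list(self.initial_items)
        after = list(before)
        if not self.error_in:
            after.insert(self.insert_index, self.insert_item)
        return before, after

    def testInsert_ListObjectIsChanged(self):
        before, after = self._run_insert()
        self.assertNotEqual(after, before)

    def testInsert_ListObjectIsUnchanged(self):
        before, after = self._run_insert()
        self.assertEqual(after, before)

    def testInsert_CountIsIncremented(self):
        before, after = self._run_insert()
        self.assertEqual(len(after), len(before) + 1)


def suite_from_names(names, **properties):
    """Suite analogue: the array of strings picks which test methods apply to
    this environment, and the properties configure the TestCase."""
    suite = unittest.TestSuite()
    for name in names:
        test = ListTestCase(name)
        for key, value in properties.items():
            setattr(test, key, value)
        suite.addTest(test)
    return suite


if __name__ == '__main__':
    runner = unittest.TextTestRunner(verbosity=2)
    # ListTestSuite-ErrorIn: only comparisons that hold with an error coming in.
    runner.run(suite_from_names(['testInsert_ListObjectIsUnchanged'],
                                error_in=True))
    # A happy-path environment: comparisons that hold when there is no error.
    runner.run(suite_from_names(['testInsert_ListObjectIsChanged',
                                 'testInsert_CountIsIncremented'],
                                error_in=False))
```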

I am still not quite sure how to organize the tests that use the MockListImp object... I'm thinking it ought to be a separate test case. Still undecided on that one.

Collection-List source v0.zip
