
Unit Testing Error Pass Through



I'm new to unit testing, and new to using VI Tester, so I have a question for which I would like to get some group thought.

If I want to test that my VI under test passes an error through from its error in connector, should I do that as a separate unit test?

I realize the answer may be "it depends", but I wanted to get some opinions before I move forward to modify my unit tests or create new ones. I'm new to writing unit tests, but I like the idea and the results, and I'm curious what the group's response is.


If I want to test that my VI under test passes an error through from its error in connector, should I do that as a separate unit test?

What you're really asking is "how do I do error propagation testing using VI Tester?" One way would be to create unit tests that hard-code an error into your VI under test. However, that approach is difficult to scale as you add more error conditions.

Another way would be to create a TestCase with a property called 'TestError'. See my screenshots below.

[attached screenshots: TestCase with a 'TestError' property]

Now, your only issue is "how do I set the TestError property?" One way would be to set it in the TestCase.SetUp method. The problem with that is that testing a different error condition then requires a code change to your TestCase. Another way would be to create a method called 'SetTestError' and then add your TestCase to a TestSuite (pictured below).

[attached screenshot: TestCase added to a TestSuite]

Now you have a lot of power, because you can easily create multiple tests in your TestCase that all run against the same error conditions you've set up in your TestSuite, and you can validate that all of your VIs propagate the error as expected.
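VI Tester's classes are graphical, but the pattern maps onto any xUnit framework. Here is a minimal sketch of the same idea in Python's unittest, purely as an analogy: the `process_data` function, its (result, error) return convention, and all names are hypothetical stand-ins for the VI under test, not VI Tester's actual API.

```python
import unittest

def process_data(value, error_in=None):
    """Hypothetical stand-in for the VI under test: any incoming
    error must pass through untouched."""
    if error_in is not None:
        return None, error_in
    return value * 2, None

class ErrorPropagationTestCase(unittest.TestCase):
    test_error = None  # analogous to the 'TestError' property

    def set_test_error(self, error):
        """Analogous to the 'SetTestError' method."""
        self.test_error = error

    def test_error_passes_through(self):
        _, error_out = process_data(42, error_in=self.test_error)
        self.assertIs(error_out, self.test_error)

def make_suite(error):
    """Analogous to a TestSuite that injects one error condition
    into every TestCase it contains."""
    suite = unittest.TestSuite()
    case = ErrorPropagationTestCase('test_error_passes_through')
    case.set_test_error(error)
    suite.addTest(case)
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner().run(make_suite(RuntimeError('simulated upstream error')))
```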

The only problem I've found with this method is that there is currently a bug in the VI Tester GUI that prevents having multiple TestCases of the same type within the same TestSuite. This means you will probably need to create another TestSuite for the NoError (or other error-condition) case(s). I've created a sample project and attached it here (in LV 2009, although no LV 2009-specific VIs are used).

VI Tester Error Propagation Example.zip


Omar,

Thanks for your reply. I'm trying to make sense of it and having some difficulty. I'll try to explain what I think you are saying and then let you correct me, if needed.

I think you are saying I would write a specific VI to test error propagation for each of the VIs I want to test, which is what Chris suggested, and similar to what I've pictured below. I'm guessing that I could create VIs like this for every subVI I want to test, via scripting, if I put my mind to it.

[attached screenshot: example error-propagation test VI]

Then, if I use your method of carrying the error in the object, I can use a method to change that error in the setUp VI or in individual unit test VIs. If I want to check how a VI reacts to a specific error, I can use that "Set Test Error" method to provide the specific error to the VI under test. This would let me test how a VI reacts to a specific error, or to a specific set of errors, by running the unit test with those errors set.

I've also attached a picture of a unit test that I'm working with now, just to make sure I'm on the right track.

[attached screenshot: current unit test]


I think you are saying I would write a specific VI to test error propagation for each of the VIs I want to test, which is what Chris suggested, and similar to what I've pictured below. I'm guessing that I could create VIs like this for every subVI I want to test, via scripting, if I put my mind to it.

This is one way to do the error propagation test. It works; there is absolutely nothing wrong with doing it this way.

Then, if I use your method of carrying the error in the object, I can use a method to change that error in the setUp VI or in individual unit test VIs. If I want to check how a VI reacts to a specific error, I can use that "Set Test Error" method to provide the specific error to the VI under test. This would let me test how a VI reacts to a specific error, or to a specific set of errors, by running the unit test with those errors set.

Actually, the benefits of using my method are these:

1) You can use Set Test Error to test specific errors (as you point out)

2) You can easily add unit tests for your other VIs under test, and they will be exercised against the exact same error conditions defined in the TestCase (rather than hard-coding the error in for N tests).

I've also attached a picture of a unit test that I'm working with now, just to make sure I'm on the right track.

[attached screenshot: current unit test]

This test looks fine to me :) It looks like you are on the right track. You are right that scripting can help automate these tasks; however, I don't have any scripting code for generating tests to share. I'm not sure where you're having difficulty with my previous post, so if you have any new questions, that would help.


This is one way to do the error propagation test. It works; there is absolutely nothing wrong with doing it this way.

I'm sure this will do the job too, but I'm not sure it is the best way. In fact, my main concern is how it will scale when I have more subVIs. Any thoughts? I'm looking at a project with ~30-40 subVIs whose error propagation I want to verify. I've done 10-15 so far; it didn't take long to create the tests, and they run quickly.

One thing going for this method is that it is simple, and from my reading on unit testing, simple is good. I gather that unit tests should test one thing, and one thing only: if you need to test 5 different ways your subroutine could fail, you should write 5 different unit tests.

I guess I'm just wondering if I'm missing a shortcut, or if the long road (writing more unit test subVIs) is the better (or only) road to take in this case.

Thanks for your thoughts.


One thing going for this method is that it is simple, and from my reading on unit testing, simple is good. I gather that unit tests should test one thing, and one thing only: if you need to test 5 different ways your subroutine could fail, you should write 5 different unit tests.

I guess I'm just wondering if I'm missing a shortcut, or if the long road (writing more unit test subVIs) is the better (or only) road to take in this case.

Ya, one problem with writing unit tests is that you basically write a lot of tests. I'm pretty sure that whether you use VI Tester or NI's UTF, if you want to test that 30 VIs propagate errors, you'll need to write 30 tests.

It comes down to how many test paths you want to cover to ensure the quality level you need. As an example, JKI's EasyXML product consists of just 4 public API VIs; we have more than 60 unit tests for them. One great thing about writing unit tests, as it relates to TDD, is that you can write unit tests for bugs that have been identified before the fix is in place. We do this for every bug reported to the JKISoft forums for our products. Some unit tests currently fail because we haven't implemented a solution for those issues yet. But the great thing about a unit test is that once it passes, you know from that point forward that if the test fails, something is probably wrong with your code. This means your unit tests add value and stability to your code over time.
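To make the "test the bug before the fix" idea concrete in a text-language analogy: Python's unittest can even mark such a test explicitly until the fix ships. The bug, the `parse_length` function, and the user report here are all hypothetical.

```python
import unittest

def parse_length(text):
    # hypothetical reported bug: thousands separators are rejected
    return int(text)

class ReportedBugTests(unittest.TestCase):

    @unittest.expectedFailure  # remove this marker once the fix ships
    def test_thousands_separator_reported_by_user(self):
        self.assertEqual(parse_length('1,024'), 1024)

if __name__ == '__main__':
    unittest.main()
```

Once the fix lands and the marker is removed, any later failure of this test flags a regression.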

Ultimately, you have to decide which things add value by testing them, and test the hell out of those things. Although, with scripting, it may be possible to create unit tests quickly enough that we don't have to make that value decision as often.
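As a rough sketch of what "scripting the tests" could look like in a text language (in LabVIEW this would be VI Scripting; here the functions, the (result, error) convention, and the generated names are all assumptions):

```python
import unittest

def scale(x, error_in=None):
    """Hypothetical VI under test."""
    if error_in is not None:
        return None, error_in
    return x * 10, None

def offset(x, error_in=None):
    """Hypothetical VI under test."""
    if error_in is not None:
        return None, error_in
    return x + 1, None

# In LabVIEW this list might be discovered by scripting over a project.
VIS_UNDER_TEST = [scale, offset]

class GeneratedPropagationTests(unittest.TestCase):
    pass

def _make_test(func):
    def test(self):
        err = RuntimeError('injected')
        _, error_out = func(0, error_in=err)
        self.assertIs(error_out, err)
    return test

# generate one error-propagation test per function under test
for f in VIS_UNDER_TEST:
    setattr(GeneratedPropagationTests,
            'test_%s_propagates_error' % f.__name__,
            _make_test(f))

if __name__ == '__main__':
    unittest.main()
```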


Ya, one problem with writing unit tests is that you basically write a lot of tests. I'm pretty sure that whether you use VI Tester or NI's UTF, if you want to test that 30 VIs propagate errors, you'll need to write 30 tests.

Thanks, that's what I was gathering from my own unit tests and my online reading.

It comes down to how many test paths you want to cover to ensure the quality level you need. As an example, JKI's EasyXML product consists of just 4 public API VIs; we have more than 60 unit tests for them. One great thing about writing unit tests, as it relates to TDD, is that you can write unit tests for bugs that have been identified before the fix is in place. We do this for every bug reported to the JKISoft forums for our products. Some unit tests currently fail because we haven't implemented a solution for those issues yet. But the great thing about a unit test is that once it passes, you know from that point forward that if the test fails, something is probably wrong with your code. This means your unit tests add value and stability to your code over time.

Thanks for the real world example, I was hoping you would provide one from JKI's products.

Ultimately, you have to decide which things add value by testing them, and test the hell out of those things. Although, with scripting, it may be possible to create unit tests quickly enough that we don't have to make that value decision as often.

I think it's time for me to make those decisions on the project I'm working on.

Thanks,

Chris


Thanks for the real world example, I was hoping you would provide one from JKI's products.

No problem. I'm hoping to write some blog articles related to unit testing with VI Tester in the future, but I've been really busy.

I think it's time for me to make those decisions on the project I'm working on.

Well, technically, it's always the right time to create unit tests :)

Here are some more criteria that we generally use at JKI:

1) Always test code that is critical to the security of your apps. Security may just mean that the code functions correctly; in JKI's case, we also have tests that validate the security of our VIs (password protection, etc.). Yes, you can create unit tests to validate properties of VIs; see this blog article.

2) Always create unit tests for bugs identified by end users. Usually, if an end user finds a bug, it means it's a bug in a shipped product (even if the customer is internal), and you don't want the bug to come back, so create a unit test for the issue.

3) Try to create unit tests for public APIs. If you have a public API (for example, if you are using LVOOP, your public API is the set of Public-scoped methods; for JKI's EasyXML toolkit, it is the four VIs our users call), create unit tests that exercise that API. I don't find it critical to test every possible input to each VI in your API. However, after some time you come to realize that there are certain things you want to test -- like what happens if you feed a 'NaN' into a numeric input of some VI (see the sketch below) -- and those things should definitely be tested. We try to create a unit test for each public method; that way it's easy to create additional tests (by duplication) as other issues arise. In some cases, we have been able to create tests before the code we needed to write was complete -- unit testing your API is a great way to artificially create a customer for the API, and this tends to result in a better API. You start to realize that if you can't unit test your code via your public API, you probably have an issue with the API that you need to correct.
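For example, a NaN edge-case test on a public API function might look like this in a Python analogy (the `normalize` function and its contract are assumptions; the point is that the test pins down the NaN behavior explicitly):

```python
import math
import unittest

def normalize(values):
    """Hypothetical public API function: scale values into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

class PublicApiEdgeCaseTests(unittest.TestCase):

    def test_nan_input_stays_nan(self):
        out = normalize([0.0, float('nan'), 1.0])
        # pin down the contract: a NaN input yields a NaN output
        self.assertTrue(math.isnan(out[1]))

if __name__ == '__main__':
    unittest.main()
```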

I hope this helps!


No problem, I'm hoping to write some blog articles in the future related to unit testing with VI Tester but I've been really busy.

I appreciate the example too. I've been having a little trouble figuring out the best way to organize my tests. In the past I've created a new test case for each unique initial condition of the class I'm testing. For example, if I have a class that holds a private refnum, I'll have one test case with a valid refnum and another test case with an invalid refnum, and I use the setUp method to set the object's initial conditions. I think this is similar to what you described when you referred to setting the TestError property in the TestCase.setUp method. It looks like the preferred breakdown is to use Test Suites to set an object's initial conditions and have a unique Test Case for each subVI? Previously I had thought of Test Suites as simply a way to enable 'single click' execution of a subset of Test Cases.

In the VI Tester Example project (VI Tester Help -> Open Example Project...) the Queue TestCase has a diagram disable structure in the setUp so we can see what the response would be if the queue were not created. Suppose I wanted to test both conditions: valid queue and invalid queue. What would be the best way to set that up in the context of the Example Project?

Another thing I have done in the past is enclose the VI under test in a for loop and create an array of different inputs I want to test. I'm pretty sure this violates the idea that 'each unit test should test one thing', but it is easier than creating a unique test VI for each string input I want to test. I'm wondering how well that technique carries over in this paradigm. Maybe it would be completely unnecessary...?

Link to comment

It looks like the preferred breakdown is to use Test Suites to set an object's initial conditions and have a unique Test Case for each subVI? Previously I had thought of Test Suites as simply a way to enable 'single click' execution of a subset of Test Cases.

I generally use the TestSuite.SetUp method to set up the 'environment' of my test -- for example, if I'm testing a class that depends on some other class, I'll create the dependency in TestSuite.SetUp. Single-click execution (i.e. test grouping) is only one of the benefits of the TestSuite; the real power lies in its ability to help create very flexible test harnesses. Also, TestSuites can contain other TestSuites (not just TestCases), so you can do a lot to keep your test environment modular.
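In a Python analogy, a suite-level SetUp might look like a suite factory that builds the shared dependency once and hands it to each contained test; the resource, class names, and nesting here are illustrative assumptions, not VI Tester's actual API:

```python
import unittest

class DependencyTests(unittest.TestCase):
    shared_resource = None  # injected by the suite builder below

    def test_resource_is_usable(self):
        self.assertIn('host', self.shared_resource)

def make_env_suite():
    """Analogous to TestSuite.SetUp: build the environment once,
    then hand it to every TestCase in the suite."""
    resource = {'host': 'localhost', 'port': 5555}  # the dependent class
    suite = unittest.TestSuite()
    case = DependencyTests('test_resource_is_usable')
    case.shared_resource = resource
    suite.addTest(case)
    return suite

def make_master_suite():
    """TestSuites can nest: a suite may contain other suites."""
    master = unittest.TestSuite()
    master.addTest(make_env_suite())
    return master

if __name__ == '__main__':
    unittest.TextTestRunner().run(make_master_suite())
```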

In the VI Tester Example project (VI Tester Help -> Open Example Project...) the Queue TestCase has a diagram disable structure in the setUp so we can see what the response would be if the queue were not created. Suppose I wanted to test both conditions: valid queue and invalid queue. What would be the best way to set that up in the context of the Example Project?

I think the way I would do this is to create a method that sets the queue refnum and modify the TestCase as described in my first post for error propagation testing. Ultimately, you'd have a 'Valid' TestSuite and an 'Invalid' TestSuite...

Another thing I have done in the past is enclose the VI under test in a for loop and create an array of different inputs I want to test. I'm pretty sure this violates the idea that 'each unit test should test one thing', but it is easier than creating a unique test VI for each string input I want to test. I'm wondering how well that technique carries over in this paradigm. Maybe it would be completely unnecessary...?

Running unit tests in a for loop means you have a bad test design (for VI Tester). Creating a bunch of TestCases in a for loop and adding them to a TestSuite, however, is perfectly OK. I would prefer the latter method; unfortunately, I found a bug in VI Tester's UI that currently makes this really difficult (unless you just want to run your tests through the programmatic API). I have no ETA on when the bug will get fixed, but it's definitely a high priority for VI Tester.
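The difference between the two approaches, sketched in a Python analogy (the `to_upper` function and its inputs are hypothetical): each generated TestCase reports its own pass/fail, whereas a for loop inside a single test stops at the first failing input.

```python
import unittest

def to_upper(s):
    """Hypothetical VI under test."""
    return s.upper()

class UpperCaseTest(unittest.TestCase):
    text = None
    expected = None

    def test_upper(self):
        self.assertEqual(to_upper(self.text), self.expected)

def make_suite(cases):
    """Build one TestCase per input in a for loop and add each to a
    TestSuite -- the pattern described above."""
    suite = unittest.TestSuite()
    for text, expected in cases:
        case = UpperCaseTest('test_upper')
        case.text, case.expected = text, expected
        suite.addTest(case)
    return suite

if __name__ == '__main__':
    inputs = [('abc', 'ABC'), ('a1b', 'A1B'), ('', '')]
    unittest.TextTestRunner().run(make_suite(inputs))
```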

