
Omar Mussa


Posts posted by Omar Mussa

  1. I found the CLA-R to be basically as advertised in the prep materials (practice exam, etc.). I also found it helpful to take the free 'skills assessment' exam online (which basically seems designed to steer you to NI Training - Basics?), as it gives you practice answering NI-style exam questions.

  2. I'm trying to print a "Standard" text report using the Report Generation Toolkit. For some reason, if I set the font to 'Courier' (using "Set Report Font.vi") and print the report, it always prints at the same size (10, I think) no matter what font size I actually enter (I'm trying to use a smaller size, 8). However, if I set the font to 'Courier New', a font size of 8 takes effect (the printout is smaller). I'm using LV2009 SP1. Has anyone else seen this before? Is it a LabVIEW bug in the Report Generation Toolkit (RGT)? Is it a printer driver bug on my machine?

    Also, the 'Font Output' always returns the default data, which makes it useless to me right now; I was hoping to see it update the font name/size so I could verify that the data was set correctly. Another RGT bug?

  3. I've been using LV2010 for the past two months on my relatively new laptop (Dell Inspiron with a Core i5, Windows 7) without any problems. Then, this evening, out of the blue, LV won't start; I get a dialog box saying "LabVIEW Development System Has Stopped Working". See the attached screenshot.

    I had the exact same thing happen last week with my installed copy of LV2009. I have customers using various versions of LV, so I have multiple versions installed. The only way I was able to get 2009 up and running again was to uninstall and reinstall it; trying a repair did not help. This is very time-consuming, considering that I have Vision, RT, and FPGA installed.

    When you get done with this trip, you should definitely consider using virtualization to prevent getting stuck in these situations. If you're already on Windows 7, I'd recommend installing VMware Workstation. Basically, it lets you support each customer project in isolation, which greatly reduces risk across your multiple projects/LV versions. Also, when really painful issues like this happen, you can generally revert a VM (via a backup, snapshots, etc.) much more easily than you can fix an issue on your physical machine.

  4. Unit Testing - Contrary to common perception, unit testing is not free. In fact, it is quite expensive. Not only does it take time for initial development, but you also have to go in and fix the unit tests when a design change breaks them. When a test run results in a bunch of failures, chances are at least some of those failures are due to errors in your test cases. Every minute you spend fixing your test cases so they pass is a minute you're not spending improving your code. Don't get me wrong; I think unit testing can be extremely helpful, and I'm still trying to figure out how best to use it as part of my dev process. But I think it's a mistake to try to create a comprehensive unit test suite.

    I agree with you that unit testing is not free, though I'm surprised the perception that it is free would be common. I would also agree that it's not valuable to create a comprehensive unit test suite (testing all VIs for all possible inputs). What's most valuable is to identify the core things you care about and test those for the cases you care about. Unit tests can be a really useful investment if you plan on refactoring existing code and want to ensure you haven't broken any critical functionality (granted, the tests themselves will most likely need to be refactored during the refactoring process). But they do add time to your project, so make sure you're testing the stuff you really care about.

  5. My question for you guys is, what would be the best way to improve the user experience to prevent this problem from occurring?

    I can't think of a single instance where I'd want a refnum constant to have a default value other than a null refnum. There isn't really any such thing as 'refnum persistence' that I know of, so why should the data type even allow storing a non-null default value?

  6. Whereas I have multiple test cases for slightly different initial conditions that I establish (to some extent) in the testCase.setup method, you would have one test case for the List class with unique test methods for different initial conditions. For example,

    Where I have

    ListTestCase-Initialized:testInsertIndexZero, and

    ListTestCase-Uninitialized:testInsertIndexZero,

    You would have

    ListTestCase:testInsertIndexZeroInitialized, and

    ListTestCase:testInsertIndexZeroUninitialized

    Which is followed by

    ListTestSuite-Initialized, and

    ListTestSuite-Uninitialized

    Each test suite then skips the test methods that don't apply to the initial conditions it established. Essentially, what I am doing with the different test cases, you push up to the lowest test suite level. Correct? This naturally leads to the question: if all your setup is done in the test suite, do you ever use the test case's Setup or Teardown methods for anything?

    Yes, this is typically what I have done. My TestCase Setup methods are still useful for things that MUST be initialized for that specific test. For example, if I need to create a reference, I can do it in TestCase.Setup. It's really more of an art than a science for me right now, but my end goals are simple:

    1. Easy to write tests
    2. Easy to debug tests (alternatively, easy to maintain tests)
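
    To make the two layouts concrete for readers outside LabVIEW, here is a rough analogue in Python's unittest (the List class and all names below are invented for illustration; VI Tester itself is a LabVIEW framework, so this is only a sketch of the structure, not its actual API):

        import unittest

        # Hypothetical stand-in for the List class under test.
        class List:
            def __init__(self, initialized=True):
                self._items = [] if initialized else None

            def insert(self, index, value):
                if self._items is None:
                    raise RuntimeError("List not initialized")
                self._items.insert(index, value)

        # One TestCase holding tests for every initial condition,
        # with the condition encoded in the method name.
        class ListTestCase(unittest.TestCase):
            def test_insert_index_zero_initialized(self):
                List(initialized=True).insert(0, 42)

            def test_insert_index_zero_uninitialized(self):
                with self.assertRaises(RuntimeError):
                    List(initialized=False).insert(0, 42)

        # The suite level picks which condition's tests run -- the analogue
        # of ListTestSuite-Initialized / ListTestSuite-Uninitialized.
        def initialized_suite():
            return unittest.TestSuite(
                [ListTestCase("test_insert_index_zero_initialized")])

        def uninitialized_suite():
            return unittest.TestSuite(
                [ListTestCase("test_insert_index_zero_uninitialized")])

        if __name__ == "__main__":
            unittest.TextTestRunner().run(initialized_suite())
            unittest.TextTestRunner().run(uninitialized_suite())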

    Heh heh... that's what one developer says to another when they think the implementation is wrong. :lol: (j/k)

    Usually :P But not in this case, at least for me. Until I started working on the last round of VI Tester changes, where I added support for the same test in multiple TestSuites, I wasn't really using TestCase properties with accessor methods to configure a test. But I find it a powerful tool and intend to use it now that it is better supported. I also haven't really had a use case for creating a bunch of 'Setup' methods for each test case to be called in TestSuite.New, so I really do think this is an interesting way to set up tests.

    Being able to easily test List and ListImp subclasses was a major goal. Is this something you've had difficulty achieving? (I can't quite get my head around all the ramifications of your implementation.)

    I like your approach for this, and right now I can't think of a better way to do it. Just thinking out loud: since its inception, I've wanted to create a way for tests to inherit from other tests within VI Tester (which is not supported right now), and maybe that would make it easier to run the same tests against your child classes. I can't recall where I got stuck trying to make this happen during VI Tester development, but I think it may ultimately be the best solution.
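
    In text-based xUnit frameworks, that pattern falls out of ordinary class inheritance; here is a minimal Python sketch of what test inheritance would buy (all names hypothetical):

        import unittest

        class List:
            def create(self):
                self._items = []
                return self

            def insert(self, index, value):
                self._items.insert(index, value)
                return self._items[index]

        class SpecializedList(List):
            pass

        # The base TestCase holds the shared tests; a subclass only swaps
        # the class under test. Running SpecializedListTest re-runs every
        # inherited test against the child class.
        class ListTest(unittest.TestCase):
            list_class = List  # overridden by subclasses

            def setUp(self):
                self.lst = self.list_class().create()

            def test_insert_index_zero(self):
                self.assertEqual(self.lst.insert(0, 42), 42)

        class SpecializedListTest(ListTest):
            list_class = SpecializedList

        if __name__ == "__main__":
            unittest.main()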

    I believe you are correct. Nice catch!

    Glad that I could help!

  7. I haven't figured out a good way to do that yet without making the test case really complicated. Not every test method needs to be executed for every test environment. (I don't think there's a way to exclude *some* of a test case's test methods for a specific test suite, is there?) So I ended up putting a lot of checking code in each test case to determine what the input conditions were so I'd know what to compare and whether the test passed or not. Ugh... my test methods got more complicated than the code they were testing.

    Actually, this is not true. You can call the TestCase.skip method to skip a test. You can do this in two different ways ...

    1) You can call skip within the test itself, which is how we show it being used in our shipping example (via the diagram disable structure).

    2) You can use it from within a TestSuite -- basically, invoking the skip method during TestSuite.New will cause the execution engine to skip that test when the TestSuite is run. This is an undocumented feature and is not obvious.

    Here is a screenshot example of what I mean: [screenshot attachment]
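
    The same two options, sketched in Python's unittest for readers who don't have VI Tester in front of them (the flag and all names are invented; VI Tester's actual mechanism is the skip method described above):

        import unittest

        class ListTestCase(unittest.TestCase):
            # A suite can flip this flag before running (hypothetical knob).
            environment_is_initialized = True

            def test_insert_on_uninitialized_list(self):
                # Way 1: skip from within the test itself.
                if self.environment_is_initialized:
                    self.skipTest("only applies to the uninitialized environment")
                self.assertTrue(True)  # real assertions would go here

        # Way 2: decide while the suite is being built -- the analogue of
        # invoking skip during TestSuite.New.
        def initialized_suite():
            ListTestCase.environment_is_initialized = True
            return unittest.TestSuite(
                [ListTestCase("test_insert_on_uninitialized_list")])

        if __name__ == "__main__":
            unittest.TextTestRunner(verbosity=2).run(initialized_suite())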

    In this project I'm testing a single class. I have five test cases for it right now:

    ListTestCase-Initialized -- Happy path test cases. The object has been set up correctly before calling any methods.

    ListTestCase-Uninitialized -- To test behavior with objects where the Create method hasn't been used.

    ListTestCase-ErrorIn -- To test error propagation and make sure the object's data hasn't changed.

    I think your implementation is interesting. It should let you scale well, in that you can test various input conditions pretty easily via your accessor methods in the TestSuite.New method. I typically have done it differently, namely sticking all of the List methods in one TestCase and then using multiple TestSuites to test the different input environments. This gives my hierarchy more TestSuites than yours. However, I like what you are doing: it makes your tests very easy to read, and you can easily reuse the tests (by creating a new TestSuite, for example) for a ListTest.Specialized child class in the future, and they should all still work, which is nice.

    ListTestCase-CreateListMethod & ListTestCase-DestroyMethod -- I created independent test cases for the creator and destroyer in order to make sure they obtain and release run-time resources correctly. I do this by injecting a mock dependency object with a queue refnum I can access after the creator or destroyer is called in the test method. But there's no need to test all the other methods with the mock dependency, so they ended up with their own test cases. *shrug*

    I think this is just a personal choice. You can code it this way so that you don't 'accidentally' misuse the mock object. Or you could have just included a Create/Destroy test in each of the other TestCases, made a mock object available to all of them, and let each test choose whether or not to use it. (Side note: I know this code was in progress, but I want to point out that I think you forgot to set the mockObject in the TestSuite.New method for this test case; it looks like the test will just run with default data right now, where I think you intended to inject a mock object using the accessor methods.)
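
    For the general shape of that mock-injection idea, here is a minimal sketch in Python (every class and method name is invented; the LabVIEW version injects a queue refnum rather than a Python object):

        import unittest

        # Hand-rolled mock standing in for the injected queue dependency.
        class MockQueue:
            def __init__(self):
                self.obtained = False
                self.released = False

            def obtain(self):
                self.obtained = True

            def release(self):
                self.released = True

        # Hypothetical class under test: the creator obtains the resource,
        # the destroyer releases it.
        class List:
            def __init__(self, queue):
                self._queue = queue

            def create(self):
                self._queue.obtain()

            def destroy(self):
                self._queue.release()

        class CreateDestroyTestCase(unittest.TestCase):
            def test_create_obtains_resources(self):
                mock = MockQueue()
                List(mock).create()
                self.assertTrue(mock.obtained)

            def test_destroy_releases_resources(self):
                mock = MockQueue()
                lst = List(mock)
                lst.create()
                lst.destroy()
                self.assertTrue(mock.released)

        if __name__ == "__main__":
            unittest.main()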

    I think I understand a little better now... the Test Hierarchy view is a virtual hierarchy. With the latest release (I think), each test case can show up under more than one test suite. The test suite setup code that gets called depends on where in the hierarchy view you choose to start the test. Correct?

    Yes, this is correct. And TestSuites can contain other TestSuites, so it is possible for different parts of the test environment to be configured in different 'stages', and the test will run in the combined test harness.
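
    As a rough Python illustration of suites nesting inside suites (names invented; unittest composes suites the same way, though the staged setup is only hinted at in comments):

        import unittest

        class ListTestCase(unittest.TestCase):
            def test_insert_index_zero(self):
                self.assertEqual([42], [42])  # placeholder assertion

        # The inner suite configures one 'stage' of the environment...
        def inner_suite():
            # stage-specific setup would run here
            return unittest.TestSuite([ListTestCase("test_insert_index_zero")])

        # ...and an outer suite wraps it with its own stage. Running the
        # outer suite runs the test inside the combined harness.
        def outer_suite():
            # outer-stage setup would run here
            return unittest.TestSuite([inner_suite()])

        if __name__ == "__main__":
            unittest.TextTestRunner().run(outer_suite())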

  8. I have been using VI tester and I really like it.

    Really cool! I'm happy to hear this!

    !Warning! I don't use it to its full potential and I don't currently use an OO paradigm.

    Actually, while VI Tester is built using OO, it is intended that users without OO experience can use it. I'm glad to see that this is the case.

    This has been working well for me. Using this method, I have caught bugs that I otherwise wouldn't have caught. It has also decreased debug time once the program gets deployed.

    At JKI, this has saved us from deploying our products (including VIPM) with major bugs that would not have been caught through normal user testing. I am glad VI Tester has worked for you as well. I think of the tests as an investment against things breaking in the future without me knowing about it.

    So, in short, I like doing initial proof-of-concept testing just in LabVIEW. When I get a better idea of how I want the VI to perform, I move it over to VI Tester.

    Awesome! That is how I tend to use VI Tester as well at the moment.

  9. I'm struggling a bit with figuring out the best way to organize the testing I want to do. The stuff I've read online about xUnit testing implies that each class being tested should have a single TestCase class for it. True? I'm finding I need different Setup methods (requiring different test cases) to test different aspects of my code. I'm also wondering whether each TestCase should give valid results if executed directly. Several of my test cases require some sort of configuration before executing them. I do this in the TestSuite code, but it means the TestCase returns "incorrect" results if I run it independently.

    I haven't had enough time to properly document VI Tester testing strategies yet, partly because the *best* patterns are still emerging/evolving. Here are some quick notes on strategies I've found useful -- I really intend to create more documentation or blog posts, but I've had no time to do this yet.

    1) TestSuites are your friend. I haven't really documented them enough or given good examples of how to use them, but they are a powerful way to improve test reuse. TestSuites can do three things for you: 1) group tests of similar purpose; 2) let you configure a test 'environment' before executing tests; 3) let you set TestCase properties (something I only recently started doing -- you can create a TestCase property 'MyProperty' and set 'MyProperty=SomeValue' in the TestSuite Setup; note that you'll need an accessor method in the TestCase, and its name can't be prefixed with 'test' or it will be executed as a test when the TestCase runs). A rough sketch of point 3 follows this list.

    2) A TestCase class can test a class or a VI. In a truly ideal world, each TestCase would cover one VI, and each testMethod in the TestCase would exercise different functionality of the VI under test. LVOOP doesn't really scale in a way that supports this (as far as I've used it so far), so I typically create a TestCase for each class and design my testMethods to test the public API of that class.

    3) It doesn't matter if you can't execute a TestCase method without running it within a TestSuite. From the VI Tester GUI, you can select a test method at any level in the tree hierarchy and press the 'Run Test' button, and it will execute all of the TestSuite setups needed to create the test harness. Similarly, if you execute a test from the API, as long as the test is called from the TestSuite, it will be executed correctly. For debugging these kinds of tests, I find 'Retain Wire Values' to be my best friend.
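
    Here is the promised sketch of point 3 in Python's unittest terms (names invented; VI Tester's actual mechanism is the TestCase property plus accessor described above):

        import unittest

        class MyTestCase(unittest.TestCase):
            # Property set by the suite; deliberately NOT named test_*
            # so the runner never treats it as a test.
            my_property = None

            def test_uses_property(self):
                self.assertEqual(self.my_property, "SomeValue")

        # Suite construction plays the role of TestSuite Setup: it groups
        # the tests and configures them before execution.
        def configured_suite():
            test = MyTestCase("test_uses_property")
            test.my_property = "SomeValue"  # the 'MyProperty=SomeValue' step
            return unittest.TestSuite([test])

        if __name__ == "__main__":
            unittest.TextTestRunner(verbosity=2).run(configured_suite())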

    I hope this helps. I aim to look at your code at some point, but it won't happen until next week as I'm out of the office right now.

