Posts posted by ak_nz

  1. Hey Guys,

    I just want to mention that JKI is committed to making sure that VI Tester works well in LabVIEW. We can't live without it at JKI and it's going to continue to be improved.  We've been really busy lately, so that's why there hasn't been a new version pushed out.

     

    We see the LV UTF as addressing a totally different market need than VI Tester.  As I see it: UTF is great for people who want to show 100% test coverage through its static analysis tools -- NI created it to address the needs of people developing for regulated industries.  VI Tester is great for people who want to do better and faster software engineering in LabVIEW -- JKI created it to help test object-oriented (and other) LabVIEW applications using the industry-standard xunit architecture.

    I can't comment on time-frames, but please stay tuned and don't give up on us.  BTW, VIPM 2013 SP1 (one of the areas where we've been busy) was just released (do a check for updates)! :)  We couldn't have done that without VI Tester.

     

    Thanks for responding to this! Personally I quite like the VI Tester tool, since being able to run unit tests quickly is a great advantage for me.

     

    As some feedback: I operate in a regulated industry (medical devices); however, operating in a regulated industry does not mean that standard architectures and unit testing techniques (such as xUnit) are ignored. They are an obvious advantage and, perhaps compared to other industries, are virtually compulsory for our projects that are not LabVIEW-based, since they provide a proven mechanism for test infrastructure. And everyone wants to design and test software better and faster - especially those of us in regulated industries ;).

     

    However I concede that the reporting features and code coverage of the UTF are a distinct advantage and add to our ability to prove sufficient test coverage. I can also understand that it is not desirable to tread on the UTF's toes, so to speak.

     

    I look forward to hearing more about VI Tester in the future.

  2. I'm looking at recommending a unit testing tool for LabVIEW code at our workplace. The majority of the code will be OO-based with relatively simple APIs. There appear to be three options:

    • UTF from NI. This has integration into the IDE, test vectors and code coverage. We also have a license to use it. Unfortunately it also seems quite slow (a second or more per test case). My main concern is that the tests will take too long to run on our larger projects (we will have hundreds or more cases), meaning they won't be run frequently. This goes against the common wisdom that unit tests should be quick to execute.
    • VI Tester from JKI. This custom framework does support management of test cases and suites, but it also requires test customisation and does not provide any mechanism for analysing coverage. It also appears not to have been updated to support LabVIEW 2012 and 2013 and thus has a few gotchas (eg. incorrect template copying).
    • Roll our own. Not the best solution with the obvious down-sides.

    What are the community's thoughts on this? Are there more alternatives?
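
    For context on the xUnit architecture mentioned above: the core idea is small, independent test cases with per-test setUp/tearDown, grouped into suites and driven by a common runner. Below is a minimal sketch of that structure using Python's unittest purely as a text-language illustration (the Counter class is hypothetical); VI Tester maps the same concepts onto LabVIEW classes, with setUp.vi and tearDown.vi on each TestCase.

    import unittest

    class Counter:
        """Hypothetical unit under test."""
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1

    class CounterTestCase(unittest.TestCase):
        def setUp(self):
            # Runs before every test method (VI Tester equivalent: setUp.vi).
            self.counter = Counter()

        def tearDown(self):
            # Runs after every test method (VI Tester equivalent: tearDown.vi).
            self.counter = None

        def test_initial_value_is_zero(self):
            self.assertEqual(self.counter.value, 0)

        def test_increment_adds_one(self):
            self.counter.increment()
            self.assertEqual(self.counter.value, 1)

    if __name__ == "__main__":
        # Collects the test methods into a suite and runs them.
        unittest.main()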

  3. Hi All,

     

    I have a project where a LVOOP class is accessed by reference (using a DVR). The DVR approach was chosen because the object data will be modified internally by private methods, multiple clients need access to it, and we are likely to need more than one instance, making LVOOP and DVRs a natural fit. This means that the object is only accessed via a DVR of the object class (with New and Delete methods to create and destroy the DVR). Built and tested in LV 2012 SP1 patch f3.
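
    As a rough text-language sketch of what I mean by "accessed by reference" (Python used purely for illustration; the real implementation is a LabVIEW class wrapped in a DVR, and all names below are made up): New creates the private data and hands back an opaque reference, every public method works through that reference, and Delete disposes of it. The real DVR also serialises access via the In Place Element structure, which this sketch omits.

    import itertools

    _instances = {}                  # reference -> private object data (the "DVR" lookup)
    _next_ref = itertools.count(1)

    class _SharedDevice:
        """Private data; clients never hold this directly, only a reference."""
        def __init__(self, config):
            self.config = config
            self.count = 0

    def new(config):
        """'New' method: create the object and return an opaque reference."""
        ref = next(_next_ref)
        _instances[ref] = _SharedDevice(config)
        return ref

    def increment(ref):
        """A public method: operates on the shared data via the reference."""
        _instances[ref].count += 1

    def delete(ref):
        """'Delete' method: dispose of the reference and the data behind it."""
        del _instances[ref]

    # Multiple clients share the same reference and see each other's changes.
    ref = new({"resource": "PXI1Slot2"})
    increment(ref)
    increment(ref)
    print(_instances[ref].count)     # -> 2
    delete(ref)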

     

    The object is intended to be reusable, so it was built into a Packed Project Library (eg. "PPL.lvlibp"). Using this PPL in another project works comfortably.

     

    It is often useful for us to rename the original PPL file to prevent conflicts (eg. "PPL Renamed.lvlibp"). Validation is not an issue here since we can programmatically access the Original Filename and Version of the PPL for verification. However, if I attempt to access the class inside the renamed PPL, LabVIEW informs me I have a "class conflict". I assume this is because the qualified names of the class and the methods inside it are now namespaced by the PPL.

     

    If I attempt to run a top-level VI that makes calls to the DVR class, LabVIEW will frequently crash (often without the NI Error Reporting Service detecting the shutdown), sometimes with a variety of .cpp errors. Scary stuff; something's telling me I'm doing something wrong.

     

    Does anyone have any experience with this? If I don't rename the PPL, then things seem to work as expected (ie I can create, play with and then delete my new reference object). It seems interesting that renaming the PPL has this effect when there are no other dependencies, unless subVIs in the class can't be properly resolved. Is there any information on the stability of using LVOOP DVR classes within PPLs? Or on the structure of PPLs that renaming the file would affect? NI recommends using a renaming scheme in their white-papers regarding dependencies and PPLs, but this strategy appears to cause more problems than it solves in this particular scenario.

     

    Any help or guidance appreciated.

  4. Thanks for your replies everyone.

    I have been experimenting with using OOP with TestStand. I am using 2012 for both, so I can directly access class methods in TestStand.

    My strategy has been to have pre-defined hardware objects (eg. DMM) that inherit from a Hardware_Base class (with inherited dynamic dispatch methods Init and Close). I also have a static Factory Create method VI in this base class that, given a string, creates the required descendant object type, already initialised by a dynamic dispatch call to Init. Typical Factory Pattern. I can save the object references in TestStand and re-use static method VIs that operate on the class object (private data) to perform the necessary hardware functions. The base class also has a static method VI that calls the dynamic dispatch method Close on the provided descendant object. This is all fine and dandy, and the classes allow me to provide a clean, encapsulated interface for unit testing purposes while minimising tampering with my interfaces.
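
    To make the arrangement concrete, here is a rough sketch in Python (illustration only - the real code is LabVIEW classes called from TestStand, and the names below are made up): an abstract base with Init/Close, a static factory that maps a string to a concrete, already-initialised descendant, and concrete methods that TestStand can call against the stored reference.

    from abc import ABC, abstractmethod

    class HardwareBase(ABC):
        """Stands in for Hardware_Base with dynamic dispatch Init/Close."""

        @abstractmethod
        def init(self, resource: str) -> None: ...

        @abstractmethod
        def close(self) -> None: ...

        @staticmethod
        def create(kind: str, resource: str) -> "HardwareBase":
            """The static 'Factory Create': string in, initialised descendant out."""
            registry = {"DMM": Dmm, "SMU": Smu}
            obj = registry[kind]()       # choose the descendant type
            obj.init(resource)           # dynamic dispatch to the right Init
            return obj

    class Dmm(HardwareBase):
        def init(self, resource):
            self.session = f"opened {resource}"   # stands in for a driver session

        def close(self):
            self.session = None

        def measure_voltage(self) -> float:
            return 1.23                  # stub for the real measurement call

    class Smu(HardwareBase):
        def init(self, resource):
            self.session = f"opened {resource}"

        def close(self):
            self.session = None

    # TestStand-style usage: keep the reference, call methods on it, close via the base.
    dmm = HardwareBase.create("DMM", "PXI1Slot3")
    print(dmm.measure_voltage())
    dmm.close()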

    Since I'm running a LabVIEW interface to house the TestStand ActiveX objects, I'd like to be able to access the object data from this interface (eg. a DMM debug panel showing the allowable private data, such as configuration etc.). There seems to be no easy way to access the object without copying it, turning every object into a singleton, or removing the object concept entirely (such as posting changed data via UIMessages generated in a separate, continuous TestStand thread). I'm sure there must be a way to eat my cake too...

  5. Hi All,

    I am starting a new project and, being a relative newcomer to LVOOP (besides a little playing around with examples etc.), am wondering whether this project is a suitable use case for LVOOP, ie. will I gain any advantages. The project scope is:

    - Automated test equipment to test one and only one product type (ie. model). Tested using LabVIEW for the hardware interface and TestStand for sequencing the actual tests. This is unlikely to change (duplicating the entire equipment for a new model is more likely).

    - Hardware interactions with the product are "one only", ie. there aren't several kinds of DMMs or SMUs etc.; I have one of each type of hardware interface. This is also unlikely to change since changes to the product are unlikely. There will be around 8-9 hardware interfaces (read: PXI modules).

    - The equipment is under a controlled system, ie. change is a prolonged process, of which actual development is a very small part. Improvements in development time for modifications may not noticeably reduce the total time needed for changes.

    Besides encapsulation, I'm not sure LVOOP offers me enough advantages for this project, but I am interested in thoughts and ideas. One idea I have had thus far is implementing a simulation interface for each hardware layer (similar to the HAL concept) to allow me to perform unit testing of the architecture levels above, by substituting in a simulation class derived from a base class (eg. DMM_Simulation derived from DMM_Base) instead of the actual hardware-connected class (ie. DMM_Physical, also derived from DMM_Base) by way of the Factory Pattern. This is useful for development purposes but not of practical use in production.
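
    A minimal sketch of that simulation-substitution idea (Python for illustration only; all names are hypothetical): the code under test only ever sees the base type, so the factory can hand it a simulated DMM when no hardware is attached.

    from abc import ABC, abstractmethod

    class DmmBase(ABC):
        @abstractmethod
        def read_voltage(self) -> float: ...

    class DmmPhysical(DmmBase):
        def read_voltage(self) -> float:
            # Would talk to the real PXI module here.
            raise RuntimeError("no hardware attached in this sketch")

    class DmmSimulation(DmmBase):
        def __init__(self, canned_value: float = 5.0):
            self.canned_value = canned_value

        def read_voltage(self) -> float:
            return self.canned_value

    def create_dmm(simulate: bool) -> DmmBase:
        """Factory: the layers above never know which concrete class they got."""
        return DmmSimulation() if simulate else DmmPhysical()

    # A unit test of the layer above can run without the PXI chassis:
    dmm = create_dmm(simulate=True)
    assert abs(dmm.read_voltage() - 5.0) < 1e-9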

    Thanks for any replies!

  6. Hmmm. In 2009 and 2010 it was max 8 per exec, per priority irrespective of the number of cores. Max 200+1 in total.

    I've just looked in 2011 (I rarely use it) and indeed you seem to be right; you can set 20. However, it can be set to 20 on both my Quad core (8) and my dual core (4) so I think they just bumped up the limit rather than made it scalable in accordance with number of cores.

    OK, very useful to know; I'll bear that in mind for some of our legacy projects running in 2009/2010. Thanks!

  7. The maximum is 8 per execution system, per priority. You can get your twenty by spreading them over 3 execution systems (for normal priority)

    Hi ShaunR. Is this figure of 8 independent of CPU core count (ie. in my case I have 8 logical cores)? The reason I ask is that threadconfig.vi in sysinfo.llb allows me to specify up to 20 threads for each combination of execution system and priority. I might be understanding this wrong; maybe this ability in threadconfig.vi is misleading? On my machine all the table entries were initially set to 8, as you suggested.

    Or is this a case of a single thread per execution system, per priority, per CPU core (hence the default of 8 in the tables on my machine)?

  8. Thanks everyone for your posts.

    The solution I have run with is separating the tasks out into different execution systems. In particular, I placed only the tasks with .net calls into the "Other 2" execution system and specified, via the configuration ini file, 20 threads at Normal priority for this ES (there are at most 20 simultaneous calls to the dll). I added this as a post-build action in the executable build configuration to automate the process.

    This has increased CPU loading by a few percent, as expected, but it has also allowed all the remaining tasks to operate within their required timings (ie. they are now properly "insulated" from the dll calls). So it would appear the .net dll calls were consuming all the available threads, as suggested by Mark Smith.

    I'll dig deeper into whether I can improve the performance any further (ie. by tweaking VI priorities), but this helps me out for now. Thanks all for your help!

  9. Hi All,

    I'm looking for a good resource that explains LabVIEW's Execution System / Thread Allocation / Thread Priority system.

    As background to my request:

    I have an application with over 50 parallel loops running at fixed but configurable rates. Twenty of these loops call a .net dll and are thus not in a Timed Loop (according to NI support there is a known issue with calling a .net dll in a timed loop when the call time is large, ie. upwards of a second). The remaining loops perform other data acquisition. Each loop is what I call a Task Controller - it looks after a specific piece of hardware, taking requests for data (via queues), performing the data acquisition and then pumping the result back to the requester. In order to separate the timing of the functionality (and allow multiple requesters access to the same data), this process is not sequential but occurs in parallel loops.
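
    For what it's worth, here is the shape of one Task Controller translated into a Python sketch (illustration only; names are made up): it owns one piece of hardware, services data requests arriving on its queue, and pumps each result back on the reply queue supplied by the requester.

    import queue
    import threading

    def task_controller(request_q: queue.Queue):
        """One Task Controller loop: request in, acquisition, result back out."""
        while True:
            request = request_q.get()
            if request is None:                    # shutdown sentinel
                break
            reply_q, channel = request
            data = f"reading from {channel}"       # stands in for the real acquisition
            reply_q.put(data)                      # pump the result back to the requester

    # A requester posts a request carrying its own reply queue, then waits for the data.
    request_q = queue.Queue()
    worker = threading.Thread(target=task_controller, args=(request_q,), daemon=True)
    worker.start()

    my_reply_q = queue.Queue()
    request_q.put((my_reply_q, "DMM channel 0"))
    print(my_reply_q.get())

    request_q.put(None)                            # stop the controller
    worker.join()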

    So there is a lot of parallel activity going on. I notice that as more of these loops fire up, the slower the remaining loops become. The CPU usage tends to stay around 7-8% during this time, irrespective of how many loops are executing. Note that the .net dll calls (up to 20) are reasonably slow and each could take up to 6 seconds to execute. The .net dll has been written to handle multi-threading. The PC is a hyper-threaded quad core (ie 8 logical cores) @ 3.3GHz - kind of a meaty machine.

    I should also mention that the majority of the VIs are re-entrant. The only VIs that are not re-entrant are some FGVs and a few User Interface VIs that reference the data in those FGVs. And before you ask, the FGVs are simply Get/Set for a handful of cluster points.

    So I figure it's a simple case of thread starvation. Every VI is currently set to the Standard execution system (via Same as Caller) with Normal priority. I figure that adjusting these settings on the top-level Task Controller VIs may help spread the load onto the remaining available, but currently idle, threads. The subVIs under each Task Controller will continue to use the Same as Caller setting, allowing me to logically separate each Task into an appropriate execution system.
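
    To illustrate what I think is happening (a toy Python sketch with hypothetical numbers): a fixed pool of 8 workers, like one execution system at one priority, gets fully occupied by 20 slow blocking calls, so the quick acquisition tasks queue up behind them instead of running on time. Giving the slow calls their own pool - the equivalent of moving them to a separate execution system - is the fix I'm considering.

    import time
    from concurrent.futures import ThreadPoolExecutor

    POOL_SIZE = 8            # stands in for the threads of one execution system/priority

    def slow_dotnet_call(i):
        time.sleep(2)        # stands in for a 2-6 s blocking dll call
        return i

    def quick_acquisition(i):
        return i * 2         # stands in for a fast DAQ task

    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        t0 = time.monotonic()
        slow = [pool.submit(slow_dotnet_call, i) for i in range(20)]
        quick = [pool.submit(quick_acquisition, i) for i in range(10)]
        for f in quick:
            f.result()
        # With one shared pool the quick tasks only finish after several rounds of
        # slow calls have released workers (around 4 s or more here); with a dedicated
        # second pool for the slow calls they would finish almost immediately.
        print(f"quick tasks finished after {time.monotonic() - t0:.1f} s")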

    Any thoughts?
