ak_nz
Members · 88 posts · 7 days won
Everything posted by ak_nz
-
Hello forum LAVA-ites, I am attempting to write a Tools Menu item to slot into the IDE. The intent of this "Tools" VI is that it determines the application instance that launched it and dynamically runs a VI in that application instance, so that the launched VI can access VIs in that instance. This is basically a custom unit-testing tool that looks for particular VIs in the project that houses the application instance. I was attempting to use the App.MenuLaunchApp property in this "Tools" VI so that I could wire up an Open VI Reference node with the right application instance and the name of the dynamic VI to run. I added an extra indicator on the launched VI to show which application context it is running in.

If I have a VI open from a project and then launch the "Tools" VI from the Tools menu, the dynamically run VI's indicator shows "My Computer" (referring to the application instance housed by the project). However, if I launch the "Tools" VI from the Tools menu while the Project Explorer is the active window, the dynamic VI always launches in NI.LV.Dialog (the indicator on the front panel of this VI shows this).

Is this expected behaviour? Is there a way around this so that the dynamic VI gets loaded into the right application instance even if launched from Project Explorer? Thanks in advance.
- 8 replies - Tagged with: tools menu, dialog (and 1 more)
-
REx - Remote Export Framework and Remote Events
ak_nz replied to Norm Kirchner's topic in Application Design & Architecture
Any chance of this ending up on the Tools Network eventually? - 34 replies
-
Thanks for responding to this! Personally I quite like the VI Tester tool, as running unit tests quickly is a great advantage to me.

As some feedback: I operate in a regulated industry (medical devices), but operating in a regulated industry does not mean that standard architecture and unit-testing techniques (such as xUnit) are ignored. They are an obvious advantage and, perhaps compared to other industries, are virtually compulsory for our projects that are not LabVIEW-based, since they provide a proven mechanism for test infrastructure. And everyone wants to design and test software better and faster - especially those of us in regulated industries.

However, I concede that the reporting features and code coverage of the UTF are a distinct advantage and add to our ability to prove sufficient test coverage. I can also understand that it is not desirable to tread on the UTF's toes, so to speak. I look forward to hearing more from VI Tester in the future.
-
I'm looking at recommending a unit-testing tool at our workplace for LabVIEW code. The majority of the code will be OO-based with relatively simple APIs. There appear to be three options:

- UTF from NI. This has integration into the IDE, test vectors and code coverage, and we already have a license to use it. Unfortunately it also seems quite slow (a second or more per test case). My main concern is that the tests will take too long to run on our larger projects (we will have hundreds or more cases), meaning they won't be run frequently. This goes against the common wisdom that unit tests should be quick to execute.
- VI Tester from JKI. This custom framework does support management of cases and suites, but it also requires test customisation and does not provide any mechanism for analysing coverage. It also appears not to have been updated for LabVIEW 2012 and 2013 and thus has a few gotchas (eg. incorrect template copying).
- Roll our own. Not the best solution, with the obvious downsides.

What are the community's thoughts on this? Are there more alternatives?
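(Editorial aside: the xUnit structure referred to above is language-agnostic. The snippet below is a minimal sketch using Python's unittest module - not LabVIEW and not any of the tools listed - purely to illustrate the case/fixture shape being discussed and why individual cases should run in milliseconds; every name in it is invented.)

```python
# Minimal xUnit-style sketch (Python unittest, not LabVIEW). Hypothetical names;
# each test case should be independent and finish in milliseconds.
import unittest

class DmmDriverTests(unittest.TestCase):
    def setUp(self):
        # Per-test fixture setup; roughly what a Setup VI does in UTF / VI Tester.
        self.readings = [1.01, 0.99, 1.00]

    def test_average_within_tolerance(self):
        average = sum(self.readings) / len(self.readings)
        self.assertAlmostEqual(average, 1.00, places=2)

if __name__ == "__main__":
    unittest.main()   # Discovers and runs every test case in this module.
```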
-
Anyone else OCD about alignment and positioning in block diagrams?
ak_nz replied to Sparkette's topic in LabVIEW General
Yes to the first and thus no to the second. But I'm in recovery, honest. -
Hi All, I have a project where a LVOOP class is accessed by reference (using a DVR). The method of using a DVR was chosen because the object data will be modified internally by private methods, multiple clients need access to it, and we are likely to need more than one of them, making LVOOP and DVRs a natural fit. This means that the object is only accessed by a DVR of the object class (with New and Delete methods to create and delete the DVR). Built and tested in LV 2012 SP1 patch f3.

The object is intended to be reusable, so it was built into a Packed Project Library (eg. "PPL.lvlibp"). Using this PPL in another project works comfortably. It is often useful for us to rename the original PPL filename to prevent conflicts (eg. "PPL Renamed.lvlibp"). Validation is not an issue here, since we can programmatically access the Original Filename and Version of the PPL for verification. However, if I attempt to access the class inside the renamed PPL, LabVIEW informs me I have a "class conflict". I assume this is because the qualified name of the class and the methods inside it are now namespaced by the PPL. If I attempt to run a top-level VI making calls to the DVR class, LabVIEW will frequently crash (often without the NI Error Reporting Service detecting the shutdown), sometimes with a variety of .cpp errors. Scary stuff - someone's telling me I'm doing something wrong.

Does anyone have any experience with this? If I don't rename the PPL, things seem to work as expected (ie. I can create, play with and then delete my new reference object). It seems interesting that renaming the PPL has this effect if there are no other dependencies, unless subVIs in the class can't be properly resolved. Is there any information on the stability of using LVOOP DVR classes within PPLs? Or on the structure of PPLs that renaming the filename would affect? NI recommends using a renaming scheme in their white papers regarding dependencies and PPLs, but this strategy appears to cause more problems than it solves in this particular scenario. Any help or guidance appreciated.
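(Editorial aside on the by-reference pattern itself, independent of the PPL problem: the sketch below is a rough Python analogue of the New / access / Delete lifecycle described above. It is not LabVIEW, all names are hypothetical, and a real DVR also serialises access to the data, which the lock here only approximates.)

```python
# Rough Python analogue (not LabVIEW) of a by-reference class with explicit
# New/Delete VIs. All names are hypothetical; a DVR additionally serialises
# access to the wrapped data, approximated here with a per-object lock.
import itertools
import threading

_registry = {}                 # handle -> object ("reference" table)
_handles = itertools.count(1)

class _Instrument:
    """Stands in for the class private data."""
    def __init__(self):
        self.lock = threading.Lock()
        self.config = {}

def new_instrument():
    """'New' VI: create the object and return an opaque reference."""
    handle = next(_handles)
    _registry[handle] = _Instrument()
    return handle

def set_config(handle, key, value):
    """A method VI that mutates the private data through the reference."""
    obj = _registry[handle]
    with obj.lock:
        obj.config[key] = value

def delete_instrument(handle):
    """'Delete' VI: release the reference and the underlying object."""
    _registry.pop(handle, None)
```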
-
Don't freak us out there...
-
Thanks for your replies everyone. I have been experimenting with using OOP with TestStand. I am using 2012 for both, so I can directly access class methods in TestStand.

My strategy has been to have pre-defined hardware objects (eg. DMM) that inherit from a Hardware_Base (with inherited dynamic dispatch methods Init and Close). I also have a static Factory Create method VI in this base class that creates, given a string, the required descendent object type, already initialised by a dynamic dispatch call to Init. Typical Factory Pattern. I can save the object references in TestStand and re-use static method VIs to operate on the class object (private data) to perform the necessary hardware functions. The base class also has a static method VI that calls the dynamic dispatch method Close on the provided descendent object. This is all fine and dandy, and the classes allow me to provide a clean, encapsulated interface for unit-testing purposes while minimising tampering with my interfaces.

Since I'm running a LabVIEW interface to house the TestStand ActiveX objects, I'd like to be able to access the object data from this interface (eg. a DMM debug panel showing the allowable private data, such as configuration etc.). There seems to be no easy way to access the object without copying it, turning every object into a singleton, or removing the object concept entirely (such as posting changed data via UIMessages generated in a separate, continuous TestStand thread). I'm sure there must be a way to eat my cake too...
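(Editorial aside for readers less familiar with the pattern described above: the snippet below is a loose Python sketch - not LabVIEW or TestStand - of a string-keyed Factory Create plus static wrappers that dispatch to Init/Close. The concrete classes and their data are invented purely for illustration.)

```python
# Loose sketch (Python, not LabVIEW/TestStand) of the factory described above.
# The concrete classes and their data are invented for illustration only.
class HardwareBase:
    def init(self):                      # dynamic-dispatch 'Init'
        raise NotImplementedError
    def close(self):                     # dynamic-dispatch 'Close'
        raise NotImplementedError

    @staticmethod
    def create(type_name):
        """Static 'Factory Create': string in, initialised descendant out."""
        hardware = {"DMM": Dmm, "SMU": Smu}[type_name]()
        hardware.init()
        return hardware

    @staticmethod
    def shutdown(hardware):
        """Static wrapper the sequence calls; dispatches to the right close()."""
        hardware.close()

class Dmm(HardwareBase):
    def init(self):
        self.range_volts = 10.0          # placeholder private data
    def close(self):
        pass

class Smu(HardwareBase):
    def init(self):
        self.compliance_amps = 0.1       # placeholder private data
    def close(self):
        pass

# Usage: dmm = HardwareBase.create("DMM"); ...; HardwareBase.shutdown(dmm)
```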
-
Hi James, thanks for your reply - I'll ponder this while I make the decision.
-
Hi All, I am starting a new project and, being a relative newcomer to LVOOP (besides a little play-around with examples etc.), am wondering whether this project is a suitable use case for LVOOP, ie. will I gain any advantages? The project scope is:

- Automated test equipment to test one and only one product type (ie. model), tested using LabVIEW for the hardware interface and TestStand for sequencing the actual tests. This is unlikely to change (duplicating the entire equipment for a new model is more likely).
- Hardware interactions with the product are "one only", ie. there aren't several kinds of DMMs or SMUs etc.; I have one of each type of hardware interface. This is also unlikely to change, since changes to the product are unlikely. There will be around 8-9 hardware interfaces (read: PXI modules).
- The equipment is under a controlled system, ie. change is a prolonged process, of which actual development is a very small part. Improvement in development time for modifications may not contribute much to reducing the total time needed for changes.

Besides encapsulation, I'm not sure LVOOP offers me enough advantages for this project, but I am interested in thoughts and ideas. An idea I have had thus far is implementing a simulation interface to each hardware layer (similar to the HAL concept) to allow me to perform unit testing of the architecture levels above, by substituting in a simulation class derived from a base class (eg. DMM_Simulation derived from DMM_Base) instead of the actual hardware-connected class (ie. DMM_Physical, also derived from DMM_Base) by way of the Factory Pattern. This is useful for development purposes but not for practical use in production. Thanks for any replies!
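(Editorial aside, to make the simulation idea above concrete: this is a small sketch in Python rather than LabVIEW, with invented method bodies, showing how a factory lets unit tests run against DMM_Simulation while production uses DMM_Physical, with everything above the HAL only ever seeing the base type.)

```python
# Small sketch (Python, not LabVIEW) of the simulation/physical HAL substitution
# described above. The DMM_* names mirror the post; the bodies are invented.
from abc import ABC, abstractmethod

class DmmBase(ABC):
    @abstractmethod
    def read_voltage(self) -> float: ...

class DmmPhysical(DmmBase):
    def read_voltage(self) -> float:
        # The real PXI driver call would live here.
        raise NotImplementedError("requires the physical instrument")

class DmmSimulation(DmmBase):
    def read_voltage(self) -> float:
        return 1.25                      # canned reading, enough for unit tests

def make_dmm(simulated: bool) -> DmmBase:
    """Factory: callers only ever see DmmBase, so tests can swap in simulation."""
    return DmmSimulation() if simulated else DmmPhysical()

# Unit tests call make_dmm(simulated=True); production calls make_dmm(False).
```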
-
Hi ShaunR. Is this figure of 8 independent of CPU core count (ie. in my case I have 8 logical cores)? The reason I ask is that threadconfig.vi in sysinfo.llb allows me to specify up to 20 threads for each combination of execution system and priority. I might be misunderstanding this - maybe this ability in threadconfig.vi is misleading? On my machine all the table entries were set to 8 initially, as you suggested. Or is this a case of a single thread per execution system, per priority, per CPU core (hence the default of 8 in the tables on my machine)?
-
Thanks everyone for your posts. The solution I have run with is separating the tasks out into different execution systems. In particular, I placed only the tasks with .NET calls into the "Other 2" execution system and specified, via the configuration ini file, 20 threads at Normal priority for this execution system (there are at most 20 simultaneous calls to the dll). I added this as a post-build action in the executable build configuration to automate the process. This has fractionally increased CPU loading by a few percent, as expected, but has also allowed all the remaining tasks to operate within their required timings (ie. they are now properly "insulated" from the dll calls). So it would appear the .NET dll calls were consuming all available threads, as suggested by Mark Smith. I'll dig deeper into seeing if I can improve the performance any further (ie. via tweaking VI priorities), but this helps me out for now. Thanks all for your help!
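(Editorial aside: a post-build step like this can also be scripted outside LabVIEW. Below is a hypothetical Python helper that merges thread-configuration tokens from a template into the built executable's ini file. The section and token contents are deliberately left out - the authoritative tokens are whatever threadconfig.vi writes on the target machine.)

```python
# Hypothetical post-build helper: merge thread-configuration tokens from a
# template ini into the built executable's ini. Token names are not shown here;
# use the ones threadconfig.vi actually writes for your LabVIEW version.
from configparser import ConfigParser

def apply_thread_config(exe_ini_path, template_ini_path):
    template = ConfigParser(interpolation=None)
    template.optionxform = str           # preserve token capitalisation
    template.read(template_ini_path)

    target = ConfigParser(interpolation=None)
    target.optionxform = str
    target.read(exe_ini_path)

    for section in template.sections():
        if not target.has_section(section):
            target.add_section(section)
        for token, value in template.items(section):
            target.set(section, token, value)

    with open(exe_ini_path, "w") as ini_file:
        target.write(ini_file, space_around_delimiters=False)

# Example (paths are placeholders):
# apply_thread_config("MyApp.ini", "thread_config_template.ini")
```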
-
Hi All, I'm looking for a good resource that explains LabVIEW's execution system / thread allocation / thread priority system.

As background to the reason for my request: I have an application with over 50 parallel loops running at fixed but configurable intervals. Twenty of these loops are calling a .NET dll and are thus not in a Timed Loop (there is a known issue, according to NI support, with calling a .NET dll in a Timed Loop where the call time is large, ie. upwards of a second). The remaining loops are performing other data acquisition. Each loop is what I call a Task Controller - it looks after a specific piece of hardware, taking requests for data (via queues), performing data acquisition and then pumping the result back to the requester. In order to separate the timing of the functionality (and allow multiple requesters access to the same data), this process is not sequential but occurs in parallel loops. So there is a lot of parallel activity going on.

I notice that as more of these loops fire up, the slower the remaining loops become. The CPU usage tends to stay around 7-8% during this time, irrespective of how many loops are executing. Note that the .NET dll calls (up to 20) are reasonably slow and each could take up to 6 seconds to execute. The .NET dll has been written to handle multi-threading. The PC is a hyper-threaded quad core (ie. 8 logical cores) @ 3.3 GHz - kind of a meaty machine. I should also mention that the majority of the VIs are re-entrant. The only non-reentrant VIs are some FGVs and a few user interface VIs that reference the data in these FGVs. And before you ask, the FGVs are simply Get/Set for a handful of cluster points.

So I figure it's a simple case of thread starvation. Every VI is currently set to the Standard execution system (via Same as Caller) with Normal priority. I figure that adjusting these settings on the top-level Task Controller VIs may assist in spreading the load to the remaining available, but not executing, threads. The subVIs under each Task Controller will continue to use the Same as Caller setting, allowing me to separate each Task logically into an appropriate execution system. Any thoughts?
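(Editorial aside: the Task Controller arrangement described above maps onto a generic request/reply worker. The sketch below uses Python threads and queues rather than LabVIEW loops and queues, and every name in it is invented - it only illustrates the shape of one controller servicing requests for a single hardware resource.)

```python
# Conceptual sketch (Python, not LabVIEW) of one 'Task Controller' loop: it owns
# one hardware resource, services requests from a queue, and replies to each
# requester on the reply queue that the requester supplied. Names are invented.
import queue
import threading

def acquire_data(params):
    return {"reading": 1.23, "params": params}   # stand-in for the slow driver/.NET call

def task_controller(request_queue):
    """One controller loop; run one of these per piece of hardware."""
    while True:
        request = request_queue.get()
        if request is None:                       # shutdown sentinel
            break
        reply_queue, params = request
        reply_queue.put(acquire_data(params))     # pump the result back

# Usage: a requester sends (its own reply queue, parameters) and waits.
requests = queue.Queue()
worker = threading.Thread(target=task_controller, args=(requests,), daemon=True)
worker.start()

my_reply = queue.Queue()
requests.put((my_reply, {"channel": 0}))
print(my_reply.get())        # blocks until the controller replies
requests.put(None)           # stop the controller
worker.join()
```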