Everything posted by Daklu
-
Congratulations Mark! ROFL!
-
Would it be possible to alter the 'Submitted' and 'Last Altered' dates to reflect the real dates the code was added rather than the date it was imported into the new website?
-
But the question is how LV is discovering the dependencies. It's doing more than just reading the dependencies from the .lvproj file, even for those files that it's not supposed to be loading. I just did some quick experiments with dynamically calling classes, and if you're concerned about the number of classes in your project, that's the way to go. The Factory Pattern example is an excellent starting point. The Orange class and Blue class can be removed from the project and they don't show up in dependencies, meaning they won't be loaded when the project is opened. Since the main app generally only uses vis from the parent class (Generic Plugin, in this case), you don't have to go hunting for vis from Orange and Blue. On a large project with many classes I would have a main project that contains the top-level vis and parent classes, but not the dynamically called child classes. To simplify child class development I would have a separate project file for each class hierarchy that contains the parent and all the child classes as part of the project. Using this kind of scheme would have greatly reduced our project load time.
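For readers less familiar with the pattern, here's a rough Python analogue of the LabVIEW Factory Pattern idea described above: the main program references only the parent class, and concrete child classes are looked up by name at run time, so they never appear as static dependencies. All class and function names here are illustrative, not from the actual example.

```python
class GenericPlugin:
    """Parent class; the only type the main application references."""
    def run(self):
        raise NotImplementedError

class OrangePlugin(GenericPlugin):
    def run(self):
        return "orange"

class BluePlugin(GenericPlugin):
    def run(self):
        return "blue"

# In LabVIEW this table would be a class file path handed to a
# load-by-path primitive; a simple registry stands in for it here.
_PLUGINS = {"Orange": OrangePlugin, "Blue": BluePlugin}

def load_plugin(name: str) -> GenericPlugin:
    """Factory: instantiate a child class by name at run time."""
    return _PLUGINS[name]()

plugin = load_plugin("Orange")
print(plugin.run())  # -> orange
```

Because `load_plugin` is the only place the child classes are named, removing a child from the table (or the disk) doesn't break anything the main application statically links to.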
-
I've been debugging the issue in that thread over the past couple days and I discovered there is a device driver on my computer that is starting interrupt storms. When they hit, LV slows to a crawl and completely locks up during certain operations (Save All, mass compile, right-clicking the project window when lots of vis are selected, etc.). The only way to stop the storm is to put the computer to sleep and wake it up again. That gives me anywhere from 1 minute to 2 hours of unhindered programming. If I load the project when my computer is storm-free it takes ~3 minutes, including LV startup time. This is more in line with what I would expect. However, it is still longer than I like and I prefer using subprojects to manage the amount of code that's loaded up at any one time. AQ, in your response you said, "All library files listed in the project are fully loaded into memory when the project loads. VIs that are listed by the project are not..." If non-library vis in a project aren't loaded when the project is loaded, how does LV populate the Dependencies section with its non-project sub vis? I just ran the following experiment:
- Start a new project. Create a main vi. Create a sub vi and put it on the main vi's block diagram.
- Remove the sub vi from the project. Save and close everything.
- Check the .lvproj file with a text editor. The sub vi shows up in the Dependencies section. Close the text editor.
- Without opening the project, open the main vi. Create another sub vi and add it to the main vi. Save and close everything.
- Open the project with Labview. Sub VI 2 appears in the Dependencies section.
- Open the .lvproj file with a text editor. Sub VI 2 does not show up in the Dependencies section.
LV is discovering Sub VI 2 is a dependency at the time the project is loaded, so there *must* be more going on than just checking to see if the files listed in the .lvproj file are at the specified location in the file system.
-
Are there unwritten guidelines for what types of posts/threads we vote up? Should we restrict it to helpful/Labview-related topics, or are off-topic posts that are especially humorous okay? I think there's a logic flaw in the thread rating system. Suppose there's a thread with no value (maybe it's devolved into flame wars or I posted the Gettysburg Address). If I can convince one person to give it 5 stars, it will forever show up as a "good" topic. If enough people give it 1 star the average will come down, but the poster would be rewarded with additional stars. Is it possible to give a thread 0 stars? Or maybe -1 star at the cost of one of your own stars?
-
Very cool Norm. Thanks for sharing that with us! A couple points/questions:
- Do you have an online source code repository or should we just make changes and post them here for review? (I think this tool would benefit from a plug-in architecture.)
- TRef Traverse for CursorContext.vi and TRef LocatorCallback.vi are both password protected. Just pointing that out in case it was unintentional.
- The Quick Drop package doesn't have the core package listed as a dependency.
- Why did you distribute it as an llb? I've noticed lots of tools continue to be distributed that way even though NI has deprecated them.
-
In order from bad to good: "Lab what?" "What's a wire?" "Look at my cool strip chart" "Sub VIs are da bomb" "VI Server? Is that new?" "State machines for everything!" "How can I pass-by-reference?" "I'm a LVOOPer" ('Lavooper?') "Give me a sec to script this..." "I'll just build a framework..." ...
-
...and the two at the top of the list aren't being put on by NI. (Link to NI Week brochure) - Advanced Error Handling Techniques in LabVIEW by VI Engineering - LabVIEW Code Reuse for Teams and Large Projects by JKI Any chance of getting the presentations posted on LAVA after NI Week? Or maybe even (dare I ask?) put a webcast online? There are a couple NI presentations I'd like to see too: - Best Practices for Memory Management and LabVIEW Code Optimization - Beyond the Basics: LabVIEW Debugging Techniques ("Probe to disk?") - New Features in LabVIEW Object Oriented Programming If I recall correctly NI has posted a few presentations in the past. These get my vote.
-
Math would have been much easier had my teachers thought that way...
-
QUOTE (ShaunR @ May 19 2009, 10:23 AM) Not true... you did a fine job explaining... *ahem*... polyconfigurism. :laugh: QUOTE (Ic3Knight @ May 19 2009, 11:48 AM) Thanks for the hint. My only minor issue being that I've never worked with classes at all and don't really know where to start... If you're under a tight deadline, stick with what you know. You will spend a lot of extra time learning the ins and outs of Labview classes. (More so if you're not familiar with OOP concepts at all.) You will also spend extra time trying to design a class hierarchy that works. On my first significant OOP project I had to do major rewrites at least twice and lots of in-between refactoring. That's the cost of learning when you don't have an experienced developer to bounce ideas off of. QUOTE (Ic3Knight @ May 19 2009, 11:48 AM) I was just wondering... how does the "classes" approach lend itself to application building and distributing (i.e. generating exe files etc)? Classes compile into executables just fine. It takes extra work to make classes available to other programming languages via a DLL, though. QUOTE (Ic3Knight @ May 19 2009, 11:48 AM) If I were to use the classes approach, would I have to include the "drivers" for every type of hardware that might be used? Or could I just supply the ones most appropriate for the specific configuration? You only have to supply the drivers for the instruments that are used on that specific test station. If you change instruments you need to make sure the correct drivers are available for that instrument.
QUOTE (Ic3Knight @ May 19 2009, 11:48 AM) One of the reasons we're looking at using the "call by ref" plugin approach is that we want a top level architecture which (in the case of the new project I'm about to start work on, rather than the tuneable laser I mentioned above) would allow us to add support for new/specific hardware (function generators etc) further down the line and simply provide end users with a new "driver" set for the hardware they're using. Here are some blunders I made (er, tidbits I learned) from my last project that I believe are good rules for those learning LVOOP:
- Don't try to make a general class hierarchy that will be universally usable across projects. Focus on a class hierarchy that works for this application. In time, if you find you're using the same classes over and over again, you can consider releasing them as internal shared code.
- Decide what your instrument class's methods should be based on what the application needs, not on what the instrument can do. For example, if your application only ever needs the function generator to output a square waveform at 10 kHz, your Function Generator class doesn't need Set Waveform or Set Frequency methods. You can set up the instrument when you Open it.
- Hardware-based class hierarchies can easily get very complicated, so figure out what combinations of instruments are likely to be used. For example, suppose your application uses 3 multimeters to do measurements, so you implemented a Multimeter class in your code. Down the road they decide they want to replace the multimeters with 2 ammeters and a voltmeter to reduce cost. Can your application handle that? Conversely, suppose you write the application with an Ammeter class and a Voltmeter class, but later want to be able to swap in multimeters. How does that fit in your framework?
If all your instruments used string-based commands, you might want to take a look at the 'configuration file based polymorphism emulation' architecture ShaunR talks about in the Exterfaces thread. (He doesn't like it when I call it 'polyconfigurism.' )
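The "design methods around the application, not the instrument" rule above can be sketched in a few lines (Python standing in for G; the SCPI-style strings and method names are hypothetical, not from any real driver):

```python
class FunctionGenerator:
    """Exposes only what this application needs: a 10 kHz square wave."""

    def __init__(self):
        self.sent = []              # stands in for real driver/SCPI calls

    def open(self):
        # All fixed configuration happens here, instead of in Set Waveform /
        # Set Frequency methods the application would never vary.
        self.sent += ["FUNC SQU", "FREQ 10000"]

    def enable_output(self):
        self.sent.append("OUTP ON")

    def close(self):
        self.sent.append("OUTP OFF")

gen = FunctionGenerator()
gen.open()          # waveform and frequency are set as part of Open
gen.enable_output()
gen.close()
```

The class stays tiny, and if the application later needs a variable frequency, that's the moment to add a method, not before.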
-
Wanted: Ideas and guidance! Plus an offer of dinner!
Daklu replied to Norm's topic in Application Design & Architecture
<offtopic> QUOTE (crelf @ May 15 2009, 07:58 PM) Ring me up next time you're in the Seattle area. I know VI Engineering did some manufacturing test equipment for a project I was working on 1-2 years ago and had 2-3 developers here for several months. </offtopic> -
I couldn't find them there...? Yes, the ACE, BAM, etc. files located in the device driver section simulate responses from real devices because a) there is no such thing as an Ace voltmeter or BAM currentmeter, and b) nobody downloading the sample code would have them anyway. In a real application with real devices, the files I listed in the device driver section would generally be supplied by the manufacturer and shared between projects. Only the code in the 'Project Source Code' folder is project specific. So for the 34401, I wrote the exterface (which is the thin wrapper) and put it with the rest of the exterfaces. I left out the 34401 device driver for the reasons I stated previously.

I think we're still mixed up on terminology. When I say 'device driver' I'm referring to the set of Labview vis that expose all of the instrument's low-level settings and functions. Oftentimes each device driver vi will require some sort of reference to the device, such as a handle or address. In my experience many manufacturers supply Labview device drivers for their instruments (often they are just Labview wrappers around their dll calls) or you can find them on the Instrument Driver Network. Installing them is a matter of unzipping the file and putting them in the <instr.lib> directory. The exterfaces do wrap the device driver so there will always be some coding associated with that, but they do not need to wrap the entire device driver.

Same here. I can't maintain the abstraction so I don't ever bother trying with exterfaces.

LOL. Give me a better name for your system and I'll use it. (FWIW, I don't really like the name "exterfaces." I think it's clunky. But I didn't want to call it "interfaces" because that would cause confusion with people who already understand interfaces in other languages and I haven't thought of another word for it. On the other hand, "polyconfigurism" rolls off the tongue rather nicely. )
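The device-driver vs. exterface split described above can be sketched roughly like this (Python standing in for G; the driver class here is a hypothetical stand-in for the real instr.lib driver, and the values are fake):

```python
class Agilent34401Driver:
    """Vendor-style device driver: low-level, needs a device reference."""
    def __init__(self, address):
        self.address = address          # e.g. a VISA resource name
    def configure(self, function, rng, resolution):
        pass                            # would send a CONFigure command
    def read(self):
        return 1.234                    # would query the instrument

class IVoltmeter:
    """Interface definition: only what the *application* needs."""
    def read_voltage(self) -> float:
        raise NotImplementedError

class X34401Voltmeter(IVoltmeter):
    """Exterface: a thin wrapper binding the driver to IVoltmeter.

    It wraps only the slice of the driver the interface needs, not the
    whole API the instrument exposes.
    """
    def __init__(self, driver: Agilent34401Driver):
        self._drv = driver
    def read_voltage(self) -> float:
        self._drv.configure("VOLT:DC", 10, 1e-4)
        return self._drv.read()

dvm = X34401Voltmeter(Agilent34401Driver("GPIB0::22"))
print(dvm.read_voltage())  # -> 1.234
```

The driver stays whole and reusable between projects; only the small exterface layer is project specific.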
-
QUOTE (ShaunR @ May 15 2009, 08:02 AM) No it isn't. The code I added to the project, XAgilent 34401.lvclass, is stuff I wrote in response to your request. If you open those vis you will see they wrap calls to the 34401 Labview driver supplied by Agilent [Edit: I was wrong. The Labview driver is from the Instrument Driver Network, not Agilent.] and contained in another file, Agilent 34401.lvlib. Maybe I'm misunderstanding what you're saying...? QUOTE (ShaunR @ May 15 2009, 08:02 AM) Except since my interface definitions are on the project level, it's no big deal to add "Personal Servant.vi" to what I already have. I don't have to worry about coding for exceptions that may occur in future applications or maintaining compatibility with previously deployed applications that use the interface. I just have to worry about making it work in this application. And adding the drivers Yes, but that is typically just a matter of downloading and installing them. No coding required. QUOTE (ShaunR @ May 15 2009, 08:02 AM) I have the difficulty that I may have 20 instruments on that project and will never see those instruments again, and there will be another 20 different ones in 3 months time on the next project. Like I think you pointed out. It's the difference between engineering tools and product. Yeah, if I had to derive 20 child classes containing essentially the same code every three months exterfaces would get old really quick--regardless of how thin and light they are. I'm really curious what kind of testing you're doing that contains so many instruments used for such a short time. It sounds like a fairly chaotic environment. I also think it would be interesting if you did a white paper and sample project. Share the knowledge so we're not all reinventing the same thing. Question: As Shoneill pointed out, exterfaces and polyconfigurism are different solutions for slightly different problems. Is there a way to combine them and gain the advantages of both?
Off the top of my head I think it would just unnecessarily complicate things, but maybe you can think of something. QUOTE (shoneill) I didn't realise you were dynamically loading the Read/Write functions using a LUT. LUT?
-
QUOTE (shoneill @ May 14 2009, 01:32 AM) It might provide similar functionality, but the implementation is different I think. I don't have a backend running a parallel process or use any user events. Essentially one device driver instance is created for each instrument connected to the system. Each device driver is placed in a named single element queue. The exterfaces "attach" themselves to an instrument by obtaining the queue that carries the desired instrument and storing the queue as private data. Furthermore, I don't currently have a mechanism for broadcasting available interfaces. I've played around with creating collection objects and managing the interfaces that way, but I wanted to get feedback on what I've got so far before expanding it further. QUOTE (shoneill @ May 14 2009, 01:32 AM) So I could have my Spectrometer servicing both a Spectrometer and a Colorimeter and a Temperature sensor interface. Is that correct? Is that the idea behind the examples posted? I have to confess I've read the document but not tried out the code. You're way outside my knowledge with those instruments so I can't say one way or the other. However, if those are three different types of functions that spectrometers do, then yes. QUOTE (shoneill @ May 14 2009, 01:32 AM) In addition I could jerry-rig a filter wheel (RS-232), a monochromator (GPIB) and a Photomultiplier (DAQ) together to create an Interface for a Spectrometer using three different devices with three different protocols. My top-level software doesn't care, it just calls the "Get Spectrum" method and (after a little while for a scanning spectrometer) delivers the data. That's the idea. The example code treats each interface as an independent entity and doesn't model combining interfaces into a single object yet, although that is the next step. -------------- QUOTE (ShaunR @ May 14 2009, 12:22 PM) None of them are SCPI. I think you're getting hung up on that.
I understand a little better now with your example config files. But it's still not clear to me how you deal with instruments that aren't based on string commands. If you look at the 8451 driver (here) or the Aardvark driver (here) you'll see the calls into the dll don't use string-based parameters. Do you write a new driver that does accept string commands so you can store the sequences in a config file? I also still don't understand how you have one read function work for different instruments. Suppose (for example) you're measuring peak-to-peak voltages on an oscilloscope. What if one scope returns the value in volts and another returns the value as the number of divisions and requires you to calculate the voltage? QUOTE (ShaunR @ May 14 2009, 12:22 PM) But if I use your exterfaces (I think) I can wrap 1 piece of code and use the exterfaces to define higher level functions like entire tests. Sort of. Your interface definition defines the higher level functions. The exterface implements the higher level functions for each instrument that you want to use for those higher level functions. So you do have to wrap more than one piece of code... unless all your instruments use string-based commands, then you can probably use polyconfigurism and a single vi. QUOTE (ShaunR @ May 14 2009, 12:22 PM) I didn't think so, until you put the device driver as an exterface instead of a device driver (as I was expecting). I'm not sure we're on the same page here yet. The 34401 code I added is an exterface, not a device driver. You would still need the 34401 device driver from Agilent to get the exterface to work. QUOTE (ShaunR @ May 14 2009, 12:22 PM) USB DVM's? Sorry, I meant the 8451, Aardvark, and CAS-1000. Straight USB devices... no RS232/USB converters. QUOTE (ShaunR @ May 14 2009, 12:22 PM) And then someone comes along and says "Oooooh. Our Tektronics scope has a really useful feature that enables me to wash the car whilst toasting a muffin. We need that feature too".
And you end up doing it anyway or you end up back in your original conundrum where everything is an exception. Except since my interface definitions are on the project level, it's no big deal to add "Personal Servant.vi" to what I already have. I don't have to worry about coding for exceptions that may occur in future applications or maintaining compatibility with previously deployed applications that use the interface. I just have to worry about making it work in this application.
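The attach-by-name mechanism described earlier in this post (one driver instance per connected instrument, parked in a named single element queue that exterfaces grab and keep as private data) can be sketched in Python; a plain dict stands in for the named queue, and all class names are illustrative:

```python
_instruments = {}   # name -> driver instance (the "named SEQ")

class ScopeDriver:
    """Hypothetical instrument driver, one instance per physical device."""
    def measure_vpp(self):
        return 3.3

class VoltmeterExterface:
    """Exterface that attaches to an instrument by name."""
    def __init__(self, name):
        # "Attach": obtain and store a reference to the shared driver.
        self._drv = _instruments[name]
    def read_voltage(self):
        return self._drv.measure_vpp()

# One driver instance is registered when the instrument is connected.
_instruments["Scope1"] = ScopeDriver()

vm_a = VoltmeterExterface("Scope1")
vm_b = VoltmeterExterface("Scope1")   # both attach to the same driver
print(vm_a._drv is vm_b._drv)  # -> True
```

Because every exterface that names "Scope1" gets the same driver instance, several interfaces can service one physical instrument without duplicating connections.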
-
QUOTE (ShaunR @ May 13 2009, 11:40 PM) If all your instruments already conform to SCPI then I can see how there would be little benefit to implementing another interface on top of it. What do you do when you have to add a non-SCPI compliant instrument or one that doesn't accept string commands? QUOTE (ShaunR @ May 13 2009, 11:40 PM) Non-VISA instrument? Not sure what you mean by this since VISA is a hardware abstraction (Serial, TCPIP etc). Like I said, 90% of devices use these interfaces. I should have said non-VISA driver. The drivers supplied with the 8451, Aardvark, and CAS-1000 did not use VISA. I suppose it might be possible to wrap their driver and make it VISA compatible and SCPI compliant... I did not consider that at the time and even now I'm not sure I'd want to take that bull by the horns. QUOTE (ShaunR @ May 13 2009, 11:40 PM) "polyconfigurism". Now you're just making things up ...lol. Well yeah... but I kinda like the name. QUOTE (ShaunR @ May 13 2009, 11:40 PM) I'm just pointing out that it is possible to envisage an abstraction that is not based around the object you are trying to interface to, which tends to make the software specific for those objects, and you end up writing/copying, pasting code for new devices because the abstraction is mis-targeted. It isn't my intent for the Exterfaces Architecture to be based around a hardware device. That's what burned me in the first place. Interface definitions, like interfaces in text languages, are based on a set of common functions. The IVoltmeter interface can be applied to any device that can measure or calculate voltages: DVMs, oscilloscopes, DACs. It doesn't even have to be a physical instrument. In theory you could implement an exterface that reads current and resistance measurements from a text file and returns the calculated voltage. (Though it would be a bit tricky to implement that in this particular interface definition.)
The example I've included happens to have 4 instruments that are fairly narrow in their capabilities and so I can see how it would look as if that is what I was doing. QUOTE (ShaunR @ May 13 2009, 11:40 PM) I was using "Device Driver" as in the context of your ACE or BAM etc since that is where they appear in the project. "Instrument Driver" perhaps? Either one is fine. Some of your earlier comments made me think you might be referring to the exterface as the device driver. QUOTE (ShaunR @ May 13 2009, 11:40 PM) The exterface (and I may be wrong) looks to me like a wrapper around the "Instrument/Device Driver" to enable multiple inheritance. Yep. (Well, not so much enable multiple inheritance as simulate multiple inheritance.) QUOTE (ShaunR @ May 13 2009, 11:40 PM) Where you could have (for example) a class that takes an HW interface (TCPIP, SERIAL, etc)... None of the instrument drivers I used (as supplied by the mfg) required me to be concerned about the HW transport layer. All three of the instruments mentioned were USB only. I admit it had not occurred to me to be concerned about that. I'll have to think on that for a bit. QUOTE (ShaunR @ May 13 2009, 11:40 PM) I disagree that copying and pasting code between projects is a good thing if no modification is required. I didn't say it was good, I said it's not necessarily bad. I have several small utility vis that I routinely copy and use in projects. Why? Several reasons:
- If another developer checks out my source code the file will be there and he won't have to worry about finding and installing my reuse library. (This was not a mature Labview development house; it was a bunch of engineers working on (for the most part) quickie tools.)
- I had lots of related vis in that particular .lvlib but typically only used one or two of them. I didn't want to pull the whole library into the project.
- Managing shared reuse code tends to take a lot of time. Copying and pasting takes very little time.
QUOTE (ShaunR @ May 13 2009, 11:40 PM) And if you are modifying the code it isn't being re-used so it shouldn't be in a re-use library. Agreed! (And I did say "modifying" in post #9. ) QUOTE (ShaunR @ May 13 2009, 11:40 PM) But the ideal scenario is that all features of all devices are exposed and available, and you just choose which ones to use in the higher level. I disagree. The ideal scenario is where just the right amount of features are exposed and available. Let's say the Ace VM requires 6 steps to initialize and get into the proper state for a particular application and the CAL VM requires 2 steps. Following the traditional class hierarchy we create an abstract Voltmeter base class and do all sorts of work to create a common command set for two very different APIs that still exposes all of the functionality of each. Then we wrap each of the device drivers in Voltmeter child classes. We've just done lots and lots of work to ensure dynamic dispatching for functions we may not ever use and may need to change when we derive a Delta MM child class. On top of that, what do we do with those 6 steps needed to setup the Ace VM? We undo almost all of our work by wrapping them back up in a project sub vi and naming it Init! It would have been much easier to just implement Init using the original device drivers... QUOTE (ShaunR @ May 13 2009, 11:40 PM) I'd go for the "Auto" only. Doesn't give people the opportunity to get it wrong then. See, I'd use the explicit calls. It would make debugging easier. 'Auto' would be the default though.
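The "expose just enough" argument above can be sketched like this: instead of an abstract Voltmeter base class that dynamically dispatches every low-level command, each concrete class hides its own setup sequence (6 steps for the hypothetical Ace, 2 for the CAL) behind a single init(), which is all the application ever calls. Step names are made up for illustration:

```python
class AceVoltmeter:
    """Hypothetical instrument needing 6 vendor-specific setup steps."""
    def __init__(self):
        self.sent = []
    def _send(self, cmd):
        self.sent.append(cmd)       # stands in for a driver call
    def init(self):
        # The whole sequence hides behind one method; nothing here
        # needs a common command set shared with other voltmeters.
        for cmd in ("reset", "self_test", "set_range", "set_nplc",
                    "set_trigger", "arm"):
            self._send(cmd)

class CalVoltmeter:
    """Hypothetical instrument needing only 2 setup steps."""
    def __init__(self):
        self.sent = []
    def _send(self, cmd):
        self.sent.append(cmd)
    def init(self):
        for cmd in ("reset", "set_range"):
            self._send(cmd)

ace, cal = AceVoltmeter(), CalVoltmeter()
ace.init()
cal.init()
```

The application sees the same init() on both; no effort was spent building a superset command API that would mostly go unused.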
-
QUOTE (ShaunR @ May 12 2009, 11:44 PM) It's a good instrument too... much better than the 8451 IMO and $150 cheaper. QUOTE (ShaunR @ May 12 2009, 11:44 PM) I have a solution for that....but you won't like it Hmm... I don't necessarily dislike it, but it seems like it's just a manual way to get dynamic dispatching type functionality (polyconfigurism?) except less universal and more complicated. If I grant you those three assumptions, sending commands to the device I think would be pretty straightforward. Your Read.vi could get very messy with parsing and special cases. Do all the instruments you're ever going to use to measure voltages return string values in the same format? I suppose you could encode regex strings in the config file and use them to extract the value you're interested in. Workable? Maybe, but why bother when classes and inheritance already do that? (The one case I can see for doing this is if you have a shortage of Labview licenses and really smart technicians who don't mind writing obscure codes in config files.) And what happens when you try to implement a non-VISA instrument or one that doesn't use string commands? I don't see any way for polyconfigurism to handle that. The 8451, Aardvark, and CAS-1000 are not VISA devices. Or what if an engineer whips up a DAC circuit that returns a 12-bit integer that needs to be converted to a floating-point voltage? Where does the conversion happen? I changed my mind. I don't like it. It might work with a known set of instruments that fit your assumptions but as a general solution I think it gets way too complicated way too quickly. QUOTE (ShaunR @ May 12 2009, 11:44 PM) If your system is such that you only have to code for exceptions. Then you are winning. Until everything is an exception... QUOTE (ShaunR @ May 12 2009, 11:44 PM) If it's for internal use only, you are better off with a toolkit that you can throw together bespoke tests quickly - something Labview is extraordinary at doing.
My 'application' actually was designed as a toolkit of top-level VIs that would be sequenced using TestStand. (And it would have worked great if it weren't for those pesky design engineers!) The problem is that the top-level toolkit was built on several other layers of toolkits I was developing in parallel. I haven't worked through how to set up the entire architecture using exterfaces instead of the design I did use, but it looks promising. QUOTE (ShaunR @ May 12 2009, 11:44 PM) But there does seem to be a lot of "do it anyway" mentality about. I tried and ended up beating my head against a wall for months. Thanks, but I'll pass on the next round. Time to try a different approach... QUOTE (ShaunR @ May 12 2009, 11:44 PM) OK. Well, let's look at your "exterfaces" in the light of the Agilent example that you kindly provided. I noticed that you didn't put it under "Device Drivers" which I hadn't expected and is why I asked for a driver (although typical of drivers) that didn't fit nice and snugly with the simulated ones so I could see how this worked. I didn't include the device driver itself simply to save space and as far as I know they are freely available. I don't normally include device drivers as part of a project file. Did the project link to your drivers correctly? I can add the driver I used and repost if needed. I'm not sure why you expected the 34401 exterface to be listed in the device drivers section of the project. An exterface isn't a device driver to my way of thinking. In this project the interfaces are an abstraction of a particular type of measurement. (Voltage measurements and current measurements.) The exterfaces implement the abstraction for a specific piece of hardware using the device driver supplied by the vendor. The files in the device drivers folder represent the drivers that are supplied by the instrument vendors and would normally reside in <instr.lib>. QUOTE (ShaunR @ May 12 2009, 11:44 PM) Now.
If your "Device Driver" was based on a class implementation of my previous example with all those obnoxious classical techniques with files and whatnot, the exterface now just defines the sequence of operations, the files (or class alternative) to configure, and the order of the aliases (or class alternative) to retrieve the result. Then you would have an implementation that can instantiate multiple tests/measurements on multiple devices with different configurations. I'm sure you could find a way of incorporating this better than I've described, but this is my current thought process. You lost me. By "Device Driver," do you mean the exterface or the actual device drivers? When you say "a class implementation of my previous example," do you mean a class implementation of polyconfigurism? Isn't the point of polyconfigurism to avoid classes so you can add instruments without writing G code? Can you show me what you mean, maybe by stubbing out a simple example? (Text is fine, or if you're particularly ambitious you could try ASCII art. ) QUOTE (ShaunR @ May 12 2009, 11:44 PM) Lots of copying and pasting means that you haven't encapsulated and refined sufficiently. If you restate that as 'copying and pasting in the same project,' I'll agree with you. But I contend copying and pasting code between projects doesn't necessarily make it a good candidate for inclusion in a shared reuse library, especially if you are modifying the code. I view the interface definitions and exterfaces somewhat like a template or boilerplate code. Copy, paste, edit as needed. QUOTE (ShaunR @ May 12 2009, 11:44 PM) I think this is a particularly bad case in the class implementation strategy which forces you to do this over traditional Labview which would encapsulate without much effort. You lost me again. Do you mean this is a bad case in the Exterface Architecture class implementation strategy or in the Labview class implementation strategy?
QUOTE (ShaunR @ May 12 2009, 11:44 PM) In other languages, changes to the base class affect all subsequent inherited classes with no changes whatsoever. Labview does this too...? QUOTE (ShaunR @ May 12 2009, 11:44 PM) Lots of small changes as opposed to one big change? I'd rather not change anything but I don't mind adding. The interface definition provides the application with the appropriate instrument control resolution. At one extreme we have a simple, high-level interface with Open, Read, and Close methods. At the other extreme is a very low-level interface that defines the superset of all instrument commands. Different applications require different resolutions of instrument control. I can't define an interface that is suitable for all future applications, so I don't even bother trying. The small changes I make are simply to customize the interface's resolution for the application's specific needs. QUOTE (ShaunR @ May 12 2009, 11:44 PM) I've no problems with "create" (that's the same in other languages or you could use "New" as some do). But I can't help thinking that the linkto is a clunky way to return a reference to an object. If the "Create" operated in a similar way to things like queues and notifiers where you can create a new one or it returns a reference to an existing one, it would save an extra definition. Ahh... I get it. If the developer "creates" an instance using a name that already exists, it would automatically link to the previously created instance. I'll have to think about that. Maybe rename it "Attach" with a three-state enum input for 'Create New Instance,' 'Link to Pre-existing Instance,' and 'Auto.' QUOTE (ShaunR @ May 12 2009, 11:44 PM) Indeed. But having created 4 objects already, what happens if you create a 5th? There's nothing in the architecture that prevents the developer from creating a 5th instance. What happens depends on the instrument and the vendor's device driver.
If the instrument is connection-based and a connection has already been established with another instance, the driver will probably return an error. If the instrument is not connection-based then yeah, an inattentive developer could screw things up.
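The three-state "Attach" idea discussed above, modeled on how LabVIEW queues and notifiers behave, could be sketched like this ('Auto' returns the existing named instance if there is one, otherwise creates it; the explicit modes fail loudly when the name is in the wrong state). All names here are illustrative:

```python
from enum import Enum

class Mode(Enum):
    CREATE = 1   # must not already exist
    LINK = 2     # must already exist
    AUTO = 3     # create if absent, else link to the existing one

_registry = {}   # name -> instrument instance

def attach(name, factory, mode=Mode.AUTO):
    """Return the named instance, creating it per the requested mode."""
    exists = name in _registry
    if mode is Mode.CREATE and exists:
        raise ValueError(f"instance '{name}' already exists")
    if mode is Mode.LINK and not exists:
        raise KeyError(f"no instance named '{name}'")
    if not exists:
        _registry[name] = factory()
    return _registry[name]

class Driver:
    pass

a = attach("DVM1", Driver, Mode.CREATE)
b = attach("DVM1", Driver, Mode.AUTO)    # links to the same instance
print(a is b)  # -> True
```

The explicit CREATE/LINK modes give the debugging traceability argued for above, while AUTO stays the convenient default.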
-
Duh. I just realized I forgot to attach the project with the new 34401 example. What a maroon... :thumbdown: Here it is: Download File:post-7603-1241991088.zip QUOTE (ShaunR @ May 10 2009, 05:57 AM) Far be it from me to tell you how you should code your apps. If that's what works for you then go with it. But before you make your decision let me explain the problems I encountered following the more traditional(?) approach and how this whole concept arose. As a hardware test engineer for a consumer electronics product in development, I had to devise ways to test capacitive touch technologies from various vendors. The vendors' chips typically communicated via I2C, although a few early samples sent data to my test system via serial port. While I have long been familiar with OOP concepts, this was the first time I had designed and built a "real" OOP application of any significance. Since I hate rewriting code I set out to design my class hierarchy with reusable code a primary goal. (I'll focus just on part of the device driver stack. It illustrates the problems I encountered.) We had two main instruments that we used as I2C masters to communicate with the vendors' chips: The NI-8451 and the Total Phase Aardvark. We also had a third instrument that we used for I2C signal validation, the Corelis CAS-1000E. So, following what I thought were standard practices, I designed an abstract class, "I2C Master," and created child classes for those instruments. These child classes were intended to be long-term, shared reuse code. (i.e. One installation shared among many applications.) The base class defined some general I2C functions such as Get/Set Slave Address, Get/Set Clock Rate, Find Devices, Open, Read, Write, Close, and a few others. The common functions are what developers would use if dynamic dispatching is needed.
Each child class also implemented the rest of the instrument's api so the developer could access all of the functions while staying within the same device driver. Obviously those functions were not available for dynamic dispatching.

I designed the entire test system following that type of inheritance pattern. I implemented .lvlibs for each vendor's touch api. I created an abstract "Touch Base" class and derived child classes to wrap the lvlibs. This setup actually worked pretty well... right up until I started getting requests that violated the original scope of the application. While I had tried to predict the types of testing I would be asked to do and designed the test system for as much flexibility as was feasible, my crystal ball simply wasn't up to the task. To accommodate the new requirements within a reasonable timeframe I frequently had to remove some modularity (goodbye 8451) and/or rework my driver stack--that distributed framework code that was supposed to be untouchable. As time went on my 'reusable' code became more and more customized for that particular application. On top of that, I couldn't always guarantee backwards compatibility with previous versions of the drivers. Had those drivers actually been released and used by other developers I would have had a maintenance nightmare on my hands.

As an example, at one point in the project one of the vendors implemented a ~Reset line that had to be pulled low for the chip to operate. Hmm... the 8451 and the Aardvark both have GPIO lines and I built those functions into the drivers, but GPIO doesn't necessarily fall in the realm of "I2C Master." The CAS-1000 doesn't have GPIO lines. Should I create an abstract "GPIO Base" class and derive child classes for the GPIO modules (such as the USB-6501 and the GPIO functions of the 8451 and Aardvark) so I can continue using the CAS-1000? How would I make sure the Touch:Aardvark class and the GPIO:Aardvark class weren't stepping on each other's toes?
After all, they would both be referring to the same device. To avoid a huge redesign I ended up implementing GPIO functions in the "I2C Master" abstract class so I could continue using the 8451 and Aardvark. I ditched the CAS-1000 and serial port functionality. (The serial port requirement came back later in the project... ugh.) This is when I started wishing for interfaces.

Near the end of the project much of my reuse code was no longer reusable and modularity was almost completely gone. The system required the Aardvark and only worked with a chip from a single vendor. The system I had created worked well when the requirements remained within my original assumptions. Once those assumptions were violated, software changes either took weeks to implement (not an option) or required hard-coded customizations.

I learned a lot on that project. Two of the main things I learned were:

1. Before I design the software, I need to understand if I'm creating a finished product or an engineering tool. Finished products have requirements that probably won't change (much.) Engineering tools will be asked to do things you haven't even thought of yet. The two require very different approaches. I had designed and built a finished product while the design engineers expected it to behave as an engineering tool. Engineering tools require flexibility above all else (except reliable data.)

2. It's damn hard to design a good class hierarchy that will fit all future (unknown) needs--much too hard for me and my little brain.

With those lessons in mind I started mulling over ways to implement interfaces in LabVIEW, which would have (I believe) made my development task much easier. Exterfaces is the result. I created it specifically for engineering tools with the understanding that more flexibility requires more coding. I'm not really sure if there's much value in using it in finished-product applications--if the requirements are known you don't necessarily need the flexibility exterfaces provide.
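LabVIEW classes are wired graphically rather than typed out, but the hierarchy described above maps onto a familiar text-language shape. Here is a rough Python sketch of the idea; the class and method names are illustrative, not the actual driver APIs:

```python
from abc import ABC, abstractmethod

class I2CMaster(ABC):
    """Abstract base class: the functions every I2C master shares.
    Callers wire against this type and the concrete child is chosen
    at run time (LabVIEW's dynamic dispatch)."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def write(self, slave_addr: int, data: bytes) -> None: ...

    @abstractmethod
    def read(self, slave_addr: int, count: int) -> bytes: ...

    @abstractmethod
    def close(self) -> None: ...

class Aardvark(I2CMaster):
    """One concrete instrument. It also exposes extras (e.g. GPIO)
    that are NOT in the base class, so they can't be reached through
    an I2CMaster reference -- the root of the ~Reset-line dilemma."""

    def open(self) -> None:
        self._connected = True

    def write(self, slave_addr: int, data: bytes) -> None:
        pass  # stub: a real driver would shift the data out on the bus

    def read(self, slave_addr: int, count: int) -> bytes:
        return bytes(count)  # stub: a real driver would clock bytes in

    def close(self) -> None:
        self._connected = False

    # Instrument-specific function, invisible to code that only
    # holds an I2CMaster:
    def set_gpio(self, pin: int, level: bool) -> None:
        pass  # stub
```

Code written against I2CMaster works with any child, but the moment an application needs set_gpio() it has to know it is holding an Aardvark specifically, and the polymorphism breaks down.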
Regarding reusing exterfaces across projects... it depends on what you define as "reuse code." I expect I might do a lot of exterface code copying and pasting between projects. If you consider that reuse code then I'm right there with you. Given point 2 above, I'm in no hurry to distribute interface definitions or exterfaces as shared reuse code. Getting it wrong is far too easy and far, far more painful than customizing them for each project.

QUOTE The interface definition needs to be a superset of the project definitions, but that is easy for me since the higher functions are repetitive from project to project (as I explained to Omar).

If you're going to reuse interface definitions, not only does the interface definition need to be a superset of all project definitions, it needs to be a superset of all future project definitions. Like I said earlier, my crystal ball isn't that good. (If yours is, turn off the computer and head out to the horse track.) Releasing new versions of reuse code adds a lot of overhead. I found maintaining backwards compatibility can be a huge time sink. Regression testing can be difficult. The api can get very messy. You also need to implement the new functions in all the child classes to make sure your new ideas are workable. Then you have to worry about distribution and, if you distribute the new reuse code to deployed systems, system-level testing. That's a lot of mundane and, IMO, unnecessary work.

However, if you keep the interface definition and exterfaces on the project level they don't have to be huge monolithic structures. You define the interface based on what that application needs--no more. There's no need to wrap the entire device driver because chances are your application doesn't need every instrument function exposed to it. If you have to make a change to the interface definition there's no worrying about maintaining compatibility with other applications. Small, thin, light... that's the key to their flexibility.
QUOTE The intermediary "translation" could work well for me but the prohibitive aspect is still the overhead of writing driver wrappers which prevents me from taking advantage.

See above. You only wrap what the application needs, not the entire driver.

QUOTE I have one question though. Why do you have to "Link" to a device? Presumably you would have to know that the device driver is resident beforehand for this. Can it not be transparent?

"Create Instance" creates a new instance of the device driver and links that exterface to it. "Link to Instance" links an exterface to an already created instance. An exterface will use one or the other, not both. Use "Link" when more than one exterface needs to use the same instrument. The vi "Many to One Example" illustrates this. (Look at the disabled block in "Init.") The system has a single Delta multimeter but it is used to measure both current and voltage. If I create two instances I end up with two device driver objects referring to the same device. That leads to all sorts of data synchronization problems and potential race conditions. By having the second exterface link to the same device driver object the first exterface is using, those problems are avoided.

QUOTE Devices are (generally) denoted by addresses on the same bus (e.g. RS485) or their interface or both. Can the "Exterface" not detect whether an instance already exists and attach to it, or create it if it doesn't?

Suppose you have more than one Delta multimeter in your test system. On the second "Create Instance" call should it create a new instance for the second device or link to the first device? There's not really any way for the software to know. Seems to me requiring the developer to explicitly create or link instances makes everything less confusing.

Rereading this I realize I didn't make one of my requirements clear. In my model, each instrument has its own instance.
Ergo, if I have four Delta multimeters in my system then I'll have four instances of the device driver object. If the device driver is not class based (such as the 34401) obviously there aren't any device driver objects to instantiate. Since all calls to the 34401 device driver use the VISA Resource Name to uniquely identify the instrument, I put that on the queue instead of a device driver object. This allows the 34401 exterface to behave in the same way as exterfaces to class-based device drivers.

---------

I really appreciate the feedback and questions. Hopefully the discussion will lead to something that is useful. Contrary to how it may sound, I don't believe I have all the answers or that this architecture is "the new black." I'll happily explain why I made certain decisions, and if there are better ways to achieve the benefits I'm looking for I'm perfectly willing to implement them. If problems come up that blow holes in exterfaces, that's fine too. Better to learn that now than 8 months into a project.
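The create/link distinction above can be sketched as a small registry keyed by instrument. The names here are hypothetical stand-ins for the exterface VIs, not actual LabVIEW code:

```python
class DriverRegistry:
    """One driver object per physical instrument.
    create_instance makes a new driver; link_to_instance attaches
    a second (or third...) exterface to an existing one."""

    def __init__(self):
        self._instances = {}

    def create_instance(self, resource_name, driver_factory):
        if resource_name in self._instances:
            raise ValueError(
                f"{resource_name} already has a driver; use link_to_instance")
        self._instances[resource_name] = driver_factory(resource_name)
        return self._instances[resource_name]

    def link_to_instance(self, resource_name):
        # Raises KeyError if no instance was ever created -- linking
        # is always an explicit, deliberate act, never a silent create.
        return self._instances[resource_name]
```

A voltage exterface and a current exterface that both link to the same Delta multimeter then share one driver object, which is what avoids the duplicated-state and race-condition problems described above.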
-
I added an exterface for the Agilent 34401 to illustrate how I would implement it using that particular IVoltmeter interface definition. Check the readme for notes on it. Remember that interface definitions are determined on the project level--I don't anticipate them being distributed as part of reusable code modules. I designed the IVoltmeter interface based on the needs of my fictional top-level application. Your top-level application may require a different interface definition. Also, I don't think there is any value in implementing the exterface unless you know you'll be swapping out the 34401 with different instruments in this application.

Let me know if you have any more questions. Sounds like you have it. For example, the 34401 can measure voltage and current so you could implement both an X34401 Voltmeter exterface and an X34401 Ammeter exterface and have the instrument maintain polymorphism with other voltmeter and ammeter instruments.
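In text-language terms, the X34401 Voltmeter/Ammeter idea is one class satisfying two independent contracts. A rough Python sketch, with illustrative names (the stubbed methods would talk to the real instrument in an actual driver):

```python
from abc import ABC, abstractmethod

class IVoltmeter(ABC):
    @abstractmethod
    def measure_voltage(self) -> float: ...

class IAmmeter(ABC):
    @abstractmethod
    def measure_current(self) -> float: ...

class Agilent34401(IVoltmeter, IAmmeter):
    """One physical DMM presenting both interfaces. The VISA resource
    name uniquely identifies the instrument, standing in for a driver
    object just as described for the non-class-based 34401 driver."""

    def __init__(self, visa_resource: str):
        self.visa_resource = visa_resource

    def measure_voltage(self) -> float:
        return 0.0  # stub: a real driver would query the instrument

    def measure_current(self) -> float:
        return 0.0  # stub
```

Code that only needs volts holds an IVoltmeter; code that only needs amps holds an IAmmeter; both can be the same 34401 and stay polymorphic with other voltmeters and ammeters.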
-
QUOTE (ShaunR @ May 9 2009, 09:31 AM) The JKI vis are part of JKI's VI Tester toolkit. You can download it from http://forums.jkisoft.com/index.php?showtopic=985. Use VIPM to install it. Or you could just remove them from the project... you don't need them for the example.
-
I see lots of downloads but no comments. I'll hazard a guess that I'm leaving readers confused. Which part is confusing? Have I defined the problem well enough? Is my description of text-language interfaces appropriate? Is it accurate? Do I need to provide more details on implementing the architecture? (I admit the document ends rather abruptly. I was struggling and wanted to get eyes on it.) Should I describe the evolutionary process that resulted in the Exterface architecture?
-
QUOTE I think you will find most software engineering books are geared towards businesses that develop software products that will be released at some future date. After release, there may or may not be bug fixes released or new versions developed, but that product is essentially done.

Before you start down the path of adopting the latest software engineering process, ask yourself the question: Is the software you are developing a "product" or a "tool?" The processes and patterns used for software products may not work well for engineering tools. How do you know if you are developing a product or a tool? Here are a couple rules of thumb that might be useful, roughly in order of my arbitrary sense of importance...

- How does your feature set change over time? If the feature set starts out big and decreases over time, it's a product. If it starts small and increases over time, it's a tool.
- How solid are the software requirements? If you have a document that is mutually agreed upon and relatively difficult to change, it's a product. If the requirements exist only in someone's mind and change depending on the lunch menu, it's a tool.
- What is the lifetime of your application? If it's used for more than a year without any major changes, it's a product. If it needs updating or new functionality less than 3 months after deployment, it's a tool.
- What is the tolerance for bugs in your software? If heads roll when the application crashes, it's a product. If bugs are "okay" as long as the customer can get the software to work, it's a tool.
- What is the retail purchase price of your software? >$0, it's a product. $0, it's a tool.
- What attitude does management have towards your software schedule? If they allow you the time to do things "right," it's a product. If they want you to just get it done as soon as possible, it's a tool.

At a recent position I designed and built a software "product" for testing a component during development of a consumer electronics device.
I designed it to be very flexible, allowing for as many different system configurations as I could reasonably conceive. Near the end of the design cycle the design engineers started asking for things I couldn't have predicted. Needless to say, the architecture I had developed couldn't support their requests very easily. I had to make dirty hacks that compromised the overall product just to be able to perform the tasks they requested. It wasn't pretty. It wasn't effective. And it certainly wasn't efficient.

I've come to the conclusion that when building tools you must expect the next "one little change" to break your entire architecture and require major refactoring. The trick (one that I have not yet figured out) is how to minimize that down time.
-
Ever since I started using LVOOP I've wished LV had some sort of Interface construct as a way to address diamond inheritance. Rather than continuing to complain about it, I decided to see what I could come up with on my own. After several months of experimenting and getting some valuable feedback from orko, I've come up with a concept I call 'Exterfaces,' which borrows heavily from the Plugin architecture and Singleton pattern. I suspect many experienced developers already do something like this, but I haven't seen it documented anywhere so I thought I'd give it a go.

The attached pdf attempts to explain what interfaces are in text languages and how to create interface-like behavior in LV. The attached project includes device drivers for simulated fictional instruments, the Interface framework, project-specific interfaces and exterfaces, and a couple examples. The project also includes many test cases using JKI's VI Tester, so if you don't have that you'll get lots of errors on startup. That shouldn't stop you from running the examples though.

Any feedback on the architecture or documentation is appreciated. A couple things in particular I'm interested in...

- Does this pattern/architecture already exist? If so, what's it called?
- Is this a worthwhile architecture or is it an anti-architecture?
- What pitfalls am I overlooking? What improvements have I missed?
- What do I need to do to make the example project or white paper more clear? Do I need to go into more details in certain sections? (I've spent so much time on the document I've lost all objectivity.)
- Any other comments or suggestions.

Thanks, Dave
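For readers without the pdf handy, the gist of a text-language interface is a pure contract with no state and no implementation, which is what sidesteps diamond inheritance. A rough Python sketch (Pingable and SerialDevice are made-up names, and Python's Protocol is only an analogy for the concept):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Pingable(Protocol):
    """An interface: only method signatures, no state, no code.
    Because nothing concrete is inherited, a class can satisfy many
    interfaces without creating a diamond back to a common ancestor."""

    def ping(self) -> bool: ...

class SerialDevice:  # note: does not inherit from Pingable at all
    def ping(self) -> bool:
        return True

device: Pingable = SerialDevice()  # still counts as a Pingable
```

Multiple *implementation* inheritance is what creates the diamond problem; inheriting only contracts, as above, carries no implementation to conflict.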
-
I will be at DevDays in Portland and Seattle next week
Daklu replied to Aristos Queue's topic in LAVA Lounge
I found this DevDays much more relevant than the last two I attended, which seemed to be geared more towards beginning users. It was nice to get to discuss intermediate/advanced topics. As a side note, maybe at the next DevDays I'll make a LAVA flag and lay claim to one of the tables.
-
QUOTE Sounds like a Collection object. I did some prototyping of LabVIEW collection objects in my current project and it seemed pretty straightforward. I didn't get too far into the nitty gritty details though.
-
Dynamic Dispatched VIs staying in run mode
Daklu replied to mje's topic in Object-Oriented Programming
No help for your problem but I can at least confirm I get the same error you do.