Posts posted by ShaunR

  1. QUOTE (Gavin Burnell @ May 19 2009, 04:57 PM)

    I use this sort of architecture for working with my instruments - I have a very generic "instrument" class that provides VIs that wrap VISA functions but with additional debugging code that I can turn on and off, a layer of classes that define APIs for different types of instruments - e.g. source meters, temperature controllers, lock-ins, magnet power supplies - and then a layer of classes that implement the interface for specific instruments (or even combinations of instruments). This is sort of reinventing the wheel in that it duplicates IVI drivers, but (a) I know exactly what commands are being sent to the instrument and (b) I get to design the interface layer.

    This is exactly what I was trying (rather failing) to describe on the "Exterfaces" thread.

  2. QUOTE (keat007 @ May 18 2009, 01:14 AM)

    Indeed. The USART is just a communications interface for the microcontroller to talk to other devices like a Bluetooth module or a PC. Nearly all modern microcontrollers come with at least 1 (usually used for programming it) and many come with 2 or 3. With a USART (which has digital voltage levels), you can wire directly to other comms chips to provide your microcontroller with virtually any interface you desire (Bluetooth, USB, RS485, RS232, WiFi etc).

    The "Stamp" is just another flavour of microcontroller (from what I can tell) like a PIC or ARM. The only problem with the Stamp is it only has digital IO and you need to measure an analogue signal. You can (as you pointed out) add additional hardware to convert the analogue signal into a digital one, but I see that as doing it the hard way :P

    As my ToothPIC is on its way :) I decided to look at the software for programming it. You can download a free toolset, but I think what will be of more interest to you is that they have existing firmware for acquisition. I think if you're looking to get up and running quickly and not too bothered whether you use Labview or not, then it might be worth reading this. Looks like the server is in the Bluetooth device (like a webserver) so data can be viewed on anything from a PC to a mobile phone.

    DARC II: http://www.flexipanel.com/Docs/DARC-II%20DS382.pdf

    Oh, they ship to Singapore from that link I gave you ;)

  3. QUOTE (keat007 @ May 17 2009, 02:25 PM)

    Thanks for the reply again. I am not really sure if Stamps have a UART or not. However, what we are using is the Bluetooth module, which can be fitted onto the development board directly and is able to connect to the PC via the Bluetooth transmitter and a Bluetooth dongle. Yes, even though it couldn't be used to measure voltage directly, by implementing an ADC0831 and some PBASIC code it should do the voltage measurement job. So basically, what we do not know is how to acquire the data using Labview. Time is indeed really short and the budget is somewhat limited. :blink:

    There are examples of acquiring via Bluetooth in the Labview examples.

  4. Just hunting around on google (wonderful tool) and found this.

    Tooth PIC

    It would be ideal for your app and cheaper than bolting together all the different components you're intending to use.

    You only have a short time to develop. So I would suggest not faffing around trying to get different modules to work together and go for a one-box solution.

    It's very easy to use Bluetooth with Labview. There are a few examples in the examples directory.

    I would suggest a PIC to acquire the data and interface via its UART to your Bluetooth transmitter, and you can use Labview to receive and chart/graph/analyse the data on a PC with your dongle. A PIC is small and cheap (probably costs you about £5-£10 to build) but requires C knowledge; it has a UART and several analogue and digital inputs - more than enough for your task. It would also be very portable and could be battery powered (matchbox size?). If you go for Labview for the acquisition then you will require another PC to run the Labview run-time with some sort of compatible acquisition device, or something like cRIO, FieldPoint etc. which can run Labview (and which are really overkill and expensive).
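
    To give a feel for the PC end: once the PIC's UART is paired over Bluetooth SPP it just shows up as a virtual COM port, and the readings arrive as lines of ASCII that you read in a loop (in Labview that would be a VISA serial read). Here's a rough text-language sketch of the same idea in Python using pyserial - the port name, baud rate and data format are assumptions for illustration, not anything from this thread:

        # Hypothetical sketch: read ASCII readings from the Bluetooth SPP virtual COM port.
        import serial  # pyserial

        with serial.Serial("COM5", 9600, timeout=1) as port:  # port/baud are guesses
            for _ in range(100):                              # grab 100 samples
                line = port.readline().decode(errors="ignore").strip()
                if line:                                      # e.g. "2.47" volts from the ADC
                    print(float(line))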

  6. QUOTE (Daklu @ May 15 2009, 08:45 PM)

    There's also one in the examples directory ;)

    It just confused me at first because I expected everything you have under exterfaces for the Agilent to be in the same place as the device drivers, and a "thin" wrapper under the exterfaces like the others. But looking closer, your ACE and BAM etc. are really simulators????

    QUOTE (Daklu @ May 15 2009, 08:45 PM)

    Hmmm. Wasn't the case for the Agilent. I think even if it was an ActiveX component you would still have to write some wrappers. Be nice if it was like that though - download and install, just like a video driver in Windows etc.

    QUOTE (Daklu @ May 15 2009, 08:45 PM)

    Yeah, if I had to derive 20 child classes containing essentially the same code every three months exterfaces would get old really quick--regardless of how thin and light they are. I'm really curious what kind of testing you're doing that contains so many instruments used for such a short time. It sounds like a fairly chaotic environment.
    :)
    I also think it would be interesting if you did a white paper and sample project. Share the knowledge so we're not all reinventing the same thing.
    :)

    I think I referred to the current one on the Q's thread. If you're really interested I'll take some pictures of it on Monday and PM them to you. Generally they are production inspection/test machines that have all sorts of measurements for inspecting things like turbo bushes, gudgeon pins, train brake valves. So they can have bowl feeders, motors, measurement probes, micrometers, laser distance meters, cameras and all sorts. We might get an order for a couple at most, but they are custom built to spec for specific production lines of our customers.

    Don't mind sharing, but write a paper? I wouldn't know where to start :P

    QUOTE (Daklu @ May 15 2009, 08:45 PM)

    Well, this is why I'm still in the thread. I think it can be done, although I'm still looking for the elegant angle and a way to slowly introduce it rather than throw everything out the window and start again, since time constraints would make this impossible (if it's not broke... don't fix it).

    The way I'm looking at it is this. At the "mid" layer we both arrive at the same point, where the functional abstraction is realised and the underlying interfacing mechanics are transparent (although via different means). I'm loath to switch to classes because I can see a lot of work in maintaining the abstraction from project to project which I currently don't have.

    But.

    The sorts of tests/inspections are pretty much constant (once you've measured one tubular piece of metal's diameter, the method works for all tubular pieces of metal, which is basically what a gudgeon pin or turbo bush is). It's only the hardware that changes (bigger/smaller/more motors, more/less accurate micrometers, electrical instead of air etc). So my "Top" is fairly constant but my "bottom" is fluid. Your "Top" is fluid, but your "bottom" is constant. But in the middle (where your exterfaces sit, as I see it) we have the same goal. What I do have from project to project at present is a constantly changing "mid" layer where the glue between top and bottom could be a lot better, and I haven't so far found an elegant solution for this.

    QUOTE (Daklu @ May 15 2009, 08:45 PM)

    LUT?

    Look Up Table.

    ------------------------------

    P.S.

    If you're going to use the word "Polyconfigurism" then I'm going to call your "exterfaces" "Midglue" :P

  7. Well.

    You don't have much for us to go on.

    In your temp VI you don't really need 2 loops, but it's a long, long way from being able to manage a greenhouse. Did you take a look at the example in the examples directory? I think your lecturer might be looking for something like that for a remote monitoring system.

    (attached image: post-15232-1242415655.png)

    In your other VI it's not really defined what you want to do.

    Is there a temperature and humidity control for each crop? Or will it tell the greengrocer how much the seeds are? :o

  8. QUOTE (Daklu @ May 15 2009, 02:20 AM)

    Well, if it's a DLL driver we won't use it, as it's not cross-platform, so it wouldn't be considered at the Systems Design stage. The NI one is interesting; I haven't come across that before and at first glance I'd say it is in the 10% area.

    QUOTE (Daklu @ May 15 2009, 02:20 AM)

    Haven't come across one of those in a long time. The table I was describing can have a formatting string for the read and write, but I haven't had to use that feature in a long while and I'm not even sure I can remember the syntax off hand :P

    QUOTE (Daklu @ May 15 2009, 02:20 AM)

    Sort of. Your interface definition defines the higher level functions. The exterface implements the higher level functions for each instrument that you want to use for those higher level functions. So you do have to wrap more than one piece of code... unless all your instruments use string based commands, then you can probably use polyconfigurism and a single vi.

    Woohooo. That's what I was thinking.

    QUOTE (Daklu @ May 15 2009, 02:20 AM)

    That IS the device driver from Agilent. You don't need anything else.

    QUOTE (Daklu @ May 15 2009, 02:20 AM)

    Except since my interface definitions are on the project level, it's no big deal to add "Personal Servant.vi" to what I already have. I don't have to worry about coding for exceptions that may occur in future applications or maintaining compatibility with previously deployed applications that use the interface. I just have to worry about making it work in this application.

    And adding the drivers ;)

    I think I get the picture. You have a small set of instruments that don't change much (might be adding one every six months maybe), whereas once you have your 3 instruments you are pretty much done with the drivers and everything else is just using them or building on what you have already. I have the difficulty that I may have 20 instruments on that project and will never see those instruments again, and there will be another 20 different ones in 3 months' time on the next project. Like I think you pointed out, it's the difference between engineering tools and product.

  9. QUOTE (shoneill @ May 15 2009, 08:20 AM)

    Not quite. We use DIO and AIO, but they sit on the RS485 or TCPIP bus. I suppose it's the difference between being able to spec the instruments and just being handed them and having to deal with it. But they will be from different manufacturers and have different command sets from project to project. And we're not just talking 1 or 2 instruments per project here. We're talking between 10 and 30.

    QUOTE (shoneill @ May 15 2009, 08:20 AM)

    Exactly. One implementation is INDEPENDENT of the protocols used with the instrument(s) whereas the other is not.

    Ummmm... as far as I can tell, they are both independent.

    There still has to be a way of explaining to the instruments what you want them to do in their own language, regardless of whether it is a class or not. In OOP you create a class based around the common functions in the drivers, and that class (the Agilent, ACE or BAM class for example) has the specifics for that instrument with a unified set of properties and methods (Configure, Read, Write, set this, set that, for example). You may have "inherited" and overridden a base class of "instrument", but the command sets still have to be implemented for every instrument and function that you support. It's not until you abstract the function (like Configure) that every "Driver" has in its methods list that you get the independence.

    The file example does it a little bit differently, but the upper layers of the software are the same as you're describing.

    It doesn't care what the instrument is, or what functions the instrument supports. It's a scripting utility that (almost) blindly reads the file and squirts out the contents over an interface. And that interface can be any hardware platform you choose to support. The file defines what is and isn't sent, what the protocol/command set is and (as you've probably seen by the filenames) the target device(s).

    At the upper levels of the software, you have your configure, read and write as you would in classes, but you don't have to worry about creating the Agilent, BAM and ACE classes and exposing all the properties and methods. Just a file. This means if I replace the BAM, ACE or Agilent with a Keithley, I just add a file to configure it and no software changes. Choosing which file switches from one to another (on the fly if necessary).
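
    If it helps to see the shape of it outside Labview, here is a rough sketch in Python of that "read the file and squirt it out" idea - the function and file names are just made up for illustration, not the real implementation:

        # Hypothetical sketch: stream every command in a config file through one
        # generic write/read call - the utility never knows which instrument it is.
        def run_config_file(write_read, path):
            responses = []
            with open(path) as f:
                for line in f:
                    cmd = line.split("#", 1)[0].strip()    # allow comments / blank lines
                    if cmd:
                        responses.append(write_read(cmd))  # same call for every device
            return responses

        # Swapping the ACE/BAM/Agilent for a Keithley is then just a different file:
        # run_config_file(write_read, "keithley_setup.cfg")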

    But this isn't about my "old" file thingy; this is about exterfaces.

    The merit I see in Daklu's implementation (which I think we agree on) is not at the driver level, it's at the function level where we both sit, files and classes alike.

    QUOTE (shoneill @ May 15 2009, 08:20 AM)

    With LVOOP ALL protocol-specific information and code is encapsulated within the class. With LVOOP I can release plug-in modules which will work seamlessly with the parent VI without having to change a bit of code in the parent VI.

    This is post-release flexibility which can be pretty cool.

    Shane.

    Ditto. Only I don't have to change the plugin either ;) See above.

  10. QUOTE (Tubi @ May 15 2009, 01:49 PM)

    Well the latest example you supplied seems to work fine. I changed the variables on the front panel and it didn't complain. Are you saying that it works in simulation but not on the hardware?

    If this is the case, it might be useful to capture the data from the machine to a file and run that through the simulation. Then we can all see and investigate the differences.

    Sorry shoneill. We must have been posting simultaneously and I didn't see your post.

    QUOTE (shoneill @ May 14 2009, 09:32 AM)

    Actually, it's instrument independent. And "hopefully" your instruments have a pre-defined protocol, otherwise they really shouldn't be on the market.

    The idea is not to replace Daklu's approach. It's a way I've used over and over again on many projects with many companies with many different instruments to alleviate the "driver wrapping" which I always see as the ball-ache of not only classes, but programming in general around instruments. It's just a very quick way of incorporating an instrument and configuring it with a minimal amount of coding and total re-use.

    QUOTE (shoneill @ May 14 2009, 09:32 AM)

    I have also realised a LVOOP driver for a spectrometer which, while it has a LVOOP Backend running in a parallel process provides an interface via User Events. A spectrometer is pretty much like any other so defining the classes was not too hard. In order to incorporate a new Spectrometer I simply create a new class and feed this into my background process (which thanks to LVOOP runs with the new class without any code changes). Daklu's Idea takes this a step further. If I have understood correctly, you do something similar but then provide a way for broadcasting which Interfaces are available for a device. So I could have my Spectrometer servicing both a Spectrometer and a Colorimeter and a Temperature sensor interface. Is that correct? Is that the idea behind the examples posted? I have to confess I've read the document but not tried out the code.

    In addition I could jerry-rig a filter wheel (RS-232), a monochromator (GPIB) and a Photomultiplier (DAQ) together to create an Interface for a Spectrometer using three different devices with three different protocols. My top-level software doesn't care, it just calls the "Get Spectrum" method and (after a little while for a scanning spectrometer) delivers the data.

    This is what I see too. The abstraction can be a functional abstraction of the task rather than abstraction of the devices.

  12. QUOTE (Daklu @ May 14 2009, 10:36 AM)

    In fact, none of them are SCPI. I think you're getting hung up on that. I was just making the point that if they are, then you can use the same files.

    Here's some examples from a real project (apart from the 34401, which I added just so we have a common reference).

    Download File:post-15232-1242328643.zip

    Different manufacturers, different command sets, 2 are RS485, 1 is TCPIP and of course the 34401 is GPIB. All I had to do to incorporate them was copy and paste from the examples in the user manuals to the files (and add some comments) and add a couple of entries in the lookup table...Job done. Probably took me longer to find the manuals on our network...lol.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    I don't know about those products, but I do know most DVMs, oscilloscopes, spectrum analysers, drive controllers, temperature controllers, PIC programmers... you name it... generally have (or can be ordered with) a serial (232/422/485), Ethernet or GPIB interface. Regardless, if they don't, you don't have to make them VISA or SCPI compliant. You just have to know the command syntax (user manual or existing driver) and add the hardware interface to the write-read VI. The important thing to note here is that once you do this you can interface to any device on that interface. If you take a look in the bowels of my OPP over Bluetooth somewhere on this site, you will see a cut-down version in action. Although it says Bluetooth, it actually works on TCPIP/UDP, Bluetooth and IRDA interfaces, since OBEX is a protocol running on a transport layer, as is ASCII for instruments.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    It isn't my intent for the Exterfaces Architecture to be based around a hardware device. That's what burned me in the first place. Interface definitions, like interfaces in text languages, are based on a set of common functions. The IVoltmeter interface can be applied to any device that can measure or calculate voltages: DVMs, oscilloscopes, DACs. It doesn't even have to be a physical instrument. In theory you could implement an exterface that reads current and resistance measurements from a text file and returns the calculated voltage. (Though it would be a bit tricky to implement that in this particular interface definition.) The example I've included happens to have 4 instruments that are fairly narrow in their capabilities and so I can see how it would look as if that is what I was doing.

    Great! Back on topic :P

    Indeed. And this is why I think it's far more useful than anything else I've seen in classes based around instruments, which are always peddled as infinite extensibility IF you write 100 similar snippets of code to wrap already existing functions (which is what I thought first of all with yours). Add a new DVM? Write another 20 function wrappers like the last lot, only a bit different. But if I use your exterfaces (I think) I can wrap 1 piece of code and use the exterfaces to define higher level functions like entire tests. Instead of an IVoltmeter, I (could) have an IRiseTime and choose which subsystem to run it on (for the one to many). Or maybe I'm just barking up the wrong tree... lol.
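
    To make that concrete - a rough Python sketch only, with class and method names I've invented, not anything from Daklu's code - the interface is the test, and each subsystem just provides its own way of fulfilling it:

        from abc import ABC, abstractmethod

        class IRiseTime(ABC):
            """Functional abstraction of the test, not of any particular instrument."""
            @abstractmethod
            def measure(self) -> float:
                """Return the 10%-90% rise time in seconds."""

        class ScopeRiseTime(IRiseTime):
            """Run the test on a scope that can report rise time directly."""
            def __init__(self, scope):
                self.scope = scope                      # hypothetical driver object
            def measure(self) -> float:
                return self.scope.query_rise_time()     # hypothetical driver call

        class SampledRiseTime(IRiseTime):
            """Run the same test from raw samples (e.g. a DAQ subsystem)."""
            def __init__(self, acquire, dt):
                self.acquire, self.dt = acquire, dt     # acquire() returns a list of samples
            def measure(self) -> float:
                y = self.acquire()
                lo, hi = min(y), max(y)
                i10 = next(i for i, v in enumerate(y) if v >= lo + 0.1 * (hi - lo))
                i90 = next(i for i, v in enumerate(y) if v >= lo + 0.9 * (hi - lo))
                return (i90 - i10) * self.dt

        # The application only ever calls measure(); which subsystem runs it is a wiring detail.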

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    I didn't think so, until you put the device driver as an exterface instead of a device driver (as I was expecting).

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    Yep. (Well, not so much enable multiple inheritance as simulate multiple inheritance.)

    A nod's as good as a wink to a blind man :P Semantics.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    USB DVM's? Maybe they just supply a lead which is a USB->RS232 converter.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    I didn't say it was good, I said it's not necessarily bad. I have several small utility vis that I routinely copy and use in projects. Why? Several reasons:

    1. If another developer checks out my source code the file will be there and he won't have to worry about finding and installing my reuse library. (This was not a mature Labview development house; it was a bunch of engineers working on (for the most part) quickie tools.)


      You mean like not having to download the JKI Test Toolkit, eh? :P
      QUOTE (Daklu @ May 14 2009, 10:36 AM)



    2. I tend to store single vi's in directories. Never use libs, and never use LLB's, so this isn't an issue for me or anyone else that uses my toolkits.
      QUOTE (Daklu @ May 14 2009, 10:36 AM)


    3. Managing shared reuse code tends to take a lot of time. Copying and pasting takes very little time.

    It does? I find it the other way round, not to mention the fact that you have to re-arrange everything and get all the wires straight again.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    Let's say the Ace VM requires 6 steps to initialize and get into the proper state for a particular application and the CAL VM requires 2 steps. Following the traditional class hierarchy we create an abstract Voltmeter base class and do all sorts of work to create a common command set for two very different APIs that still exposes all of the functionality of each. Then we wrap each of the device drivers in Voltmeter child classes. We've just done lots and lots of work to ensure dynamic dispatching for functions we may not ever use and may need to change when we derive a Delta MM child class. On top of that, what do we do with those 6 steps needed to setup the Ace VM? We undo almost all of our work by wrapping them back up in a project sub vi and naming it Init! Would have been much easier to just implement Init using the original device drivers...

    And then someone comes along and says "Oooooh. Our Tektronix scope has a really useful feature that enables me to wash the car whilst toasting a muffin. We need that feature too". And you end up doing it anyway, or you end up back in your original conundrum where everything is an exception.

    But you've hit the nail on the head, and as I pointed out to someone else, OOP in Labview makes development as slow as the other languages for precisely those reasons, when it's historically been a lot faster. And it is why you don't want to expose the full feature set of the device when you already have a supplied one from the manufacturer. It takes too damn long.

    QUOTE (Daklu @ May 14 2009, 10:36 AM)

    See, I'd use the explicit calls. It would make debugging easier. 'Auto' would be the default though.

    Your baby, your call. Just means you get a load of idiots like me asking why it's not working when they've created instead of linked. :P

  13. QUOTE (Daklu @ May 14 2009, 02:22 AM)

    Hmm... I don't necessarily dislike it, but it seems like it's just a manual way to get dynamic dispatching type functionality (polyconfigurism?) except less universal and more complicated. If I grant you those three assumptions sending commands to the device I think would be pretty straightforward. Your Read.vi could get very messy with parsing and special cases. Do all the instruments you're ever going to use to measure voltages return string values in the same format? I suppose you could encode regex strings in the config file and use them to extract the value you're interested in. Workable? Maybe, but why bother when classes and inheritance already do that? (The one case I can see for doing this is if you have a shortage of Labview licenses and really smart technicians who don't mind writing obscure codes in config files.)

    And what happens when you try to implement a non-VISA instrument or one that doesn't use string commands? I don't see any way for polyconfigurism to handle that. The 8451, Aardvark, and CAS-1000 are not VISA devices. Or what if an engineer whips up a DAC circuit that returns a 12-bit integer that needs to be converted to a floating-point voltage? Where does the conversion happen?

    I changed my mind. I don't like it. :) It might work with a known set of instruments that fit your assumptions but as a general solution I think it gets way too complicated way too quickly.

    Told you you wouldn't like it :P The config files have straight strings (no regex). Config files don't extract anything. They are just a way to stream multiple commands to the instrument. The vast majority of instruments (DVMs, temperature controllers, drives... you name it) supply an ASCII command set, and they are usually the same commands regardless of HW interface (whether it be RS232/485, TCPIP or Bluetooth). If it's someone like Agilent, Keithley etc., then they are SCPI compliant, which means you can pretty much use the same files for similar devices from each manufacturer, and you can support any device from them by just peeking at their driver (which is really a parser) and extracting the strings (in fact I have a vi that does that and generates the files from their examples).

    Non-VISA instrument? Not sure what you mean by this, since VISA is a hardware abstraction (Serial, TCPIP etc). Like I said, 90% of devices use these interfaces. But my particular read-write "tool" also supports CAN, FIP, MVB and STANAG. Once the read/write has been "upgraded" you can support any device on those interfaces.

    "Polyconfigurism". Now you're just making things up... lol. I'm just pointing out that it is possible to envisage an abstraction that is not based around the object you are trying to interface to, which tends to make the software specific for those objects, so you end up writing/copying/pasting code for new devices because the abstraction is mis-targeted.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    My 'application' actually was designed as a toolkit of top-level VIs that would be sequenced using TestStand. (And it would have worked great if it weren't for those pesky design engineers!) The problem is that the top-level toolkit was built on several other layers of toolkits I was developing in parallel. I haven't worked through how to set up the entire architecture using exterfaces instead of the design I did use, but it looks promising.

    I'm not sure it will help for what you are envisioning. But if your exterfaces are based around the function the engineers are trying to achieve, rather than the devices they "may" be using, I think it will work great. But you know your design, and your target. I've also found that giving the engineers a (slight) ability to affect the software (like my technicians example) means that they end up making the changes and not you :)

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    I don't use the Agilent :o so I cannot say whether it works or not. But it loaded and compiled fine. I was just interested in seeing how you integrate a previously defined driver in your architecture, and chose one that you have.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    I'm not sure why you expected the 34401 exterface to be listed in the device drivers section of the project. An exterface isn't a device driver to my way of thinking. In this project the interfaces are an abstraction of a particular type of measurement. (Voltage measurements and current measurements.) The exterfaces implement the abstraction for a specific piece of hardware using the device driver supplied by the vendor. The files in the device drivers folder represent the drivers that are supplied by the instrument vendors and would normally reside in <instr.lib>.

    Because it's a DVM in the same light as your ACE or BAM. I had expected the Agilent to appear in the list of "Device Drivers" and a simpler "Exterface" wrapper to interface to the architecture.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    I was using "Device Driver" in the context of your ACE or BAM etc., since that is where they appear in the project. "Instrument Driver" perhaps? The exterface (and I may be wrong) looks to me like a wrapper around the "Instrument/Device Driver" to enable multiple inheritance.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    When you say "a class implementation of my previous example," do you mean a class implementation of polyconfigurism? Isn't the point of polyconfigurism to avoid classes so you can add instruments without writing G code? Can you show me what you mean, maybe by stubbing out a simple example? (Text is fine, or if you're particularly ambitious you could try ascii art. :) )

    I meant a base class implementation of the hardware abstraction, where you could have (for example) a class that takes an HW interface (TCPIP, SERIAL etc.) and methods such as "Read", "Write" and "Configure From File" sitting below your BAM, ACE and Agilent. Your instruments can inherit from that (basically your parser) and your exterfaces would be the equivalent of the alias lookup (maybe).
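
    Something like this, if a rough text-language sketch helps (Python, invented names - I'm not suggesting this is Daklu's implementation): the base class owns the transport plus the generic Read/Write/"Configure From File" behaviour, and each instrument class only adds what is genuinely specific to it.

        class Instrument:
            """Base class: owns the transport, knows nothing about any one device."""
            def __init__(self, transport):
                self.transport = transport              # e.g. a connected TCP socket

            def write(self, cmd: str) -> None:
                self.transport.send((cmd + "\n").encode())

            def read(self) -> str:
                return self.transport.recv(4096).decode().strip()

            def query(self, cmd: str) -> str:
                self.write(cmd)
                return self.read()

            def configure_from_file(self, path: str) -> None:
                with open(path) as f:
                    for line in f:
                        cmd = line.strip()
                        if cmd and not cmd.startswith("#"):
                            self.write(cmd)

        class Ace(Instrument):                          # instrument-specific sugar only
            def dc_volts(self) -> float:
                return float(self.query("MEAS:VOLT:DC?"))   # command string is a guess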

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    We agree with copying/pasting the same project. I disagree that copying and pasting code between projects is a good thing if no modification is required. And if you are modifying the code it isn't being re-used so it shouldn't be in a re-use library.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    You lost me again. Do you mean this is a bad case in the Exterface Architecture class implementation strategy or in the Labview class implementation strategy?

    Labview.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    Not without changing the overrides (I think)

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    The interface definition provides the application with the appropriate instrument control resolution. At one extreme we have a simple, high-level interface with Open, Read, and Close methods. At the other extreme is a very low-level interface that defines the superset of all instrument commands. Different applications require different resolutions of instrument control. I can't define an interface that is suitable for all future applications, so I don't even bother trying. The small changes I make are simply to customize the interface's resolution for the application's specific needs.

    Indeed. But the ideal scenario is that all features of all devices are exposed and available, and you just choose which ones to use in the higher level. This was the same problem that IVI tries to address. Just because we use classes doesn't make this any easier.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    I'd go for the "Auto" only. Doesn't give people the opportunity to get it wrong then.

    QUOTE (Daklu @ May 14 2009, 02:22 AM)

    There's nothing in the architecture that prevents the developer from creating a 5th instance. What happens depends on the instrument and the vendor's device driver. If the instrument is connection-based and a connection has already been established with another instance, the driver will probably return an error. If the instrument is not connection-based then yeah, an inattentive developer could screw things up.

    Well, there is no reason that the instrument shouldn't give a result from the request, as long as the "object" ensures that the instrument is in an appropriate state to give a correct response. I'm thinking here of... say... you create an "Ammeter" instance and a "Voltmeter" instance rather than an "ACE" instance.

  14. We had two main instruments that we used as I2C masters to communicate with the vendors' chips: The NI-8451 and the Total Phase Aardvark.

    That's a fantastic name... lol.

    We also had a third instrument that we used for I2C signal validation, the Corelis CAS-1000E. So, following what I thought were standard practices, I designed an abstract class, "I2C Master," and created child classes for those instruments. These child classes were intended to be long-term, shared reuse code. (i.e. One installation shared among many applications.) The base class defined some general I2C functions such as Get/Set Slave Address, Get/Set Clock Rate, Find Devices, Open, Read, Write, Close, and a few others. The common functions are what developers would use if dynamic dispatching is needed. Each child class also wrapped the rest of the instrument's api so the developer could access all of the functions while staying within the same device driver object. Obviously those functions were not available to use with dynamic dispatching.

    I designed the entire test system following that type of inheritance pattern. I implemented .lvlibs for each vendor's touch api. I created an abstract "Touch Base" class and derived child classes to wrap the lvlibs. This setup actually worked pretty well... right up until I started getting requests that violated the original scope of the application. While I had tried to predict the types of testing I would be asked to do and designed the test system for as much flexibility as was feasible, my crystal ball simply wasn't up to the task. To accommodate the new requirements within a reasonable timeframe I frequently had to remove some modularity (goodbye 8451) and/or rework my driver stack--that distributed framework code that was supposed to be untouchable. As time went on my 'reusable' code became more and more customized for that particular application. On top of that, I couldn't always guarantee backwards compatibility with previous versions of the drivers. Had those drivers actually been released and used by other developers I would have had a maintenance nightmare on my hands.

    I have a solution for that....but you won't like it :P

    If you abstract the interface, rather than the device, you end up with a very flexible, totally (he says tentatively) re-usable driver.

    I'll speak generically, because there are specific scenarios which make things a bit more hassle, but they are not insurmountable.

    Take our ACE, BAM and HP devices. From a functional point of view we only need to read and write to the instrument to make it do anything we need. I'll assume your fictional DVMs are write-read devices (i.e. you write a command and get a response rather than streaming) and I'll also assume that they are string based, as most instruments we come across generally are.

    Now....

    To communicate with these devices we need to know 3 things.

    1. The transport (SERIAL/GPIB/TCPIP etc.)

    2. The device address.

    3. The protocol.

    VISA takes care of 75% of No. 1. No. 2 is usually part of No. 3 (i.e. the first number in a string). So No. 3 is the difficulty.

    So I create a Write/Read VI (takes in a string to write and spits out the response... if any), and I will need an open and close to choose the transport layer and shut it down. I now have the building blocks to talk to pretty much 90% of devices on the market. I'll now imbue the read/write VI with the capability to get its command strings from a file if I ask it to. So now, not only can it read and write single parameters, I can point it to a file and it will spew a series of commands and read the responses. This means I can configure any device in any way I choose just by pointing it to the corresponding config file. New device? New config file. No (Labview) code changes, so you can get a technician to do it :P

    Now, in your application, you have a lookup table (or another file) which has a name (alias), the transport, the address, the config file and/or the command for the value you want to read (say DC:VOLTS?). The read/write VI is now wrapped in a parser which takes the info from the table, formats the message and sends it out through the read/write VI, or it loads the config file.

    I now have a driver scheme that not only enables me to add new devices just by adding a config file and an entry in the table, but also enables me to send the same config to multiple instruments on different addresses, or different configs to the same devices on different addresses, and read back any values I choose. And all I need is 1 VI that takes the alias.
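
    In pseudo-code terms (a rough Python sketch with invented aliases, addresses and commands - the real thing is a VI and a file, obviously), the table and the two entry points look something like this:

        # Hypothetical alias table: one row per instrument in the system.
        LOOKUP = {
            # alias        transport  address               config file           value query
            "DVM1":       ("GPIB",   "GPIB0::22::INSTR",   "hp34401_setup.cfg",  "MEAS:VOLT:DC?"),
            "TEMP_CTRL1": ("TCPIP",  "192.168.0.20:5025",  "tempctrl_setup.cfg", "READ:TEMP?"),
        }

        def configure(alias, open_transport, write_read):
            transport, address, cfg, _ = LOOKUP[alias]
            session = open_transport(transport, address)      # generic open
            with open(cfg) as f:
                for line in f:
                    cmd = line.strip()
                    if cmd and not cmd.startswith("#"):
                        write_read(session, cmd)              # stream the config file
            return session

        def read_value(alias, session, write_read):
            *_, query = LOOKUP[alias]
            return write_read(session, query)                 # the DC:VOLTS?-style query

    New device? Add a row and a file; nothing above this layer changes.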

    Told you you wouldn't like it :P because OOP programmers start frothing at the mouth as soon as you mention config files...lol. But I'll come back to this in context a little later on.

    As an example, at one point in the project one of the vendors implemented a ~Reset line that had to be pulled low for the chip to operate. Hmm... the 8451 and the Aardvark both have gpio lines and I built those functions into the drivers, but gpio doesn't necessarily fall in the realm of "I2C Master." The CAS-1000 doesn't have gpio lines. Should I create an abstract "GPIO Base" class and derive child classes for the gpio modules (such as the USB-6501 and the gpio functions of the 8451 and Aardvark) so I can continue using the CAS-1000? How would I make sure the Touch:Aardvark class and the GPIO:Aardvark class weren't stepping on each other's toes? After all, they would both be referring to the same device. To avoid a huge redesign I ended up implementing gpio functions in the "I2C Master" abstract class so I could continue using the 8451 and Aardvark. I ditched the CAS-1000 and serial port functionality. (The serial port requirement came back later in the project... ugh.) This is when I started wishing for interfaces.

    If your system is such that you only have to code for exceptions, then you are winning.

    Near the end of the project much of my reuse code was no longer reusable and modularity was almost completely gone. The system required the Aardvark and only worked with a chip from a single vendor. The system I had created worked well when the requirements remained within my original assumptions. Once those assumptions were violated software changes either took weeks to implement (not an option) or required hard coded customizations.

    If anything is changing rapidly, then any software "architecture" is bound to be compromised at some point, and the more you try to make things fit... the more they don't... lol. If it's for internal use only, you are better off with a toolkit that lets you throw together bespoke tests quickly - something Labview is extraordinarily good at doing.

    I learned a lot on that project. Two of the main lessons I learned were:

    1. Before I design the software, I need to understand if I'm creating a finished product or an engineering tool. Finished products have requirements that probably won't change (much). Engineering tools will be asked to do things you haven't even thought of yet. The two require very different approaches. I had designed and built a finished product while the design engineers expected it to behave as an engineering tool. Engineering tools require rapid flexibility above all else (except reliable data).
    Amen.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)



    Agreed (apart from the little brain bit). But there does seem to be a lot of "do it anyway" mentality about.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)


    OK. Well, let's look at your "exterfaces" in the light of the Agilent example that you kindly provided. I noticed that you didn't put it under "Device Drivers", which I hadn't expected, and is why I asked for a driver (although typical of drivers) that didn't fit nice and snugly with the simulated ones, so I could see how this worked.

    From your exterfaces up, everything is hunky-dory (as it always is with classes in Labview) and your implementation seems to overcome a big drawback of the current Labview implementation. But this is what I was looking at.

    If the exterface is higher up in the tree (let's take the previous example of a Waveform test), which is an exterface based around defining a sequence of operations (set this, set that, wait 1 second, then read the result), we can instantiate that with different arguments and do different tests with methods such as "Start", "Abort", "Get Status", "Get Result" etc. If we now have the same test, but you need to move a motor into position, set some digital IO, wait 1 second and read the result, then your exterface can be implemented to do that in the same way that you implemented the HP driver, but the underlying method is transparent to the application AND you can have the same test running on different devices. There is value added to the extra coding, since you would have to do that anyway in classic Labview.

    Now, if your "Device Driver" was based on a class implementation of my previous example, with all those obnoxious classical techniques with files and whatnot, the exterface now just defines the sequence of operations, the files (or class alternative) to configure, and the order of the aliases (or class alternative) to retrieve the result. Then you would have an implementation that can instantiate multiple tests/measurements on multiple devices with different configurations. I'm sure you could find a way of incorporating this better than I've described, but this is my current thought process.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    Regarding reusing exterfaces across projects... depends on what you define as "reuse code." I expect I might do a lot of exterface code copying, pasting, and modifying between projects. If you consider that reuse code then I'm right there with you. Given point 2 above, I'm in no hurry to distribute interface definitions or exterfaces as shared reuse code. Getting it wrong is far too easy and far, far more painful than customizing them for each project.

    Lots of copying and pasting means that you haven't encapsulated and refined sufficiently. I think this is a particularly bad case in the class implementation strategy, which forces you to do this, over traditional Labview, which would encapsulate without much effort. After all, in other languages changes to the base class affect all subsequent inherited classes with no changes whatsoever.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    ;)

    True. But if the definition is broken down into manageable chunks (think of my toolkit comment earlier), then adding new "tests" doesn't become an issue.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    However, if you keep the interface definition and exterfaces on the project level they don't have to be huge monolithic structures. You define the interface based on what that application needs--no more. There's no need to wrap the entire device driver because chances are your application doesn't need every instrument function exposed to it. If you have to make a change to the interface definition there's no worrying about maintaining compatibility with other applications. Small, thin, light... that's the key to their flexibility.

    Lots of small changes as opposed to one big change? I'd rather not change anything but I don't mind adding.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    I think this means (in my case) that you end up with lots of applications that can do very specific tasks with very specific hardware.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    "Create Instance" creates a new instance of the device driver and links that exterface to it. "Link to Instance" links an exterface to an already created instance. An exterface will use one or the other, not both. Use "Link" when an instrument is going to use more than one of its functions. The vi "Many to One Example" illustrates this. (Look at the disabled block in "Init.") The system has a single Delta multimeter but it is used to measure both current and voltage. If I create two instances I end up with two device driver objects referring to the same device. That leads to all sorts of data synchronization problems and potential race conditions. By having the second exterface link to the same device driver object the first exterface is using, those problems are avoided.

    [Edit: Your question made me realize "Create Instance" and "Link to Instance" aren't good names for developers who are using the api and aren't familiar with the underlying implementation. "Link to Instrument" works, but "Create Instrument" doesn't. Any ideas for better names?]

    I've no problems with "create" (that's the same in other languages, or you could use "New" as some do). But I can't help thinking that the link-to is a clunky way to return a reference to an object. If the "Create" operated in a similar way to things like queues and notifiers, where you can create a new one or it returns a reference to an existing one, it would save an extra definition.

    QUOTE (Daklu @ May 11 2009, 02:21 AM)

    Suppose you have more than one Delta multimeter in your test system. On the second "Create Instance" call should it create a new instance for the second device or link to the first device? There's not really any way for the software to know. Seems to me requiring the developer to explicitly create or link instances makes everything less confusing.

    Rereading this I realize I didn't make one of the exterface ideas clear. In my model, each instrument has its own instance. Ergo, if I have four Delta multimeters in my system then I'll have four instances of the device driver object. If the device driver is not class based (such as the 34401) obviously there aren't any device driver objects to instantiate. In those cases the "instance" is simply a way to reference that particular instrument. Since all calls to the 34401 device driver use the VISA Resource Name to uniquely identify the instrument, I put that on the queue instead of a device driver object when XAgilent 34401:Create Instance is called. This allows the 34401 exterface to behave in the same way as exterfaces to class-based device drivers.

    Indeed. But having created 4 objects already, what happens if you create a 5th?
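
    For what it's worth, the queue-style "create or return the existing one" behaviour I was angling at above would look something like this (a rough Python sketch, invented names): the instance is keyed by the resource name, so four multimeters give four instances, and a fifth "create" that reuses an existing name just hands back what's already there.

        _instances = {}

        def obtain_instrument(resource_name, factory):
            """Create the driver object on first use; afterwards return the existing one."""
            if resource_name not in _instances:
                _instances[resource_name] = factory(resource_name)
            return _instances[resource_name]

        # A second exterface asking for "GPIB0::22::INSTR" gets the instance already
        # made, so there are never two driver objects fighting over the same hardware.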

  15. QUOTE (normandinf @ May 13 2009, 03:07 AM)

    I've noticed that a lot of newcomers on this forum are already using LabVIEW 2009... (or so they say :P )

    I feel I've been left behind... :(

    Awwww. Don't feel left out. Feel smug in the knowledge that they will find all the bugs first, so that when you upgrade to 2009.1 it actually works :P

  16. QUOTE

    OK. How can I change the size of the control array at run-time? The Array Property Node "NumCols" seems to be read-only.

    Right click on the property node and choose "Change To Write" from the menu.

    QUOTE

    Also, how do I unpack this on the Block Diagram? If I attach an "Index Array" VI with no index (to indicate I want the first element of the array), the output data type is an array of one element, which is not what I would expect. I would expect a cluster of one WaveGraph.

    It should be a cluster containing a 1D array. If you put a Waveform Graph on your front panel, then right-click and "Create Constant" on the diagram, you will see it creates a 1D array of double-precision constant. This is the default format. You can see more on the input options in the help for Waveform Graph (a 2D array is a multi-plot, for example).
