jeffwass Posted January 7, 2005

I am designing a test-executive framework for my lab, and I'm using OpenGOOP for the instrument drivers. At Michael Ashe's suggestion, I've been using arrays of variants for sending data between the various modules. While dereferencing the variant each time can get annoying, the perceived polymorphism is certainly worthwhile.

My concern stems from using variants in the device driver calls themselves, due to the extra processing they require. For a generic driver (as created in OpenGOOP), I'll have a variety of methods that I can call, but the underlying communication parameters will differ depending on whether the bus is serial or GPIB (and ISOBUS, which is basically a daisy chain on the serial port, where the first data character identifies which device is being addressed). Each GPIB call will require an address string and a 'mode' integer, each ISOBUS call will require an address and the VISA descriptor (since one seems to be able to access the serial port only through VISA in LabVIEW), and so on. [If I knew how to make subclasses in OpenGOOP, such that GPIB and serial instruments would inherit from the generic device class, that would really make things much easier. But I'm still wondering about variants anyway, as I use them in QSMs and other things.]

What I am trying to do instead of subclassing is to give each instrument a refnum to a specific handler VI, which then does the instrument-specific processing. Putting these channel-specific variables in an array of variants (or as variant attributes) and then extracting them seems to add a lot of extra processing (at least in terms of wiring). Given that they're not just simple data but can have attributes, my gut feeling is that variants will be slower to access than just having an integer go straight to the module. Maybe the delay is relatively minor compared with the overhead of actually accessing the hardware, but I know how quickly little delays add up when communicating heavily with the hardware.

If I were to use variants, there's also the question of whether to catch errors at each conversion to/from the variant, which would add to the overhead. Or, at the lowest driver level, I could just assume the calling function formatted the data correctly and not handle variant conversion errors (these would be private functions, so the user couldn't mess up here, but the programmer could).

Also, is there ever any need to use an array of variants instead of just one variant? What I've been doing lately with variants is actually ignoring the variant data and using the attribute functions to extract the relevant terms, so that the order in which they were added isn't important. I originally designed the VIs to communicate with arrays of variants, but in light of using attributes I don't see the need for that.

And finally, I think variants could have been implemented far more elegantly. LabVIEW would be far more useful if variants automatically adapted to the appropriate type as you connect them to a subVI (as if you had converted the data with the conversion function). That would make it similar to simulated polymorphism in C (passing by reference and then typecasting at the receiving end). But LabVIEW forces you to do the conversion at each point instead of letting the variant figure it out from the function's input type, which doesn't really give variants (other than the attributes) any benefit over the flatten/unflatten-to-string functions. (Unless I'm really missing something here.)
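(A rough Python sketch of the subclassing idea described above, since a G block diagram can't be shown inline. All class and method names here are hypothetical illustrations, not OpenGOOP's actual API: a generic device class with GPIB and serial/ISOBUS subclasses, each carrying only the bus parameters it needs.)

class Instrument:
    """Hypothetical generic device: methods shared by every instrument."""
    def write(self, command: str) -> None:
        raise NotImplementedError

    def read(self) -> str:
        raise NotImplementedError

    def query(self, command: str) -> str:
        self.write(command)
        return self.read()


class GPIBInstrument(Instrument):
    """GPIB device: needs an address string and a 'mode' integer."""
    def __init__(self, address: str, mode: int):
        self.address = address
        self.mode = mode

    def write(self, command: str) -> None:
        # placeholder for the actual GPIB Write call
        print(f"GPIB {self.address} (mode {self.mode}): {command}")

    def read(self) -> str:
        return ""  # placeholder for GPIB Read


class ISOBUSInstrument(Instrument):
    """ISOBUS device: daisy-chained on one serial port via VISA; the first
    character of each message addresses a specific device on the chain."""
    def __init__(self, visa_resource: str, isobus_address: int):
        self.visa_resource = visa_resource
        self.isobus_address = isobus_address

    def write(self, command: str) -> None:
        framed = f"@{self.isobus_address}{command}"
        print(f"VISA {self.visa_resource}: {framed}")  # placeholder for VISA Write

    def read(self) -> str:
        return ""  # placeholder for VISA Read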
MichaelS Posted January 18, 2005
Variants are a bit of a clumsy attempt by NI to introduce data type abstraction into LabVIEW. Clumsy because the LabVIEW design paradigm implies that the data type is well defined at the point of data manipulation (there is a good article to that end on the NI developer exchange). Compared with flatten/unflatten to/from string, the variant is meaningful mostly insofar as it offers, in effect, another way of referencing an array. And it lacks a truly universal variant-type to data-type conversion capability.

It seems to me the simple flatten/unflatten plus a good old array of strings (or clusters carrying data type info in addition) can offer you better overall performance. At least you have more transparency!

Michael
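(A rough Python sketch of the "flatten plus type info" idea suggested above: each parameter becomes a (name, type tag, flattened value) string triple, so the receiver knows how to unflatten it without a variant. The tag names and helper functions are hypothetical.)

import struct

def flatten(name, value):
    # Reduce a value to strings, carrying a type tag alongside the payload.
    if isinstance(value, int):
        return (name, "I32", struct.pack(">i", value).hex())
    if isinstance(value, float):
        return (name, "DBL", struct.pack(">d", value).hex())
    return (name, "STR", value)

def unflatten(entry):
    # The type tag tells us how to convert the flattened payload back.
    name, type_tag, payload = entry
    if type_tag == "I32":
        return name, struct.unpack(">i", bytes.fromhex(payload))[0]
    if type_tag == "DBL":
        return name, struct.unpack(">d", bytes.fromhex(payload))[0]
    return name, payload

params = [flatten("Address", "GPIB0::12::INSTR"), flatten("Mode", 1)]
print(dict(unflatten(e) for e in params))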
jeffwass Posted January 20, 2005 (Author)

Replying to MichaelS's flatten/unflatten suggestion above:

The problem is that a routine won't know how to convert back from a flattened string if it doesn't know the type. I am trying to shy away from clusters because even if they're type-def'd, I'm having lots of problems as I change the clusters later on. And especially if I write data to a file, I cannot read back the old cluster.

What I'm doing right now with my generic device drivers is using a single variant to hold everything, with the variant's attributes carrying the relevant information. For example, with a GPIB instrument, the variant has attributes "Address" and "Mode" that hold a string and an integer for the GPIB call. Of course the specific routines in each individual device driver must know what data type each parameter is, but the generic routines for passing data back and forth haven't handled variants very well.

You have suggested a good idea, though. Perhaps I'll save some overhead if I use a 2-D array of strings (each parameter having a name and a value). This will still be annoying because I must search the array for the Address string (and Mode string) for each GPIB call and then convert to the proper type, but that is essentially what's going on when getting data from the variant attributes anyway.

So for now I've got my drivers using only variants, and each call will set/get attributes. I still don't have complete drivers finished (I'm pretty close, though). I hope to measure roughly how much slower the variant-conversion drivers are than the direct GPIB calls from my old programs. [For instance, I'll measure the I-V curve of a device by setting a current on a current source, then reading the voltage on a nanovoltmeter. This involves 3 GPIB calls for each point on the curve. I'd normally want this routine to go as fast as possible, so it will be interesting to see how much slower the variants really are. Some of the devices (not all) can be connected to each other externally with a digital cable and a buffer of points filled up, so the only GPIB command is a trigger. But I'm not at that point yet, because not all the I-V instruments can do this, which is why I'm working on generic drivers. Eventually I'll have the option of doing that, but it will be (mostly) transparent to my program's user interface, instead of the current situation where each instrument needs an entirely separate routine.]
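(A rough Python sketch of the two parameter-passing schemes being weighed here: variant-attribute style lookup by name versus a 2-D array of name/value strings that must be searched and converted on every call. The parameter names and values are hypothetical.)

# "Variant with attributes": names map directly to already-typed values.
gpib_params_attr = {"Address": "GPIB0::12::INSTR", "Mode": 1}
address = gpib_params_attr["Address"]   # no search, no conversion
mode = gpib_params_attr["Mode"]

# "2-D array of strings": every access is a linear search plus a
# string-to-number conversion at the point of use.
gpib_params_table = [["Address", "GPIB0::12::INSTR"], ["Mode", "1"]]

def lookup(table, name):
    for key, value in table:
        if key == name:
            return value
    raise KeyError(name)

address = lookup(gpib_params_table, "Address")
mode = int(lookup(gpib_params_table, "Mode"))   # convert to the proper type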
JackHamilton Posted February 13, 2005

Jeff,

I use queue message clusters that contain variants quite a bit and don't have too much trouble keeping track of the prototypes for each specific command. The main reason is that I employ a functional architecture: I don't put naked queues on the diagram! I construct a messaging function that is a communication match to the module it's designed to communicate with. Because I use a matched control and communications module, they internally maintain exactly what the prototype of the queue message cluster and variant is. This internal construct is known as "private data" within the function.

When you're coding up your top-level function, you should not have to care about or manage the prototypes for the messaging. The only thing you have to wire into the function is the parameter to modify. Using a variant input is not good; you're just forcing the top-level code to manage the message prototypes, which can cause problems. As a default you can have a numeric array and a string array as inputs to the comm function, so you can pass a single value or multiple values. However, my function inputs are more specific to the actual module each one controls.

Regards,
Jack Hamilton
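(A rough Python sketch of the "matched messaging function" idea above: the queue and the message prototype are private to the messaging module, and callers only pass the parameter they want to change. The command names and functions are hypothetical, not Jack's actual VIs.)

import queue

_commands = queue.Queue()   # private data: never handed to top-level code

def _send(command, payload):
    # The (command, payload) tuple is the private message "prototype".
    _commands.put((command, payload))

# Public, module-specific API: inputs match what the module actually needs.
def set_current(amps: float):
    _send("SET_CURRENT", amps)

def set_currents(amps_list: list):
    _send("SET_CURRENTS", list(amps_list))

def read_voltage():
    _send("READ_VOLTAGE", None)

# The matching consumer loop (the module itself) is the only other place that
# knows the prototype, so it can change without touching the top-level code.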
CraigGraham Posted May 24, 2005

Replying to MichaelS's flatten/unflatten suggestion above:

I did a check of this, since a recent app makes a lot of use of abstracted data types for plugin modules. Converting a test cluster from a variant to data, modifying an element, then converting it back to a variant takes about twice as long as when the conversions are to and from flattened strings. So aside from the implicit conversion to variant, variants no longer have any appeal to me. It would be nice to be able to define a string input to a VI as being "implicitly flattened".
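(A rough sketch of the round-trip timing described above, transposed to Python only to show the structure of the test; the absolute numbers say nothing about LabVIEW. "Variant" is played by a dict copy and "flatten to string" by pickle, both of which are stand-ins, not the measured operations.)

import pickle, time

cluster = {"Address": "GPIB0::12::INSTR", "Mode": 1, "Timeout": 2.5}

def variant_round_trip(v):
    data = dict(v)             # "variant to data"
    data["Mode"] += 1          # modify one element
    return dict(data)          # "data to variant"

def flattened_round_trip(s):
    data = pickle.loads(s)     # "unflatten from string"
    data["Mode"] += 1
    return pickle.dumps(data)  # "flatten to string"

flat = pickle.dumps(cluster)
for label, func, arg in [("container", variant_round_trip, cluster),
                         ("flattened", flattened_round_trip, flat)]:
    t0 = time.perf_counter()
    for _ in range(100_000):
        func(arg)
    print(label, time.perf_counter() - t0)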