Everything posted by jeffwass

  1. Okay, I'm already using those methods. I actually implemented a GOOPish kind of LV2 global that maintains an array of 'object' references and can take a variety of commands. I also use the producer/consumer model, but with TWO consumer queues: one onto which front-panel commands are pushed for the main acquisition/processing loop, and another onto which any subroutine can push commands to update the main GUI display. That works pretty well so far for things like the list of instruments and a tree listing the routines with some config data, but I haven't gotten the actual acquired-data visualization worked out yet.
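To make the two-queue setup concrete, here's a rough sketch in Python-style pseudo-code (I can't paste a block diagram here, and all the names are made up; in the real program these are LabVIEW queues and parallel loops, not threads):
    # Made-up names; the real program uses LabVIEW queues and parallel loops.
    import queue
    import threading

    acq_queue = queue.Queue()   # front-panel commands -> acquisition/processing loop
    gui_queue = queue.Queue()   # any subroutine -> main GUI display loop

    def acquisition_consumer():
        while True:
            cmd, payload = acq_queue.get()
            if cmd == "Stop":
                gui_queue.put(("Stop", None))
                break
            # ...acquire/process data, then hand results to the display loop...
            gui_queue.put(("UpdateDisplay", payload))

    def gui_consumer():
        while True:
            cmd, payload = gui_queue.get()
            if cmd == "Stop":
                break
            print("GUI update:", cmd, payload)   # ...update lists, trees, plots...

    threading.Thread(target=acquisition_consumer).start()
    threading.Thread(target=gui_consumer).start()

    # The producer (front-panel event loop) just pushes commands:
    acq_queue.put(("StartRoutine", {"instrument": "NanoVoltmeter"}))
    acq_queue.put(("Stop", None))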
  2. This is the way I'd like to do it; I don't like working with the subpanels. The thing is that the data types, amount of data, etc., will be variable and depend on the subroutine running. So sometimes I'd want to display an XY graph, sometimes strings, sometimes an array of floats, etc. But what would be a decent architectural way of moving data to the FP of the other VI? Since the data types are always changing, maybe it is desirable to use the subpanels, though. But when you say move the data to the UI, what are the best ways of doing this? I figured I could send some references to the acquisition program and send raw data to the UI that way, but I'm not sure this is the best way. BTW, does anybody else think that the LabVIEW manuals tend to be pretty bad at describing functionality? I.e., I don't have any idea what DataSockets do or how to use them, and I also find the descriptions of things that could be useful, like the TDM and INI files, to be pretty bad too.
  3. The thing is, I'd like to be able to display the raw data at any level of a bunch of nested routines. How do you typically do this? I wanted everything to be available from the main window; previously I'd have to go and find the executing subVI and open it to look at the graph on its front panel, for example. When I've got several nested routines this gets annoying, so I figured a tab control could display the data for the various routines. And since the subroutines vary, especially in the type of data (graph or scalar, or even a 2-D intensity graph), I figured I could put the appropriate indicators onto a tab control.
  4. Ahh, I never use those, I guess it was ingrained in my learning that they're bad, so I've just opted for LV2 (or USR) globals instead.
  5. IMHO, I like Uninitialized-Shift-Register Globals, because it accurately describes what the structure is. However, this terminology is a bit long, so I'd suggest using USR Global. That's no longer than LV2 Global, and it more accurately describes the concept (once the user knows what USR stands for). I don't like calling them 'old globals' because it implies there's something newer and better. And while the global variable routines are certainly newer, from what I understand they're to be avoided as much as possible. This terminology also accurately refers to such a global, whether it's intelligent, functional, or not. BTW, does anybody use one such USR (or LV2) global VI for several global variables, by using a variant as the shift-registered data, with commands like "GetValue" and "SetValue", where the name of the global parameter is used as the variant attribute name and the data is stored as that attribute's value (itself a variant)? I've found this to be quite powerful, and then I only need one (or a handful) of globals for a large hierarchy of VI's.
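In rough Python-style pseudo-code (made-up names; the real thing is one VI with an uninitialized shift register holding a variant), the single multi-value global looks something like this:
    # Made-up names; the real thing is one VI whose uninitialized shift
    # register holds a variant, with the parameter names as variant attributes.
    _store = {}   # stands in for the variant attributes in the shift register

    def usr_global(command, name, value=None):
        # 'SetValue' stores a value of any type under a name;
        # 'GetValue' returns whatever was stored (like reading the attribute).
        if command == "SetValue":
            _store[name] = value
        elif command == "GetValue":
            return _store.get(name)
        else:
            raise ValueError("unknown command: " + command)

    usr_global("SetValue", "GPIB Address", 16)
    usr_global("SetValue", "Sweep Rate", 0.5)
    print(usr_global("GetValue", "GPIB Address"))   # -> 16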
  6. For LabVIEW 2 globals, if that's what you're referring to, I just use them as regular VI's, with the .vi extension, etc. However, I use a filename like InstrGlobal.vi or something similar. Perhaps, in retrospect, it would be better to put Global as the first part of the name, so all global routines in the hierarchy would be grouped together when browsing. I also put some kind of picture of a globe on the VI icon, so when looking at the block diagram it's obvious it's a global.
  7. Ahhh, okay. I'll play around with that a bit; I haven't done much GUI-based scripting like this yet. BTW, do you think my idea of displays in a tab control on the master VI sounds like a reasonable thing? The routine VI would basically have a reference to the appropriate display indicator on one page of the tab control on the main VI, and then update it. It looks like DataSockets might have been designed to do this sort of thing, but I haven't found any good documentation on them.
  8. I was actually just about to ask this question in general, and after coming to the LAVA User Group today, I found you happened to ask a nice segue question yesterday. So here's my question: does anybody know how to ADD controls to a tab page programmatically? I can see how to READ the controls on any page of the tabs, but I'd like to put them there programmatically. I'm working on a large-scale data acquisition program, and depending on what kind of routine I'm running, I'd like to put its display on a page of a tab control. But the routines aren't known a priori. I.e., I might sweep one kind of variable (e.g., a magnetic field), then at each field point sweep another variable, say a heater, so it's a nested loop structure. Then at each of those points I'd measure my device, say take an I-V curve. So I want one tab to show the data from the first sweep, another tab to show the data from the nested sweep, and another tab to show the I-V curve data. I'd like that each time I add a new routine, a new tab is added to the main VI's front panel showing the appropriate data for that routine. However, I cannot find out how to add controls to a page, or even how to change the number of tabs, programmatically. My old method involved having a separate main VI for every type of routine, but this quickly proved to be a mess and difficult to maintain, especially for all the different equipment combinations in the lab. Additionally, there were getting to be too many controls on the front panel, so I just bit the bullet and designed a Test Executive suite. I'm using LabVIEW 7.1, if that matters. Does LabVIEW 8 offer any more advantages along these lines?
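Just to spell out the nested structure I mean, in rough pseudo-code (Python here, made-up names; in LabVIEW these are nested loops/subVIs, each level feeding its own tab page):
    # Made-up names; in LabVIEW these are nested loops/subVIs,
    # each level feeding the display on its own tab page.
    field_points = [0.0, 0.5, 1.0]      # outer sweep, e.g. magnetic field (T)
    heater_points = [4.2, 10.0, 20.0]   # nested sweep, e.g. heater setpoint

    def measure_iv_curve():
        return [(i * 1e-6, i * 1e-4) for i in range(5)]   # stand-in I-V data

    for field in field_points:          # data for the first tab
        for setpoint in heater_points:  # data for the nested-sweep tab
            iv = measure_iv_curve()     # data for the I-V tab
            print(field, setpoint, len(iv), "points")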
  9. Hi, I currently have LabVIEW 7.1 installed in my lab, but I'd really like to move over to LabVIEW 8. What are the implications of downloading and running the LV8 download linked in this thread? I.e., do I need to purchase new licenses, or can I re-use my old ones, etc.? Thanks.
  10. Ah yes, exactly what I was looking for, thanks.
  11. Does anybody know of a decent way to get rid of a loop (e.g., While or For) while keeping the G components and wires inside the loop in place? I quite often want to do this, and while I can cut and then paste, it messes things up, especially if the contents are within another larger loop, which then gets resized and blocks other things, etc. It's nice that one can draw a loop around some G constructs and keep them intact, and I often would like to do the reverse.
  12. Does anybody know how to add new controls to a tab pane? I am working on a test executive, and I'd like to have LabVIEW choose which graph to display depending on what kind of data the user is taking, i.e., either a chart or a 2-D color graph, etc. Also, does anybody know how to add/delete tab panes dynamically? These options don't seem to exist in the tab's properties/methods. Thanks.
  13. Hi Anton, it sounds like what you are looking for is a way to script LabVIEW. You should look at LuaVIEW, which allows one to create Lua scripts that can be accessed from LabVIEW. In other words, you can edit variables from the Lua script within a LabVIEW VI, you can call LabVIEW VI's from within the Lua script, and you can even edit a script as a string within a LabVIEW VI and then execute it! Dynamic scripting allows you to do some pretty powerful things. Unfortunately, LuaVIEW is more complicated than a simple formula node and has a learning curve, but its capability is far greater. The Lua language is a pretty cool and simple high-level language that is amazingly powerful in light of its simplicity. It treats functions as first-class values, which basically means that functions are like variables, so you can use them in your data structures. Its primary use is to add scripting capability to other software packages; for example, it's currently a workhorse in the gaming industry for programming the 'intelligence' of computer-controlled characters.
  14. Jim, I was under the impression that LabVIEW PDA will run on any device with PalmOS 3.5 or higher, as NI's PDA Page seems to indicate.
  15. The easiest thing to do is to get LabPython. This allows you to put a window with a Python script to execute right into your VI (it is a beefed-up formula node). Or you can explicitly push/pop variables to/from Python and then run scripts individually from within LabVIEW. LabPython is useful for running external Python scripts, but it is quite limiting because you cannot call LabVIEW from Python itself. If you want more robust LabVIEW scripting, check out LuaVIEW. Lua is a lean, mean scripting language, and with LuaVIEW you can call Lua scripts as well as call LabVIEW routines from those Lua scripts. I originally wanted a way to do Python scripting from LabVIEW, but after seeing how limiting LabPython is, I'm switching to LuaVIEW now. I've only done very simple things with it so far, but it's quite robust. I've also come to like the Lua programming language very much; it makes use of first-class functions, such that functions are treated just like variables and other objects, which allows for some pretty powerful techniques.
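As a rough illustration of what first-class functions buy you in a scripting layer (sketched in Python with made-up names, since I can't paste Lua-from-LuaVIEW here), a 'routine' can just be a list of function values built at run time:
    # Made-up names; the point is just that functions are ordinary values.
    def ramp_field(target):
        print("ramping magnet to", target, "T")

    def measure_iv():
        print("taking I-V curve")
        return [(0.0, 0.0), (1e-6, 0.1)]   # stand-in data

    # A 'routine' is just a list of steps built (or edited) at run time,
    # then executed by a generic loop.
    routine = [
        lambda: ramp_field(0.5),
        measure_iv,
        lambda: ramp_field(0.0),
    ]

    for step in routine:
        step()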
  16. Hi Paulo, I just had several paragraphs of a response and my browser lost it, argh. I started using OpenGOOP a few weeks ago, and I'll let you know about my experiences with it. Overall I find it pretty useful, and it does provide an efficient and useful programming interface in the right situations (e.g., I use OpenGOOP for device drivers). But I've found the documentation to be pretty horrendous (not unexpected, since it's open source software and thus entirely dependent upon the time of volunteer programmers). On the other hand, I find the software side of OpenGOOP to be pretty good.
I basically learned OpenGOOP using the three examples linked from the OpenG website, but these were VERY frustrating to learn from. First of all, several examples used an older version of OpenG, and when trying to load them I had to go to the various OpenG functions manually and point them to the right path explicitly. This was VERY annoying, especially in the largest of the three examples. The other annoyance is that there was little consistency between the three examples: some broke the class directory into various subdirectories, some didn't, etc.
The biggest gotcha for me when starting was figuring out that when you change the class data type, you have to go into the "CLASSNAME/Core/CLASSNAME Object Data Store VI Ref Type.vi", right-click the VI Refnum control, choose Select VI Server Class->Browse, and then link to the "CLASSNAME/Core/CLASSNAME Object Data Store.vi" file. This wasn't documented anywhere, and it really REALLY annoyed me, to the point that I almost abandoned OpenGOOP entirely. However, once I figured this out I've been okay. Although it got so annoying re-doing this every time that I've since made my class data type a variant, and I throw all relevant parameters into attributes. [I've actually found that using the attributes of variants is more useful than the data of the variants themselves.]
Anyway, I've been using OpenGOOP successfully since then, but I still don't know if I'm doing it right. I wouldn't be surprised if some of the OpenGOOP developers would laugh out loud if they saw how I use OpenGOOP in my VI's, but due to the lack of decent instructions I've eventually cobbled together some understanding. This is what I do: I make a new class, decide what data type I'll use, and then change the VI reference as noted above. Then I make two directories in the parent class directory, named Public and Private. I then copy the four template .vit's from the Templates directory to the Public directory, renaming them as .vi's. (I have no idea if this is how one is supposed to use them, but that's what I do.) I try to keep my design such that I only call routines in the Public directory from outside the class; Private routines are just for bookkeeping and calculations within the class. I put all cluster and enum typedefs into the Data Structures directory. Lately I've been using a bunch of pop-up windows for configuring constructors in various ways, so I've tended to put these into a Utilities directory. I then work on making the constructor and destructor first, and eventually add the various other routines to add/read/modify the class data. All public routines are based on those four template VI's. One annoying thing is that if one of my external VI's receives a reference to a class, the control always says "Template", so if I have two different OpenGOOP classes, I cannot tell which is which (except through the label).
I also haven't found out how to do inheritance, and instead I've had to do a bunch of programming gymnastics to get around this. For example, the motivation to use OpenGOOP was for the device drivers of my Test Executive. Each driver will have associated with it an enum indicating what kind of device it is (GPIB, Serial, VISA), configuration data for the appropriate communication channel (e.g., Address and Mode for GPIB), a reference to the VI file that makes the actual calls, etc. This would be the perfect opportunity for subclassing. For example, GPIB instruments would use different communication VI's than RS232 instruments, so there would be a lot of common methods but a few bus-specific methods between the instruments. I wanted to have GPIB and Serial subclasses that would inherit from my Instrument class, but I never figured out how to make this work. So instead I keep various other definitions and handle the class differently depending on how it's defined. I would love to hear how you are making out with OpenGOOP, since I've just started a few weeks ago myself. I'm also very curious if anybody knows how to address the various problems I've mentioned above (how to really use the templates, inheritance, etc.).
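For what it's worth, here's the kind of subclassing I was hoping for, sketched in Python with made-up names (since I can't show it in G); this is what I couldn't figure out how to express in OpenGOOP:
    # Made-up names; a generic Instrument class with bus-specific subclasses.
    class Instrument:
        def __init__(self, name):
            self.name = name

        def write(self, command):
            raise NotImplementedError   # each bus type supplies its own I/O

        def identify(self):
            return self.write("*IDN?")  # common method shared by all buses

    class GPIBInstrument(Instrument):
        def __init__(self, name, address, mode=0):
            super().__init__(name)
            self.address, self.mode = address, mode

        def write(self, command):
            print("GPIB", self.address, "mode", self.mode, ":", command)

    class SerialInstrument(Instrument):
        def __init__(self, name, visa_resource):
            super().__init__(name)
            self.visa_resource = visa_resource

        def write(self, command):
            print("VISA", self.visa_resource, ":", command)

    for inst in [GPIBInstrument("NanoVoltmeter", 7),
                 SerialInstrument("TempController", "ASRL1::INSTR")]:
        inst.identify()   # same call, bus-specific behaviour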
  17. The problem is that a routine won't know how to convert back from a flattened string if it doesn't know the type. I am trying to shy away from clusters because, even when they're typedef'd, I'm having lots of problems as I change the clusters later on. And especially if I write data to a file, I cannot read back the old cluster. What I'm doing right now with my generic device drivers is using a single variant to hold everything, and the variant's attributes hold the relevant information. For example, with a GPIB instrument, the variant has attributes "Address" and "Mode" that hold a string and an integer for the GPIB call. Of course the specific routines in each individual device driver must know what data type each parameter is, but the generic routines for passing data back and forth haven't handled variants very well. You have suggested a good idea, though. Perhaps I'll save some overhead if I use a 2-D array of strings (each parameter having a Name and a Value). This will still be annoying because I must search the array for the Address string (and Mode string) for each GPIB call, and then convert to the proper type, but that's what's going on in getting data from the variant attributes anyway. So for now I've got my drivers using only variants, and each call will set/get attributes. I still don't have complete drivers finished (I'm pretty close, though). I hope to measure roughly how much slower the variant-conversion drivers are than the direct GPIB calls I have in my old programs. [For instance, I'll measure the I-V curve of a device by setting a current on a current source, then reading the voltage on a nanovoltmeter. This involves 3 GPIB calls for each point on the curve. I'd normally want this routine to go as fast as possible, so it will be interesting to see how much slower the variants really are. Some of the devices (not all) can be connected to each other externally with a digital cable and a buffer of points filled up, so the only GPIB command is a trigger. But I'm not at that point yet, because not all the I-V instruments can do this, which is why I'm working on generic drivers. Eventually I'll have that option, but it will be (mostly) transparent to my program's user interface, instead of the current situation where each instrument needs an entirely separate routine.]
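The timing comparison I have in mind looks roughly like this Python sketch (made-up names and stand-in operations; the real test will be LabVIEW VIs talking to actual hardware): compare a direct call against one that digs its parameters out of a keyed container each time, which is more or less what the variant attributes cost.
    # Made-up names and stand-in operations, just to compare a direct call
    # against one that pulls its parameters out of a keyed container each time.
    import timeit

    params = {"Address": "GPIB0::16", "Mode": 2}   # stands in for variant attributes

    def call_direct(address, mode):
        return len(address) + mode          # stand-in for the actual GPIB write

    def call_via_attributes(p):
        return len(p["Address"]) + p["Mode"]

    n = 100000
    t_direct = timeit.timeit(lambda: call_direct("GPIB0::16", 2), number=n)
    t_attrs = timeit.timeit(lambda: call_via_attributes(params), number=n)
    print("direct:", t_direct, "s   via attributes:", t_attrs, "s for", n, "calls")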
  18. I am designing a test-executive framework for my lab, and I'm using OpenGOOP for the instrument drivers. At Michael Ashe's suggestion, I've been using arrays of variants for sending data between the various modules. While dereferencing the variant each time can get annoying, the perceived polymorphism is certainly worthwhile. My concern stems from using variants in the device driver calls themselves, due to the extra processing they require. Namely, for a generic driver (as created in OpenGOOP), I'll have a variety of methods that I can call, but the underlying communication parameters will be different depending on whether it's Serial Port or GPIB (and ISOBUS, which is basically a daisy-chain on the serial port, where the first data character identifies which device is being addressed). Each GPIB call will require an address string and a 'mode' integer, each ISOBUS call will require an address and the VISA descriptor (as one seems only able to access the serial port through VISA in LabVIEW), etc. [If I knew how to make subclasses in OpenGOOP, such that GPIB and Serial instruments would inherit from the generic device class, that would really make things much easier. But I'm still wondering about variants anyway, as I use them in QSMs and other things.]
What I am trying to do instead of subclassing is to give each instrument a refnum to a specific handler VI, which then does the instrument-specific processing. Putting these channel-specific variables in an array of variants (or as variant attributes) and then extracting them seems to add a lot of extra processing (at least in terms of wiring). Given that they're not just simple data, but can have attributes, my gut feeling is that variants will be slower to access than just having an integer go straight to the module. Maybe the delay is relatively minor compared with the overhead of actually accessing the hardware, but I know how small delays add up quickly when communicating heavily with the hardware. If I were to use variants, then there's the question of whether to catch errors at each conversion to/from the variant, which would add to the overhead. Or, at the lowest driver level, I could just assume that the calling function formatted the data right and not handle variant conversion errors (these would be private functions, so the user couldn't mess up here, but the programmer could).
Also, is there ever any need to use an array of variants, instead of just one variant? What I've been doing lately with variants is actually ignoring the variant data and using the attribute functions to extract the relevant terms, so that the order of formation isn't important. I originally designed the VI's to communicate with arrays of variants, but in light of using attributes I don't see the need for that.
And finally, I think variants could have been implemented far more elegantly. Namely, LabVIEW would be far more useful if variants automatically polymorphed into the appropriate type as you connect them to a subVI (as if you had converted the data with the conversion function). That would make it similar to simulated polymorphism in C (passing by reference and then typecasting at the receiving end). But LabVIEW forces you to do the conversion at each point, instead of letting the variant figure it out from the function's input type, which doesn't really give variants (other than the attributes) any benefit over the flatten to/from string functions. (Unless I'm really missing something here.)
  19. Is there a decent way to get a refnum to a strictly typed VI, in order to call it by reference? Specifically, can one do this dynamically? The only way I know how to do this now is to create a constant at the "type specifier VI refnum" terminal of the Open VI Reference function, and then browse to select the proper VI Server class. Doing this in itself isn't so bad, but every time I change my data formats, which will invariably happen while I'm in the development stage, the VI's all break and I have to go through and do this again manually, for each call-by-reference. It quickly becomes a real pain. NI's documentation on this subject is ridiculously terse. I'm surprised that strictly typedef'd clusters can propagate their changes to all subVI's, but the type specifier VI refnum doesn't, even when the file itself didn't change, just the data type going into one of the terminals. I'm doing things like sending a strictly typedef'd cluster into the Call By Reference node, and if I later add an element and redefine the cluster, every call-by-reference point has to be updated. This method seems pretty braindead to me, and I imagine NI has a better way to do it; I just can't find out how. I've considered using variants instead of the cluster, but I'm reluctant to use them because I fear they'll require more processing to convert the variant to the proper type. Actually, I'm going to start another topic on the subject of variants to address these concerns.
  20. In our research lab we have a few Windows XP machines, networked together, for data acquisition and processing purposes. The actual computer I use depends on which experimental system I'm using, but I'll be using the same acquisition and storage VI's on each computer. The issue comes with synchronizing VI updates. If I update RoutineX on Computer1, and then use RoutineX a few days later on Computer2, I want to use the updated version. And similarly, if I update the routine again, I want the changes back on Computer1. The way I am dealing with this now is by arbitrarily picking one lab computer as a 'server', where I store all my VI's. When I load VI's to insert, I always do it through the network, so VI's have a pathname like "\\ComputerName\username\Routines\Acquisition\Measure.vi". This method works pretty well; all my routines work fine and are properly updated if I change them, no matter which computer in the lab I access them from. I have just started using some other user-supplied tools, such as LuaVIEW and OpenG. LuaVIEW can put its VI's and palette menu anywhere, so I do that over the network. But the OpenG tools want to go on the local computer, in the LabVIEW install directory. So for now I need to install the OpenG toolkit on each local computer where I want to use it. I'm wondering if I'll be shooting myself in the foot sometime down the road. As long as each computer has the exact same version of OpenG stored in the same path, it shouldn't be a problem whether a subVI comes from the network or locally. I'm wondering if anybody else has thought about these issues, even if you don't use OpenG or other add-on libraries, to keep your usage of VI's synchronized over a network.
  21. Everybody, thanks for all your help so far. I've really learned a lot and have made some good progress on my test executive suite (although it's still only the framework so far). Here are some questions on the several responses:
To ahlers01: The OOP style seems like it would be pretty good, and maybe I'm limiting myself by not using it this early in the game, but I've already re-designed the basic framework several times and actually need to get on with making it able to take data. I am using a driver-like model similar to yours, and I will also have separate types of drivers. Actually, as I'm writing this I realize that a GOOP framework with inheritance would really work well here; CurrentSource and VoltMeter would both inherit from Device, for example. Maybe I should re-design again? I will take a good look at OpenGOOP later tonight.
To Michael_Aivaliotis: In your producer/consumer model, is there a reason you send the queue through the producer loop in a shift register? I'm under the impression that the queue parameter just references the queue, and all queue operations seem to return it unchanged. (The only reason I can think of is to provide a nice-looking way to send the queue reference to the queue release function.)
To i2dx: I like using enums when wiring the states; that way I pick one exact state from the list. Questions on enums: are they guaranteed to run from zero to a maximum value with no missing states, and is there a way to find out the items of a strictly typedef'd enum? I've been able to do this by making an enum control and then reading its strings with a property node, but I'd prefer to do it without needing that control on the VI.
To Michael Ashe: I implemented the producer/consumer style setup and used the array of variants as per your suggestion. At first I thought it was overkill, but I quickly found good uses for the variant data. Question about OpenGOOP: does it have something similar to interfaces, as Java has? I mentioned above having the various specific devices 'extend' the functionality of a device class. But some devices perform many functions (some source meters act as current sources, voltage sources, current meters, voltage meters, etc.), so implementing this as an interface would be the best way to deal with it.
To Albert-Jan Brouwer: I haven't played too much with Lua since talking to you about the two-button dialog example. I definitely intend to implement LuaVIEW scripting for dynamic control of system execution and acquisition, to allow for maximum flexibility. I will take a closer look at the Lua modules. At this point I was planning to implement the drivers and handlers in LabVIEW; do you really think it would be easier to manage the instruments in Lua instead of LabVIEW? I.e., one of the main strengths of LabVIEW is the availability of drivers and the ease of controlling instruments. In your suggestion, would you do the bookkeeping in Lua, and then call the LabVIEW driver from Lua?
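To show what I mean by interfaces (for the question to Michael Ashe above), here's a rough Python sketch with made-up names: one instrument implementing several capability 'interfaces' at once, so a sweep routine only cares about the capability, not the concrete box:
    # Made-up names; one box implementing several capability interfaces.
    from abc import ABC, abstractmethod

    class CurrentSource(ABC):
        @abstractmethod
        def set_current(self, amps): ...

    class VoltMeter(ABC):
        @abstractmethod
        def read_voltage(self): ...

    class SourceMeter(CurrentSource, VoltMeter):
        # a single instrument that both sources current and measures voltage
        def set_current(self, amps):
            print("sourcing", amps, "A")

        def read_voltage(self):
            return 1.23e-3   # stand-in reading

    def take_iv_point(source, meter, amps):
        # the sweep routine only cares about capabilities, not the concrete box
        source.set_current(amps)
        return meter.read_voltage()

    smu = SourceMeter()
    print(take_iv_point(smu, smu, 1e-6))   # same instrument plays both roles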
  22. Is there a reason strings are preferred (as indicated in several of the responses) for indexing the state of queued state machines, instead of typedef'd enums?
  23. Cool, that's an informative article you wrote, thanks. I'm a little confused by the bottom example, with the two event loops. I thought using two event loops was bad practice, or is one of them only for dynamic events? Happy New Year, everybody!
  24. Michael, I'm not exactly sure what you are talking about here. Do you mean having the event structure push a message onto the QSM?