Everything posted by ShaunR

  1. QUOTE (santi122 @ May 13 2009, 12:58 PM) Should add that to the obfuscation thread...lol.
  2. QUOTE (angel_22 @ May 14 2009, 01:55 AM) Inserting Images, VIs etc: http://forums.lavag.org/How-to-Insert-Images-in-your-Posts-t771.html
  3. You don't have enough data points above and below the threshold to complete the "Peak Measurement" analysis. Either: 1. change the "percent level settings" method from "Auto Select" to "Histogram" to force that method, or 2. change the threshold levels to something like 40%, 50% and 60%.
  4. That's a fantastic name....lol. I have a solution for that....but you won't like it. If you abstract the interface, rather than the device, you end up with a very flexible, totally (he says tentatively) re-usable driver. I'll speak generically, because there are specific scenarios which make things a bit more hassle, but they are not insurmountable.
     Take our ACE, BAM and HP devices. From a functional point of view we only need to read and write to the instrument to make it do anything we need. I'll assume your fictional DVMs are write-read devices (i.e. you write a command and get a response rather than streaming) and I'll also make the assumption that they are string based, as most instruments we come across generally are. Now....
     To communicate with these devices we need to know 3 things. 1. The transport (SERIAL/GPIB/TCPIP etc.). 2. The device address. 3. The protocol. VISA takes care of 75% of No. 1. No. 2 is usually a part of No. 3 (i.e. the first number in a string). So No. 3 is the difficulty.
     So I create a Write/Read VI (takes in a string to write and spits out the response...if any), and I will need an Open and Close to choose the transport layer and shut it down. I now have the building blocks to talk to pretty much 90% of devices on the market. I'll now imbue the Read/Write VI with the capability to get its command strings from a file if I ask it to. So now, not only can it read and write single parameters, I can point it to a file and it will spew out a series of commands and read the responses. This means I can configure any device in any way I choose just by pointing it to the corresponding config file. New device? New config file. No (LabVIEW) code changes, so you can get a technician to do it.
     Now, in your application, you have a lookup table (or another file) which has a name (alias), the transport, the address, the config file and/or the command for the value you want to read (....say DC:VOLTS?). The read/write file VI is now wrapped in a parser which takes the info from the table, formats the message and sends it out through the read/write file VI, or it loads the config file. I now have a driver scheme that not only enables me to add new devices just by adding a config file and an entry in the table, but also enables me to send the same config to multiple instruments on different addresses, or different configs to the same devices on different addresses, and read back any values I choose. And all I need is 1 VI that takes the alias (there's a rough sketch of the scheme below). Told you you wouldn't like it, because OOP programmers start frothing at the mouth as soon as you mention config files...lol. But I'll come back to this in context a little later on.
     If your system is such that you only have to code for exceptions, then you are winning. If anything is changing rapidly, then any software "architecture" is bound to be compromised at some point, and the more you try to make things fit...the more they don't...lol. If it's for internal use only, you are better off with a toolkit from which you can throw together bespoke tests quickly - something LabVIEW is extraordinarily good at. Amen.
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> It's damn hard to design a good class hierarchy that will fit all future (unknown) needs--much too hard for me and my little brain.
     Agreed (apart from the little brain bit). But there does seem to be a lot of "do it anyway" mentality about.
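     Since LabVIEW is graphical, here's a minimal Python sketch of that Write/Read + alias-table scheme. The transport names, table entries and the DC:VOLTS? command here are illustrative assumptions, not a real protocol:

```python
# Sketch of the "one Write/Read VI + config files + alias table" scheme.
# Transports, table entries and command syntax are illustrative only.

class Instrument:
    """Open / Write-Read / Close building block for a string-based device."""

    def __init__(self, transport: str, address: str):
        self.transport = transport   # e.g. "SERIAL", "GPIB", "TCPIP"
        self.address = address       # device address on that bus

    def write_read(self, command: str) -> str:
        # Real code would send over the chosen transport and read the reply;
        # here we just echo so the sketch runs stand-alone.
        return f"{self.address}:{command} -> OK"

    def run_config(self, config_path: str) -> list[str]:
        # "Point it at a file and it spews out a series of commands."
        with open(config_path) as f:
            return [self.write_read(line.strip()) for line in f if line.strip()]

# Alias table: name -> (transport, address, command or config file)
ALIASES = {
    "DVM1_DC": ("GPIB", "22", "DC:VOLTS?"),
    "DVM2_DC": ("SERIAL", "COM3", "DC:VOLTS?"),
}

def read_alias(alias: str) -> str:
    """The single 'takes the alias' entry point the application calls."""
    transport, address, command = ALIASES[alias]
    return Instrument(transport, address).write_read(command)

print(read_alias("DVM1_DC"))  # new device? new table entry, no code change
```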
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> With those lessons in mind I started mulling over ways to implement interfaces in LabVIEW, which would have (I believe) made my development task much easier. Exterfaces is the result. I created it specifically for engineering tools with the understanding that more flexibility requires more coding. As a matter of fact, for many reasons I'm starting to favor exterfaces over traditional interface models--at least for hardware applications. To be honest I'm not really sure if there's much value in using exterfaces in finished product applications; if the requirements are known you don't necessarily need the flexibility exterfaces provide.
     OK. Well, let's look at your "exterfaces" in the light of the Agilent example that you kindly provided. I noticed that you didn't put it under "Device Drivers", which I hadn't expected, and is why I asked for a driver (although typical of drivers) that didn't fit nice and snugly with the simulated ones, so I could see how this worked. From your exterfaces up, everything is hunky-dory (as it always is with classes in LabVIEW) and your implementation seems to overcome a big drawback of the current LabVIEW implementation. But this is what I was looking at.
     If the exterface is higher up in the tree, let's take the previous example of a Waveform test: an exterface based around defining a sequence of operations (set this, set that, wait 1 second, then read the result). We can instantiate that with different arguments and do different tests with methods such as "Start", "Abort", "Get Status", "Get Result", etc. If we now have the same test, but you need to move a motor into position, set some digital IO, wait 1 second and read the result, then your exterface can be implemented to do that in the same way that you implemented the HP driver, but the underlying method is transparent to the application AND you can have the same test running on different devices (there's a rough sketch of this below). There is value added to the extra coding, since you would have to do that anyway in classic LabVIEW.
     Now. If your "Device Driver" was based on a class implementation of my previous example, with all those obnoxious classical techniques with files and whatnot, the exterface now just defines the sequence of operations, the files (or class alternative) to configure, and the order of the aliases (or class alternative) to retrieve the result. Then you would have an implementation that can instantiate multiple tests/measurements on multiple devices with different configurations. I'm sure you could find a way of incorporating this better than I've described, but this is my current thought process.
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> Regarding reusing exterfaces across projects... depends on what you define as "reuse code." I expect I might do a lot of exterface code copying, pasting, and modifying between projects. If you consider that reuse code then I'm right there with you. Given point 2 above, I'm in no hurry to distribute interface definitions or exterfaces as shared reuse code. Getting it wrong is far too easy and far, far more painful than customizing them for each project.
     Lots of copying and pasting means that you haven't encapsulated and refined sufficiently. I think this is a particularly bad case in the class implementation strategy, which forces you to do this, over traditional LabVIEW which would encapsulate without much effort. After all, in other languages changes to the base class affect all subsequently inherited classes with no changes whatsoever.
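     A minimal sketch of that "sequence of operations" idea, again in Python as a stand-in for LabVIEW classes; the method names and the two implementations are assumptions for illustration, not Daklu's actual exterfaces:

```python
# Sketch: one abstract "test" defining the sequence of operations; two
# hardware-specific implementations. Names and readings are illustrative.
from abc import ABC, abstractmethod
import time

class WaveformTest(ABC):
    """Start / Abort / Get Status / Get Result, whatever the hardware."""

    def __init__(self):
        self._status, self._result = "idle", None

    @abstractmethod
    def _setup(self) -> None: ...    # "set this, set that" - device specific

    @abstractmethod
    def _read(self) -> float: ...    # device-specific readback

    def start(self) -> None:
        self._status = "running"
        self._setup()
        time.sleep(1.0)              # "wait 1 second then read result"
        self._result = self._read()
        self._status = "done"

    def abort(self) -> None:
        self._status = "aborted"

    def get_status(self) -> str:
        return self._status

    def get_result(self) -> float:
        return self._result

class DvmWaveformTest(WaveformTest):
    def _setup(self) -> None: pass          # configure the DVM
    def _read(self) -> float: return 1.23   # stand-in reading

class MotorIoWaveformTest(WaveformTest):
    def _setup(self) -> None: pass          # move motor, set digital IO
    def _read(self) -> float: return 4.56   # stand-in reading

# The application only sees WaveformTest: same test, different devices.
for test in (DvmWaveformTest(), MotorIoWaveformTest()):
    test.start()
    print(test.get_status(), test.get_result())
```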
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> If you're going to reuse interface definitions, not only does the interface definition need to be a superset of all project definitions, it needs to be a superset of all future project definitions. Like I said earlier, my crystal ball isn't that good. (If yours is, turn off the computer and head out to the horse track.) Releasing new versions of reuse code adds a lot of overhead. I found maintaining backwards compatibility can be a huge time sink. Regression testing can be difficult. The API can get very messy. You also need to implement the new functions in all the child classes to make sure your new ideas are workable. Then you have to worry about distribution and, if you distribute the new reuse code to deployed systems, system level testing. That's a lot of mundane and, IMO, unnecessary work.
     True. But if the definition is broken down into manageable chunks (think of my toolkit comment earlier), then adding new "tests" doesn't become an issue.
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> However, if you keep the interface definition and exterfaces on the project level they don't have to be huge monolithic structures. You define the interface based on what that application needs--no more. There's no need to wrap the entire device driver because chances are your application doesn't need every instrument function exposed to it. If you have to make a change to the interface definition there's no worrying about maintaining compatibility with other applications. Small, thin, light... that's the key to their flexibility.
     Lots of small changes as opposed to one big change? I'd rather not change anything, but I don't mind adding.
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> See above. You only wrap what the application needs, not the entire driver. For example, in the Exterface Architecture project the Ace Ammeter device driver can be set to measure AC current only, DC current only, or AC+DC current. However, my application only requires DC current measurements. My IAmmeter interface doesn't define a "Set Measurement Mode" VI because the application doesn't need it. I put the Ace ammeter in the correct mode in XAce Ammeter:Open and don't worry about it after that.
     I think this means (in my case) that you end up with lots of applications that can do very specific tasks with very specific hardware.
     <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) </cite> "Create Instance" creates a new instance of the device driver and links that exterface to it. "Link to Instance" links an exterface to an already created instance. An exterface will use one or the other, not both. Use "Link" when an instrument is going to use more than one of its functions. The VI "Many to One Example" illustrates this. (Look at the disabled block in "Init.") The system has a single Delta multimeter but it is used to measure both current and voltage. If I create two instances I end up with two device driver objects referring to the same device. That leads to all sorts of data synchronization problems and potential race conditions. By having the second exterface link to the same device driver object the first exterface is using, those problems are avoided.
"Link to Instrument" works, but "Create Instrument" doesn't. Any ideas for better names?] I've no problems with "create" (that's the same in other languages or you could use "New" as some do). But I can't help thinking that the linkto is a clunky way to return a reference to an object. If the "Create" operated in a similar way to things like queues and notifiers where you can create a new one or it returns a reference to an exisiting one, it would save an extra definition. <cite>QUOTE (Daklu @ May 11 2009, 02:21 AM) <a href="index.php?act=findpost&pid=62740"></cite> Suppose you have more than one Delta multimeter in your test system. On the second "Create Instance" call should it create a new instance for the second device or link to the first device? There's not really any way for the software to know. Seems to me requiring the developer to explicitly create or link instances makes everything less confusing. Rereading this I realize I didn't make one of the exterface ideas clear. In my model, each instrument has its own instance. Ergo, if I have four Delta multimeters in my system then I'll have four instances of the device driver object. If the device driver is not class based (such as the 34401) obviously there aren't any device driver objects to instantiate. In those cases the "instance" is simply a way to reference that particular instrument. Since all calls to the 34401 device driver use the VISA Resource Name to uniquely identify the instrument, I put that on the queue instead of a device driver object when XAgilent 34401:Create Instance is called. This allows the 34401 exterface to behave in the same way as exterfaces to class-based device drivers. Indeed. But having created 4 objects already, what happens if you create a 5th?<p>
  5. QUOTE (normandinf @ May 13 2009, 03:07 AM) Awwww. Don't feel left out. Feel smug in the knowledge that they will find all the bugs first, so that when you upgrade to 2009.1 it actually works.
  6. It's better to post in this forum; then others may learn from the questions and answers.
  7. QUOTE Right click on the property node and choose "Change To Write" from the menu. QUOTE Also, how do I unpack this on the Block Diagram? If I attach an "Index Array" VI with no index (to indicate I want the first element of the array), the output data type is an array of one element, which is not what I would expect. I would expect a cluster of one WaveGraph. It should be a cluster containing a 1D array. If you put a Waveform Graph on your front panel, then right-click and "Create Constant" on the diagram, you will see it creates a 1D array of Double Precision constant. This is the default format. You can see more on the input options in the help for Waveform Graph (a 2D array is a multi-plot, for example).
  8. QUOTE (Ton @ May 12 2009, 09:13 PM) Can you not do it by the plug-in name (thereby avoiding loading anything)?
  9. Most PCs have DACs and ADCs built in nowadays.....your sound card. If you're looking for cheap!
  10. QUOTE (angel_22 @ May 12 2009, 12:46 AM) A good starting point might be the temperature system demo in the examples directory. It demonstrates alarm (over temp, under temp) detection and would at least provide a frame of reference for you to ask specific questions.
  11. QUOTE (Ton @ May 12 2009, 06:47 PM) Sorry, I'm not. Not quite sure what difference it makes if the open file ref is in a splash-loader or not, but I also have a "cleaner" which goes through all the VIs, loading them and identifying orphan files. That only takes a couple of minutes to load and check every VI. So I'm still a bit baffled.
  12. QUOTE (Neville D @ May 12 2009, 06:22 PM) .....And no more compatible file system.....no more 3rd party support. LV-RT is only a real option if your system is completely NI based. QUOTE (rolfk @ May 12 2009, 08:10 AM) Which makes it probably more like XP Embedded anyhow Rolf Kalbermatter Shhhh. Don't tell everyone
  13. QUOTE (Ton @ May 12 2009, 05:58 PM) This seems a bit bizarre. My whole application (a couple of thousand VIs) only takes a few seconds to load (via an open ref, then run from a splash-loader). Why does it take yours a few minutes to load a VI? Does it have to search for subVIs?
  14. QUOTE (crelf @ May 11 2009, 09:46 PM) Indeed....lol. We don't rely on it; safety-critical aspects are hardware enforced (like E-STOPs, which will cut power and air at the supply). Like I said though, Windows is very robust as long as you don't have all the other crap that your IT dept would put on it: disable all the unwanted services, don't use ActiveX or .NET, and only run your software.
  15. QUOTE (Michael Malak @ May 11 2009, 10:08 PM) You used to be able to put graphs directly into arrays. I don't think you can anymore. But you can do this (http://lavag.org/old_files/monthly_05_2009/post-15232-1242078686.png): put your graph into a cluster, then put the cluster into an array.
  16. QUOTE (crelf @ May 11 2009, 07:25 PM) Actually, we can and we do. We specify the PC in detail: from the processor, motherboard, chassis, drives etc. to RAID10, cooling and vibration/shock resistance. We also stress test the machines for 1 month before delivery. Whilst we can't prevent hardware failures, we can limit the impact on production with considered choices and preventative maintenance. These aren't laptop/desktop PCs loaded with Word, Excel and all the other crap. These are purpose-built production machines (designed to run 24/7, with a 10 year life expectancy and a 5 year warranty) which have only our and National Instruments software on them (apart from the OS and drivers). So if there is a software "fault" there is no-one else to blame. If the operator is capable of breaking it, then we haven't designed it properly.
     QUOTE (crelf @ May 11 2009, 07:25 PM) If the user ignored a "improper shutdown, hard disk has errors, run scandisk" message and your software faults due to that, is that your fault too?
     Yes. The operator's function is solely to load/unload the machine with parts, start the machine in the morning and stop the machine at the end of the shift. No more is expected. If an error is detected in the software, it sends us an SMS with the error and we contact the customer to ensure that preventative action can take place on the next shift changeover.
     QUOTE (crelf @ May 11 2009, 07:25 PM) I'm not saying that you can't plan for these things, but you have to, well, plan for these things. If you know that you need to activate Windows in 30 days or it'll bomb out, then you activate Windows within 30 days so it's okay. You need to gain and apply that same level of product knowledge to all the components you apply to the system - including anything from NI or any other manufacturer. That said, if a user ignored an error for a month (whether that be from an activation issue or a hard fault) and then expected me to fix it immediately once it bombed out, I'd be having a stern talk with them.
     Errrrm. I didn't think it was in question that the OP (or I) had originally activated LabVIEW. He was talking about the fact that he had activated it and, after an update, LabVIEW required re-activation. We have an OEM licence for Windows so activation isn't an issue (if we decide to use Windows, that is; we also use Linux, so activation would be irrelevant). Any (and I mean ANY with a capital E) errors must stop the machine and turn it into a paperweight, otherwise people can lose limbs. We have no issues with Windows (it's remarkably stable if you don't load it up with ActiveX or the usual rubbish you find on a desktop PC) and we have had only 2 Windows failures in 4 years, both due to malicious intent. Windows has never suddenly popped up that it requires re-activation, or refused to work because of licensing. However, as I pointed out, we have had 2 instances so far this year of LabVIEW requiring re-activation.
  17. A description of your problem would be helpful. Perhaps some pictures of the code you already have?
  18. QUOTE (rolfk @ May 10 2009, 10:59 AM) It means you can't just "Read X Chars" until >= "file size" without getting an error. The choice is you either monitor the chunks in relation to the file size and, at "file size - Read X Chars", change the "Read X Chars" accordingly (sketched below), or ignore the error only on the last iteration. If you have another way, I'd be interested. But I liked the old way where you didn't have to do any of that.
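     Since I can't post a diagram here, a minimal Python sketch of the first option (shrink the final read so it never asks for more than remains); the chunk size and function name are arbitrary:

```python
# Sketch: read a file in fixed-size chunks, shrinking the last read so we
# never request more bytes than remain (avoids the end-of-file error).
import os

def read_in_chunks(path: str, chunk: int = 4096) -> bytes:
    size = os.path.getsize(path)       # "file size"
    data = bytearray()
    with open(path, "rb") as f:
        remaining = size
        while remaining > 0:
            # at "file size - Read X Chars", adjust the read accordingly
            this_read = min(chunk, remaining)
            data += f.read(this_read)
            remaining -= this_read
    return bytes(data)
```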
  19. Agreed. There are certain people I know, however, who would disagree (purists who subscribe to the infinite decomposition approach). My problem is that complex function is what I do over and over again (measure pk-pk waveform, rise time etc., etc.) but the hardware is always changing. Not only in the devices that are used to achieve the result, but in the number of different devices (might require just a spectrum analyser on one project, but a motor, a spectrum analyser and digital IO on another).
     This I get. And for something like the "message pump", which has a pre-determined superset base of immutable functions, it is perfect. But for hardware it seems very hard to justify the overhead, when drivers are supplied by the manufacturer (and can be used directly), "to make it fit". Yes, the Agilent has a "Voltmeter" function, but it also has the maths functions which the others don't have, so that needs to be added. Also, just because I have abstracted the "Voltmeter" function doesn't mean I can plug in any old voltmeter without writing a lot of code to support it. If it added (let's say) a frequency measurement, then my application would need to be modified anyway.
     I get this too. And I don't want to turn this thread into OOP vs traditional; that's not fair on Daklu. The value I see in your interface approach is that it seems to make more of the code re-usable across projects, intended or otherwise. The interface definition needs to be a superset of the project definitions, but that is easy for me since the higher functions are repetitive from project to project (as I explained to Omar). The intermediary "translation" could work well for me, but the prohibitive aspect is still the overhead of writing driver wrappers, which prevents me from taking advantage.
     I have one question though. Why do you have to "Link" to a device? Presumably you would have to know that the device driver is resident beforehand for this. Can it not be transparent? Devices are (generally) denoted by addresses on the same bus (e.g. RS485), or their interface, or both. Can the "Exterface" not detect whether an instance already exists and attach to it, or create it if it doesn't? Also, where do I download the example with the Agilent included? (I tried from the original post but it is the same as before.)
  20. QUOTE (Daklu @ May 9 2009, 06:20 PM) I like the abstraction (I always like the abstraction....lol). And I see where you're going with this and like it (in principle). But the main issue for me (and one big reason I don't use classes) is that we are still not addressing the big issue, which is the drivers. My applications are hardware heavy and include motors, measurement devices (not necessarily standardised DVMs; they could be RS485/422, IP, analogue etc.), air valves and proprietary hardware....well, you get the picture. The hardware I use also tends to change from project to project. And (from what I can tell) to use the class abstraction, I need to wrap every function that is already supplied by the manufacturer just so I can use it for that project.
     Take the Agilent 34401 example in the examples directory, for instance. If I want to use that in your architecture, do I have to wrap every function in a class to use it, rather than just plonk the pre-supplied VIs into my app, which is pretty much job done? The drivers tend to use a seemingly class-oriented structure (private and public functions etc.), but there just doesn't seem to be any way of reconciling that with classes (perhaps we are just missing an import or conversion function, or perhaps it exists but I haven't come across it). I know you have shown some example instruments, which strikes me as fine as long as you are developing your own drivers. But could you show me how to integrate the Agilent example into your architecture, so I can see how to incorporate existing drivers? After that hurdle is crossed, I can see great benefits from your architecture.
  21. I read your document and (on the first pass) it all made sense. And I have downloaded the example, but I'm missing a load of JKI VIs (presumably in the OpenG toolkit somewhere). I'm looking for them now. Just thought I'd say "something" since you're eager for comments. Just a little patience. Not everyone is in your time zone, and it is the weekend after all.
  22. QUOTE (Ton @ May 9 2009, 03:06 PM) Personal preference. For me, I like to see controls/indicators and their associated property nodes at the same level, so I can see what's going on and apply probes without delving into subVIs. I had one bug in someone else's application that I needed to fix, where on certain occasions a particular control would change its decimal places, and others their colour, when they shouldn't. The culprit was 5 levels down the hierarchy from the VIs affected and was a property node that changed the DP (amongst other things) dependent on a limit test supplied from another part of the code (they didn't include the upper limit).
  23. QUOTE (postformac @ May 9 2009, 01:24 PM) It is if you pass the refnum to the subVI. If you just select the property node and choose "Create SubVI" from the Edit menu, you will see what I mean. I don't consider it good coding practice, since it hides what you are doing. I would prefer to see a VI that returns the new values to the property node in the same VI as the control appears.
  24. QUOTE (Yair @ May 7 2009, 05:52 PM) Let's hope it goes out of scope before you run out of memory then.
     QUOTE (NI Help) If you use the Obtain Queue function to return a reference to a named queue inside a loop, LabVIEW creates a new reference to the named queue each time the loop iterates. If you use Obtain Queue in a tight loop, LabVIEW slowly increases how much memory it uses because each new reference uses an additional four bytes. These bytes are released automatically when the VI stops running. However, in a long-running application it may appear as if LabVIEW is leaking memory since the memory usage keeps increasing. To prevent this unintended memory allocation, use the Release Queue function in the loop to release the queue reference for each iteration.
     QUOTE (Yair @ May 7 2009, 05:52 PM) In any case, no one is forcing you to use these (or even a new version of LV). You can go back to an old version, but I still don't see why you say that you have to worry about memory, etc. in the newer versions.
     Tetchy
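     A rough Python analogy for the NI Help text above; the obtain/release functions and the registry here are stand-ins for LabVIEW's named-queue reference table, not a real API:

```python
# Analogy: each "obtain" of a named queue in a loop adds a small bookkeeping
# entry that is only freed when the reference is explicitly released.
# obtain_queue/release_queue are illustrative stand-ins, not a real API.

_queues: dict[str, list] = {}   # the queues themselves, by name
_refs: dict[int, str] = {}      # one entry per outstanding reference
_next_ref = 0

def obtain_queue(name: str) -> int:
    global _next_ref
    _queues.setdefault(name, [])     # create the queue on first obtain
    _next_ref += 1
    _refs[_next_ref] = name          # every obtain costs a little memory
    return _next_ref

def release_queue(ref: int) -> None:
    del _refs[ref]                   # freed only when explicitly released

for _ in range(1000):                # the "tight loop" from the help text
    ref = obtain_queue("data")
    release_queue(ref)               # omit this and _refs grows every pass

assert len(_refs) == 0               # no references leaked
```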