Everything posted by bbystrek

  1. QUOTE (hambazaza @ Apr 27 2009, 02:49 PM) I just caught one error in my original post: "SEN" is not the right abbreviated version; it should have been "SENS". Try changing your constant from ":OBW:MAXHOLD " to ":SENS:OBW:MAXHOLD ". The leading ":" in your original constant would have told the instrument to look for an "OBW" subsystem at the root. As it doesn't exist at the root, you need to tell the instrument that it's a subsystem of the "SENS" subsystem. Your original form probably would have been OK if you had omitted the leading ":", but only if your last command had already addressed the "SENS" subsystem (such as the corrected, fully qualified command that I'm suggesting). To stay out of trouble with this aspect, simply always write your code with the full command syntax. There's nothing wrong with doing so; it just means a handful of extra characters need to be transmitted and processed by the instrument, which is trivial for most purposes. If you still have trouble with this particular command, you might try sending a different, easily verifiable command just to confirm connectivity; a quick sketch of both steps follows below. Brian.
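     For what it's worth, here's a minimal sketch of that approach in text form (Python with PyVISA rather than LabVIEW; the resource string is a made-up placeholder), confirming connectivity first and then exercising the corrected, fully qualified command:

         import pyvisa

         rm = pyvisa.ResourceManager()
         # The address below is only a placeholder; substitute your analyzer's.
         inst = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")

         # An easily verifiable command, just to confirm connectivity.
         print(inst.query("*IDN?"))

         # Fully qualified from the root, so no prior subsystem context is assumed.
         inst.write(":SENS:OBW:MAXHOLD ON")
         print(inst.query(":SENS:OBW:MAXHOLD?"))

         inst.close()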
  2. As it sounds like you're talking about internal code, and not just the user interface representation, the only thing I can think of is changing the design of your current typedef so that it allows varying content, letting old/new/future data always fit into the data type. Perhaps something along the lines of an array of clusters, where each cluster consists of a property-identifying tag (or a strict typedef enum) and a value in the form of a variant. This way, all dependent code can just inspect the tags to determine how to handle the data, and any kind of data will always fit. You could also create conversion VIs which bring the various data content back to your original cluster for minimal disruption to your existing code. As for the user interface, I can't think of anything other than what you're suggesting: multiple indicators layered on top of each other with only one visible. It could also be done with each version of the data on a tab control (the tab control could be made transparent to the user so that they are not aware it's present - with the tabs hidden, of course). If your property/value array were to include some sort of "revision" pair, then you could use this to switch the display as needed.
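     A rough text-language sketch of the tag/value idea (Python rather than LabVIEW, and the property names are made up), where each element pairs an identifying tag with a value and a converter rebuilds the original fixed structure:

         # Each record is an array of tag/value pairs, analogous to an array of
         # clusters holding an identifying tag plus a variant.
         record = [
             {"tag": "revision", "value": 2},
             {"tag": "serial_number", "value": "SN-1234"},
             {"tag": "pressure_kpa", "value": 101.3},  # added in a later revision
         ]

         def to_original_cluster(pairs):
             """Rebuild the original fixed structure, ignoring unknown tags."""
             known = {"serial_number": "", "pressure_kpa": 0.0}
             for p in pairs:
                 if p["tag"] in known:
                     known[p["tag"]] = p["value"]
             return known

         print(to_original_cluster(record))

     Because unknown tags are simply skipped, older readers keep working when newer writers add properties.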
  3. QUOTE (twinsemi @ Apr 16 2009, 01:37 PM) It could have been caused by touching on some sort of bug, but you can't "create" the insane error intentionally no matter what you do under normal circumstances; you can only make your code broken (not capable of running). If your fix turns out to work, I'm glad. This is one of the more painful types of LV problems to troubleshoot. I once had a random crash on the order of a few times a week - it turned out to be an NI-supplied VI accessing a DLL that failed to specify a dimension for an output value. Many hours were lost on that one.
  4. QUOTE (mesmith @ Apr 16 2009, 11:57 AM) I had forgotten about this bit of tribal knowledge. I hadn't noticed it earlier, but you have your primary LV version listed as 7.1.1 - this sort of thing did happen once in a while back then. I've seen it either two or three times myself over the years, and once each with two different coworkers, all of us building different applications. In all cases, if I'm remembering right, it was either the insane error or the VI simply refusing to let you save. The fix had worked every time. We originally learned about it from NI. It was caused by some sort of corruption. Just as Mark says, all you have to do is copy the code into the diagram of a new VI. His suggestion to isolate should be definitive if a full cut and paste still recreates the error.
  5. Have you tried, from a fresh Windows boot, just opening your code and trying to make your changes without ever running the code? If your code has a bug that's running your computer nearly out of memory, odd things might happen. Of course you can have a look at available memory using the Windows Task Manager (from the ctrl-alt-delete options). Another last-ditch effort that I can think of - you might try a forced recompile to fix anything that might be corrupt. From what I've been told by NI, LabVIEW ordinarily only incrementally recompiles sections of the diagram as you make changes; a forced recompile rebuilds all VIs in memory. You might want to make a complete backup copy of your code first. What you do is hold down the shift and ctrl keys and then click on the run button (perhaps best done on the highest-level VI). All VIs will want to be saved after this process completes. There's no dialog of any kind; when the hourglass goes away, all VIs will show the asterisk in their title bars indicating that there are pending changes to be saved. If you have dynamically called VIs involved, you would have to open each dynamically called VI and do the same. I don't really know how often this sort of problem arises, but NI has directed me to try it on a couple of troubleshoots over the years. Both instances turned out to be something else, but it was another bit of wisdom to try.
  6. Certainly you're not hitting any case structure limits; I've seen several hundred cases implemented. I'd suspect, though, that you're only restricted by the range of values the I32 selector can take. I'm not sure how this would apply to other data types, like strings or enums, driving the structure. My reason for posting: from years of experience with LabVIEW, depending on what resources your program is using, it's usually wise to reboot after a crash - at least until you understand what's going on. Just don't necessarily expect everything to run right simply because LabVIEW reopens. Troubleshooting full crashes can frequently be cumbersome. When they are repeatable, as your instance appears to be, have a very careful look at what you're doing; sometimes someone experienced can detect what you could be doing wrong simply by observing. You might also consider the impact of inappropriately stopping your code with the red stop button while debugging. Many times your code will leave a variety of resources hanging because the various closes and releases didn't get called. Sometimes this can make consecutive runs misbehave or produce unexpected behavior.
  7. I apologize in advance if the following is confusing. Just a few concepts to come back and read again should you have any problems... I can offer you one more point that you might not have absorbed yet. As with many of Agilent's instruments, they remember which subsystem you're talking with. This particular command that you've mentioned is associated with the "Sense" and OBW subsystems. You can think of it as if the instrument has multiple brains (a brain = a subsystem) which are in some respects capable of operating independently. It's almost as if they're separate execution threads, if you're familiar with the concept. There are instances with many spectrum and network analyzers, including Agilent's, where it's necessary to manage execution of the incoming command queue so that things like Config subsystem operations complete before requesting data using the Measure subsystem. Otherwise what can/will happen is that the Measure subsystem will literally return a value even though the Config subsystem is still working. A couple of commands that are sometimes inserted to control this aspect of execution are OPC? and WAI, which hold the incoming command queue until all executing tasks are finished. This is not something to get overwhelmed by, and you can probably ignore it if you don't experience any issues, but these are a couple of core concepts that I struggled with a bit when first introduced to Agilent's 8753 and 8560 series network and spectrum analyzers. In the case of the noted command, you can always send a command in its full form from the root ":" (I've taken advantage of the abbreviations as indicated by the lowercase letters in the documentation)...
     :SEN:OBW:MAXH ON
     :SEN:OBW:MAXH?
     Alternately, once the Sense subsystem has been addressed, it's no longer necessary to specify the subsystem on consecutive commands. Once you send a command to another subsystem, it would again be necessary to change the subsystem.
     :SEN:OBW:MAXH ON
     OBW:MAXH?
     I believe you can even simplify it further, as "SEN:OBW" has already been addressed.
     :SEN:OBW:MAXH ON
     MAXH?
     My suggestion to you would be to code your application such that any new conversation you initiate never assumes that a previous process left the proper subsystem addressed. In other words, always fully specify the command from the instrument's root. An exception would be where all of the commands are being sent sequentially from within the same VI, or series of VIs, where the context is absolutely clear.
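     As a rough illustration of the synchronization idea in text form (Python with PyVISA rather than LabVIEW; the GPIB address is a made-up placeholder, and I've used the fully spelled "SENS" short form rather than the "SEN" abbreviation above):

         import pyvisa

         rm = pyvisa.ResourceManager()
         inst = rm.open_resource("GPIB0::18::INSTR")  # placeholder address

         # Fully qualified command; no reliance on a previously addressed subsystem.
         inst.write(":SENS:OBW:MAXH ON")

         # *OPC? holds things up until all pending operations have completed,
         # so the query below can't return a stale or premature value.
         inst.query("*OPC?")

         print(inst.query(":SENS:OBW:MAXH?"))
         inst.close()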
  8. QUOTE (Mark Yedinak @ Apr 14 2009, 03:54 PM) Yeah, this is the sort of semi-custom binary format that's easy to implement for this type of data. It takes a bit more work to build interfaces for more complicated data types. The best code I've worked with incorporated custom headers ahead of the data, not all that different from the headers produced by some of the flatten functions. If a version identifier gets incorporated, readers and writers can even be expected to cope with revisions. The one good thing about a defined structure is that it provides ready compatibility with external code. I have some spare time right now as I'm back in job-search mode; one of the things I'm contemplating is producing a group of functions for encoding/decoding generic data in and out of this sort of database field. Perhaps something that reads/writes a group of variants, with extensibility and compatibility on par with XML or along the lines of the config file VIs. The key would be that the low-level interface should be unaware of the data types or even how many elements are being stored.
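     To sketch what I mean by a versioned header (Python rather than LabVIEW, and the header layout is purely an assumption for illustration): a small version/length header precedes the payload, so readers can decide how to cope with revisions before touching the data.

         import struct

         HEADER = struct.Struct(">HI")  # version (u16) + payload length (u32), big-endian

         def encode(version, payload):
             """Prepend a version/length header to an arbitrary binary payload."""
             return HEADER.pack(version, len(payload)) + payload

         def decode(blob):
             """Split a blob back into (version, payload) using the header."""
             version, length = HEADER.unpack_from(blob)
             payload = blob[HEADER.size:HEADER.size + length]
             return version, payload

         blob = encode(1, b"\x00\x01\x02\x03")
         print(decode(blob))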
  9. QUOTE (andreapede @ Apr 14 2009, 06:49 AM) I'm not quite understanding the second approach. If the interaction is the typical command/response, where the device never speaks on its own and only answers when asked a question, then the first approach would seem right. To avoid potential confusion in this sort of interaction, many of us typically flush any remaining characters from the VISA receive buffer before sending a new command, to help ensure that when we go to wait for the response, it isn't a stale one left over from a previous command that we never read out of the VISA buffer.
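     In text-language terms the pattern looks roughly like this (Python with PyVISA rather than LabVIEW; the resource string is a placeholder, and using clear() as the flush is an assumption - some prefer to read out any leftover bytes explicitly):

         import pyvisa

         rm = pyvisa.ResourceManager()
         inst = rm.open_resource("ASRL1::INSTR")  # placeholder address

         def ask(command):
             """Flush anything stale, then do one clean command/response exchange."""
             inst.clear()                # discard any unread response from earlier
             return inst.query(command)  # send the command and wait for its reply

         print(ask("*IDN?"))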
  10. My thoughts, for what they're worth... The default ini file that LabVIEW creates doesn't really cause any harm; it's a fundamental component of deployment and is required in many instances, which is why the application builder interface includes a configuration pane for it. I've historically used it as a means of conveying blink rate, blink colors, font selections, and PostScript printing options. As its size is so small, I don't really understand why it would be a bother. If you haven't used the file before, you might have a look at NI's knowledgebase; typically what you might do, in an advanced context, is copy applicable entries out of your LabVIEW.ini file (in the main LabVIEW install directory).
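     As an illustration only (the token name and value below are not authoritative - take the actual entries from your own LabVIEW.ini as described above), a built application's ini file might carry just a handful of lines, for example a font override:

         [MyApplication]
         appFont="Tahoma" 13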
  11. QUOTE (Mark Yedinak @ Apr 14 2009, 02:33 PM) Yes, this was the compatibility issue that I experienced in the past and that's prompting my question. At the time, we had multiple standalone applications accessing text files encoded with flattened string data. I forget just which direction the compatibility wouldn't work - forward or backward. The newer application was in 8.2.0 or 8.2.1; I don't recall how old the prior version was. Flatten to string had been used so that data generated by a data collection tool could be transferred to another standalone tool for analysis without having to expend any effort on structuring the data in any specific format, mainly due to the complexity of the content and the lack of any need to interchange it other than LabVIEW to LabVIEW. In regards to database performance, say I've just collected 1000 points of data from a load cell. No timestamps or anything, just a sequence of points at some implied sample rate. All point data is to be stored in the database. Only LabVIEW will ever be used to read the points. Analysis will always use all 1000 points as a group. There is no need to ever query the data, e.g. find the maximum value using SQL. If I were to write the data to a table in the database as individual rows, it might require, say, thirty seconds or longer to complete the write (even when implemented as a parameterized query, as would be most appropriate in this case). Sure, it could be spawned as a background task so the user isn't held up, but in this case I've generally been either formatting the data into a binary/ASCII form of some custom syntax, or using the flatten to string functions. Either way, writing a single row with a large binary object is relatively quick from an OLEdb/SQL standpoint in contrast to individual rows. My question is whether anyone has any reservations about flattened data. Generally it's pretty compact, at least relative to representing the same data in ASCII.
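     To make the contrast concrete, here's a rough sketch (Python with SQLite standing in for whatever database sits behind the OLEdb layer; table and column names are made up) of the row-per-point layout versus a single row holding the whole array as one binary object:

         import sqlite3, struct, time

         points = [float(i) for i in range(1000)]  # stand-in for load-cell data
         conn = sqlite3.connect(":memory:")
         conn.execute("CREATE TABLE points (run_id INTEGER, idx INTEGER, value REAL)")
         conn.execute("CREATE TABLE runs (run_id INTEGER, data BLOB)")

         # Layout 1: one row per point, written with a parameterized query.
         t0 = time.time()
         conn.executemany("INSERT INTO points VALUES (1, ?, ?)", enumerate(points))
         conn.commit()
         print("row per point:", time.time() - t0)

         # Layout 2: the entire array packed into one binary object in one row.
         t0 = time.time()
         blob = struct.pack(">%dd" % len(points), *points)
         conn.execute("INSERT INTO runs VALUES (1, ?)", (blob,))
         conn.commit()
         print("single blob:", time.time() - t0)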
  12. I'm curious what the latest thinking is on using LabVIEW's flattened data for long-term data storage. Some years back, the data format changed such that some content would not read properly between different versions of LabVIEW without making some code adjustments. While I'm not a big fan of large collections of text files, such as those that are sometimes created with the flatten to string function, my question is more about storing data in a database in binary form, for situations where LabVIEW is the only conceivable application that will ever need to access the data. In many situations, breaking the data into non-binary components would degrade performance significantly relative to binary data. My only concern with binary is how compatible it will be going forward. Just wondering how others are handling this.
  13. The command is in the manual that was attached to a previous post...
     2.18.3 Max Hold
     Enables you to turn maximum hold trace feature on or off for the measurement. Maximum hold displays and holds the maximum responses of a signal.
     Key Path: Meas Setup
     State Saved: Saved in instrument state.
     Factory Preset: Off
     Remote Command:
     [:SENSe]:OBW:MAXHold OFF|ON|0|1
     [:SENSe]:OBW:MAXHold?
     Example:
     OBW:MAXH ON
     OBW:MAXH?
  14. It depends on the type of instrument and the software/hardware interface. What manufacturer and model number are you working with? What sort of bus is involved (RS232/RS485/GPIB/Ethernet, etc.)? My advice to new LabVIEW programmers is to fully understand the behavior of all of the commands that are going to be used with the instrument before writing any significant software. Many times you can use Windows HyperTerminal or NI's VISA Interactive Control to experiment with commands; otherwise, you can build some simple LabVIEW VIs to do the same thing. In general... Check to see that NI doesn't already have a LabVIEW driver for your instrument ("Tools>Instrumentation>Find Instrument Drivers"). You might also try contacting the manufacturer; many don't post their drivers directly to the NI site. Many times these drivers are a good starting point, even if they aren't complete or aren't built exactly for your needs. For message-based instruments, you might have a look at some of the examples that are installed with LabVIEW, for example the Agilent 34401 on the Instrument I/O function palette. Many people simply start with some of these objects by creating a modified version for their particular instrument. You might also take a look at the wizard - "Tools>Instrumentation>Create Instrument Driver Project" - in LabVIEW's pull-down menus. I'm not a huge fan of this method, but it works.
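     If HyperTerminal or VISA Interactive Control isn't handy, a few lines of text code are also enough to experiment with commands before writing any significant software (sketched here in Python with PyVISA rather than as a simple LabVIEW VI; the resource string is a made-up placeholder):

         import pyvisa

         rm = pyvisa.ResourceManager()
         print(rm.list_resources())   # see what VISA can find on this machine

         # Open whichever resource your instrument shows up as; this one is made up.
         inst = rm.open_resource("GPIB0::22::INSTR")
         inst.timeout = 5000          # milliseconds

         print(inst.query("*IDN?"))   # most message-based instruments answer this
         inst.close()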