
David Boyd

Members
  • Posts: 165
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by David Boyd

  1. The fullscreen point I get, sort of; coupled with a touchscreen, the corner "X" may not be the easiest to access. The target hardware will have a touchscreen monitor, although we'll undoubtedly leave them the mouse. And while they might fullscreen the main app window, by no means do they need to. For dialogs, I would generally offer a "Cancel" button and an "OK" (or "Submit" or "Proceed", whatever seemed appropriate labeling), and map the keyboard Enter and Escape keys.
  2. And I was errant in my description above, as you've both uncovered. On the original application, someone dropped the canonical rectangular pushbutton that comes with red "STOP" text, but they did actually change the text to "EXIT". My original point remains, though - what other desktop (non-LabVIEW) application features a button as the way to dismiss the application? For years I've just trapped window close attempts and used those to start the decision process of whether it's proper/safe/etc. to exit, then handled the app shutdown gracefully (hardware to a known state, file/db management, etc.). (A rough non-LabVIEW sketch of that close-trapping pattern appears after this list.) Dave
  3. I've been working on a total ground-up rewrite of a production test application that was crafted in LabVIEW 7.1 in the early-to-mid 2000s. I got approval to do this since I really didn't want to glom the real required changes (mostly about hardware evolution) onto an aging software architecture and an (IMO) *way* overbusy UI. I'm talking multiple layers of tabs-within-tabs-within-tabs. I've worked to make a still-pretty-busy UI a lot flatter and simpler. I'm getting tweaked in validation reviews because I declined to include an explicit "stop" button to close down the application. (The application typically runs for multiple hours.) No matter how I try to explain that no other desktop application has a "stop" button (I just intelligently handle shutdown via the filtered window close attempt), I'm being told, "NO, it NEEDS a STOP button". Of course I'll give my manager what he wants, but I don't have to like it. OK, let's hear the sage advice and similar tales come rolling in... Dave
  4. Reading twenty-plus-year-old articles like that really starts to make me feel ancient. I was about to reply with the obligatory grumble about preferred-case spelling ("it's LabVIEW!"), and instead looked up the author. Found out he passed away just before Macworld Expo 2007. Thanks for locating and posting this. Dave
  5. I still have my "Power to Make It Simple" tee shirt that I "won" at the end of my three-day Basics I class (I think it was fall of '97... does that seem right?). The black is pretty faded. I was excited to upgrade from 4.1 and try out the miraculous "undo". And real multithreading (under NT 4.0)...
  6. Thanks, @hooovahh, for pointing me to those older discussions, which I probably totally missed. @drjdpowell's comment about a clone's reference having guaranteed validity when passed to its subVIs doesn't seem to apply to my/@Neil Pate's use case - we're sending a clone VI ref to another VI via messaging, so there's no telling when the original VI ref might go out of scope. And I just realized that my demo code for launching off clones (and then later gathering their refs for subpanel use) explicitly closes the original VI ref after they're launched. There is still a static ref on the caller's BD, though - I think there has to be, since you need a strictly typed ref to make the ACBR work at all. My demo code is below. Dave
  7. I was browsing through class code from the Actor Framework (ashamed to say I haven't used this framework yet, but that's changing), and stumbled across what appears to be a dire warning. (See the attached, or if you'd rather, read the BD comment here.) I've used an architecture for years now where I launch N clones of a VI using the ACBR in fire-and-forget mode, and subsequently the clones get a VI ref to themselves and register that (with an assigned index) by message back to a GUI VI. The GUI then allows the user to switch through the clones' FPs to be shown in a subpanel. (A rough non-LabVIEW sketch of this registration pattern appears after this list.) I've never had an issue traceable to this. Note that in my code, the clone refnums are NOT obtained by an Open VI Reference with a clone name string (as shown) - they are implicitly obtained within the clones via a VI class property node. Does this warning imply that this architecture is somehow unsafe? I'm hearing AQ's authorship in my head when I read this warning. @Aristos Queue, are you listening? Can you comment? (Apart from chastisement for my only now learning about the AF - sorry.) Dave
  8. I might have a tactical advantage there... who would bother to load up on old LabVIEW versions just to look at my rookie LV4.0 code? (Now, where's that chart Scott Hannahs did that shows the last version that'll open 4.0...?) Actually, I have plenty of much newer code I'm ashamed of, so who am I kidding?
  9. I spent a little time this afternoon searching Info-LabVIEW ca. 2002, and you're absolutely right, there WAS a lot of confusion back then about how to apply the "new" paradigm effectively. IMO, more than any other feature added since I started using LabVIEW (4.0/4.1), the ES really reset the way I thought about LV programming architectures. There's good lengthy discussion pertaining to the ES in those Info-LV archives, BTW, especially a few excellent posts by Greg McKaskle describing how they made the design decisions the way they did. I'd recommend looking back through that material to anyone following this thread. Dave
  10. OK, having heard from all my multiple-ES LAVA colleagues, I'm seriously in need of a reality check. AQ: do you recall any early caveats from NI (either in release notes, or help, tutorials, online discussion, etc.) that warned against the practice? I'm vaguely recalling there was an issue with the way ESes invoked some behind-the-scenes setup as soon as the VI was loaded into memory, well before user code started executing. Or maybe I was living in some alternate reality back in the 6.1 days? Dave
  11. Somewhere in the dawn of the ES (6.1, I think), while wrapping my head around this great new paradigm, I took it as a commandment that THOU SHALT HAVE NO MORE THAN ONE EVENT STRUCTURE PER DIAGRAM. I've frequently been appalled by the code of some of my coworkers who blithely put down 2, 3, or 4 ESes in separate loops. (Heck, I don't even like to have more than one ES in an entire execution hierarchy - maybe I'm carrying it too far?) So it wouldn't bother me. But I am curious, AQ: if you went ahead with this enforced limitation, what kind of upgrade mutation could possibly save such (IMHO, barely maintainable) code that's out there? Dave
  12. I've used Digi devices for years, with few issues. The biggest system to date has four EtherLite 160s (a total of 64 ports); sixty ports are tied to UUTs spewing 6400 char/sec each, all of which is digested by my LabVIEW application. The other four ports are used for instrumentation, doing query/response (a few dozen chars per message, as fast as the instrument responds). These terminal servers are set up to use the Digi RealPort driver, which provides the standard Windows comm API, so VISA treats them like local async serial ports. When you get up to this level of activity, with array-launched VI clones and lots of queue/notifier/event structure support, you have to be pretty careful about the details. Seemingly minor changes to cloned code, reentrancy settings, execution system assignment, etc. can mean the difference between a working app and a quivering heap of unresponsive code. Dave
  13. Thanks, Crystal. As one of the "Old Guard", you already (IMO) have a credential that surpasses anything a CLD certifies. Funny, though, perhaps we're not as tightly regulated as you, but we've never given a second thought here to migrating to newer releases when starting fresh projects. Sometimes I think we are still flying a bit under the radar. We focus more on qualification of test systems, rather than V&V of our LabVIEW-developed code apart from those systems. If we were to do more formal software verification, we'd surely need to add personnel to handle that workload. Anyway, I took the test today. Got an 85. Way lower than my previous CLD-Rs and the original CLD. Not proud, not pleased, just relieved. But I still feel as though too much of the test was of the form: 13) Which of the following (blah, blah,...)? A. A twisty little maze of passages, all alike B. A maze of little twisty passages, all alike C. A twisty maze of little passages, all different D. A maze of twisting little passages, all different (With a nod to Colossal Cave, Zork, etc.) Dave
  14. I'm scheduled to take my CLD recert exam on Monday afternoon; this will be my third recert (took the CLD in Austin during NI Week 2004). As my LAVA listing shows, I've been using LabVIEW continuously for nearly thirteen years. I consider myself a pretty sharp guy (LV-wise), with a background in automated test, and the programming (in various environments) that goes with it, since the early eighties. My current employer didn't ask me to attain certification, I just did it on a lark. I forget the specific scores, but I know that the original CLD and two subsequent CLD-Rs all scored in the high 90s (I think one recert was 100%). So I waited until today (Sunday afternoon, quiet around the house) to work through the practice exam. Ouch! Not a very pleasant experience, and when I scored it, not surprisingly, I barely scraped a pass. Random thoughts: I expect that with each passing recert, there will be a few questions added in that will demonstrate that I haven't adopted a new feature in my own development. And I know this has been discussed elsewhere on the forums. I'm OK with not knowing every new feature intimately. (Network Shared Variables; just haven't needed 'em, though I remember when they were DataSockets, before they grew up. Feedback nodes still confuse me with their syntax, I so prefer shift registers. And sorry Stephen, but I just haven't found a programming challenge yet that whispers "LVOOP, LVOOP" in my ear. Someday I'm sure I will make the leap.) Some of the example code snippets seem SO convoluted in their purpose, I can't help but wonder - am I being evaluated on how well I can troubleshoot some newbie-LV-minion's code? (I don't have any newbie-LV-minions at my command at my place of work, for better or worse.) Guess I'll read some of the dustier corners of the online LV help on Monday, if I can spare the time. Wish me luck - never thought I'd need it. Dave
  15. Is Windows Server 2003 a supported target for LabVIEW 2009? I know that it specifically WAS NOT under prior versions, though over the years there have been reports of folks running LV apps on WS2003. I'm just asking to point out that you may not be able to get support from NI for any issues you have. Whether this matters for you, I couldn't say. Along other lines, have you checked the Windows event log on the target? Perhaps there is an application error event getting recorded which could suggest what's going on. Good luck! Dave
  16. I've been bitten by my misunderstanding of custom scaling and min/max definitions too. Classic example: I have a 0-15 psi pressure transmitter on a 4-20 mA current loop. I create a custom scale thusly: prescaled units in amps, scaled units in psi, slope 937.5, intercept -3.75. (The arithmetic is worked through in the sketch after this list.) I specify this custom scale while defining a virtual channel that uses an NI current input, and am tempted to specify the channel min and max as 0-15 psi. Bad choice on my part! While trying to perform a software cal on a sensor, if the raw current reading is 3.7 mA, the scaled value is pinned to zero psi even before I apply software correction factors. Perhaps this isn't really particular to the custom scaling, but I would still somehow expect the scaled channel to return all values within the chosen range of the underlying hardware. Having the max and min values specified in terms of the scaled value is just confusing here - what I SHOULD have done was to specify -3.75 psi as the min value and +16.41 psi as the max, since the hardware will choose an input range that can actually report 0 mA to +21.5 mA. If I DON'T do this, and go with my original impulse, my scaled readings will never appear outside their valid ranges even if the hardware disconnects (goes to 0 mA) or otherwise "goes south". Dave
  17. QUOTE (LV_FPGA_SE @ Jan 21 2009, 08:02 PM) Geez, Christian, that was priceless. So simple an idea, it just took one clever person to conceive of it and create the website. I didn't piss myself, but I narrowly avoided blowing coffee across my laptop keyboard. Thanks for this. Dave
  18. QUOTE (Michael_Aivaliotis @ Nov 17 2008, 02:58 PM) Any updates, Michael? Just wondering. Wasn't there an issue in the past with some key from Google that went invalid with a server move/domain change? Dave
  19. QUOTE (miab2234 @ Dec 7 2008, 12:06 PM) Well, the Match Pattern you added back in is still not doing anything for you, and is still not needed in the example. See how the string enters the left border of the While Loop? As a solid little rectangle? That is a simple tunnel. This means that the entire input string is on the wire on the inside, unchanging through all the loop iterations. Your Match Pattern looks for a semicolon character, and since there isn't one (in your revised sample data), it passes the entire string out of its 'Before Match' terminal. Every iteration, unchanged. The Scan From String node has a format string input which says it should find two floating-point numbers - those are the %f specifiers. The leading format specifier, the %,; token, is a special one that says the decimal radix character is a comma. My LabVIEW needs this since in my locale (USA) the radix is a decimal point, and your input strings 'look' European. You may not need this specifier. (This would be a good time to update your LAVA profile to proudly declare your nationality. It can be helpful to others, as in this example. Plus, it's always interesting to see.) So, the Scan From String finds two floating-point numbers separated by any amount of non-numeric characters - maybe a semicolon, maybe just whitespace; it works either way. Now, here's the magic part: the Scan From String also returns an integer which says how far into the input string it had to look. We pass that around through a shift register for the next iteration. This tells the Scan From String to start looking for the next two numbers that far into the original string, skipping over the ones it has already converted. It's much more efficient to do it this way than to keep breaking up the original string, which moves the (potentially huge) string around in memory. (A rough text-language sketch of this offset-carrying approach appears after this list.) You got the array size, though not in the way I would have recommended. The iteration terminal starts at zero, so the last iteration number is one less than the count of iterations - it's important to understand the distinction. You 'got lucky' because the last iteration in this example, by definition, is a failure - we exit the loop after we fail to parse any more numbers. That's what the Delete From Array node does at the end - it throws away the last, invalid array element. I would have recommended you use the Array Size node on the final output. Neat, tidy, and clear (to a LabVIEW programmer) what the intent is. I still think you have your notion of X and Y reversed when your dataset is plotted. The data suggests a function to me, which implies only one Y value per X. Your plot would not describe a function. Finally, and no offense intended, please don't PM me about examples. Use the forums for this. It allows others to follow the dialog, so you'll have more chances to get good help, in case I'm too busy to respond. Best of luck with your LabVIEW learning! (It can be both fun and profitable!) Dave
  20. I've attached a modified version of your VI. It's a little simpler than what you had. Look on the block diagram for a few notes. Best of luck, Dave QUOTE (miab2234 @ Dec 4 2008, 10:38 PM)
  21. Tried the member map recently and couldn't get it to work. The page loads OK and the map frame draws, but no contents (neither map nor pins). This was using IE7 on my work laptop, both behind the corporate firewall and from home. I've also tried from my home PC, using IE7 and Firefox 3.03, all with the same results. Is this a known issue? Dave
  22. QUOTE (jdunham @ Nov 11 2008, 08:10 PM) Let me add a "me, too!" reply to this. Like others who have responded, I've written VISA serial code for countless devices (sometimes it's an instrument, sometimes it's the UUT) where single-character termination simply does not apply. Invariably I end up with some sort of looping, scooping, bytes-at-port/shift-register/string-appending/match-pattern/split-string... ...you get the idea. All because the reply from the device has a two-char termination sequence, or some DLE-escaped format, or the checksum follows the term sequence, or the message starts with a length byte, or they use a CRC16, or any one of a dozen variations on the theme. (A rough text-language sketch of that read-and-scan loop appears after this list.) I've long wished for a VISA extreme makeover that would include advanced pattern matching built into the API. I don't have a clear vision for exactly how it would work, but I trust that those clever folks at NI would come up with something very useful... Until then, I just have fun writing drivers for the stuff nobody else wants to take on. Which isn't so bad. Dave
  23. QUOTE (Minh Pham @ Nov 10 2008, 07:55 PM) I recall that the statement in B is NOT specifically true. While loops DO NOT resize output arrays after every iteration. IIRC there is some intelligence to the algorithm which allocates array elements - I think it allocates as needed by powers of 2 over some range - and when the loop exits, the LV memory manager does a final resize as needed to return unused elements to the heap. My recollection could be: A) Rather dated; B) Outright faulty; C) Both A and B; D) Spot-on correct. Dave
  24. QUOTE (Sparky @ Sep 30 2008, 03:13 PM) I *must* be missing the point of both your original question and the responses you've received so far. I don't see why you need a reference to the picture control in the subVI to change the contents of the picture on the caller's front panel. Can you not simply pass the subVI the existing picture data and pass the modified picture data back from the subVI, to the indicator on the calling VI FP? Unless you need to change an attribute of the picture, as opposed to its data, the reference passing just circumvents dataflow. Perhaps if we asked the same question with "numeric indicator" substituting for "picture control", my objection would be more obvious. Best regards, Dave P.S. to Paul: all I found in your zip attachment was an .lvproj file - no VIs. Also, you might want to update your profile to show you're posting LV8.6.
  25. You might try creating an event handler, then in a single event case for the control, registering at least key down/up/repeat, plus value change (of course), perhaps a few others... Then, in the event case, read the numeric's "Numeric Text->Text" property. I would take this text into a "Scan From String" node, plus probably a "Match Pattern" as well, and check both the scanned value against your limits and whether the Match Pattern finds non-numeric cruft. Not exactly easy, but from this you might be able to get an as-they-type peek at what the user is doing, and recolor the numeric's background in response, even before the control is validated. (A rough text-language sketch of the idea appears after this list.) If that's really what you need to do... Just my 2 cents' worth. Dave
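
Regarding the window-close trapping mentioned in posts 2 and 3 above: since a LabVIEW block diagram can't be shown as text, here is a minimal Python/tkinter sketch of the analogous pattern - intercept the title-bar "X", decide whether shutdown is safe, and only then tear things down. The dialog text and cleanup steps are placeholders, not the original application's logic.

```python
import tkinter as tk
from tkinter import messagebox

def on_close_requested():
    # Decide whether it's proper/safe to exit (hardware state, open files/DB, ...).
    if messagebox.askokcancel("Exit", "A test run is still active. Really exit?"):
        # Put hardware in a known state, flush files/DB, then close the window.
        root.destroy()
    # Otherwise the close request is simply discarded and the app keeps running,
    # much like discarding LabVIEW's "Panel Close?" filter event.

root = tk.Tk()
root.protocol("WM_DELETE_WINDOW", on_close_requested)  # trap the window-close attempt
root.mainloop()
```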
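
For the clone-registration architecture described in posts 6 and 7 above, here is a hypothetical sketch using Python threads and a queue in place of reentrant VI clones and LabVIEW messaging; the names and the worker body are made up, but the shape is the same - each worker sends a reference to itself plus an assigned index back to the "GUI", which gathers them for later use.

```python
import threading
import queue
import time

registry_q = queue.Queue()            # stands in for the message channel back to the GUI VI

def worker(index):
    me = threading.current_thread()   # analogous to a clone obtaining a ref to itself
    registry_q.put((index, me))       # register (assigned index, reference) with the GUI
    time.sleep(0.1)                   # ...the clone's real work would go here...

workers = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for w in workers:
    w.start()                         # "fire and forget" launch of N clones

registry = {}
for _ in workers:
    idx, ref = registry_q.get()       # the GUI gathers the registrations as they arrive
    registry[idx] = ref               # later used to switch a subpanel to the chosen clone

for w in workers:
    w.join()
```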
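
The slope and intercept quoted in post 16 above fall out of straight-line scaling; this little Python check (my own arithmetic, not DAQmx code) also shows why -3.75 psi and about +16.41 psi are the sensible channel min/max when the hardware range actually spans roughly 0 to 21.5 mA.

```python
span_psi = 15.0                       # 0..15 psi transmitter
i_lo, i_hi = 0.004, 0.020             # 4..20 mA loop current, in amps

slope = span_psi / (i_hi - i_lo)      # 937.5 psi per amp
intercept = -i_lo * slope             # -3.75 psi, so 4 mA reads as 0 psi

# Express the channel min/max in scaled units over what the hardware can report:
hw_lo, hw_hi = 0.0, 0.0215            # roughly 0..21.5 mA
print(slope * hw_lo + intercept)      # -3.75 psi  -> reasonable channel minimum
print(slope * hw_hi + intercept)      # ~16.41 psi -> reasonable channel maximum
```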
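
For the offset-carrying parse loop explained in post 19 above, here's a rough Python equivalent (the sample data and regular expression are my own invention): keep pulling pairs of comma-radix numbers out of one big string, carrying the parse position forward rather than repeatedly slicing the string.

```python
import re

data = "1,5;2,25  3,0;4,75  5,5;6,0"    # made-up sample, European comma radix
pair_re = re.compile(r"([-+]?\d+(?:,\d+)?)[;\s]+([-+]?\d+(?:,\d+)?)")

pairs = []
offset = 0                              # plays the role of the shift register
while True:
    m = pair_re.search(data, offset)    # like wiring an initial offset into Scan From String
    if not m:
        break                           # parse failure ends the loop; nothing bogus to delete
    x_txt, y_txt = m.group(1), m.group(2)
    pairs.append((float(x_txt.replace(",", ".")),   # comma radix -> Python float
                  float(y_txt.replace(",", "."))))
    offset = m.end()                    # "offset past match", used on the next iteration

print(len(pairs), pairs)                # the array size comes straight from the result
```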
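
For the "loop, scoop bytes, append, match pattern" style of serial read in post 22 above, here's a hedged sketch assuming the third-party pyserial package; the framing (a two-byte CR/LF terminator followed by a single checksum byte), the command string, and the port settings are illustrative, not any particular device's protocol.

```python
import serial  # pyserial

TERM = b"\r\n"   # two-character termination sequence, with a checksum byte after it

def read_reply(port: serial.Serial) -> tuple[bytes, int]:
    buf = bytearray()
    while True:
        chunk = port.read(port.in_waiting or 1)   # scoop whatever has arrived
        if not chunk:
            raise TimeoutError("no reply before the port timeout")
        buf += chunk
        idx = buf.find(TERM)
        if idx >= 0 and len(buf) > idx + len(TERM):
            break                                  # terminator seen, plus the trailing checksum
    body = bytes(buf[:idx])
    checksum = buf[idx + len(TERM)]
    return body, checksum                          # checksum verification left to the caller

# Usage sketch:
# with serial.Serial("COM5", 115200, timeout=1) as port:
#     port.write(b"MEAS?\r\n")
#     print(read_reply(port))
```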
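
And for the keystroke-by-keystroke check sketched in post 25 above, here's a rough Python/tkinter analogue (the limits and colors are arbitrary): on every key release, scan the entry's current text, and recolor the background if it contains non-numeric cruft or an out-of-range value, before the value is ever committed.

```python
import tkinter as tk

LO, HI = 0.0, 100.0                 # assumed limits

def check_entry(event=None):
    text = entry.get()              # like reading the "Numeric Text->Text" property
    try:
        ok = LO <= float(text) <= HI
    except ValueError:
        ok = False                  # non-numeric cruft in the string
    entry.configure(background="white" if ok else "misty rose")

root = tk.Tk()
entry = tk.Entry(root, background="white")
entry.pack(padx=10, pady=10)
entry.bind("<KeyRelease>", check_entry)   # fires as the user types, before any "commit"
root.mainloop()
```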