
David Boyd

Members
  • Content Count
    158
  • Joined
  • Last visited
  • Days Won
    1

David Boyd last won the day on April 22 2013

David Boyd had the most liked content!

Community Reputation

8

About David Boyd

  • Rank
    Very Active
  • Birthday 03/17/1958

Profile Information

  • Gender
    Male
  • Location
    Kennesaw, GA USA

LabVIEW Information

  • Version
    LabVIEW 2017
  • Since
    1997
  1. I might have a tactical advantage there... who would bother to load up on old LabVIEW versions just to look at my rookie LV4.0 code? (Now, where's that chart Scott Hannahs did that shows the last version that'll open 4.0...?) Actually, I have plenty of much newer code I'm ashamed of, so who am I kidding?
  2. I spent a little time this afternoon searching Info-LabVIEW ca. 2002, and you're absolutely right, there WAS a lot of confusion back then about how to apply the "new" paradigm effectively. IMO, more than any other feature added since I started using LabVIEW (4.0/4.1), the ES really reset the way I thought about LV programming architectures. There's a good, lengthy discussion pertaining to the ES in those Info-LV archives, BTW, especially a few excellent posts by Greg McKaskle describing why they made the design decisions they did. I'd recommend that anyone following this thread look back through that material. Dave
  3. OK, having heard from all my multiple-ES LAVA colleagues, I'm seriously in need of a reality check. AQ: do you recall any early caveats from NI (either in release notes, or help, tutorials, online discussion, etc.) that warned against the practice? I'm vaguely recalling there was an issue with the way ESes invoked some behind-the-scenes setup as soon as the VI was loaded into memory, well before user code started executing. Or maybe I was living in some alternate reality back in the 6.1 days? Dave
  4. Somewhere in the dawn of the ES (6.1, I think), while wrapping my head around this great new paradigm, I took it as a commandment that THOU SHALT HAVE NO MORE THAN ONE EVENT STRUCTURE PER DIAGRAM. I've frequently been appalled by the code of some of my coworkers who blithely put down 2, 3, or 4 ESes in separate loops. (Heck, I don't even like to have more than one ES in an entire execution hierarchy; maybe I'm carrying it too far?) So it wouldn't bother me. But I am curious, AQ: if you went ahead with this enforced limitation, what kind of upgrade mutation could possibly save such (IMHO, barely maintainable) code that's out there? Dave
  5. Just so I don't run afoul of Michael, a summary: I have been in touch with Jim by email. With his help I got set up with TortoiseSVN and have pulled down the OpenG string source, for starters. I plan to update Scan Variant into String per the code above (but not with the FXP support, just yet) and include some new vectors in the test harness for enums (at present there are none). Beyond this string fix, I'd like to add FXP to the known types in lvdata, but that will propagate through string and variantconfig and perhaps others. I suppose if Mads wants to take on the array package changes we can all get what we want in a pretty comprehensive set of new releases. And Jim told me he's cool with moving OpenG up to LV2011. It's good to know that OpenG still gets support! Dave
  6. Attached is the existing code for Scan Variant to String_ogtk and a proposed alternative/fix to the enum case.
  7. Michael: so noted, will do. I just created an account on SF (as respdave). I also noted in the OpenG buglist here that Jim McNally reported the enum issue in Scan Variant From String just a few months back. I'll try to post my proposed fix here shortly so others can evaluate it. Should I switch this to the 'Developers' forum at this point? Also: I've eliminated LV2009 from my work machine; my oldest installation is 2011. Do I need to back-save to 2009 for discussion/review purposes? Dave
  8. Here are my notes for modifying OpenG LabVIEW Data, String, and VariantConfig to support FXP. I did this originally to support FXPs in structures populated from INI files. I am not certain what other packages have lv_data as a dependency that might also be affected.
     _OpenG.lib\lvdata\lvdata.llb\Type Descriptor Enumeration__ogtk.ctl:
     - change value 0x5F to "FXP"
     - propagate type changes
     _OpenG.lib\lvdata\lvdata.llb\Get Data Name from TD__ogtk.vi:
     - add case "FXP" (as a dupe of case "I8".."CXT", "Boolean", "Variant")
     - change Pstring offset from 4 to 36
     _OpenG.lib\string\string.llb\Format Variant Into String__ogtk.vi:
     - add "FXP" to existing case "SGL".."EXT"
     _OpenG.lib\string\string.llb\Scan Variant from String__ogtk.vi:
     - add "FXP" to existing case "SGL".."EXT"
     _OpenG.lib\variantconfig\variantconfig.llb\Write Key (Variant)__ogtk.vi:
     - add "FXP" to existing case "SGL".."CXT", "SGL PQ".."CXT PQ"
     - make the same modification to the internal case structure under case "Array"
     _OpenG.lib\variantconfig\variantconfig.llb\Read Key (Variant)__ogtk.vi:
     - add "FXP" to existing case "SGL".."EXT"
     - under the "Array" case, add "FXP" to case "I8".."I32", "U8".."U32", "SGL".."EXT", "SGL PQ".."EXT PQ"
     - and under *that* case, add "FXP" to case "DBL", "DBL PQ"
     I have a separate change to Scan Variant From String__ogtk.vi, to fix the aforementioned problem with scanning into an enum. To match the behavior of the 'Scan From String' primitive, I find it necessary to do the following:
     1) Get Strings From Enum__ogtk
     2) sort the string array by string length (max to min)
     3) use Match First String (therefore, the maximal match)
     4) Set Enum String Value__ogtk with the string at the resultant index
     I'm not at all sure this is the most efficient method. But the existing code in Scan Variant From String__ogtk fails when matching enums with embedded whitespace, and may have other issues as well when an enum's strings share common initial character patterns. (A text sketch of this matching logic appears after the last post below.) I don't seem to be set up to get timely alerts from LAVA by email when someone posts a reply. If anyone wants to follow up directly, please drop me an email. I'll try to fix the alert thing in the meantime. Best regards, Dave
  9. So, if I created an ID on SourceForge, I could check in my updates, as long as I did the work in... LV2009? (Is that the backmost version currently still supported?) And then those in charge could accept those changes for a future release, or modify, or reject/rollback? Meantime, if I posted here the textual description of changes, it would garner some attention and hopefully provoke a discussion? Does that sound like the proper way forward? Thanks for the replies. Dave
  10. Not sure whether I should post here, or on the developers' forum... so here goes... I've used parts of the OpenG tools for a number of years, particularly the data tools, string, and variant config packages. Recently I've taken to modifying a few VIs and typedefs and carefully segregating out the modified bits. The specific modification I'd like to discuss was the inclusion of support within the variant and string routines for fixed-point datatype. (I have a concise list of the changes needed to support FXP.) So, first question: are any of the members of the OpenG developers' community planning to officially roll this in? I saw it brought up as a request (perhaps informally) over a year ago. Second question - if I have identified what I feel is a bug/incorrect behavior, to whom do I direct the description? I've found some inelegancies with how the Scan Variant from String VI behaves with enums, and have a proposed fix. Any takers? Thanks, Dave
  11. I've used Digi devices for years, with few issues. The biggest system to date has four Etherlite 160's (a total of 64 ports); sixty ports are tied to UUTs spewing 6400 char/sec each, all of which is digested by my LabVIEW application. The other four ports are used for instrumentation, doing query/response (a few dozen chars per message, as fast as the instrument responds). These terminal servers are set up to use the Digi RealPort driver, which provides the standard Windows comm API, so VISA treats them like local asynch serial ports. When you get up to this level of activity, with array-launched VI clones and lots of queue/notifier/event structure support, you have to be pretty careful about the details. Seemingly minor changes to cloned code, reentrancy settings, execution system assignment, etc. can mean the difference between a working app and a quivering heap of unresponsive code. Dave
  12. Thanks, Crystal. As one of the "Old Guard", you already (IMO) have a credential that surpasses anything a CLD certifies. Funny, though, perhaps we're not as tightly regulated as you, but we've never given a second thought here to migrating to newer releases when starting fresh projects. Sometimes I think we are still flying a bit under the radar. We focus more on qualification of test systems, rather than V&V of our LabVIEW-developed code apart from those systems. If we were to do more formal software verification, we'd surely need to add personnel to handle that workload. Anyway, I took the test today. Got an 85. Way lower than my previous CLD-Rs and the original CLD. Not proud, not pleased, just relieved. But I still feel as though too much of the test was of the form:
      13) Which of the following (blah, blah,...)?
      A. A twisty little maze of passages, all alike
      B. A maze of little twisty passages, all alike
      C. A twisty maze of little passages, all different
      D. A maze of twisting little passages, all different
      (With a nod to Colossal Cave, Zork, etc.) Dave
  13. I'm scheduled to take my CLD recert exam on Monday afternoon; this will be my third recert (took the CLD in Austin during NI Week 2004). As my LAVA listing shows, I've been using LabVIEW continuously for nearly thirteen years. I consider myself a pretty sharp guy (LV-wise), with a background in automated test, and the programming (in various environments) that goes with it, since the early eighties. My current employer didn't ask me to attain certification; I just did it on a lark. I forget the specific scores, but I know that the original CLD and two subsequent CLD-R's were all scored in the high 90s (I think one recert was a 100%). So I waited until today (Sunday afternoon, quiet around the house) to work through the practice exam. Ouch! Not a very pleasant experience, and when I scored it, not surprisingly, I barely scraped a pass. Random thoughts: I expect that with each passing recert, there will be a few questions added in that will demonstrate that I haven't adopted a new feature in my own development. And I know this has been discussed elsewhere on the forums. I'm OK with not knowing every new feature intimately. (Network Shared Variables; just haven't needed 'em, though I remember when they were DataSockets, before they grew up. Feedback nodes still confuse me with their syntax; I so prefer shift registers. And sorry Stephen, but I just haven't found a programming challenge yet that whispers "LVOOP, LVOOP" in my ear. Someday I'm sure I will make the leap.) Some of the example code snippets seem SO convoluted in their purpose, I can't help but wonder - am I being evaluated on how well I can troubleshoot some newbie-LV-minion's code? (I don't have any newbie-LV-minions at my command at my place of work, for better or worse.) Guess I'll read some of the dustier corners of the online LV help on Monday, if I can spare the time. Wish me luck; never thought I'd need it. Dave
  14. Is Windows Server 2003 a supported target for LabVIEW 2009? I know that it specifically WAS NOT under prior versions, though over the years there have been reports of folks running LV apps on WS2003. I'm just pointing out that you may not be able to get support from NI for any issues you have. Whether this is an issue for you, I couldn't say. Along other lines, have you checked the Windows event log on the target? Perhaps there is an application error event getting recorded which could suggest what's going on. Good luck! Dave
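
A note on the enum fix described in post 8 above: the "maximal match" scan lends itself to a short text sketch. The snippet below is Python, used purely for illustration; the actual fix is LabVIEW code built from the OpenG VIs named in that post (Get Strings From Enum__ogtk, Match First String, Set Enum String Value__ogtk), and the function name and example enum items here are hypothetical.

    # Illustrative sketch (not the LabVIEW implementation) of the longest-first
    # matching from post 8: sort the enum strings by length, longest first, and
    # take the first one that matches the start of the input text, so items with
    # embedded whitespace or shared prefixes resolve to the intended enum value.
    def scan_enum(text, enum_strings):
        candidates = sorted(enumerate(enum_strings),
                            key=lambda pair: len(pair[1]),
                            reverse=True)
        for index, name in candidates:
            if text.startswith(name):
                return index  # first hit in longest-first order is the maximal match
        raise ValueError("no enum item matches %r" % text)

    # Example with a shared prefix and embedded whitespace: a naive first-match
    # over the declared order would stop at "Run"; longest-first returns index 1.
    items = ["Run", "Run Once", "Run Continuously"]
    print(scan_enum("Run Once", items))   # prints 1

The design point is simply that sorting longest-first turns "first match" into "longest match", which is what the post says is needed to mirror the behavior of the Scan From String primitive for enums.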