Everything posted by kej

  1. Thanks hooovahh. Now that you posted that NI Forums page, I vaguely recall having seen it before. NI of course got the web help system working again today, but in the meantime I realized I can launch the .chm help separately from LV2021, so now if I get exasperated with the online version I can fall back on that (despite the admittedly small risk of functionality differences in newer versions of vi.lib).
  2. This is what happens when the fallback of reverting to offline help whenever the online help "can't be reached" also fails. At first I thought NI was missing the help for one specific obscure VI, but as you can see above, it's the entire help system.
  3. I resemble that remark. Best holiday gift so far this year. Thank you!
  4. Wow @LogMAN, that was a thorough reply, thank you! The black background/outline on the object icon is a key thing I was missing.

     You asked whether I write to class controls: no, in fact it has never occurred to me to do that, and I don't think I would ever want to. A long time ago I decided never to use property nodes for private data access; I will do a little research to see if I should reconsider that... Instead, within member VIs I always use bundle/unbundle, and outside the class I use getter/setter methods. I'm happy to make custom getters/setters if I need to access some arbitrary subset of the private data. When appropriate, my favorite idiom within the class is the Unbundle/Bundle Elements border nodes of the In Place Element Structure, since I write a lot of RT code where I need to preallocate memory and operate in place. For example:
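     (Sketched below in Java terms, since G doesn't paste into text; the class, field, and method names are hypothetical stand-ins for the actual diagram.)

```java
// Access-pattern sketch: private data is touched only inside the class;
// outside callers go through explicit accessors, including a custom
// "bulk" getter for an arbitrary subset of the private data.
public class Module {
    private double gain;             // class private data
    private double offset;
    private final double[] buffer;   // preallocated once, reused in place

    public Module(int bufferSize) {
        this.buffer = new double[bufferSize];  // preallocate for RT use
    }

    // custom accessor for a chosen subset of the private data,
    // instead of per-field property nodes
    public double[] getCalibration() {
        return new double[] { gain, offset };
    }

    // inside the class, modify the data in place rather than copying,
    // loosely analogous to the Unbundle/Bundle Elements border nodes
    // of an In Place Element Structure
    public void scaleInPlace(double factor) {
        for (int i = 0; i < buffer.length; i++) {
            buffer[i] = buffer[i] * factor + offset;
        }
    }
}
```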
  5. I've been burned by another issue involving class private data saved anywhere other than the class private data control definition itself. Specifically, I've had the App Builder fail to compile, or build .exe's that come up broken in the runtime, when I have class private data saved in a block-diagram (BD) constant or a front-panel control (FPC), and that data is in a previous structural form reliant on the class mutation history. I never intend to invoke the class mutation history; in fact, I'm not sure I knew it existed until I ran into the problem. But it's easy to trigger accidentally, and afaik there's no way to tell without building tooling outside the IDE, so this falls into the silent-but-deadly category of having no visibility into automated mutations.

     The easiest way to fix this issue is simply to delete all mutation history for all classes I'm using (since I have no backwards-compatibility needs). But that's yet another thing I have to remember to do or add to build automation, and it's philosophically wrong. (It works, so I do it...) That got me thinking about whether there might be a way to avoid the issue in the first place. I haven't figured one out, and in the process I realized I just don't understand how saved class default values are handled.

     Even if I'm going to explicitly initialize all class private data, the class itself has to be instantiated using a BD constant or an FPC, right? Before running into mutation history problems I used to just use a BD constant in some sort of constructor VI. Then I changed it to the following approach. This is the "Create" VI for a class (sorry about the embedded editorial; that's how I felt when I wrote it). Note that the class control doesn't appear on the conpane, so I'm using it like a constant to instantiate the object. But I'm not sure that this approach actually does anything. When I drop the class control on the front panel, presumably it gets the current default value of the private data control, but what happens after that? Is it ever updated automatically? Experience leads me to believe that if I never explicitly save a default value for the class control in this VI, any changes I make to the class private data structure will propagate to that control and the class mutation history won't be invoked. But maybe that's not true; I have only anecdotal evidence, so I'm asking here. If it is true, then the invoke node above is redundant. But even if it isn't redundant, maybe it initializes the private data to the previous version of the structure, still dependent on the mutation history. And if I somehow accidentally saved explicit default data on that control, I'm realizing now that the code above probably doesn't help me with the App Builder anyway, since presumably the builder knows nothing about the runtime effects of the invoke node.

     Unfortunately, even if I figure out how to bypass the mutation history at object creation, it turns out that the App Builder problem can be triggered by any saved default value on a class control anywhere in the code, even if that control is always on the conpane and required to be connected. After I knew about this issue, I still managed to trigger it in UI VIs where I was in the habit of setting a bunch of front panel controls and then saving them with "Make Current Values Default" from the Edit menu, saving the class control's default along with everything else. At that point nothing breaks; it's only several weeks later, when I change something in the class private data definition, that I "inexplicably" can't make .exe's anymore. With any luck I only end up spending a few hours tracking it down, facepalming, deleting the mutation history, telling the IDE to, yes, go ahead and throw away the incompatible saved default values, and all works again.

     So: when are class private data default values saved, and are they ever automatically updated? Is that documented somewhere and I just missed it? Is there a way to get around this issue other than deleting class mutation history?

     If deleting class mutation history is indeed the best approach, then perhaps it should be available as an explicit operation in the IDE. In my case, it might be even better if there were a global setting to not save class mutation history in the first place.
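     (For reference, the discipline I'm trying to approximate, sketched in Java terms; the class and field names are hypothetical. The point is that construction assigns every field explicitly, so whatever default happens to be saved with a constant or control should never matter at run time.)

```java
// "Create VI" analogy: the only sanctioned way to instantiate assigns
// every piece of private data explicitly, so a stale saved default can
// never leak into a running system.
public final class Chassis {
    private final String name;
    private final int slotCount;

    private Chassis(String name, int slotCount) {
        this.name = name;
        this.slotCount = slotCount;
    }

    // analogous to a Create VI that wires fresh values into every
    // element of the class private data
    public static Chassis create(String name, int slotCount) {
        return new Chassis(name, slotCount);
    }
}
```

     (Whether the equivalent discipline in G actually keeps the App Builder away from the mutation history is exactly what I can't tell.)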
  6. Yeah, I've been burned by that as well, but I do find that the feature can be extremely useful if you use it carefully. The absolute worst aspect of this issue (pre-LV2016) was that updating a typedef'd enum would sometimes silently revert a block-diagram constant to its default (or was it a random?) value, of course without breaking the VI, just its functionality. That would wreak havoc with any state machine, for instance, that used an enum to switch state cases. Thank goodness that's behind us.

     If I want to update a cluster or enum typedef post-LV2016, I very carefully observe the following rules:
     • Adding elements to clusters or items to enums is generally safe and can be done with abandon, especially if you don't enable default cases in enum-switched case structures (see the sketch at the end of this post).
     • Changing the name of a cluster element seems safe, as does reordering elements. Changing the item string of existing enum items also seems safe.
     • When deleting enum items, delete only one item, save and close the typedef, and then go clean up the ramifications in the codeset. Repeat the full process until you've deleted all the items you want. The algorithm doesn't seem to get confused by such a simple, non-compound change.
     • Never delete and add enum items in the same operation!

     Deleting cluster elements might also require the one-at-a-time approach used for enums, but I never use the values in cluster constants, even strict typedef'd ones; I always initialize in situ or make an init VI to handle that. So I never run into problems deleting and adding cluster elements in one operation.

     Heh, I came here to ask a different question about class constants and the mutation history, but couldn't resist responding to Michael first.
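     P.S. Regarding the "no default case" rule above: the rough text-language parallel, with a hypothetical Java enum, is an exhaustive switch expression. Add or remove an item and the code below stops compiling instead of silently misbehaving, which is the same protection you get by leaving the default case off an enum-driven case structure.

```java
public class StateDemo {
    enum State { IDLE, ACQUIRE, PROCESS, SHUTDOWN }  // hypothetical states

    // no default branch: adding or removing an enum item makes this
    // switch expression fail to compile, so nothing falls through silently
    static State next(State s) {
        return switch (s) {
            case IDLE     -> State.ACQUIRE;
            case ACQUIRE  -> State.PROCESS;
            case PROCESS  -> State.SHUTDOWN;
            case SHUTDOWN -> State.IDLE;
        };
    }

    public static void main(String[] args) {
        System.out.println(next(State.IDLE));  // prints ACQUIRE
    }
}
```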
  7. Hello everyone. First, I want to say a long overdue thank you! The resources, eavesdropped advice, and answers I've found here over the past 11 years have been immensely helpful. It's about time for me to stop lurking and join in.

     You will be shocked, shocked, to hear that I have had some big problems with the LabVIEW IDE, and especially the App Builder, for years. I'm about to rewrite a framework I originally developed to facilitate building distributed data-collection and control systems in the cRIO context (which implies that the code must be written and synchronized across FPGA, RT, and PC targets simultaneously). I'm facing some architectural decisions and I'd like your thoughts. I'll describe the existing architecture, then the kinds of problems I consistently contend with, then some proposed approaches for the new framework. Sorry, this is long, but I've condensed as much as I'm comfortable with (the post was originally about three times this length). I'm more than happy to provide extra detail on anything you'd like.

     Here's the architecture and style of the original framework, simplified for clarity. With few exceptions, all functionality that needs to exist on multiple platforms is written in a LV language subset that is supported on all the necessary platforms. Practically speaking, this means that for a unit of functionality or a definition of static state needed on (for example) FPGA, RT, and PC platforms, I write a VI that can compile on FPGA and use it directly in the RT and PC code as well. This shared-code approach is used mostly for static system definition and for lower-level communication protocol definitions and implementations. The FPGA code, for instance, instantiates class objects representing physical hardware modules connected to the FPGA and/or software compute modules running on the FPGA. The VI instantiating these module objects can then be run as-is in the RT or PC context, ensuring that static initialization is identical on all platforms. The definition of, and serialization code for, the message formats for these classes is written similarly.

     All code is written within a class hierarchy. Inheritance is used sparingly, shallowly, and exclusively (I think) for behavioral, rather than data, inheritance. Composition, on the other hand, is used extensively. My first OO exposure was a couple of years of Java development, so I was naturally biased towards the notion that "everything is a class." A system is a "rig" class object, which contains "chassis" class objects, which contain "module" class objects, etc. The "rig", "chassis", and "module" classes are written essentially abstract and are always instantiated as system-specific concrete children. (If interfaces had existed when I first coded this, I would of course have gone that route instead.)

     There's another category of class that functions more as a singleton bucket of generic functionality that gets initialized once and then provides some kind of service; the generic communications infrastructure is written that way. I happily stick identical copies of this kind of object, once initialized, all over the place as a private data member within lots of other objects. That way I have access to all the functionality and state of the singleton object, almost as if it were a parent class, without the craziness of actually implementing its functionality further up the inheritance tree.
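     (In rough Java terms, since that was my frame of reference anyway, the shape is something like this; all names are hypothetical stand-ins.)

```java
// Composition-heavy layout: a Rig contains Chassis, which contain Modules,
// and the initialized communications "singleton" is copied into whatever
// needs its services instead of living somewhere up the inheritance tree.
import java.util.List;

class Comms { /* generic communications infrastructure, initialized once */ }

abstract class Module {           // written essentially abstract;
    protected final Comms comms;  // instantiated as concrete children
    Module(Comms comms) { this.comms = comms; }
}

abstract class Chassis {
    protected final List<Module> modules;
    protected final Comms comms;  // the same initialized object, composed in
    Chassis(List<Module> modules, Comms comms) {
        this.modules = modules;
        this.comms = comms;
    }
}

abstract class Rig {
    protected final List<Chassis> chassis;
    Rig(List<Chassis> chassis) { this.chassis = chassis; }
}
```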
     Classes are used across platforms, so they necessarily include code that's compatible with only one platform. Since class libraries are loaded monolithically, this code gets loaded on all platforms, and I have to guard (or automatically "comment out") platform-specific code using conditional disable structures.

     I get around the class-locking problem with a bit of source code management trickery. The code lives in SVN in a single filesystem hierarchy, which includes a separate project for each target type, with the code loaded only under that target. Locally, I check out a separate clone of the source for each target type and program/build apps for that target only in that clone. So if I'm programming for FPGA, RT, and PC, I end up with three separate local copies of the whole codeset and use SVN to keep them synchronized. And of course all code is set to use the compiled object cache, which I generally clear (perhaps out of superstition) when I change target context.

     Here are the kinds of random showstopper problems I keep running into:
     • Random VI corruption, sometimes bad enough to crash LabVIEW at project/class load.
     • Apparent persistence of previous dependencies, even when no dependencies still exist in the current (visible) code.
     • Unloading a project doesn't unload the project contents (not a huge problem per se, but it indicates unaddressed latent corruption).
     • App Builder magically stops building, especially for RT.
     • App Builder successfully builds, but the compiled app fails to run, either on RT or PC.

     Interestingly, I've had the fewest problems with FPGA; that generally just works (meaning that when I have a compile problem, the issue is always a bug in my code that the compiler found). I've gradually gotten better at troubleshooting and fixing these problems, but until NI squashes more issues I need to adopt an approach that avoids them better.

     Over the years it has seemed like these problems are somehow caused by constantly loading classes and codesets in different target contexts (or in 64-bit vs. 32-bit LabVIEW). I've drawn that conclusion partly because if I stay in a single target context, things seem quite stable. So my working theory is that there's some fundamental problem with using classes containing cross-target code on multiple targets. As an aside, for most of the past year I haven't been doing any FPGA programming, yet just changing back and forth between PC and RT is problematic. (My sense is that most of the issues come from the RT tooling in LV; the FPGA stuff has always been quite solid for me. This is only my impression!) When working with just PC and RT, I am not using different SVN clones as described before, only separate projects for the different targets, so the VI paths from the IDE's perspective remain the same. But maybe there's some other issue I haven't found. I've played around with, for example, whether to keep typedefs within classes, and when to use strict vs. non-strict typedefs. I'm also painfully aware of the mutation history in classes, and since I'm not developing reusable libraries for distribution there's no downside to blowing away the mutation history, so I do that fairly often. But maybe not often enough? In theory none of this should matter, yet apparently it does. I'd appreciate guidance on anything else that might be "dangerous" given this description of my workflow.

     Potential new approaches: I'm OK with upgrading to the current LabVIEW version. Recently I ran into the "opening a single VI crashes LabVIEW" issue and was able to fix it by opening the VI in LV2022 and saving back for LV2021.
     (I hope this is evidence of increased prioritization of stability in LV development at NI...)

     The elephant-gun approach would be to dispense with classes altogether and instead fake some of their functionality with typedefs and auto-populating folders. I hate this idea, but it would limit the number of VIs that get loaded on multiple targets and simplify internal namespace considerations within the LabVIEW IDE (not to mention avoiding all of the logic around class member attributes, protections, etc.).

     Another approach, one with significant unrelated appeal for me, is to rebuild using interfaces instead of inheritance. In fact, I just watched AQ's 2022 NI Connect talk on interfaces, where he explicitly suggests replacing abstract classes with interfaces. So I'd like to do that anyway, but I really don't want to invest in that approach and still face instability because I'm using classes in different target contexts, assuming that's actually the root of my problems.

     Thanks for listening. Any advice appreciated. --kej
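     P.S. For concreteness, the move AQ describes, again in rough Java terms with hypothetical names: the essentially-abstract Module parent from the earlier sketch becomes an interface that system-specific classes implement, which frees up the single inheritance slot.

```java
// The contract becomes an interface rather than an abstract base class.
interface Module {
    void initialize();
    byte[] serializeMessage();
}

// A system-specific concrete implementation; it can still extend some
// other class, or implement several interfaces at once.
final class ThermocoupleModule implements Module {
    @Override public void initialize() { /* hardware-specific setup */ }
    @Override public byte[] serializeMessage() { return new byte[0]; }
}
```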