Neil Pate Posted July 18, 2014

Hi all, I have had some really weird behaviour in my app over the last few days. The problem seems to be related to several others mentioned on LAVA where the contents of a class are not being correctly updated via a bundle.

My current problem goes something like this: I have a sub-VI where I update some members of a class, and then the class is used downstream in the calling VI. If I probe the class wire directly before the exit indicator in the sub-VI, the contents are correct, but if I probe it downstream (immediately after returning from the sub-VI) one of the member elements is not set correctly! This morning I had another issue with a similar class apparently not updating after a bundle operation, even though probing showed all the values to be correct.

This issue is very, very annoying as I now really do use classes quite a bit in my code, and this is such a terrible problem to debug as you can no longer believe what the probes are telling you. Any ideas? Next step for me is a mass compile or to clean out the object cache.

Edit: cleaning the object cache did not help.
Darin Posted July 19, 2014

I usually start spewing Always Copy nodes around until it works. I have seen cases where aliasing causes problems like this and worse. I have seen dataflow go bad when a buffer is improperly reused, leading to downstream code running early with old values. Ouch.
Neil Pate Posted July 19, 2014

Yeah, did try an Always Copy, but it did not help. I will try putting them in other places like you suggest.
JackDunaway Posted July 19, 2014

Neil, I've found this procedure to fix this problem, and a host of other issues with source/object-code desynchronization:

1. Ensure you have no outstanding changes in your working copy; everything is committed to the repo.
2. For multi-developer teams, ensure you effectively have a mutex on the entire repo; you're likely about to make a widespread commit that touches lots of source. It's especially important to have no outstanding changes on .lvclass, .ctl, .xctl, .lvlib, .xnode, or .lvproj files, because these cannot be readily diffed or merged in LabVIEW. (Perhaps a good time to do this is on the weekend, when all team members know to have changes committed before then.)
3. Shut down LabVIEW completely.
4. Open all classes in an XML editor (on Windows, Notepad++ is good; associate lvclass files as XML in the Style Configurator).
5. Find the <Property> tag for "NI.LVClass.Geneology". Delete this entire tag. This removes any stale mutation history. **Careful here!** The entire tag is likely to span 9 lines; be sure to delete them all. (I've not yet found or developed a reliable API for this operation; lemme know if one exists. A rough scripted sketch of steps 5 and 6 appears at the end of this post.)
6. Find the <Property> tag for "NI.Lib.Version" -- reset this to the value 1.0.0.0.
7. Open LabVIEW to the Getting Started window; "Tools >> Advanced >> Clear Compiled Object Cache ...", then clear the cache for both User and App Builder.
8. Shut down LabVIEW completely.
9. Open the project and offending VIs to see if this fixes things. Maybe so, but you're not done yet.
10. If you have not done this yet, ensure every single source file is marked as source-only (i.e., not a unifile):
    - Right-click the root node of the project, "Properties" menu item, "Project" config page.
    - Ensure "Separate Compiled Code from new project items" is checked.
    - "Mark Existing Items..." button, select all items in the tree, then "Mark Selected Items". Wait for the operation to complete, then close this dialog.
    - Open this dialog once more; visually scan the entire list, ensuring all statuses are "marked".
    - Right-click on the root node and "Find Missing Items". Resolve these bad linkages, probably by just deleting them from the project.
11. If you have not done this yet, find all Diagram Disable Structures in your project (this is simple if all VIs are in LVClasses -- since all VIs are in memory -- or with the legacy VI Tree hack). One way or another, ensure all VIs are in memory, find all disable structures, and analyze the disabled cases for links to missing VIs. Do that again for Conditional Disable Structures.
12. Mass compile the project (from the right-click context menu on the root node of the project tree). Might as well take the opportunity to fix all the errors that are returned by the mass compile. If you have insanities, search the phrase "labview heap peek insane" for instructions on how to fix these. If you can't get the keyboard commands for Heap Peek to work (e.g., if you're developing in a VM), there exists an undocumented VI Server App method to toggle Heap Peek visibility.
13. Shut down LabVIEW completely.
14. Goto Step 7; rinse and repeat until zero errors are returned.
15. Ensure the application still runs from source; i.e., you've not accidentally broken something. If this smoke test passes, commit all changes to your local repo. This is your first checkpoint.
16. Goto Step 7, and follow all steps again back to here.
17. If some source files have been modified (i.e., there exist modifications in your working copy), you will probably benefit from the cross-link and circular-link tips below -- especially if, during the mass compile, you see the dialog repeating what appears to be the exact same operation over, and over, and over, and over, and over ... Try implementing the tips below, then Goto Step 7.
18. Goto Step 3, and follow all steps again back to here. You should be able to complete this entire process with zero modifications to the codebase. Commit changes to local repo.
19. Goto Step 3, and follow all steps again back to here. If you're happy, push to the central repository.
20. Have all developers/contributors clear their compiled object caches. No really; witness that operation. It's vital.
21. Have all developers pull the project. Ensure no corruptions exist; ensure the project can be opened and closed immediately with no "dirty dots". If these dots exist, your environments are somehow different (different versions of LabVIEW?), and should probably be resolved.

I've tried to codify this procedure accurately above, but drop a line if it doesn't work, or correct it with a response to this thread.

Tips for the future to avoid these problems (from extensive at-scale experience with LVOOP spanning LV2009 to 2013SP1f2+):

- Refactor to avoid all typedef'd clusters in Class Private Data definitions. If they must exist, ensure that if they ever recompile, all LVClasses linking to the type are also saved to disk, especially if it's just one of those 'recompiles' that we've conditioned ourselves to ignore. One overlooked place that typedef'd clusters can hide is as sub-types of some native LabVIEW objects, such as Queues, Events, etc.
- Avoid type-defining primitive types such as strings/numerics/bools. (You: "but..." Me: "I know, me too. But it is what it is.")
- If you are using reference types (e.g., .NET) as elements in your class private data, you may run into unique issues that require other resolutions. One resolution is storing the reference as a flattened type, then creating accessors that cast to/from the desired type.
- (Avoid even having elements in your class private data. Statelessness and immutability are desirable anyway for a functional style and synchronous dataflow.)
- When refactoring and changing the scope of types, ensure that both owning library files (lvclass, lvlib, xnode, xctl) are saved to disk. Else: corruption, sometimes tough to detect immediately, which may not even show up until build time.
- Usage of LVOOP property nodes voids the warranty of LabVIEW altogether for applications of non-trivial scale. Don't use them in any version up to the time of writing, 2013SP1f2+. (You: "but..." Me: "Good luck with that.")
- Avoid circular linkages between classes. It's not uncommon for the type-propper to give up recursing if too many circular linkages exist, sometimes ending in a "VI failed to compile" error. To analyze these linkages, open classes one-by-one in an empty project. Inspect the "Dependencies". Click on classes/libraries and say "Why is this item in Dependencies?" to highlight the static link. Though, on codebases that have hidden corruption, this feature crashes LabVIEW -- instead, use "Find Callers", which is slightly less convenient because it may highlight other Dependencies. To fix the dependency vector, refactor.
- Strictly avoid nesting of lvclasses within lvlibs; consider avoiding lvlibs altogether.
  The problems they can cause (edit/build/load/run-time performance, accidental corruption) are typically not worth the benefits (scoping/namespacing, a common icon, the LVProj loading members for API convenience...). This is one key offender for the "endless" loops while mass compiling, and can drastically degrade edit/build/load/run-time performance.
- Avoid polymorphic VIs; these "convenience" wrappers are just overloaded functions that carry static linkages to stuff you don't need to be linked to. (This is one key offender for the "endless" loops while mass compiling.) XNodes are a better alternative, but quite tricky to develop.
- The moment you or a developer colleague says to the other "I'm not seeing that in my environment", clear both your compiled object caches and re-start whatever you were first talking about. (Mass compiling is not sufficient, since the cache is not always invalidated when it should be, but this has gotten steadily better from 2009 to 2013.)

To sum up: avoid static linkages, and ensure dependency vectors between all code modules are unidirectional and don't form cycles. For multi-developer teams and/or codebases of non-trivial size: may the force be with you. And may you have an experienced colleague or consultant periodically auditing and transparently fixing the codebase to keep everyone productive and healthy and happy.

Open offer: if you've ended up on this thread at your wit's end, send me a message and let's screenshare and fix it.
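Regarding the "reliable API" aside in step 5: the XML edits in steps 5 and 6 can at least be scripted. Below is a rough, untested sketch (Python, not LabVIEW) that assumes the stock .lvclass text layout where each property lives in a <Property Name="...">...</Property> element; commit or back up first, run it over a copy, and diff the result before trusting it.

```python
# Rough sketch of steps 5-6: strip NI.LVClass.Geneology and reset
# NI.Lib.Version in every .lvclass file under the current folder.
# Assumptions to verify against your own files: the .lvclass is plain XML
# text, the Geneology block contains no nested </Property>, and the files
# are UTF-8. Line endings may be normalized on rewrite.
import re
from pathlib import Path

GENEOLOGY = re.compile(
    r'[ \t]*<Property Name="NI\.LVClass\.Geneology".*?</Property>[ \t]*\r?\n',
    re.DOTALL)
VERSION = re.compile(
    r'(<Property Name="NI\.Lib\.Version"[^>]*>)[^<]*(</Property>)')

def scrub(lvclass: Path) -> None:
    text = lvclass.read_text(encoding="utf-8")
    text = GENEOLOGY.sub("", text, count=1)                  # drop mutation history
    text = VERSION.sub(r"\g<1>1.0.0.0\g<2>", text, count=1)  # reset the version
    lvclass.write_text(text, encoding="utf-8")

if __name__ == "__main__":
    for path in Path(".").rglob("*.lvclass"):
        scrub(path)
        print("scrubbed", path)
```

This only automates the two text edits; the cache clearing, source-only marking, and mass compiling still have to happen inside LabVIEW.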
Neil Pate Posted July 19, 2014

Jack, I don't know how long it took you to type that out, but I owe you one at the next CLA summit. Thank you. I will try those things; some I have done already (or already do as best-practice working procedure). The codebase is quite large -- 140 classes, approx 4500 VIs excluding vi.lib -- so this could all end messily.
JackDunaway Posted July 19, 2014

"Jack, I don't know how long it took you to type that out"

Turns out, typing that was neither the expensive part nor the bottleneck in being able to relay this information. Hope it helps :-]
JKSH Posted July 20, 2014

Wow Jack, that's a treasure trove. Is NI aware of these problems?
Neil Pate Posted July 20, 2014

Hi all, I am a bit embarrassed to say that in my situation there is no bug in LabVIEW; this is clearly a case of PEBKAC. Sorry NI!

The root of my problem was that I created a sub-class from one of my main objects, and (for reasons I cannot think of now, other than it was very hot the other day) in the private data of the sub-class I added a data member object that already existed in the parent (same class, same name, etc.). Then when I was probing around I did not notice that I was looking in the wrong place in the cluster (basically I saw what I wanted to see...).

I think I may still have the problem of a class not updating properly in a different part of my code, but I had already "fixed" that by re-arranging the class data slightly and the problem went away. Panic over for now. Thanks again Darin and Jack, I will certainly remember this thread for next time I get this problem.
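For anyone who hasn't hit this one before: each level of a class hierarchy keeps its own private data, so a child member that shares a name with a parent member is a second, independent field, and it is easy to probe the wrong one. A loose textual analogy, with Python name mangling standing in for per-class private data (the class and member names here are invented for illustration):

```python
class Parent:
    def __init__(self):
        self.__gain = 1.0          # stored as _Parent__gain

    def set_gain(self, value):     # "accessor" defined on the parent
        self.__gain = value        # writes _Parent__gain

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.__gain = 0.0          # a *second* field: _Child__gain

    def read_gain(self):
        return self.__gain         # reads _Child__gain, not the parent's

c = Child()
c.set_gain(42.0)                   # updates the parent's copy only
print(c.read_gain())               # 0.0 -- looks exactly like "the class
                                   # is not updating after the write"
```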
GregSands Posted July 20, 2014

Can I just add that when using classes across multiple targets (Windows, RT, FPGA), the issues are squared, not doubled! Thank you for the checklist (let's call it LVOOP-OCD), which I'll run my current project through soon -- particularly removing the class mutation histories, which are a waste to have if I have no code that could ever use a previous version. Why isn't there a class setting to never retain mutations? For some situations (e.g. pre-release) you're only ever interested in the current version. Or perhaps retain only mutations related to major version-number changes.

Jack, could you elaborate on your avoidance of typedefs? I'm in the "but ..." camp, and though I don't tend to typedef primitives, I do use typedef clusters (not strict) throughout my classes. What issues arise? I haven't had any major problems to date (except with the RT object cache getting out of sync with the Windows object cache), and I like having the "certainty" of the same data structure everywhere. Is there a method for reliably using them? Or am I best to convert each typedef cluster into its own class -- and is that any better anyway?

The other smaller question is LVOOP property nodes -- I haven't noticed any problems at all with using them, and I'm still on LV2012 (may jump to LV2014 in a month). Is something unsavory lurking beneath the surface?
smithd Posted July 21, 2014

"The other smaller question is LVOOP property nodes - I haven't noticed any problems at all with using them, and I'm still on LV2012 (may jump to LV2014 in a month). Is something unsavory lurking beneath the surface?"

If anything, issues with property nodes have gotten better over the last few years -- at least in my usage. All of the really bad bugs I've seen have been fixed as of this most recent patch (see this recent thread for an example: http://lavag.org/topic/18308-lvoop-compiler-bug-with-property-nodes/). But there are still a lot of usability annoyances related to case sensitivity. For example, if I name my folder "Properties" in the parent class and "properties" in the child class, you'll see two property items which link to the same dynamic dispatch VI. When a recent code review found a misspelling in the property name of a parent class, I had to go through every child implementation (~15 at this point) and fix the spelling there, because the classes were broken somehow. This and more is tracked by CAR 462513.

Finally, I remember seeing Stephen post somewhere that property nodes, due to their inherent case structure, prevent LabVIEW from optimizing the code -- even if the VI is set to inlined. For this reason alone I avoid them in any code I expect to be running at high priority. For UI, however, I use them extensively.
GregSands Posted July 21, 2014

"Finally, I remember seeing Stephen post somewhere that property nodes, due to their inherent case structure, prevent LabVIEW from optimizing the code -- even if the VI is set to inlined. For this reason alone I avoid them in any code I expect to be running at high priority. For UI, however, I use them extensively."

Thanks - I do almost exclusively use them in UI code, but will be sure to be careful otherwise.
Mike Le Posted July 21, 2014

Reading Jack's post really drives home to me how much time slinging wires isn't about programming so much as doing battle with the compiler and the IDE. The list of otherwise useful LabVIEW features that are off-limits due to bizarre and mysterious corruption problems is absurdly large.

I'm not quite at the point where I'm willing to take all of the preventative measures Jack recommends, but I also admit that I'm getting close. I think one more really bad corruption problem could tip me over the edge. As it is, these days I'm loading ALL my even tangentially-related LabVIEW code into memory any time I make a class or library ownership change, and that feels pretty extreme already.

I long for an lvlib "lite" that allows me to associate a set of files together, pass them around to other people, and indicate hierarchically on the project window that they belong together... that DOESN'T try to load all the dependencies into memory, that doesn't check if the dependencies are IN TURN in libraries and then try to load all the dependencies of THAT library into memory, etc.
Mr Mike Posted July 22, 2014

"The other smaller question is LVOOP property nodes - I haven't noticed any problems at all with using them, and I'm still on LV2012 (may jump to LV2014 in a month). Is something unsavory lurking beneath the surface?"

I was under the impression that all the linking issues with LVOOP property nodes had been resolved. I haven't seen a CAR on that at least since 2012. Admittedly, LVOOP property node CARs are handled by someone else now, but they usually come to me to review fixes for any serious issues. I was silently following this thread to see if anyone else had anything to say about Jack's comments about LVOOP property nodes.

"If anything, issues with property nodes have gotten better over the last few years -- at least in my usage. [...] Finally, I remember seeing Stephen post somewhere that property nodes, due to their inherent case structure, prevent LabVIEW from optimizing the code -- even if the VI is set to inlined. For this reason alone I avoid them in any code I expect to be running at high priority. For UI, however, I use them extensively."

The case sensitivity CAR is an interesting one. I hadn't seen it yet. Sorry it was a pain for you.

About the case structure issue: yes, there is a second case structure that gets inserted, but it's set up so that it defaults to the no-error case, which means it operates very quickly. I haven't tested it thoroughly, but I don't think it makes a significant difference on performance. You can also turn on the "Ignore Errors Inside Node" option to remove a case structure on the calling VI. And you can remove the case structure from the property implementation VI.

BONUS: I'll buy a (free) beer at the LAVA BBQ to anyone who writes a test that shows a big difference in performance between property nodes and subVIs. Consider whether the subVIs are inlined, too. I reserve the right to judge what a big difference is. To be clear, the factors to consider are: 1) Ignore Errors Inside Node, 2) case structure in the property accessor, 3) inlined accessor VI or not. You immediately forfeit your right to a beer if some crucial part of your code gets dead-code eliminated. Competition ends August 4, 12pm CDT. Really, the only purpose of this is to settle the question once and for all.
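Contest entries obviously have to be LabVIEW VIs, but the shape of a fair micro-benchmark is language-agnostic: warm up first, loop enough times for the cost to compound, and keep every result live so nothing can be dead-code eliminated. A rough sketch of that shape in Python (the Thing class and its accessors are made-up stand-ins; the numbers say nothing about LabVIEW itself):

```python
import time

class Thing:
    """Made-up stand-in for a class with one private data member."""
    def __init__(self):
        self._v = 0.0
    def get_v(self):                # analogue of calling the accessor subVI
        return self._v
    def set_v(self, x):
        self._v = x
    value = property(get_v, set_v)  # loose analogue of a property node

def bench(label, body, n=2_000_000):
    body(10_000)                                      # warm-up pass
    t0 = time.perf_counter()
    sink = body(n)                                    # keep the result live...
    dt = time.perf_counter() - t0
    print(f"{label}: {dt:.3f} s  (sink={sink:.1f})")  # ...and print it, so the
                                                      # loop cannot be eliminated

def via_methods(n):
    t, acc = Thing(), 0.0
    for i in range(n):
        t.set_v(i)
        acc += t.get_v()
    return acc

def via_properties(n):
    t, acc = Thing(), 0.0
    for i in range(n):
        t.value = i
        acc += t.value
    return acc

bench("method accessors  ", via_methods)
bench("property accessors", via_properties)
```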
smithd Posted July 22, 2014

"BONUS: I'll buy a (free) beer at the LAVA BBQ to anyone who writes a test that shows a big difference in performance between property nodes and subVIs. [...]"

I was curious so I put something together (well, most of it was stuff I've had lying around from previous benchmarks). I figured my first go at the benchmarks would be wrong, so there is a script to generate them:

1. Run gentests/gentests.vi
2. Open Benchmarker.vi and set the path to a generated report file
3. Run benchmarker.vi

My first attempt is attached. If you want to change the benchmarks, they are the templates in the folder. From this first run, I don't see a difference, but as I said, my first attempts are usually wrong when it comes to benchmarking.

propbenchmark.zip
first run.zip
Darin Posted July 22, 2014

It will be a cold day in Austin during NI Week (or let's say it will be hotter inside the convention center than outside during NI Week) before I will use property node accessors again. I once succumbed to their siren song and wrote some SDR code using them. The inability to inline subVIs containing property nodes was a disaster; I had to write a QuickDrop shortcut to replace the property nodes with their underlying accessor VIs. Simply doing that (it was not so simple in practice) allowed me to inline certain subVIs and then the code worked. No changes to the underlying VIs; just removing them from the PN was enough.

Want to quibble over what a "big" difference is? A seemingly "small" difference which means the difference between being able to process the IQ data in less time than it took to acquire it or not is by definition a "big" difference in my book.

In summary: you could write code that shows that property nodes are in fact faster than the underlying subVIs. Nice, but not so helpful in the real world. When the rubber meets the road, the inability to inline can hurt much, much more than some minuscule performance difference.
JackDunaway Posted July 23, 2014

[Re: LVClass Property Nodes] Performance aside -- if we agree that our codebases are liabilities, not assets -- in what universe does a 16 kB file per accessor make sense?*

A long time ago in a galaxy far away, I agreed spiritually with the decision behind the design to require explicit UDClass methods for accessors (quote: "These are major advantages of class data encapsulation, and we want to encourage people to design software that naturally has this advantage. By not supporting public or protected data, we eliminate a potentially confusing computer science concept from the language (at the very least, we cut out one more thing that needs explanation) and drive people toward a better architecture."). Four years ago, after a year of novice learning, I questioned the design in the form of a feature request. A year and a half ago, I confirmed that was not just a naïve opinion that should have fizzled with experience. Today: confident enough to call LVOOP Property Accessors an incorrect language feature, and substantially painful enough to warrant redesign. A correct, desirable solution facilitates and promotes painless, seamless, intelligent design of class member scope. The design currently implemented, and shipping for years now, needs to be removed. It's a clunky method of spewing liability and naïveté all over yourself and your project.

So, this weird-ass helpful-looking scripty-thing in the IDE is a "facility" to the uninitiated and eager, an "exsanguinator" to the initiated and burned, and it provides a UI affordance that pointedly enables and encourages the precise opposite of the decisions behind the design. To sum up, we LabVIEW users are given 1) a document that posits we Picture-Programming Mouth Breathers don't need and can't be trusted with sharp instruments, and 2) a sharp instrument only good for cutting ourselves and nothing else. Wut? The self-fulfilling prophecy of ignorant software design given poor language facilities is publicly frustrating to me.

As a side topic, I strongly desire opening a conversation about unifying the type system of LabVIEW, this being one topic of usability-versus-problems with accessors, and a case study comparing the accessor syntax of typedefs to the other walled-garden composite types. (To expand, it's worthwhile to unify by-ref built-in classes like FileIO/VIServer/VISA/DAQ; STL-like reference designs such as Queue/Event; type definitions; .NET objects; and then our only "officially supported" integration point for types, the neglected bastard UDClass.) New keywords for classes that guarantee immutability would be cool, providing safe read-only access for object members (e.g., the ctor is the only setter allowed by the compiler -- see the sketch at the end of this post). A sane ontological relationship, such as a Trait, would be phenomenal (for when neither inheritance nor composition fits, which is, like, a lot; nailing the object model itself resolves some data access deficiencies at the root!). Let's pick up a stone and throw it in any direction to pick a conversation about how to improve LVClasses, except in the direction of UDAccessors. Those categorically remain 16-thousand-byte liabilities, and let's just put to bed the question of where the syntax burden belongs: in language-land, not user-land.

Yes, I got sidetracked onto LVOOP generally rather than staying on the LVOOP Property Node topic, but for an important reason. I'm not riled up here because LVOOP Property Nodes ended up as a bad idea with an imperfect execution.
Like, whatever, k? We can fix that; a bit inconvenient, but no prob, we can work together. The fundamental source of angst is that LVOOP is not appreciably better than when it first debuted nearly a decade ago. And maybe, in some existential and puny way, if we declare a cease and desist on LVOOP Property Accessors, LVOOP gets better? I dunno man. Seems tenuous. Let's close out here. Remember kids -- friends don't let friends use LVOOP Property Accessors.

Next week's topic: namespacing and source file formats (generally, linker questions like "who relates to whom, and how?"), which makes this accessor conversation look like polite chatter in the grocery line.

* To you -- yes you, with the $/GB or €/TB figure on your Casio solar-power display -- you're handy at maths, help me with this. What are the opportunity and actual costs of maintaining one LVOOP Property Accessor? Does that scale linearly with multiple accessors? Multiple classes? Number of collaborators? How much time does it add to the build, type-propping and compiling, opening the LVProj, closing the LVProj, committing to SCC, diffing or merging on conflict, run-time performance (before it's loaded into memory, not after), debugging, when spanning across multiple targets, when spanning multiple release versions, when developing subclasses, when refactoring to add superclasses, when moving some data from sub- and some to super-classes, when explaining to your colleagues how "it must have been labview's fault that I just effed up the build" (punch line: you're right), when explaining that to your boss (who is sometimes the customer), when coming home late and explaining to your family (punch line: being right doesn't matter here; being good matters), when inheriting code from some trigger-happy accessor-scripting chump, when you unknowingly run into one of at least a few known vectors for insidious corruptions and bad behaviors with LVOOP property nodes? Cost certainly does not have "bytes" in the units, but it certainly does have units of $$ and faith and morale.

My experience, including many reboots of faith and second-third-fourth chances with LVOOP property nodes, has now converged to categorically default them as "too costly". You: "but..." Me: "Have at it; go crazy."
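For readers who haven't met the "ctor is the only setter" idea elsewhere, here is roughly what it looks like in a textual language -- a Python frozen dataclass, purely illustrative; nothing like this exists in LVOOP today, and the Settings class is invented for the example. The run-time check is the Python approximation; the wish in the post above is for the compiler to enforce it.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Settings:
    """Invented example: every field is fixed at construction time."""
    gain: float
    offset: float

    def with_gain(self, gain: float) -> "Settings":
        # "Mutation" returns a new value instead of editing in place --
        # friendly to dataflow, and nothing downstream sees a surprise.
        return Settings(gain=gain, offset=self.offset)

s = Settings(gain=2.0, offset=0.5)   # the constructor is the only setter
s2 = s.with_gain(4.0)                # new object; the original is untouched
try:
    s.gain = 10.0                    # any later write is rejected...
except FrozenInstanceError:
    print("read-only after construction:", s, "->", s2)  # ...at run time
```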
Mr Mike Posted July 23, 2014

"I was curious so I put something together (well most of it was stuff I've had lying around from previous benchmarks). [...] From this first run, I don't see a difference, but as I said my first attempts are usually wrong when it comes to benchmarking."

I took a quick look over it. I removed the write before the property node reads because I thought it might be doing some priming that wasn't necessary. I also made all of the loops use a shift register to use the same object the whole time, instead of copying it each time. Finally, I made every VI read or write four times instead of just once, so that the differences in timing could compound and be more noticeable. It looks like the property nodes are 8% slower. I haven't looked at these tests with a critical eye (or a properly caffeinated eye), but that's about what I'd expect. I bet if you included the same error checking as the property node, the difference would be closer. I hope you're coming to the LAVA BBQ.

"It will be a cold day in Austin during NI week (or let's say it will be hotter inside the convention center than outside during NI Week) before I will use property node accessors again. [...] When the rubber meets the road, the inability to inline can hurt much, much more than some minuscule performance difference."

I was under the impression that issue had been fixed. I filed CAR 484048 to our performance team. They'd still not be uninlinable if they were dynamic dispatch.
smithd Posted July 23, 2014

"I had to write a QuickDrop shortcut to replace the property nodes with their underlying accessor VIs."

Do you still have this (or have you posted it somewhere)? When I found my most recent issue with property nodes (which is fixed) I went through code manually to do the same.

"I took a quick look over it. I removed the write before the property node reads because I thought it might be doing some priming that wasn't necessary. [...] I bet if you included the same error checking as the property node, the difference would be closer. I hope you're coming to the LAVA BBQ."

Interesting changes. I wasn't sure whether writing the same value would be optimized out, so that's why I made a new random value for each run. Is there an easy way to see what code gets eliminated?

The reason I personally don't like property nodes is the error checking. I know a lot of people disagree with me, but I prefer not having error nodes if you don't expect that code to throw an error. Accessors shouldn't throw errors (unless I know I want to do validation). Property nodes force me to add error wires which I never expect to use. However, property nodes do keep things clean in some cases and they really improve discoverability of data, so I see them as a necessary evil.