If you're using typedefs for your enums then the rules are pretty much the same: they work well within a project (we can update the uses of that typedef automatically), but they can cause problems for VIs that aren't in memory when the change is made. I still think you're ok if you just add to the end, though.
This is tricky in any language, really. An enum is basically a symbolic representation of a numeric value. If you change the definition of that enum such that the symbols map to different values than they did before then you may break existing code. It's just one of the things you have to keep in mind when writing a reusable API.
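To make the "symbols map to different values" hazard concrete, here's a sketch in C (the names are invented for illustration, not from any real API). Inserting a new value in the middle of an enum silently renumbers everything after it, while appending leaves the old values untouched:

```c
#include <assert.h>

/* Version 1 of a hypothetical status enum. */
typedef enum { STATUS_IDLE, STATUS_RUNNING, STATUS_ERROR } status_v1;

/* Version 2 inserts PAUSED in the middle: RUNNING and ERROR
   silently shift from 1,2 to 2,3, breaking any code (or saved
   data) that relied on the old numeric values. */
typedef enum { V2_IDLE, V2_PAUSED, V2_RUNNING, V2_ERROR } status_v2;

/* Version 3 appends instead: every existing value keeps its
   old number, so old callers and old data stay valid. */
typedef enum { V3_IDLE, V3_RUNNING, V3_ERROR, V3_PAUSED } status_v3;
```

Same rule the answer above gives: adding to the end is safe; inserting or reordering is an API break.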
More magic. When debugging is enabled we emit a little extra code between nodes that handles things like probes and breakpoints. We could do what C debuggers do (use the processor's breakpoint mechanism), but that's difficult to do in a cross-platform way. Our current mechanism allows us to do the same thing on all platforms, and even allows us to do remote debugging and probes with the same mechanism. It's just more flexible.
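The instrumentation approach described above can be sketched in C. This is only a rough illustration of the idea, not LabVIEW's actual generated code; every name here is made up. The point is that a small hook emitted between nodes can implement probes and breakpoints with ordinary portable code, no processor breakpoint support required:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical debug state; in a real system this would be set
   by the editor/debugger, possibly over a remote connection. */
static bool debugging_enabled = true;
static int  breakpoint_node   = 2;

/* The "little extra code between nodes": report the value on the
   wire (a probe) and check for a breakpoint, all in plain C. */
static void debug_hook(int node_id, int value)
{
    if (!debugging_enabled)
        return;
    printf("probe: node %d produced %d\n", node_id, value);
    if (node_id == breakpoint_node)
        printf("breakpoint hit at node %d\n", node_id);
}

/* A two-node "diagram": add one, then double. */
int run_diagram(int x)
{
    int v = x + 1;      /* node 1 */
    debug_hook(1, v);
    v = v * 2;          /* node 2 */
    debug_hook(2, v);
    return v;
}
```

Because the hook is ordinary code rather than a hardware trap, the same mechanism works identically on every platform, and the hook can just as easily send probe data over a network for remote debugging.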
We try to optimize the callers based on what we can get away with to avoid copies of data, but that means that when things change in the subVI, it sometimes affects the callers. For instance, if you have an input and output terminal of the same type and they weren't "inplace" before (meaning the compiler couldn't use the same spot in memory for both) but they are now, then the caller may need to change. Or it could be the opposite (they were inplace, but now they're not). It could also be that an input was modified inside the subVI before but now it's not (or the other way around). If you use dynamic dispatch or call by reference then you're somewhat shielded from these concerns (we can adapt at runtime), but you lose some of those optimizations. You may end up with copies of data that aren't strictly necessary.
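A rough C analogy of what "inplace" means here (this is just a sketch of the concept, not LabVIEW's actual calling convention): when the input and output terminals can share memory, the caller hands over one buffer and lets the subVI overwrite it; when they can't, the caller must preserve the input and pay for a separate output buffer. If the compiler switches a call site from one convention to the other, the caller's code has to change, which is why callers get recompiled.

```c
#define N 4

/* "Inplace" convention: the output reuses the input's memory,
   so the caller passes a single buffer and no copy is made. */
void scale_inplace(double *buf)
{
    for (int i = 0; i < N; i++)
        buf[i] *= 2.0;
}

/* Non-inplace convention: the input must survive the call, so
   the caller allocates a second buffer — an extra copy of the
   data that the inplace version avoids. */
void scale_copy(const double *in, double *out)
{
    for (int i = 0; i < N; i++)
        out[i] = in[i] * 2.0;
}
```

Dynamic dispatch and call-by-reference sidestep the recompile by always using the conservative convention, which is exactly the "copies that aren't strictly necessary" trade-off mentioned above.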