Aristos Queue


Everything posted by Aristos Queue

  1. I encourage Ben not to worry too much about the interoperability with G and focus on getting the new language stabilized within itself -- a graphical notation for a safe reference system has value even if it only talks to itself... it can become an ecosystem just as G did. Now, obviously it has *more* value if it can talk to other languages, just as G has gained that ability over time. But that doesn't have to be up-front. Once the Rebar memory management is well-defined, creating the rules for interop should be relatively straightforward. I'd be far more concerned about Rebar cutting a corner to accommodate G than about making the interface seamless. The whole point of building an absolutely provable system is that it is provable -- any shortcuts and the whole attempt loses its value. Interop between Rebar and G will come in time.
  2. I agree. Ben works on making a language safe for references. I work on making a language without references. Our goals are the same: a world where data can be trusted.
  3. Ethan: Your CAR has been received, and I believe I have it fixed for LabVIEW 2019. Testing is ongoing to verify, but I’m confident. The in-place optimizations for class field access should, I hope, be addressed in LabVIEW 2019 if you have simple, inlined, static dispatch accessors that do not include error in and error out. We found a way to make the magic pattern that works for clusters apply to classes. Fingers crossed. Relaxing the block on access to class private data has even more gotchas than I expected, and not just at the technical level... since we do method substitution within the VIMs, if we allow access to class private data, then it seems we should also allow substitution of the fields. But that is a pretty ill-defined interface that crosses the line to admit some of the hardest-to-debug C++ code. I’m opposed to admitting that to the G language, even as I acknowledge its powerful notation. Even if I bought into it, I still don’t have a good solution to the technical barriers, at least not in LabVIEW 20xx. In LabVIEW NXG, it seems like it may be more viable.
  4. Is that true in the recent versions? The typedefs got revision protection in LV 2015, I think. If the mutation is going to throw away data, it does the same sort of "relink" behavior as when changing the conpane of a subVI, so you can preserve the data.
  5. Short answer: Make it table driven. Long answer: As you scale up like that, you're entering the realm where you aren't going to code those values directly into G. You may be looking at an array of values in G that you loop over to create objects, a binary file from which you read back the objects you serialized earlier, or something as complex as a SQL database with relationships between multiple tables, where you process the entire database to create your object layout. These are just suggestions. There are many other options. The point is, you need to make your construction more data-driven instead of directly coded. This is true of all programming languages I've seen, not just G. The nature of objects is such that they come into being as data drives their existence. It's relatively easy to create an object with a given value, and if you're building up objects as they come into existence within a system, everything works fine. But to create an entire system of objects in one burst, you need a data structure that can describe an entire system. That's why things such as object databases exist.
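Since G diagrams can't be shown in text, here is a minimal Python sketch of the table-driven idea from the post above. All names here (Motor, Sensor, the table layout) are invented for illustration:

```python
# Table-driven construction: the classes and initial values live in a
# data table, and one generic loop instantiates the whole object system.

class Motor:
    def __init__(self, max_rpm):
        self.max_rpm = max_rpm

class Sensor:
    def __init__(self, channel):
        self.channel = channel

# The "table": could equally be rows read back from a binary file or
# the result of a SQL query.
LAYOUT = [
    ("Motor",  {"max_rpm": 3000}),
    ("Sensor", {"channel": 0}),
    ("Sensor", {"channel": 1}),
]

REGISTRY = {"Motor": Motor, "Sensor": Sensor}

def build_system(layout):
    # One loop replaces N hand-wired constant/constructor pairs.
    return [REGISTRY[kind](**params) for kind, params in layout]

system = build_system(LAYOUT)
```

The construction code never changes when the system grows; only the data source does, which is the point of making it table driven.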
  6. The preference to use init methods instead of non-default class constants is so strong that LabVIEW NXG is planning to never support the feature. Any class constant there will always have only the class’ default value.
  7. Definitely a bug. If you can share the code, it's worth escalating to an AE at NI to see if it is still broken.
  8. I agree with Shaun. By the time you ship, of course, you shouldn't care whether auto error handling is on or off (your VI Analyzer tests caught those), but it is invaluable during early development. And if you did miss something (a developer submitted a VI late on deadline and didn't quite follow procedure, or a test machine had a bad day and skipped a step of your build, or whatever), having AEH enabled can save you a wealth of time.
  9. So I have this bug report that AppBuilder fails to build this one particular app whenever the "remove polyVIs and typedefs" option is enabled. The top-level VI breaks during the build. I try some of our usual debug approaches and don't find anything, so I hack up AppBuilder a bit so that when I get to the end of the build, I can open up the broken VIs and see what they look like. And I see this: Where the heck have all the class methods gone? They're in memory at the start, missing at the end. That shouldn't happen! So I set up a trap to detect when the VIs go missing. I'm working in a debug build of LV with a lot of extra guards and logging turned on, so it takes about two hours to run this build. And I get nothing. So I rework my trap, thinking I've coded something wrong. Four times I do this over the course of yesterday, working on other issues while the build runs. Four times -- roughly 8 hours -- nothing. The VIs swear they're all in memory and the caller should know about them. This morning I come in and, for some reason, I happen to double-click on one of those ? nodes. And, lo, the missing VI opens. Say what? Turns out, AppBuilder had stripped all the VI icons out (saves space in the built EXE to ditch all those unneeded images), so they all render as the ? icon. I have no idea what's actually broken, but it ain't that all the class methods have gone away. One wasted day. *head bang* *head bang* *head bang* Yet another reminder -- as if I needed it -- that computers only do what we tell them to do. And they do it very precisely.
  10. I decided my previous reply could be taken too snarkily... not what I intended. Let me explain -- Ernesto is right, and with the lens he's looking through, it's a reasonable position. There's a rather substantial difference between a casual refactor that changed a terminal to be consistent with all other terminals and a foundational logic shift required to make two terminals that are literally the same address in memory return different values -- but that's something I can know with deep knowledge of the code underlying these two issues. No way for Ernesto or any other customer to recognize that. One can be done just by removing what appeared to be a bug; the other would require significant and deliberate retooling of the code. Not to mention it would break regression tests (tests that don't exist for automatic error handling since at the time that feature came along, there wasn't a good way to automatically test for anything that triggered dialogs requiring human acknowledgement). There are levels of trust between tool vendor and tool user, and once those are breached, the promises of the vendor aren't worth anything. That's the case here. So even explaining all of the foregoing logic doesn't really suffice to say that Ernesto is wrong in his defensive coding practices. That's something NI has to earn back for that user. So perhaps if the bug with the auto error handling gets fixed and nothing changes about the errors of the accessors for another couple versions, we can get to the point where Ernesto feels comfortable removing the unnecessary bits. But until then, he's right to continue his current practice, and I shouldn't have said otherwise. I apologize.
  11. Never mind. You're right. Sorry.
  12. Yes, there is a guarantee: both a statement from R&D and also just how much code we'd break if that ever changed. But if you want to continue connecting both, it's just your own programming time and run-time performance you're wasting, so go for it. I'll file the bug report. It definitely worked when the feature was first introduced (thanks to TomOrr0W for checking back to 2009). *head bang* This has not been a good LabVIEW day.
  13. Paul: Been thinking about this more. I think I can see how this works, but I'd love to see your source code to play with it more. If at some point we're at a conference together, I'd love to look at it.
  14. Paul: Your response is exactly why I thought inheritance is a bad choice here. "D:Exit()" on a state is not a well-defined method because its behavior depends upon context. Having "D:Exit()" that ONLY does the exit work of D and not the exit work of B or A means that the transition can call "D:Exit()" and then could also call "B:Exit()" if and only if the transition knew itself to be crossing that boundary. Or, better -- it would just call B:Exit() and let B know to call D:Exit(). It's all design-time knowledge -- using reflection means that design-time information is being computed at run time, which is both inefficient and error prone. I know that neither of the alternatives I just now suggested fits cleanly into your existing architecture. Your need for reflection is clear to me given your current design. And while I would consider your architecture as a use case if I were working on reflection, I would still push back on the architecture itself. To inherit, you need Identity, State, and Behavior. All three. If you don't have all three, you shouldn't be inheriting. You don't have Identity. State D is not a flavor of state B. It is a substate of B. Any time you miss one of Identity, State, or Behavior but choose inheritance anyway, some things work, but other things will become significantly more complicated. And a need for reflection is a bright red sign.
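To make the "let B know to call D:Exit()" alternative concrete, here is a hypothetical Python sketch (class and method names invented, standing in for G classes) of an outer state that owns its active substate via composition, so that exiting B also exits D with no run-time reflection:

```python
# Composition, not inheritance: the outer state has-a current substate.
# A transition crossing out of B just calls B's exit; B knows to exit
# its active substate first. All call targets are fixed at design time.

class State:
    def on_exit(self, trace):
        pass

class D(State):          # inner state
    def on_exit(self, trace):
        trace.append("exit D")

class B(State):          # outer state: has-a substate, not is-a parent of it
    def __init__(self):
        self.substate = D()
    def on_exit(self, trace):
        self.substate.on_exit(trace)   # exit the active substate first
        trace.append("exit B")

trace = []
B().on_exit(trace)   # inner state exits before the outer one
```

No "find the common parent" lookup happens at run time; the exit order is encoded directly in the structure of the states.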
  15. I'm familiar with the State Pattern. I've just never seen the inner states implemented through inheritance of the outer states. I've always seen that as composition. Once I pulled it apart, it made sense. Let me see if I understand: If the transition is from one inner state to another inner state, you call a method X. If the transition is from one inner state to another state outside the outer state, you call a method Y ("StopMotion" in the example). Either that method is responsible for calling an abstract core function to be implemented by the inner states ("OnInterruptProcessTriggers" in the example) OR (not in the presentation directly but seems to be alluded to) the inner states would override "StopMotion" to add the behavior they need for that transition and use Call Parent to make sure they get the common transition code. Assuming I have that right, it all makes sense. What I'm not seeing is why any of the function invocations have to be decided at run time. It's all fixed function calls to match your state machine model. You never have to find the "common parent" for a given transition because every transition has its own method (that is only implemented starting at a particular level of your hierarchy). Where does the dynamic come into play?
  16. What does the inheritance hierarchy look like? Are you saying that there is a class D that *inherits* from class B? I've never seen a state chart implementation where there would be an inheritance relationship between the outer and inner states. That's typically a "has a" not an "is a" relationship. I tried to picture what state (i.e. private data fields) B could have that should be inherited by D... I couldn't come up with any use cases. And the methods are all methods on class State ("Enter", "Exit", "Loopback", etc), not things inherited from B. Please explain how this works. I'm intrigued, as it sounds like a novel approach to state chart implementation.
  17. I learned a lot about serialization from writing my library. Took me 2 years of research and planning. "All I want to do is flatten and unflatten some data and I need a library of 900 VIs to do it?!?!?!!" Yes, give or take a couple hundred. Turns out serialization is really a lot more complex than I ever would have thought when you really get down to the core. Bloating the string is one of the most fascinating issues, something I totally ignored when I created the original flatten-to-string behaviors for LV classes. To flatten an object, you obviously need to record the type. But what about an array of objects? Do you need the type for every element in the array? Yes, if the array is heterogeneous; no, if it is homogeneous. Can that be factored out? Depends upon what you plan to do with the string! If you're always going to start at the front of the string and unflatten all the data, then you can factor out lots of stuff because you have context for the remainder of the string. But if you're going to fly along the string and find subportions to pull out, then you need every subportion to be a complete record. "I have an array of 10000 objects. I need only 1 of them." That's a real serialization requirement, and it has severe impact on the format of the string itself! Formatting for localization, formatting for versioning, formatting for different encodings, formatting for transmission (requires more redundancy in the data), pretty-printing for human readability... it's a mess. The one main takeaway I had from my project? I support trade embargoes on any nation that tries to use a comma for a decimal point. I know... Britain and the USA are outliers here, but, seriously, it's called a "decimal point"... stop putting a tail on it! I can deal with two different systems for units. I can deal with myriad ways to format a timestamp (even different calendars). I'm supportive of arcane Unicode encodings so that every language ever can be recorded.
But I hit my breaking point on commas as decimal points. Why would I want to live in a world where I can order a 1.125 meter steel beam then be charged for delivery of something over a kilometer long?!?! This table should not exist. Yes, my library provides support for such abominations, but it was while adding that feature that I became a strong proponent of cultural hegemony!
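The steel-beam ambiguity can be sketched in a few lines of Python. The two parsers below are deliberately naive stand-ins for point-decimal and comma-decimal conventions, invented for illustration, not any real library's behavior:

```python
# The same byte string means different numbers under the two conventions.

def parse_point_decimal(s):
    # "." is the decimal mark; "," groups thousands.
    return float(s.replace(",", ""))

def parse_comma_decimal(s):
    # "," is the decimal mark; "." groups thousands.
    return float(s.replace(".", "").replace(",", "."))

beam = "1.125"                # intended: a 1.125 meter steel beam
a = parse_point_decimal(beam)  # 1.125 meters
b = parse_comma_decimal(beam)  # 1125.0 -- over a kilometer of steel
```

A serialization format has to pin down which convention was used at write time, or carry it in the data, or the reader cannot safely interpret the digits at all.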
  18. Oh, yeah. I forgot about that. There's a whole writeup somewhere discussing why I did things the way I did for that library. It's been a while since I worked on it. Sorry for the wild goose chase. Flattening the names of the complete ancestry into every flattened string would be an incredible bloat of the flattened size of data. I do not believe this is a disadvantage. Serialization is special... if the parent is not serializable, then the child is not serializable. Requiring inheritance from a base Serializable does not seem to me to be a bad requirement. Even in other languages where serialization is an interface, mostly they have to add special syntax checking to make sure that the interface is implemented by the eldest class or else it doesn't work. Serialization isn't functionality that can be injected at a descendant level. It's all or nothing to be meaningful, because failure to preserve parent state means you aren't actually saving the state of the object. Thus the requirement to inherit from a specific base seems absolutely reasonable to me... indeed, it is an advantage because it syntactically assures that the ancestors are included.
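A hypothetical Python sketch (class and method names invented) of why the chain must start at the root: each class flattens its own fields and delegates upward, so a class that skipped its parent would silently drop state:

```python
# Serialization as a chain of delegations up the ancestry. Starting the
# chain anywhere below the root loses every ancestor's fields.

class Serializable:
    def flatten(self):
        return {}

class Parent(Serializable):
    def __init__(self):
        self.p = 1
    def flatten(self):
        d = super().flatten()
        d["p"] = self.p
        return d

class Child(Parent):
    def __init__(self):
        super().__init__()
        self.c = 2
    def flatten(self):
        d = super().flatten()   # omit this and parent state is silently lost
        d["c"] = self.c
        return d

snapshot = Child().flatten()    # contains both the parent's and child's fields
```

Requiring inheritance from the Serializable base is what makes that delegation chain guaranteed to exist all the way up.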
  19. @Zyga Here: https://forums.ni.com/t5/Actor-Framework-Documents/AQ-Character-Lineator-working-draft-version-of-serialization/ta-p/3512840 It's my complete, working flatten-to-anything-unflatten-from-anything serialization library for classes [and any other lv data type]. I forget exactly how the names are extracted, but the functions are in there if you want to use them for your own serialization project. Also, feel free to use the library as it stands.
  20. BBean: How bad is the limitation? Just annoying or is it actually blocking work? And was the error message sufficient to explain the workaround? I've looked further at creating the mapping system -- it's a huge amount of effort with fairly high risk. I'm having a hard time convincing myself that the feature is worth implementing compared to other projects I have in the pipeline. As I said on ni.com, I investigated Mads' bug where malleable VI fails when calling protected scope subVI. I was unable to replicate the issue. Mads could not replicate it again either. So it is definitely a corner case bug. If anyone sees it, please share VIs that replicate. When you change a subVI, which VIs in memory have to recompile? Only those that call the subVI. How do we know those? Well, we have a network of links between VIs (and libraries and projects) that track all the dependencies, so I just traverse the links from the subVI to all of its callers. I don't have to recompile every VI in memory on the off-chance that it calls the modified subVI. Conversely, when you run a VI, which subVIs need to be locked so they can't be edited while you're running? Again, I can traverse the linker graph to lock those that are dependencies of the top-level VI. When you change the private data control, which VIs in memory have to change? Prior to malleable VIs, I didn't need any sort of tracking for which VIs accessed the data -- I just told every member VI to redo its compilation... fast and narrowly scoped. No link information required. And when you run a VI, does the private data control need to be locked? Only if a member of the class is invoked. But with malleables introduced, if they could talk to private data directly, the linker graph wouldn't reflect the dependency of the malleable instances on the main malleable VI. 
If you go through a subVI or subVI-masquerading-as-property-node, there is a link, so when the private data control changes, we change the subVI, and that causes the caller to recompile (if necessary... usually only necessary if you've inlined the subVI).
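The linker-graph traversal described above can be sketched in Python. The graph contents and file names below are invented for illustration; the real LabVIEW linker is far more involved:

```python
# When something changes, walk the reverse-dependency (caller) links to
# find exactly the VIs that must recompile, instead of recompiling
# everything in memory.

from collections import deque

# callers[x] = set of VIs that call x directly
callers = {
    "PrivateData.ctl": {"Accessor.vi"},
    "Accessor.vi": {"Caller1.vi", "Caller2.vi"},
    "Caller1.vi": set(),
    "Caller2.vi": set(),
}

def vis_to_recompile(changed):
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

affected = vis_to_recompile("PrivateData.ctl")
# every VI reachable through the link graph, and no others
```

This is also why routing malleable VIs through an accessor works: the accessor call puts an edge in the graph, so the traversal finds the malleable's callers without needing a separate per-field dependency matrix.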
  21. "I've been informed now that this can be achieved as long as the VIM uses an accessor when operating on private data..." That information is in the error message. Do I need to clarify the message? "(It seems a bit strange that the VIM is not allowed direct access to private data, but it can still call a private method to effectively do the same thing (so why not allow it to access the private data without the accessor in the first place...))" There's a pretty crucial optimization that LV classes are built upon: when the private data changes, the only VIs that need to be evaluated for recompile are the members of the class. The dependencies between VIs are tracked (i.e. when one VI calls a subVI), but creating a similar web of dependencies on the individual fields was significant work that was unnecessary... until the advent of malleable VIs. When I considered the work of adding such a matrix compared to the workaround of providing a private accessor VI, it just didn't seem worth it. So I put the limitation in for private data access and waited to see how many people would object. One so far. :-)
  22. This turned out to be surprisingly easy to fix. The fix will be included with LV 2018 SP1. The problem is that there are a very few nodes in LV that reach downstream to get their types when they think it is safe to do so as a way of giving better default types for terminals. But when the downstream terminal is something polymorphic, that has to be limited or disallowed. I never knew about that feature of LV (though I've unconsciously taken advantage of it many times, it turns out). I just needed to add a downstream malleable VI to the disallowed list so that the malleable is always driven by upstream instead of driving upstream.
  23. No, but it is an option, and sometimes when talking to a foreign language, it is better to bow to the grammar of the foreign language than to cleave to the proper grammar you might normally use.
  24. The other option is to create a cluster of 1024 int8 values and use that for the inlined char array.
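In textual terms, that cluster-of-1024-int8 trick is the same idea as a fixed-size array embedded in-place in a C struct. A small Python ctypes sketch (struct and field names invented) shows the layout:

```python
# A fixed-size array laid out inline in the structure, rather than held
# behind a separate pointer -- the textual analogue of a cluster of
# 1024 int8 values.

import ctypes

class CMessage(ctypes.Structure):
    _fields_ = [
        ("length", ctypes.c_int32),
        ("text", ctypes.c_int8 * 1024),   # inlined char array, not a pointer
    ]

msg = CMessage()
inline_bytes = ctypes.sizeof(msg.text)   # all 1024 bytes stored in-place
```

Because the bytes are part of the structure itself, the memory layout matches what a C API expecting `char text[1024]` inside a struct would see.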