Posts posted by Aristos Queue

  1. On 11/13/2019 at 6:16 AM, drjdpowell said:

    So far I've noticed that it takes a lot of time to figure out why a VIM is broken.  Any input being wrong makes everything wrong, with only an unhelpful error message.  It's the biggest problem with VIMs that have inputs that must have matching datatypes.  I'd like to be able to specify somehow that the "Buffer" inputs should always be accepted, even if the added data is unspecified, but I'm not sure how to do that (though I've just had an idea I'll try later).

    If you have suggestions for how to report those errors, I'm all ears. I've been in hours of design meetings trying to come up with some algorithm for reversing type prop to figure out which inputs contributed to a given break *and* figuring out how to get the errors invalidated when the VIM gets edited. The current "break them all" is the best we've got at this time. There are some simple cases where the VIM's FP term connects directly to a particular node and the node refuses that input, but the further downstream, the harder it gets. And the most syntactically correct solution is "any wire that connects to the node is at fault", but that is essentially equivalent to the current solution as soon as you put one Type Specialization structure or error Case structure down (those tend to encompass the whole diagram, so all the FP terms wire into them).

    Opening the instance VI and looking at the diagram to see what broke is the best I've got. I would like to make the diagram visible without having to convert the instance to a real VI, but if there's a better option, please let me (and the rest of R&D) know.

  2. 9 hours ago, hooovahh said:

    It really depends on what you are trying to do so I get that. 

    Different users, different use cases... and for some, the passwords have real value. So they linger.

    Just occurred to me: maybe we should call it something other than a "password", since that sounds like security. Instead of "Set password", call it "Join editing region". Instead of "Enter password", use "Enable editing for region". It preserves the functionality but steps us away from this constant drumbeat of "your security is broken!"

  3. On 11/4/2019 at 1:14 PM, hooovahh said:

    Not trying to derail the thread even more, but every time I hear that argument I say that is why you can lock a VI without passwording it.  Which is what my Pre-Build action on building a package does.  Need to edit it and do a test?  Sure just unlock it.  Drilling down into a VI and not realize it is part of the reuse library?  You will when you see it is locked.  But whatever.

    I still repeat the quote from NI R&D saying off the record that the protection you get from password protecting a VI is about the level of protection you get from tissue paper. 

    Just locking doesn't suffice:

    1. It doesn't give you the ability to unlock specific subsets of VIs.

    2. It doesn't keep production line engineers from "just unlocking it."

  4. 3 hours ago, hooovahh said:

    Years ago we were in a meeting with a few LabVIEW champions and LabVIEW R&D ...

    I don't know if that was the same meeting I was in or if I was pulled in after the fact, but that was where I tried to argue against having password protection at all. The answer I got back was, "This isn't a security feature. It's a feature to keep developers and production line techs from accidentally modifying VIs they didn't intend to modify, and providing a way (by using the common password) to unlock specific subsets of VIs when editing is desired."

    That seemed like a useful concept to me. Ever since then, I've been fine with basic security around the password, as long as we make it clear to users that the password isn't intended to protect intellectual property. And that's what we do:

    Quote

    From our online help for "Creating Password-Protected VIs":

    Caution  VI password protection does not encrypt block diagrams. For maximum security, remove the block diagrams from your VIs. Refer to the KnowledgeBase at ni.com for more information about the security differences between password protection and removing block diagrams.

    There's a lot more detail in that KB entry. We document this in a few other places as well.

  5. 6 hours ago, ShaunR said:

    Chinese, Russian and North Korean companies too?

    If they can be vetted and approved for signing. But the big point is that unsigned code simply cannot execute on these CPUs, so unknown actors cannot get their code approved to run.

     

    6 hours ago, ShaunR said:

    Excuse me if I'm not enthralled by allowing M$/Intel/Apple/etc backdoors or control of what I can and can't run in an age where computers are always connected and OS telemetry is rife. I reject this dystopian sales pitch for "security" which is nothing more than a hardware version of Certificate Authorities aimed squarely at market control.

    But you are already in that situation. The operating systems already can (and do) lock out a bunch of functionality. OSes and CPUs can already break backward compatibility if they choose. You already trust those companies deeply.

    "Nothing more"? Yes, it is exactly that: a hardware version of Certificate Authorities. I don't know why you say "nothing more"... it should be "nothing less than a Certificate Authority," which is what we have today.

    Also, as I said, you can get your own signature for installation on your CPU to allow you to sign other apps. The system doesn't make you beholden to those companies. But it does keep foreign code that gets smuggled onto your system from executing, which is a massive and rising problem today.

  6. 4 hours ago, Michael Aivaliotis said:

    I sign all my built EXEs, for all my customers. It's trivial to do and doesn't cost much. This also allows me to know if the application that I'm asked to support was built by my company or the customer did the rebuild themselves.

    That's level 1 signing, to verify provenance. Good practice, but what we are talking about is level 2 signing, which requires you to submit your EXE to MS/Intel/Apple/etc to have it signed by the chip's own signature. Without that, a secure CPU will refuse to run your code. MS/Intel/Apple/etc would pretty much operate the way Apple does with the Apple App Store, where they vet who you are, why you're putting this EXE out into the world, etc.

    A company could create their own signature (derived from the MS/Intel/Apple/etc signature) and install that on all the CPUs of their company, and that would let them sign their own apps. So you would deliver your EXE to your customer and they would have to sign it before installing it on their machines.

  7. On 10/24/2019 at 4:58 AM, Zyga said:

    Getting back to my original goal. I would like to change properties of e.g. sDataAcquisition.lvclass:editWindow.vi and all its clones.

    The only correct, intended, non-hack way to get a reference to an existing clone VI is to use the "This VI Reference" node on the clone VI's own block diagram and have the clone send that reference to someone else as part of its own execution.

    Any other mechanism was never intended to work and is generally unstable outside of very specific use patterns.

    There is one mechanism available, and I requested that R&D leave it working even though it can be unstable: the Open VI Reference node when you pass a string. You can pass the qualified name of the original VI, followed by a colon, followed by an integer, to get a reference to that specific clone. The big, big, big caveat is that you need to close that reference before the clone is disposed. This technique is used by the LV Task Manager, and that tool is the only reason this feature remains instead of being fixed as a bug. Unfortunately, it really isn't possible to make this feature stable without a significant performance hit on calls to clone VIs. It was never intended to work, which is why there's no official way to do it; the devs just forgot to close the Open VI Reference loophole.

    Even if you do get a reference to one clone, any properties you set on that clone while running will be set only on that clone. Likewise, anything you set on the original VI ref will only be set on the original VI (with the exception of breakpoints). LV has intentionally never created a "me and all my clones" ref (the reasons why are a topic for a different discussion thread).

    If you make changes to the original VI BEFORE the clones are replicated, then the clones will pick up any changes you make. That generally means never calling the original VI as a subVI and instead always calling it through the Call By Reference node.

  8. 20 hours ago, ShaunR said:

    If you grew up on Fortran I would have hoped they had let you retire by now. No rest for the wicked?

    The time dilation near a massive, dense, nearly-impenetrable object (aka Fortran) keeps them forever young. It's also why fixing a bug takes so long from the perspective of those standing further away. The LIGO gravitational wave detector had to filter out Fortran code submissions to detect black holes (both cause merge collisions).

    My second internship involved Fortran. I know of what I speak. I was grateful for the experience... it set me on the path to ever higher-level languages!!!

  9. Speaking as a 19-year dev in LabVIEW R&D, Hooovahh is pretty much right. NI wouldn't do anything to change our EXE format because of this work. None of the current design is viewed as a security layer. 

    We've talked about secure computing initiatives in R&D. NI will probably lag the curve there (NI tends to adopt tech when it is already commodity), but I suspect it'll happen this decade. I very much hope the world gets there soon. Unsigned EXEs are a major threat vector, and the IoT threat just keeps increasing. The world can't afford to keep running arbitrary code much longer, in my opinion. 

  10. 1 hour ago, Daklu said:

    I disagree. While it is entirely realistic to build apps without using priority queues, having this option available does add complexity to the AF and DQMH.  

    If you show someone a regular piano and an electric piano/synthesizer and ask them which is more complicated, what do you think they'll say? Having more optional features available, especially when they're easily discoverable by users, adds complexity.

    Users are going to spend a non-zero amount of time considering those optional features and adjusting their mental model of how the framework works.

    I'm not sure what the term is for the negativity introduced by options, but it is not (as I see it) the same as the "complexity" being discussed elsewhere in this thread. The unused options do not add anything to the complexity in the sense of the user having to create more code or having more difficulty reading the code they create using the framework.

    Depending upon how the option is presented, it might not even add anything to the complexity of adopting the framework (there are plenty of optional terminals on commonly used functions that we just gloss over and never bother to even read about). To further complicate the issue, choosing to use an optional feature might actually decrease the complexity of the user's code written within the framework -- there are options available for specific tasks that save the user from having to work out those tasks on their own.

    While the learnability of a framework might be impacted by options, I don't think the complexity is changed by them. Does that make sense?

  11. 7 hours ago, Rolf Kalbermatter said:

     It's better than using the C++ template feature yourself though. That is pure evil.

    In the wrong hands, perhaps it is some evil. I made extensive use of it to give the sets and maps in LV 2019 acceleration for specific inner data types, with some quite readable code (according to my reviewers).

    I save "pure evil" for multiple inheritance of data and single-character variable names.
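    For readers who haven't used the feature: a class template is one generic definition that the compiler stamps out as a separate, fully typed container for each concrete element type. Here is a minimal sketch of the idea (hypothetical names; this is not LabVIEW's actual internal code):

```cpp
#include <cstddef>
#include <set>

// A generic ordered-set wrapper. The compiler generates a separate,
// fully typed specialization for each element type T it is
// instantiated with -- one definition, many concrete containers.
template <typename T>
class TypedSet {
public:
    // Returns true if the value was newly added (not already present).
    bool Insert(const T& value) { return items_.insert(value).second; }
    bool Contains(const T& value) const { return items_.count(value) != 0; }
    std::size_t Size() const { return items_.size(); }

private:
    std::set<T> items_;  // storage and comparison specialized per T
};
```

    Instantiating `TypedSet<int>` and `TypedSet<std::string>` produces two independent, type-safe containers from the same source, which is the kind of per-inner-type specialization described above.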

    7 hours ago, Rolf Kalbermatter said:

    The big question is if STL buys you much or just causes you even more trouble.

    This is a question??? In 2005, maybe. At this point, if you aren't using the STL in your C++ code, I suggest you change languages because you aren't using C++ right. (Note that if you are weeping constantly and your hands bleed and your stomach ulcers bloom, those are good signs that you're using C++ correctly. Incorrect C++ use is generally associated with euphoria and a belief that you've found "an easy way to do it!" by avoiding some part of the standard template library.)
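    To make that concrete: "using the STL" mostly means reaching for its containers and algorithms instead of hand-rolling loops and sorts. A trivial, hypothetical illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Sort a list of words in place, then count how many exceed a length
// threshold -- two STL calls instead of a hand-written sort and loop.
int CountLongWords(std::vector<std::string>& words, std::size_t minLen) {
    std::sort(words.begin(), words.end());
    return static_cast<int>(std::count_if(
        words.begin(), words.end(),
        [minLen](const std::string& w) { return w.size() > minLen; }));
}
```

    The point is not this particular function; it is that the container, the sort, and the counting predicate are all standard components, already debugged, rather than bespoke code.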

  12. 16 hours ago, ShaunR said:

    Perhaps in the AF case. However it's also fairly common for the underlying implementation to be a stack (LIFO). That is effectively what the DQMH implementation is.

    Like "accidental"? :lol: You underestimate the power of a marketing department, my friend ;)

    That would be a "stack", not a "queue." What the DQMH implements is a third thing: a "bug". 🙂

    Note that it is a known, documented bug: a single message is good enough for DQMH's purposes, and the mechanism was never intended to handle more than one.

  13. 2 hours ago, drjdpowell said:

    That is a weird, though undeniably literal, interpretation of the words priority queue.  You must get into a lot of arguments at the supermarket checkout.

    A priority queue DOES order by priority. You enqueue a pair of data items: a message and a priority. The queue sorts the pairs by the priority. It just ALSO retains the secondary sort order of FIFO.

    How the queue is implemented under the hood is anyone's guess, but ordering actual physical priorities is one possible implementation (and one that is used for an arbitrary-priority heap).
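    Since LabVIEW diagrams can't be shown inline, here is a hypothetical text-language sketch of that two-level ordering: each enqueued pair also carries a monotonically increasing sequence number, and the comparator consults it only to break priority ties, which preserves FIFO within each priority level.

```cpp
#include <cstdint>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// One queued item: the message, its priority, and its arrival order.
struct Entry {
    std::string message;
    int priority;   // higher value = dequeued sooner
    uint64_t seq;   // arrival order; breaks ties between equal priorities
};

// Heap comparator: primary sort by priority, secondary sort FIFO.
struct Compare {
    bool operator()(const Entry& a, const Entry& b) const {
        if (a.priority != b.priority) return a.priority < b.priority;
        return a.seq > b.seq;  // earlier arrivals come out first
    }
};

class PriorityMessageQueue {
public:
    void Enqueue(std::string msg, int priority) {
        heap_.push(Entry{std::move(msg), priority, nextSeq_++});
    }
    std::string Dequeue() {
        Entry top = heap_.top();
        heap_.pop();
        return top.message;
    }
    bool Empty() const { return heap_.empty(); }

private:
    std::priority_queue<Entry, std::vector<Entry>, Compare> heap_;
    uint64_t nextSeq_ = 0;  // stamped onto each entry at enqueue time
};
```

    With this scheme, two normal-priority messages enqueued around one urgent message dequeue as: urgent first, then the two normal messages in their original arrival order.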

  14. A framework's artificial complexity comes only from the things that the framework forces on your code, not the options it enables if you choose to buy into them.

    AF and DQMH without priority queuing are no more or less complex than AF and DQMH with priority queuing. The priority queuing would only become part of the complexity computation if "priority" were a required input and you had to send some messages at a non-normal priority. Because you can use both frameworks entirely without ever using priority, it isn't part of the computation. It is an option that you can choose to exercise in your code or not.

    "If you buy it, you pay for it" features don't add complexity. "Comes with the territory" features add complexity.

    The requirement to create a class per message: that is complexity of the AF. The ability to send high-priority messages is not.

     

  15. > The tool is proprietary

    So you cannot just rebuild from source code because you don't own the source code? I'm going to assume the EULA on this tool lets you do this. If it doesn't, please don't tell me... that's between you and the code owner. But you might consider contacting the author of the tool and asking for the source code and/or for some specific changes. Sometimes asking works, and either would be less work than reverse engineering LV's internal EXE structure.

  16. I'm intellectually intrigued by the project, but I hesitate to help, since the tool you're building would allow someone to create a new EXE that looks like an EXE that might come from some reputable source but has had various key components replaced. That is, of course, something someone could do today (in LV or any other programming language), given enough effort and time. But it takes effort and time, and I don't think I should help short-circuit that, given who I work for.

    I am interested in your use case. I take it you have some EXE that you don't have the source code for but you need/want to make changes?

  17. OO is contradictory to functional programming as practiced by C#/Java/C++. Those languages insist on classes by pointer or reference (C++ can do classes by value, but doesn't commonly use them).

    OO is compatible with functional programming when it behaves by value, as it does in LabVIEW.

    But many functional languages consider OO to be a half step toward more functional features: dynamic dispatching is subsumed by pattern matching, for example.

     

  18. 14 hours ago, ShaunR said:

    What is "accidental complexity"? This sounds like an excuse given to management.

    Allow me to introduce you to implied spaces.

    When I build a two-story house, I consciously add a staircase between the zeroth and first floors. I add handrails for safety, optimize the height of the steps for average human legs, etc. I spend a lot of time designing the staircase.

    What I don't spend a lot of time designing is the room under the stairs. I put a door on it and turn it into a closet, a storage place for the people who live in the house.

    Now, the people who live in the house start using that storage space -- exactly as intended. But after a while, they are complaining that frequently, they need something at the back of the storage space, so they have to take everything out to get it and then put everything back in.

    You ask me, "Didn't you put other closets in the house?! Why aren't they storing more things in the other closets?" I did add other closets: I wasn't that short-sighted. But it turns out that this staircase closet is taller than any of the others, so it holds things nothing else holds... that wasn't intended; it just happens to work because it is under a two-story staircase. Also, this closet is central in the house, so it is closer than the other closets, and the users think the time needed to pull everything out to get to something at the back isn't *so* bad.

    The users of the space made it work, but there is accidental complexity in how they have evolved to use it. I didn't do anything wrong in the design, they didn't do anything wrong in giving me their requirements. It just happened with no one at fault.

    With this new understanding of my users, I refactor the house and add a second door on the short end of the stairs so people can pull from either end. Suddenly the under-the-stairs closet is not an implied space but an intended space.

    No matter how much you refine a design, there are always places implied within the design that are not spec'd out. It's a macroscopic aspect of Gödel's Incompleteness Theorem. Some things aren't designed; they just work the way they work because they're near the things that are designed. And when users start relying upon that implied functionality, that is accidental complexity.

    Inspiration for this post comes from Implied Spaces by Walter Jon Williams and Whit by Iain Banks, two science fiction novels that happened to give me good advice on software design. Accidentally... I think.
