Posts posted by Aristos Queue

  1. You rather severely missed the point on the Pi constant vs. computed Pi functionality. Sure, a function that can compute pi or c is usable in more places than a function that just computes pi. But a function that sometimes computes pi and sometimes computes c, where the caller never knows which one he is going to get, is substantially less useful. Worse still is a function that is documented to produce only pi but generates any number of random values when it is executed.

  2. You can't do it in code.

    Not in LabVIEW nor in most other languages TODAY. There is strong interest in integrating the ACL2 proof engine into various compilers, as it can provide such guarantees.

    I'm not fond of requirements that aren't explicitly spelled out in code--I think it significantly hinders readability--but I don't see any way to implement this kind of requirement in code.

    This is true of most of LSP. It can be proven to be impossible to machine check or to assert. Doesn't mean it isn't a requirement. If you have a problem with this, take it up with that Grand Omniscient Device before Whom we are all but mere peripherals.

    it should be validating the values returned from the method.

    By that argument, all outputs should be variants and the caller should validate that the data types are types it can use. It's exactly the same situation. A subVI needs to define what it does, and a caller depends upon that thing being done. And if part of that contract is a range limitation, then the caller doesn't have to do that check -- just like it doesn't have to do the data type check today.

    Every restriction put on a piece of code reduces the possible places where that code can be used.

    On this point, you are wrong. The more restrictions a piece of code has, the *more* places it can be used. A function that returns the constant value "Pi" can be used everywhere. A function that computes Pi on the fly cannot be used in some time critical systems. If a function has a design contract that says "this is my performance bound", then it can be used in more places. This is not an example I just pulled out of the sky... the interface for the C++ STL actually defines big-O performance bounds for any library implementing that interface for exactly this reason. Impossible to compiler check, but critical to the STL being usable by most systems. Functions that guarantee a range, or guarantee not to throw exceptions, or make any of 10,000 other guarantees go a long, long way toward increasing the amount that code can be reused.
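
    To make that concrete in text form, here is a minimal C++ sketch of my own (the names and numbers are purely illustrative, not anything from the STL or LabVIEW): the function that promises more can be used in contexts that the weaker function simply cannot enter.

        // Hypothetical illustration: the more a function promises, the more
        // contexts can legally use it.

        // Promise: constant value, O(1), never throws, always returns pi.
        constexpr double pi_constant() noexcept { return 3.141592653589793; }

        // No promises beyond the return type: might allocate, might throw,
        // unbounded execution time.
        double pi_computed();   // defined elsewhere

        // A compile-time or exception-free context can rely on the stronger contract:
        constexpr double tau = 2.0 * pi_constant();               // usable at compile time
        static_assert(noexcept(pi_constant()), "must not throw"); // provable guarantee
        // Neither of the two lines above would accept pi_computed().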

    But the overall goal isn't (I hope) to push as much error detection into the compiler as possible.  The goal is to give us a tool we can use to quickly create what we need to create.

    These two sentences are synonyms for any application of moderate scale. Run time error detection for things that can be caught by the compiler assumes a test bed that I know never actually gets built by the vast majority of the world's programmers. We're not talking about restrictions on what the caller can do with the code, we're talking about restrictions about what the called code can do to its caller (i.e., how can a child class screw a parent class framework).

    Even the Must Override and Must Call Parent flags are nothing more than reminders for child class developers.  I can't think of a single case where either of those flags guarantee the child class will be implemented correctly, or where using them is appropriate for all potential subclass situations.  Maybe there are some, I just can't think of any.  But I've run into lots of situations where they got in the way of what I was trying to do.

    The flags guarantee that child class developers implement some portion of the class correctly. Can the child class author still screw things up? Yes. That's the whole point of everything else discussed above -- there are an endless number of potential requirements and they cannot all be compiler checked. But some -- like these -- can be. As for not using them, I don't know what to say to you. The flags are the backbone and key to most every library I've ever released, critical to anyone ever making a large system workable. If these flags have ever gotten in your way, then either the author of the ancestor class didn't know what he or she was doing (i.e. putting Must Call Parent on every method without thinking) or you were trying to hack the system and these rightfully stopped you.

  3. Hmm... I don't understand how LSP can apply but DbC does not. 

    DbC (as I've heard it discussed) gets into planning with users and feature scheduling; pulls in way more topics than I'm speaking to. Not that it doesn't have something to do with this, but it's overly broad.

    https://en.wikipedia.org/wiki/Object_%28computer_science%29

    Three properties characterize objects:

    1. Identity: the property of an object that distinguishes it from other objects
    2. State: describes the data stored in the object
    3. Behavior: describes the methods in the object's interface by which the object can be used

  4. > Is what you're getting at more along the lines

    > of the Liskov Substitution Principle

     

    Yes.

     

    > or Design by Contract?

     

    No.

     

    LSP isn't just a good idea. It's a requirement for any system that you want to be well defined. Daklu, you bring up the testing case. The "behavior" in this case is that the child does whatever its parent does but it does it in its own way. So a child class for testing may do various asserts, logs, and may fake results. But you don't design it to have stateful side-effects on further downstream functions; otherwise you end up with an invalid test. I wish I had an easy example to give you at this point, but I don't have one at my fingertips this morning.

     

    The parent makes promises to its callers that "all values of me will behave within parameters X, Y and Z." All child classes are instances of the parent class (that's why they can travel on a parent class wire and be the value of a parent class control). They have to maintain any promises that the parent class made, so they have to adhere to X, Y and Z. Failure to do so -- even in a testing situation -- breaks the calling program. In release code, this results in undefined behavior that can send the calling program off the rails. In a testing situation, this results in both false positives and false negatives. The more that X, Y and Z can be defined in code, the easier time the child class author has and the more likely it is that the system actually works.

     

    Anyone who saw my presentation at NIWeek 2012 will recall my discussion about "a child class' override VI should only return error codes that the parent said could be returned." So even though the parent VI may not do anything on its diagram (and so never ever returns an error) it should still define the set of errors (in its documentation) that may be returned by any override VIs. This is a very hard promise to adhere to for various reasons, but if I am going to write a framework that calls the parent VI, I may need to test for specific error codes coming out of that function. If any child class that comes along could return an error code that I'm not prepared for, that's a problem.

     

    Something similar crops up with parents that document "the output will always be a number in the range 0 to 10". This is only covered in documentation. A child could return 11, but doing so will break callers that are expecting 0 through 10. If we could encode in the interface of the parent VI "this output must be 0 through 10", the compiler could break a child VI that tried to return 11. Or -- because figuring out that a given math expression won't ever return a number outside a range is often impossible to prove -- we could at the very least require that the output pass through an In Range & Coerce function before returning. Range is a promise that the parent VI can make to callers that is currently impossible to specify in the code.
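
    For the text-language folks, here is a minimal C++ sketch of that fallback idea (the class and method names are my own hypothetical examples): the compiler can't prove the range in general, but the parent can at least force every override's output through a range check, much like routing it through In Range & Coerce.

        // Hypothetical sketch: a parent enforcing its "0 through 10" promise on
        // every override's output at run time.
        #include <stdexcept>

        class Parent {
        public:
            // Callers use this non-virtual method; the range promise is checked here.
            double Compute() {
                double result = DoCompute();             // runs the child's implementation
                if (result < 0.0 || result > 10.0)
                    throw std::out_of_range("override violated the 0..10 contract");
                return result;
            }
            virtual ~Parent() = default;
        protected:
            virtual double DoCompute() { return 0.0; }   // parent's own implementation
        };

        class BadChild : public Parent {
        protected:
            double DoCompute() override { return 11.0; } // caught when called, not at compile time
        };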

     

    These are the kinds of conditions that I am talking about. It is why "Must Call Parent" is such a critical part of the LabVIEW architecture -- it solves a massive hole that exists in other programming languages for a parent being able to make certain promises to its callers. I cannot count how many times I have wished for something as simple as Must Call Parent in C++, C# and JAVA. The reason it doesn't exist there is -- so I have been told by people who work on those languages -- that in non-dataflow languages, it is hard for a compiler to prove whether the requirement is met or not. Personally, I think they're just not trying hard enough because it seems obvious to me how to implement it, but I don't work on those languages and I've never actually tried to make a compiler check one of those languages.
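
    To show what the flag actually asks for, here is a rough C++ rendering (Device and Oscilloscope are hypothetical classes of mine): nothing in C++ requires the marked line, and forgetting it still compiles cleanly, which is exactly the hole Must Call Parent plugs.

        // What "Must Call Parent" requires, sketched in C++.
        class Device {
        public:
            virtual void Initialize() { /* bookkeeping every child relies on */ }
            virtual ~Device() = default;
        };

        class Oscilloscope : public Device {
        public:
            void Initialize() override {
                Device::Initialize();   // <-- the call LabVIEW's flag can require and verify
                // ... scope-specific setup ...
            }
        };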

     

     

    Behavior is defined by what the object does when a given method is called and controlled by the object's state and the method's input parameters.

     

     

    No. Behavior is defined by what comes *out* of a method and the impact that running one method has on running later methods.

     

    Does all this make sense? Do you see how it relates back to the earlier comments about a parent defining the use cases for children? The parent does not make any promises about *how* the job gets done, but it does make promises about the data that comes out of the function and the impact on later function calls.



    I clearly haven't done enough to convey the critical nature of all of this to the LabVIEW community. These things are really hard to do in practice, and you can find plenty of examples where they weren't followed, but when you find one of those examples, you can almost always find a bug in the code related to that example. Breaking LSP means your code is broken. Unfortunately, LSP is something that can only be determined by human analysis, not by compiler analysis, though we continually try to upgrade the amount that can be algorithmically proved.

     

    And for the non-OO programmers: LSP applies to you too, but it is even harder to analyze because you don't have the formal structures to talk about. But it comes into play any time you have a plug-in system, refactor a module called by other modules, or use any sort of variant data or flattened-string interpretation.

  5. but many OO fanatics tend to pull the OO sword for everything

     

    I do pull the OOD for everything. I do NOT pull the OOP. That's the huge mistake that JAVA makes. When planning out a program, being able to say what object each piece of data is associated with gives you an organizational power that I haven't found anywhere else. And once you're done with the planning, you look at the plan and say, "It would be ridiculous to build an entire class for this concept, so I'm not going to do it and just write a function for that thing."

     

    Having said that, it does many programmers good to spend some time operating in the world where "you will only have functions that are members of classes." That whole "everything has a place" aspect of JAVA is actually a really valuable perspective, and in my experience, code written in any language by programmers who have spent time in JAVA is cleaner than code written by programmers who have only used free-spirit languages like C++ or LabVIEW. The other language that provides a needed discipline is LISP. I'd fully support any CS program that said "all freshmen will write in JAVA and LISP, alternating each week, and only then do we put the less-regimented languages in your hands." Unfortunately, most schools only do the JAVA part. And they never get around to handing the students LabVIEW. *sigh*

    Off topic (apologies).

    Is 2012 a load point? Or is it just that you finally loaded it in 2012? More generally, at what version is it planned that 2009 VIs will not be loadable?

    No, 2012 is not a cut point, just the latest version at the time. We aim to maintain it as long as is practical, and we have at this point maintained backward load support longer than at any other time in LV's history, so far as I can tell. I suspect the current load support will go for quite some time because there's not really a problematic older feature that all of us in R&D want to stop supporting.

  6. For the "too long; didn't read" crowd, just read the four boldface sentences. The rest is explanation.  :-)

     

    @drjdpowell and @flintstone:

    A child class that does something the parent never intended is a bad child class in almost all cases. The phrase "And you as the parent class designer do not know now what your class might be used for in e.g. three years from now" is false. The parent class designer establishes, in stone as it were, exactly the uses of a child class. Why? Because code will be written that uses the parent class and only the parent class, and that code knows nothing about the children that will eventually flow through it. All children are expected to match those expectations or they are going to have runtime problems. The more you can convert those runtime problems into compile time problems by letting the parent declare "these are the requirements", the more successful the authors of the child classes will be. This is true whether you are one developer working on an app by yourself or whether you are writing  a framework for third party developers to plug into.

     

    A child class needs to match its parent for Identity, State and Behavior or else a child cannot be effectively used in any framework written for the parent. The parent defines the invariants that all child classes will obey -- that's what allows frameworks to operate. The more that a language allows a parent to say "these are the exact requirements needed to be a well defined version of myself", the more power the language has to build frameworks that are guaranteed to work out of the box. The parent designs both for "the children will have free rein to do whatever they want here" and for "the children will do exactly this and nothing else here".

     

    I'll give you an example that we were discussing yesterday: dynamic dispatch on FPGA. At the moment, the parent implementation of a dynamic dispatch method just defines the connector pane of the method. It does not define the cycle time of the method. In order to write an FPGA framework where any child class can be plugged in and the framework works, there are cases where you need to be able to guarantee that the child override will execute in the same number of clock cycles as the parent implementation defines. Essentially, the parent implementation needs a way to say "I have three Feed Forward nodes on my diagram in series between this input and this output. I require all overrides to maintain the same amount of pipelining... their diagrams must have exactly three Feed Forward nodes in series." We were discussing ways to add that restriction declaration to LabVIEW and whether the compiler could really check it.

     

    I have plenty of other examples, from many languages, of parent classes that want to define particular limitations for children. LabVIEW has the scope restrictions, the Must Override restrictions, the DVR restrictions [particularly the one that says this one ancestor is the only one that can create DVRs for all the descendants]. When we someday have template classes, we'll have the ability for the parent class to set type limits, just like every other language that has templates.

     

    If you are defining a class that is a lot like a parent but violates Identity, State or Behavior, do not use inheritance; use containment instead. Delegate to a contained instance of the parent class when (if) appropriate. Or define a new grandparent class for the parent and move the common functionality up to the grandparent such that the invariants of the parent class are unchanged and you can now inherit off of the grandparent to get ONLY the functionality that your new piece requires.
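
    A rough C++ sketch of that containment option (Parent, NewThing, and the methods are hypothetical names of mine, just to show the shape):

        // Containment/delegation instead of inheritance: NewThing is *not* a Parent,
        // so it is never bound by Parent's promises; it just reuses Parent where that fits.
        class Parent {
        public:
            int BoundedValue() { return 7; }   // promised to callers: always 0..10
        };

        class NewThing {
        public:
            // Delegate only where the parent's behavior is actually what we want...
            int BoundedValue() { return helper.BoundedValue(); }
            // ...and define our own behavior elsewhere, with no inherited contract to honor.
            int UnboundedValue() { return 42; }
        private:
            Parent helper;   // contained instance, not a base class
        };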

     

    > (if you still can open it in your then current version of LV :P )

     

    We just last year walked a LV 4.0 VI all the way to LV 2012. It opened and ran just fine. You have to open it in LV 6.0, then LV 8.0 then LV 2012, as those are the defined load points, but the mutation paths have been fully maintained.

  7. Not to hijack the thread, but are you referring to the IPE structure here? I still use that a lot to minimize copies when manipulating large structures.

    Nope. The IPE is you telling LV what you are doing, not necessarily how to do it. Yes, it was named for asserting inplaceness, but that really only applies to the "Element In"/"Element Out" pair, and we never really came up with a better name for the structure. It simply provides a better way of saying "I am doing this complex operation of taking a value out, modifying it, and putting it back...LV, you figure out how to lay out the memory more effectively now that you have more information." I am referring to the overuse of the DVRs just to avoid copies where you're taking information away from LabVIEW so you can control the copies entirely. In some cases, yes, that's what you have to do, but with every rev of the compiler, those situations become rarer. Doing it knee-jerk just because you have a large data structure isn't good practice, IMHO.

    More than that.  The IPE makes explicit what the compiler tries to do anyway: minimize copies.  In fact, the IPE’s real benefit is as a guide to the programmer in writing code that can be done in place, rather than actually allowing in-placeness.  

    Exactly! Like what I wrote in the blog post about the Swap Values primitive. Yes, it can reduce data copies because LV knows that swapping is your intent, but it also makes it much clearer what you're doing with all those crossing wires, and that has benefit all its own.

  8. Really?  Seems like having a few extra copies of a concrete class hanging around in memory would be nearly unnoticeable in a desktop LabVIEW app.  The program doesn't do anything with those classes.  The total number of objects created from those classes is going to be the same regardless of whether you have 1 copy or 5 copies of the concrete class.

     

    Right. 5 copies of your class. Easily leading -- as I saw happen in the late 1990s with early naive implementations of C++ templates -- to multiplying the size of your entire program by 5 or 6. Took quite a few years to get it right. We can learn a lot from their trailblazing, but it still isn't dead obvious.

     

    Isn't a void wire a wire whose type isn't known?

    Sure. And we might be able to use it for that purpose. But there's no control for a Void type. And there's no template instantiation in the project tree or in the type propagation (because, ideally, types would be instantiated from the diagram the same way new queue types are created from wiring). And keep in mind we're not talking "save as" here... we're talking about a chained template that has to be kept up to date with the original and that only instantiates the type at *runtime*, not as actual source files. There are roughly a bijillion issues with template classes. I'd rank it as easier to do than interfaces, but it still isn't a one-release feature.

     

    -- Stephen

  9. Here's the problem: Dynamic Dispatch defines and requires an interface -- a ConPane -- a function prototype (this is clearly a requirement for run-time polymorphism). Additionally, only Dynamic Dispatch affords the contracts of Must Override and Must Call Parent. I find myself wanting to enforce these types of contracts ('Must Implement' and 'Must Call Parent'), yet with a separate ConPane for the concrete implementation.

     

    I'm still stuck back on the original problem, so bear with me.

     

    You want to enforce "must call parent", but you can't say what connector pane the parent function has? Huh?

     

    You want the parent to say "I have a function and it must be called at some point by my children, but I can't say what function they need to call it from and I can't say anything about their setup". At that point, why are these children at all? Why aren't you using a delegate?

     

    Which brings directly to Shaun's points...

     

    I have always argued that the only thing LV classes bring to the table above and beyond classic LabVIEW is Dynamic Dispatch and a way of organising that makes sense to OOP mindsets. If you are not using DD, then all you have is a different project style and a few wizards that name subVIs for you.

     

    And that's almost exactly what I've said from the beginning. Object-oriented programming adds encapsulation and inheritance. It brings the next level of organization to your code (you know how to organize a block diagram, this is how to organize the VIs in a module).

     

    And if you can't define the relationship, then it is just a regular subVI call, with no special Call Parent Node behavior definable.

     

    I'm really missing what relationship you think these two classes have and why you think there's any sort of child relationship involved.

  10. This is a new, interesting discussion altogether, but not the solution. It breaks down simply when you want different numbers of parameters on the ConPane for child implementations. (Whereas, genericity describes unknown type for any one parameter) That being said, genericity might alleviate (even, significantly) some of the root problems here.

     

    If this is true, then I definitely do not understand what you're asking for. How can a parent class have any need to call a function when it doesn't even know the parameter count? What possible need could a parent have to define *anything* for the child?

     

    Try again, from the top, using small words and pictures, please.

  11. Jack: After working through your thread, I think the answer you are looking for is what would be called template classes in C++ and generic classes in C#. There you would define the ancestor class in terms of type T -- not any specific type, but an unnamed type T. Think like the Queue primitives. You have many different types of queues: queues of strings, queues of integers, etc. All of them need a "copy data into the queue" operation. Obviously that cannot be defined by the parent "queue" class. And it cannot be done by dynamic dispatch because, as you point out in your examples, every child has a different interface for this operation. The templating/generics takes care of that. An entirely new class is instantiated by the compiler by supplying a concrete type to fill in for type T.
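
    A bare-bones C++ rendering of that idea (illustrative only; the class is my own toy example, and LabVIEW has no equivalent syntax today):

        // A class defined in terms of an unnamed type T. Supplying a concrete
        // type instantiates an entirely new class.
        #include <vector>

        template <typename T>
        class Queue {
        public:
            void Enqueue(const T& item) { data.push_back(item); }  // "copy data into the queue"
            T Dequeue() {
                T front = data.front();
                data.erase(data.begin());
                return front;
            }
            bool Empty() const { return data.empty(); }
        private:
            std::vector<T> data;
        };

        // The compiler stamps out Queue<int> and Queue<std::string> as two distinct classes:
        //     Queue<int>         intQueue;
        //     Queue<std::string> stringQueue;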

     

    R&D prototyped but never released generic VIs (I loathe the name because the terminology is way too overloaded, but I'll use it for now). We need a way for you to put a placeholder in the private data control and then, in your project, specify "I am using a new class which is the same as my generic class but with that placeholder filled in with <your specific type here>".

     

    Templates/generics have proved quite powerful in various languages, but they generally come with the need for an ultrasmart linker that can, at load time, create only one copy in memory of each specific concrete class, even when multiple modules join together, each of which may have instantiated the same concrete class. Such a linker also wants to duplicate only the specific methods that use type T's internals and not duplicate any methods that refer to type T in a way where all the generated assembly code is identical (i.e. T just passes through them but is not itself modified). That addresses ShaunR's memory concerns. Without such smart linkers, templates will bloat your code very very very quickly. I assume if we ever get this feature in LV that we will have learned from the other languages that have it and build the linker capabilities in from the outset.

  12. I use Reference based objects, just like NI does for the NI-IMAQ VIs.

     

    Citing the NI VIs as a template to follow is silly. All the modules like this date from a time when by-value objects did not exist. To achieve encapsulation of data, references were the only option. I am not saying they wouldn't be references if they were designed from scratch today... they might be. But the engineering evaluation was never made, so citing them as your reason for using references reads too much into their design. And they all have established patterns of usage that make converting to a by-value style today impossible, as there's really no way to do such a conversion and maintain backward compatibility.

     

    Very large waveforms, arrays, strings, and clusters have been used quite successfully by many people for many years in LabVIEW. Becoming a class does not change the calculus of what should be by value and what should be by reference.  If you are branching the wire, LabVIEW *may* make copies of the data. The majority of the time -- and the LV compiler keeps getting smarter about this with every release -- when it makes a copy, it is because a copy is needed, and one would be needed even if you were working with references; in the reference case, you'd just be dropping an explicit "do a deep copy of this reference" node.

     

    In my book, leaning on the LV compiler to give you better data copy control is a best practice. Trying to take control of that for yourself just burns up lots of processor cycles uselessly on memory constructs that add expensive thread synchronization when none is needed for the data accesses, and it cuts you off from all the compiler optimizations that come with dataflow.

  13. One can have a “Hardware” actor, that is the only process to actually communicate with the actual hardware.  That actor pulls from the hardware, but pushes status changes to other actors.  [AQ may be thinking of this “Hardware” actor rather than the actual hardware.]

    Sometimes I would suggest such a proxy, but there are some pieces of HW that behave more like streams of status updates: a sensor that continually sends back data, or a robotic arm that you give a final X, Y, Z coordinate to reach and that streams back to you the ongoing "here's where I am now" info, rather than you polling the HW continuously with "where are you now? how 'bout now?"

     

    If I had to build a proxy between the UI and the actual HW to make that happen, that would be my first choice of architecture, all other things being equal and no other information about the system. Essentially, I prefer to get to "push" as quick as possible, whatever software layers are required to achieve that. I find that it gives a more responsive system overall.

     

    And, for the record, this isn't just Actor Framework. This is any sort of "here's the UI process and over here is the HW process". So "actor" as a model of the system, not as a particular instance of Actor.lvclass.

  14. the reason why I chose this practice is...

    As long as you have a reason based on trying alternatives, not based on fear or a knee-jerk "cannot be done otherwise" response, you won't hear objections from me*. :-)

     

    * unless your logic strikes me as wildly off base, but that's not applicable here.

    I did consider that but couldn't utilize them. It works very nicely when you look up against the same type of key, say you are looking up a 'data-cluster' against the 'name string', but what if you want to look up the same 'data-cluster' against another type of key?

    You do what a SQL database does and double-key it. The first key maps to a second key. The second key maps to the data cluster. You can have as many different types of first keys as you want that all map to the same second key. Include some bit of ref counting in the second key to know how many first keys are pointing at it so that you know when to throw the data cluster away.
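
    A quick C++ sketch of that double-keying scheme (the types, field names, and example keys are purely illustrative):

        // Double-keyed lookup: many first keys, one shared record per data cluster.
        #include <map>
        #include <string>

        struct DataCluster { /* the actual payload */ };

        struct Record {
            DataCluster data;
            int refCount = 0;   // how many first keys currently point at this record
        };

        std::map<std::string, int> byName;     // first key: name   -> second key
        std::map<long long, int>   bySerial;   // first key: serial -> second key
        std::map<int, Record>      records;    // second key        -> the data cluster

        // Look up by either key type:
        //     records[ byName["thermocouple 3"] ].data
        //     records[ bySerial[987654321LL] ].data
        // When a first key is removed, decrement refCount; when it hits zero,
        // erase the record and throw the data cluster away.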

  15. Agree with Intaris, though I prefer push over pull:

    rather, interrogate any state information as and when you need it

    I don't encourage interrogating the HW. Instead let it push changes up to the UI when they happen. The UI just announces "I am connecting" or "I am disconnecting". After that, it is the HW's job to tell the UI about state changes. The benefit here is that you, the designer, do not ever get in the mindset of your UI holding and waiting for information from the HW, which leads to long pauses in your UI responses.
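
    In text-language terms, the shape I'm describing is roughly an observer/callback registration; here is a minimal C++ sketch (Hardware, Connect, and ReportState are hypothetical names of mine, not any actual LabVIEW or Actor Framework API):

        // The UI announces interest once; after that the HW pushes every change.
        #include <functional>
        #include <string>
        #include <utility>
        #include <vector>

        class Hardware {
        public:
            // UI calls this once: "I am connecting."
            void Connect(std::function<void(const std::string&)> onStateChange) {
                listeners.push_back(std::move(onStateChange));
            }
            // HW calls this whenever its state changes; the UI never polls.
            void ReportState(const std::string& newState) {
                for (auto& notify : listeners) notify(newState);
            }
        private:
            std::vector<std::function<void(const std::string&)>> listeners;
        };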

  16. With the caveat that I consider it a bad practice to use references unless you are backed into a corner and have no other options, then, yes, this is a good practice. Just remember that if *any* data member of your class is by reference then things work much MUCH better if *all* data members of your class are by reference. Trying to mix by value members with by reference members is legal but results in situations that many people screw up easily (i.e. the wire forks and now half of their class is copied and half of their class is the shared reference, leading to confusion about what exactly this object represents).

  17. If you are anywhere close to Austin, TX, there is a UT professor that offers a 1-day-per-week-for-three-weeks course that is spectacular. We sent the entire LV team through when we did the cross-over from C to C++ about a decade ago, and we still send new hires through from time to time if they come from a predominantly Java or C# background. Dr. Glenn Downing.

  18. To the best of my knowledge, the answer is "no". We've had the XStructure on the drawing board for almost a decade but no one has gotten around to building it. There's minimal impetus to do so... the number of control structures that we've built into LabVIEW over the years has been fairly small -- they still fit in one palette. Very very few ideas are put forth that call for a new flow-control structure, so no one has felt compelled to build the G mechanism for defining one.

    I suppose in theory you could build up an XNode that draws a very complex inner image and maintain all your own editor state operations for the fake diagram inside your node, but the work involved would be excessive in the extreme.

    If you look at the Sim.xnode, I think you'll find a bunch of stuff that isn't actually implemented in G. I know, for example, that their whole interior is back in the C++ code. I don't know how extractable that layer is for a different XNode.

  19. jcarmoody:

    That's how far enforcement of copyright naturally extends.

    Yes. AS DOES ALL LAW.

    All law is based on the ultimate ability of a government to force compliance. Any sufficiently flagrant violation must be met with force or the law is meaningless. The whole point of the law is to make sure we are all conforming and to penalize non-conformity. That's the *goal*. Now, when we talk about ethical government, we talk about force commensurate with the particular law being broken. We talk about rational checks and balances on the use of force. We talk about law written to maximize certain principles and minimize others.

    If I own a piece of paper and a pen, and I write on that paper something that I read somewhere else, the natural conclusion to copyright arguments is that the original author and/or government is justified in using whatever force is necessary (including murder) to prevent me from selling it.
    Yes. Exactly. Now the open questions are:

    A) Is this a justified law?

    B) If so, what is the justified force to use in enforcing it?

    Those "appeals to emotion" that you hear are answers to these questions. Asking whether it is fair for X to be penalized if Y is allowed can be seen as an emotional appeal, but it is also asking the question of just balance.

  20. Nope... this one is password protected because that property is unsafe. Toggling the retain wire values at the wrong time causes LV to enter a bad state and crash. I have no idea if this is a technical limitation (i.e. adding the code needed to detect the crash and just return an error would require some change that would have a runtime performance impact) or if this is just a crash we decided not to fix. Regardless, this feature was not made public for a reason. 

     

    As I've said before, we don't make things private just to be obnoxious.

     

    > Also, must every topic containing a mention of disabling LabVIEW's password "protection" turn into a moral debate?

     

    No. It is a moral debate from the beginning. It never turns into one. ;)

     

     

    I think if someone locks their block diagram for a simple feature that should have been openly included in the API they are making a less than benevolent statement.  On the one hand they are freely providing help that they are not going to benefit from commercially.  On the other hand they are stopping peers from learning how it was done.  Why?  It's basically saying hey look how good I am!

     

     

     

    In this particular case, one could make the argument that the NI employee who posted this VI used poor judgement in releasing it, as the VI has the same potential flaw as the property node itself. There is nothing special about the VI that would make it any safer to use than the property node, which we deliberately made private. From that point of view, one would see it as a case of "R&D made the call to make it private, others who have access to private things decided to make it public without actually doing the legwork needed to make it public, but private things must be password protected to limit the spread of them, so they followed procedure and locked it down." Because there are times when wrapping an unsafe property node in some G code can turn it into a safe node, and in those cases, the node should be viewed as atomic... there isn't any "taking it apart" any more than you take apart a compiled DLL. Demanding to see the G code is absurd since all you're doing is disturbing the parts that make it safe to use.

     

     

    From another point of view, it is *mostly* safe, and releasing it as a VI on LAVA is vastly different from it being discoverable in the property lists of LabVIEW ... someone who gets that VI probably gets it from a place where it comes with the warnings about "may crash your LV", whereas someone discovering it in the property list would not.

     

    In neither case is it "look how good I am". It is more that exposing it one way promotes it differently than promoting it another way.

     

    No one is preventing peers from learning anything. There isn't anything to learn in these cases.

     

     

    The evidence that a crime was committed is that they are using the software, and (hopefully for them!) making profit out of it. Cheaper/free alternatives probably exist, but the "thief" chose to use your software. Ergo you have lost a sale and thus money has been stolen from you.

     

    In this instance there is a crime that has been committed, and there should be some enforcement; probably not arrest though.

     

     

    Doesn't even matter if they are making a profit from it. If you play a video game that you didn't pay for, you aren't making a profit. You're still stealing. Cheaper and free alternatives might *not* exist -- they definitely don't exist in the case of a game title, and often don't exist for specific types of professional software. Doesn't matter.

     

     

    If nothing else, you commit a crime by promising to abide by the EULA and then not doing so. That's called lying.

     

     

    Actually, due to its virtually zero distribution cost I think all software should be free for non-commercial (i.e. playing about) purposes. Smart businesses would realise that once somebody becomes familiar with a tool they are more than likely to use it in a commercial sense; in this case it would then be used in an environment where it had legally been paid for, so the software vendor wins anyway.

     

     

     

    Tell that to the game manufacturers... their software is *always* used for a non-commercial purpose. :-) Even if you limit this to production-type software, there's a problem of the software author screwing his own paying customers. You have to be very careful ... if you provide your software free to one group and charge another group, both groups can now produce whatever it is your software provides. That means you're empowering a non-profit group to completely undercut your paying customers. I have seen student projects worked up in dorm rooms that can be every bit as good as the professional software. You'll lose your paying customers if *their* market is undercut by labor willing to work for free that has access to the same high-quality tools.

     

    I'm not saying that there isn't merit in the idea, but it isn't the slam dunk you claim.

     

     

    anyway who more than NI knows that if they really want their IP to be safe they should remove the block diagram?  :shifty:

     

     

     

    Which would be problematic since then the VI wouldn't recompile for upgrades or alternate chip sets or when you change the compile optimization settings.

     

    but it's impossible to "steal" software because the original "owner" hasn't lost anything.

     

     

     

    You gained an ability you did not have by using your copied software. The owner has also lost an ability -- the ability to make further software. The owner has lost the chance to recoup the time, talent and treasure that went into making the software in the first place. Just as what you gained by your theft was an opportunity cost, what they lost by your theft was an opportunity cost. Measuring gain/loss based on first-order value is very 19th century. As any capitalist will tell you, money loses value just sitting in the vault. We measure the economic value of all sorts of things as what they could be traded for, not what they actually are traded for. Software is no different. Weigh the cost of buying the software vs the cost of writing it yourself.
     

    Flarn: I realize I have contributed to the hijacking of your thread, but I figure it's ok because I also contributed to the main topic at hand first. :-)

  21. This isn't the first time the EULA issue has been brought up. As long as I'm not doing what I'm doing to make a profit or anything like that (or to enable anyone else to), I'm pretty sure I'm in the clear. Correct me if I'm wrong, of course.

     

    I hope what you are doing *is* fine. I would want to live in a world where it is fine. I tried asking our NI lawyers, without mentioning your name or pointing in LAVA's direction, whether this sort of thing is ok. You know what answer I got? It is *illegal* for our lawyers to answer that question on your behalf. We have published the EULA. It is up to you, in consultation with *your* lawyers, to decide what that EULA means and whether it applies to you, and if at some future point we feel that you are not abiding by the EULA, that's what judges are for. I was very plain... "Are you saying we cannot tell our customers what they can and cannot do with our software?" They replied, "Of course we can. That's what the EULA does. We can't put it in any other language as that would be considered a separate binding contract."

     

    Let me repeat, just so it is clear... this isn't an NI thing. This is the legal requirements of lawyers operating in the USA: they cannot give legal advice to clients they do not represent, especially if those clients are possible future opponents in court to clients they do represent.

     

    So then I asked the next logical question... can the *business* side of NI tell our customers whether what they are doing meets the EULA or not? Answer: Yes, but we would advise them not to do so because it wouldn't be binding. In other words, the EULA is binding, and if someone from NI says, "It's ok for you to do that," you might still be sued later if we decide we didn't mean it and as long as it isn't an official legal opinion, it doesn't amend the EULA.

     

    This is what is referred to as "adversarial justice", which is the style of law that we have established in the USA. Lawyers are meant to square off against each other and judges decide who wins. In the meantime, a company, indeed, an individual, should treat all other parties as legal enemies, even those that they are friends with in other domains, i.e., commerce.

     

    The more I learn about this stuff the more I am convinced we need a really different legal system for some of these modern constructs.

     

    By the way, here's a fun one... I heard about a company that has the children of workers come in after school to install software. The kids click on the EULA when doing the installation. Contracts signed with minors are non-binding, so this company feels that the software, by failing to validate the competency of the person clicking that I AGREE button, has allowed itself to be run without any of the EULA terms applying. Is this legally sound? Their lawyers think so and it hasn't been challenged in a court of law. And until you've been to court, saying something is legal or illegal is impossible. It is just what your lawyer says is likely to be legal. Yeesh.

     

    Good luck, flarn. At the end of the day, I think the only thing I can say is that as long as you're not wealthy from any revenue source, you're probably safe from lawyers. Probably. Although Rolf makes a good point about why you might not be. So if you're going to use any software you didn't write yourself, I think you have to retain a good lawyer. Unless your EULA requires binding arbitration, as some do nowadays.

     

    Yeesh.
