Secret features: ethics of developing them



QUOTE (ggatling @ Apr 8 2009, 01:04 AM)

Hear, hear! I have long opposed NI's practice of adding horrible buggy code into the Latest, Greatest release together with all the bug fixes for the previous version. I feel certain NI's current release schedule is driven by a business model that does *not* center around releasing an amazingly powerful, undeniably stable software product. That being said, LabVIEW is still the best tool for my job, so I make do.

(Clearly still licking my wounds from 8.0)

Could we start a poll on how many people would want an LTS (long-term support, i.e. bug fixes with no new features) version of LabVIEW released every, say, 3 or 4 years?

This way we could have a cutting-edge version and a more stable (but perhaps less exciting) version with long-term bug fixes. Of course these bug fixes can be in the newest cutting-edge version, but the difference is the addition of NEW features (and bugs).

I've long thought this would be a very good way to go.

Shane.


I'm going to say this one more time: you are confusing undocumented features with experimental features. They are not the same.

If the software *you* create with LabVIEW uses undocumented LabVIEW features (like scripting, since that seems to be everyone's hot-button at the moment), then that is *your* responsibility. If you're using undocumented features (and, truth be told, we usually do because we've gone digging for them), then you're introducing a risk. It's not NI's fault that you chose to use them - it's *yours*. That said, I use undocumented features in my architectural designs all the time - I know that they're risky, but I also use them for their feature pay-off. It's my risk, and I'm willing to take it.

As an aside - scripting is *not* experimental, it's just undocumented... for now.

QUOTE (shoneill @ Apr 8 2009, 04:41 AM)

Could we start a poll on how many people would want an LTS (long-term support, i.e. bug fixes with no new features) version of LabVIEW released every, say, 3 or 4 years?

I think it would be more valuable to ask how many people would be willing to foot the bill for an LTS version of LabVIEW. I don't know, of course, but I'd say the response would be very low. NI is a company that needs to make money, and unless y'all can band together and put your money where your mouth is, then it's not going to happen. As a parallel, I think the number of customers willing to pay for LTS is even smaller than the number who want support for everything NI on the Mac, and that's not going to happen either.


QUOTE (crelf @ Apr 8 2009, 08:50 AM)

I'm going to say this one more time: you are confusing undocumented features with experimental features. They are not the same.

I'm not confusing them at all. I'm just hijacking the thread! :ninja: I have no complaints with the inclusion of undocumented features or locked features. But enough "tested", "non-experimental" features (documented, secret, 1337-only,...) make it into release versions with bugs, occasionally crippling ones, to give me the heebie-jeebies about each new version. Of course NI cannot wait until LabVIEW is perfected and bug-free to release it. There is a balance somewhere between adding innovative new features and refactoring, debugging, or otherwise fixing existing code. I'm not sure where the Right Place is on that spectrum, if there is one, but I was making the argument that I feel like NI does not incorporate enough of the latter, especially given the fact that to get the bug fixes I must also accept the new features.


QUOTE (ggatling @ Apr 8 2009, 11:07 AM)

Ha! :D Well put. You might not be confusing them, but I think others are...

QUOTE (ggatling @ Apr 8 2009, 11:07 AM)

There is a balance somewhere between adding innovative new features and refactoring, debugging, or otherwise fixing existing code. I'm not sure where the Right Place is on that spectrum, if there is one, but I was making the argument that I feel like NI does not incorporate enough of the latter, especially given the fact that to get the bug fixes I must also accept the new features.

Gotcha. I think that, irrespective of instability moans, NI will continue to innovate. Just as long as they do so while, in parallel, they fix issues. I think they do a pretty good job of it (especially when you look at the publicly exposed issues-fixed lists that come with each new version - imagine how many are on the private one :) ), considering the size of their team and the size of (read: incoming cash flow from) their client base. I'm not saying that the process is perfect, but for what we've got, it's not horrible.


QUOTE (bsvingen @ Apr 7 2009, 11:12 PM)

One reason is that statistics show that computerized control is more fail-safe than any other alternative (mechanical or electrical).

Please produce these so-called statistics. When the brakes fail on my car (or the electronic systems that control them), I still pull the mechanical emergency brake lever.

QUOTE

The other reason is that digital control enables the processes to run much more efficiently, because they are pushed far beyond what would previously have been called a safe, or even operational, state. Take for instance the fuel injection in any ordinary modern car (common rail TDI). If the controller shuts down, the engine stops in the same manner as if the crank broke, but this is an unproblematic event (for anyone except the driver ;) ). A much bigger problem occurs if the controller does something wrong, like too much fuel, uneven fuel distribution, too much boost, etc. due to a software error. This would be the same as if the crank suddenly changed the relative angles of the pistons (not that it is possible, but it would produce similar problems).

You act as if (1) there are no mechanical or electrical interlocks in place anywhere in the system to prevent the software from actually getting what it wants, and (2) there aren't independent subsystems on the car that impose other software-based interlocks on each other. I assure you there are.

QUOTE

All larger modern ships are controlled "by wire" with very sophisticated, almost AI-like navigational and maneuvering systems using GPS, radar, ultrasonic sensors, inertia sensors, etc.; most of them don't even have steering wheels, and those that do have them for show. A software error due to some hidden and undocumented experimental feature included for the sake of "testing" something new is completely out of the question. Airbus airliners use fly-by-wire, and have done so for decades already. They use a system with several computers working in parallel, and there is no manual or mechanical backup whatsoever.

Again, I dispute your claim that there is "no manual or mechanical backup whatsoever" in those systems. I will absolutely guarantee that there are thousands and thousands of electromechanical interlocks throughout those systems that exist solely to prevent the software from getting what it wants if it tries to do something stupid. That's not to say that something couldn't be overlooked and that a software error couldn't cause a catastrophe, but in fact those designs address that concern, too, by using multiple redundant levels of interlocks. Your continuing claim that there's nothing in place to anticipate the possibility of a software malfunction is patently false.

QUOTE

The point is, computerized control is as much a part of a system as the crank shaft is a part of an engine.

This, we agree on.

QUOTE

The same reliability and quality is expected of the software in those controllers as is expected of the alloy content in the pistons, or the structural integrity of the ship's hull, or the main beams of the wings.

Yes, but the fact that we demand reliability from the control software doesn't mean that we've stopped using mechanical and electrical safety systems.

QUOTE

Undocumented and unrevealed features in the software are about as expected and acceptable as having part of the ship secretly made of some unknown experimental paper foil because one of the engineers had a "bright" idea.

That's what unit testing is for.

For the record, I also agree with crelf that you're confusing undocumented features with experimental features.

Look, I'm not saying that every developer in the world should go plunging willy-nilly into every undocumented (or experimental!) nook and cranny of their development tools. However, if what we're talking about is the ethics of undocumented or experimental features in LabVIEW, then the ethical risk of using those features is borne by you, the developer. Also borne by you is the ethical risk of shipping software that doesn't do what it's supposed to. So if you're not willing to assume the risks associated with hidden/undocumented/experimental features in a given situation, then you definitely shouldn't use them. And in any case, you should thoroughly test your code to prove that it works correctly. But you can't say that NI is a fundamentally unethical company just because these features exist. Unless, like I said, you intend to impugn every single software development group in the entire world.


QUOTE

Your continuing claim that there's nothing in place to anticipate the possibility of a software malfunction is patently false.

I have never claimed that; you are reading too much into it. What I said was that the same quality and reliability is expected of the software as is expected of every other critical component or subsystem, and that modern systems are increasingly becoming all digital (no old-fashioned mechanical operational mode exists). This leads to my main point: including half-finished code in the finished product is not the way to assure the quality and reliability needed for mission-critical tasks. There are way too many things that can go wrong as it is; you simply do not add more. In the same way that you would not expect critical mechanical components to fail, you would not expect critical software to fail. It is then the job of the engineers to do everything they can to prevent critical components/software from failing. This means, among other things, removing unnecessary failure modes.

The Airbus is all digital, and so is the B-777 (you should read that article about Ada). What this means is that there is no way to control the airplane without going through at least one computer. The Airbus A-320 actually has 5 parallel lines, 5 times redundancy, but it is still all digital; there is no manual/mechanical mode except possibly manual elevator trim.

QUOTE

you are confusing undocumented features with experimental features

Maybe, I am after all a confusing person :blink: But there is a third alternative: undocumented and experimental :ninja:


QUOTE (bsvingen @ Apr 8 2009, 09:26 PM)

I have never claimed that; you are reading too much into it. What I said was that the same quality and reliability is expected of the software as is expected of every other critical component or subsystem, and that modern systems are increasingly becoming all digital (no old-fashioned mechanical operational mode exists). This leads to my main point: including half-finished code in the finished product is not the way to assure the quality and reliability needed for mission-critical tasks. There are way too many things that can go wrong as it is; you simply do not add more. In the same way that you would not expect critical mechanical components to fail, you would not expect critical software to fail. It is then the job of the engineers to do everything they can to prevent critical components/software from failing. This means, among other things, removing unnecessary failure modes.

The Airbus is all digital, and so is the B-777 (you should read that article about Ada). What this means is that there is no way to control the airplane without going through at least one computer. The Airbus A-320 actually has 5 parallel lines, 5 times redundancy, but it is still all digital; there is no manual/mechanical mode except possibly manual elevator trim.

Maybe, I am after all a confusing person :blink: But there is a third alternative: undocumented and experimental :ninja:

I think if you speak to those who have flown those aircraft you will find that there are actually experimental "options" in them. AFAIK there has never been an operational aircraft, let alone any other complex system, that is completely closed and "insulated" in the way you suggest.


QUOTE (Val Brown @ Apr 9 2009, 06:00 AM)

I think if you speak to those who have flown those aircraft you will find that there are actually experimental "options" in them. AFAIK there has never been an operational aircraft, let alone any other complex system, that is completely closed and "insulated" in the way you suggest.

I am not sure what you are referring to - insulated? The flight control systems are considered IP by the companies involved, and are pretty much closed in that respect, but a certified aircraft has absolutely no experimental features on board; everything has to be certified under international and local regulations. Certification is a lengthy process and is mostly about documenting every little aspect of every little bit.


QUOTE (bsvingen @ Apr 9 2009, 07:21 AM)

... but a certified aircraft has absolutely no experimental features on board; everything has to be certified under international and local regulations. Certification is a lengthy process and is mostly about documenting every little aspect of every little bit.

Our company works in railroad certification, and for the new 'experimental' stuff you mention we issue a 'Verklaring geen bezwaar' (declaration of no objection) that allows us (or others) to place non-regulated systems in normal operation. Such a declaration is very strict, and the people handing out those certificates check and test everything beforehand. I don't recall any malfunction caused by a system placed that way.

I think the same might go for other transportation systems.

Ton


QUOTE (Ton @ Apr 9 2009, 07:37 AM)

All public transportation is soaked in certification systems, but private transportation is a whole different matter. My case in point is the airplane I am building for myself. It is a two-seat RV-4, fully aerobatic, and will be registered as EXPERIMENTAL :D In contrast to ordinary private aircraft, I am free to put any instrumentation in it I choose, including fully experimental units, and I will most probably go with just a single unit of this and an extra GPS. For the first 40 hours of flight I have to fly alone, testing it: recording speed, climb rate, etc., something I don't want to do manually :rolleyes: Maybe a netbook of some kind running LV and hooked up to the Voyager through USB or CAN. But at the pace I am building, it will take some years...


Re: ethics of developing

I carry a Swiss Champ Swiss Army knife. One of the widgets on the flip side is a hook-type thingy that many people have no idea how to use. The rest of the knife retains its usefulness despite the fact that it has an "un-documented feature"*.

Now if popping out that hook gizmo resulted in the knife blades going dull, or the sharp edge switching to the other edge, then I would question whether including that hook thingy without documenting it was a good idea. Going further, if the knife was sold as a self-defence device, I would feel that I should clearly document the possible negative effects of extending the hook when the knife was being used for self-defence.

If I were the developer of virtual instrument panels for aircraft and I thought I could expand my customer base by allowing the pilots to choose their own color scheme, I might experiment with adding this feature. Provided I did not expose this feature to the customer and stayed with the original color scheme in the shipping code, I do not see how I would be causing harm to them. Later, if I exposed the color options WITHOUT putting in place logic to ensure they never choose black-text-on-black-background, and I knew about this issue, then I would feel that I am negligent.
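(If I did expose the option, the interlock itself would be trivial - here's a hypothetical sketch, written as text code only because I can't draw a panel here, and the names and threshold are made up:)

```python
# Made-up example of the interlock: refuse any scheme where the text would be
# unreadable against the background, e.g. black-on-black.

def contrast_ok(text_rgb, background_rgb, min_delta=80):
    """Crude readability check: the perceived brightness of text and background
    must differ by at least min_delta on a 0-255 scale."""
    def luminance(rgb):
        r, g, b = rgb
        return 0.299 * r + 0.587 * g + 0.114 * b   # standard luma weights
    return abs(luminance(text_rgb) - luminance(background_rgb)) >= min_delta

def apply_color_scheme(panel, text_rgb, background_rgb):
    if not contrast_ok(text_rgb, background_rgb):
        raise ValueError("unreadable color scheme rejected")   # the interlock
    panel.set_colors(text_rgb, background_rgb)   # 'panel' API is hypothetical

# apply_color_scheme(panel, (0, 0, 0), (0, 0, 0))  -> rejected outright
```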

In NI's case their code falls into at least two categories: code used to develop code, and code used by the end user. The former is LV and the latter is NSV etc.

Since the LV development environment is intended to develop code, as long as it allows us to do that it could have an easter egg that gave step-by-step instructions to build a bomb and I would still consider it ethical. But if LV produced exe's with that easter egg built in, then I would question the wisdom of using LV, since I would not be able to look my customer in the eye knowing that the easter egg could be there.

Another thought:

Are any of you out there old enough to remember people asking "Hmmm... I wonder what that asterisk and pound sign thing are for?" when phones switched from rotary dial to push-button dialing? In that case new undocumented features were introduced, and few people questioned the ethics of adding the asterisk and octothorpe.

As engineers and scientists, we find that society is constantly looking to us to answer these technology questions (someone has to interpret the entrails), and I am glad this Q is being discussed.

Here is an example of what I discounted as a misuse of my engineering skills. I drive thru Yuppy-town on the way to work and home every day. During this short drive I can encounter up to a dozen people driving stupid with a cell phone to their ear. The inspiration hit me (shortly after one of them almost hit me) to build a "Cell Phone Zapper" that I could carry in my car and use to blow out the receiver circuits of any cell phone I point it at within range. Sure, I could get my jollies and possibly make some money selling them, BUT it just ain't right! "Just because we could do something does not mean we should." (paraphrasing Jurassic Park).

Excuse me please if any of that was offensive.

Ben

* The hook on the back side of the Swiss Army knife is intended to make carrying heavy parcels bound with string easier as per the official documentation.


QUOTE (Darren @ Apr 9 2009, 07:20 AM)

Ben, your use of the term octothorpe just made my day! I thought my dad and I were the only ones who used the true name of the # symbol!

History shows that I choose between "octothorpe" and "pound sign" somewhat randomly, but with a weighting factor that skews toward "octothorpe" the more I drink.


QUOTE (Darren @ Apr 9 2009, 11:20 AM)

Another "stupid-foriegner" story: I had no idea what a "pound key" was when I came to the US. I'd call into conference lines that said "enter your conference code followed by the pound key". I'd look at the phone and think "where's the £? Maybe just this particular phone I'm using doesn't have one. I'll just mash the keypad until I get the operator." In other countries, we call it the hash key (not to be confused as a speed dial to order illicit drugs.

QUOTE (Justin Goeres @ Apr 9 2009, 12:25 PM)

...That's why unit testing is so critical, because it verifies all the behaviors of the software that matter.

That's a great way of putting it, Justin. IMHO, you can put whatever the hell you like in the code you deliver to me, as long as it is proven to meet my requirements. The way to do that is to make sure there are appropriate and traceable requirements, and you use accepted testing to prove you met said requirements.


QUOTE

That continues to be my point: the world already assumes that software is unreliable, we design for it, and the fact that there are undocumented or experimental features in either the LabVIEW development environment or the runtime that runs the compiled code is not evidence that National Instruments is doing anything unethical.

You have to understand that a mechanical device and a digital device doing the same basic function usually have very different failure modes. It is not a question of whether software is inherently unreliable by default; it is the number of failure modes that are added when going for a digital solution, and the character of those failure modes.

An example is the ignition system for an engine. A mechanical system consists of lots of moving parts and works until something burns or breaks or gets worn. The failure modes are complete failure on all or some cylinders, or a drop in performance. Two systems running in parallel will increase MTTF by several orders of magnitude, because all the failure modes in one system are covered by a functional second system. It can even run when both systems have failed, if the failures are on different cylinders or just general poor performance.

A digital system is by default much more reliable because there are no moving parts; there is nothing to break. So it is easy to assume that one digital system is much more reliable than a parallel mechanical system, because all the failure modes of the mechanical system are removed in the digital system. So, if two digital systems are mounted in parallel, it should be fail-safe. This is obviously not correct. For a start, a digital system needs electrical power, so the redundancy is transferred to the power distribution. Further, your system must handle all the failure modes in the power distribution (total and intermittent loss of power, power spikes, etc.).

The most important issue is the timing (the basic functionality of the ignition system). In a mechanical system this is handled by cams and contact points. They can break or become worn out, but nothing more can happen. In a digital system the timing has to be programmed, and this is one of the main reasons you want a digital system: you can program the timing so it is optimal under all conditions (increasing HP and lowering fuel consumption). But this is also where you can add an infinite number of failure modes. A new and dangerous failure mode is too-advanced timing for one or several cylinders (ignition happens too early), due for instance to a power spike upsetting the controller. This can completely ruin the engine within a matter of minutes. But the real issue is that a parallel system will not prevent this; it will only make it worse (if ignition has already happened, you cannot "unignite"). Normal redundancy will double the probability of this failure mode. This is a general characteristic of digital control systems: normal redundancy does not work, because you have added failure modes that cannot be solved by redundancy. In fact you would be better off with just one controller and a focus on reliable power distribution.

In flight control systems this is solved by using at least 3 controllers and using the output of the two that agree most closely (a rough sketch of the idea is below). So the need for triple (and more) redundancy in digital systems is due to the nature of the failure modes, nothing else, and certainly not because it is assumed that software developers include undocumented and experimental trash in finished products. I mean, companies and governments are spending billions upon billions of dollars to make things as safe and reliable as possible. There is no way anyone can convince me that including undocumented and experimental code that can potentially lower this reliability is ethical.
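Just to make the voting idea concrete, here is a made-up sketch in plain Python (nothing like a certified implementation; the tolerance and the example values are invented):

```python
# Hypothetical 2-out-of-3 voter: take the two channels that agree most closely,
# use their average, and flag the remaining channel as suspect. Real flight
# control voters also handle channel latching, dissimilar hardware, monitoring
# lanes, etc. - this only shows the basic principle.

def vote(a, b, c, tolerance=0.05):
    pairs = [((a, b), abs(a - b), "channel C suspect"),
             ((a, c), abs(a - c), "channel B suspect"),
             ((b, c), abs(b - c), "channel A suspect")]
    (x, y), diff, suspect = min(pairs, key=lambda p: p[1])
    if diff > tolerance:
        raise RuntimeError("no two channels agree - revert to a safe state")
    return (x + y) / 2.0, suspect

# The two agreeing channels win; the disagreeing channel C is flagged:
print(vote(4.00, 4.02, 9.87))
```

The point is that this extra machinery exists because of the failure modes of digital control, not because anyone plans for junk code in the controllers.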


QUOTE (bsvingen @ Apr 10 2009, 12:37 AM)

...There is no way anyone can convince me that including undocumented and experimental code that can potentially lower this reliability is ethical.

OK, you believe it's unethical. Nonetheless, it happens, and varieties of it happen all the time by design, and it is sometimes not documented BECAUSE it's "experimental". Calling it unethical to do that really doesn't change much in what happens in the real world, nor does it alter the fact that this happens for many good reasons, not the least of which is that it really is impossible to anticipate all possible failure modes of a real-time, physically implemented system.

So, if you are going to be consistent with your beliefs -- and also not be unethical yourself -- I would guess that means that you can no longer use LabVIEW.


QUOTE (bsvingen @ Apr 9 2009, 11:37 PM)

There is no way anyone can convince me that including undocumented and experimental code that can potentially lower this reliability is ethical. (emphasis added)

But there's the rub: "code that can potentially lower this reliability."

You seem to believe that the mere existence of any undocumented features, or experimental features that are left in the software (or the runtime that it executes in) but not exposed to the user, automatically constitutes a lower-reliability system. I say unit test the code, and design the system so that no individual piece puts too much trust in any other piece.
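To be concrete about what I mean by "unit test the code", here's a throwaway, hypothetical example - textual Python only because I can't paste a VI into a forum post, and the conversion and names are made up:

```python
# Hypothetical example: whatever undocumented features may lurk in the tool chain,
# the behavior my system actually relies on is pinned down by tests like these.
import unittest

def scale_reading(raw_counts, gain=0.125, offset=-2.0):
    """Convert raw ADC counts to engineering units (made-up conversion)."""
    if not (0 <= raw_counts <= 4095):
        raise ValueError("raw_counts out of range")  # don't blindly trust the caller
    return raw_counts * gain + offset

class TestScaleReading(unittest.TestCase):
    def test_nominal(self):
        self.assertAlmostEqual(scale_reading(2048), 254.0)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            scale_reading(-1)

if __name__ == "__main__":
    unittest.main()
```

If every behavior that matters is pinned down like that, the question of what else might be hiding in the development environment becomes a lot less scary.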


QUOTE (Justin Goeres @ Apr 10 2009, 09:03 AM)

I say unit test the code, and design the system so that no individual piece puts too much trust in any other piece.

You totally had me until the comma. I completely agree with the unit testing, but I think the second part of that sentence is irrelevant, and possibly incite-ful. If you successfully unit test your code to well-written requirements (using standard processes, of course) then you're covered - the system design need not be considered.


QUOTE (Justin Goeres @ Apr 10 2009, 05:03 AM)

But there's the rub: "code that can potentially lower this reliability."

You seem to believe that the mere existence of any undocumented features, or experimental features that are left in the software (or the runtime that it executes in) but not exposed to the user, automatically constitutes a lower-reliability system. I say unit test the code, and design the system so that no individual piece puts too much trust in any other piece.

I totally agree with Justin. What if the compiler used by NI to build LabVIEW itself contained an undocumented feature? What if the operating system contained an undocumented feature, or else the OS used to run the compiler that built LabVIEW? The compiler is just a set of bits which generates another set of bits (LabVIEW.exe) which generates another set of bits (your app) which is controlling the action. Just because their creation is separated in time doesn't mean it's not just one system. Testing is the only way to validate behavior, not any proof about a closed set of features.


QUOTE (jdunham @ Apr 10 2009, 08:27 AM)

I totally agree with Justin. What if the compiler used by NI to build LabVIEW itself contained an undocumented feature? What if the operating system contained an undocumented feature, or else the OS used to run the compiler that built LabVIEW? The compiler is just a set of bits which generates another set of bits (LabVIEW.exe) which generates another set of bits (your app) which is controlling the action. Just because their creation is separated in time doesn't mean it's not just one system. Testing is the only way to validate behavior, not any proof about a closed set of features.

You ask interesting questions to which I would reply:

I'm certain that the compiler DOES contain undocumented features. I'm also certain that the OS does as well. And neither of those conditions means that it is unpredictable -- UNLESS (perhaps) one uses those undocumented features.

For years Ken Thompson used to deny that there were ANY "back doors" into the Unix kernel. We all KNEW that there were. Ultimately he did confirm that they were there. I've never seen an OS nor a compiler that doesn't have undocumented features but, then again, I'm pretty much a realist and, if the unit testing process works so outcomes can be validated, that works for me. So who knows? There may well be an OS and/or compiler that has absolutely NO undocumented features whatsoever. I don't really care because I gave up on idealizations a couple of decades ago.


QUOTE (crelf @ Apr 10 2009, 07:24 AM)

You totally had me until the comma. I completely agree with the unit testing, but I think the second part of that sentence is irrelevant, and possibly incite-ful. If you successfully unit test your code to well-written requirements (using standard processes, of course) then you're covered - the system design need not be considered.

Yep. Upon rereading my post, you're right. It was unnecessary.


QUOTE

Testing is the only way to validate behavior, not any proof about a closed set of features

You are missing the point (at least half of it). The behavior is not the issue; the quality is. More precisely, you want to get rid of all possibilities for unexpected things to happen, at least as much as you can. It is about failure modes, not validation of operational modes.

Testing a system and documenting (defining) its subsystems are two different things. Testing a system consisting of undocumented subsystems will still leave you with an experimental system. This is a simple fact.

Something tells me some of you just don't get it (no offence :) ). Without stretching the similarity too far, minimizing failure modes is somewhat similar to minimizing the possibility of bugs entering the code. In this respect LV is already way beyond most other compilers. It is also somewhat similar to the reasons for encapsulation (and OOP). You simply cannot expect the unexpected, but you can minimize the possibility of unexpected things happening by assuring everything is well defined, minimizing side effects, and reusing things you know have no failure modes. Is it possible to be (reasonably) sure that things are well defined without documenting them so other people can look at them, or at least leaving the source open? I doubt it.

