
Why do people alpha or beta test?


crelf


I only release stuff that works and looks clean. I also don't believe in alpha or beta testing (it's just an excuse lazy people/companies use for free resource so they don't have to test it themselves!).

...and the opportunity for your potential customers to help shape the direction of your product by pushing it through use cases you hadn't thought of :)


...and the opportunity for your potential customers to help shape the direction of your product by pushing it through use cases you hadn't thought of :)

How so? They can only put it through use cases if it works. Otherwise they are just trying to find workarounds to get it to work.

Alpha = Doesn't work.

Beta = Still doesn't work.

;)

Edited by ShaunR
How so? They can only put it through use cases if it works. Otherwise they are just trying to find workarounds to get it to work.

I agree that you can't dump it in someone's lap totally unfinished, but I couldn't imagine dumping it in my customers' laps without them having an opportunity to tool around with it before it's officially released. What you're suggesting might work for systems that have few and very well defined features, but once you scale up even just a little, the possibilities of what your customers might do with it grow quickly. Look at LabVIEW: I'm glad they have betas, because, without infinite resources or infinite time, they can't possibly test all of the combinations of awesome things we're going to try to make it do.

Maybe we're comparing apples to oranges, or I'm missing your point. It's a thread hijack anyway, so I'm going to split it to a separate thread.


Alpha = Doesn't work.

Beta = Still doesn't work.

LOL.

I'm curious how people decide whether they are doing alpha or beta testing? I've always considered it alpha testing if there are large chunks of functionality that have not been implemented, UI is unfinished, etc. Beta testing is when the software is mostly (> ~80%) feature complete, UI is mostly in place, etc. I've had others tell me they don't consider the software to be in beta testing until it is feature complete and all you're looking for is bugs. Thoughts?
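For what it's worth, the alpha/beta distinction often gets baked into version strings - CPython, for instance, tags pre-releases as 3.0a2, 3.0b1, 3.0rc1. Here's a minimal sketch of classifying a version string by that convention (the `release_stage` helper and its stage names are my own illustration, not a standard API):

```python
import re

# Maps the common pre-release suffix letters to stage names. The a/b/rc
# suffix convention is real (CPython uses it), but this helper itself is
# just an illustrative sketch.
STAGES = {"a": "alpha", "b": "beta", "rc": "release candidate"}

def release_stage(version: str) -> str:
    """Return the development stage encoded in a version string.

    "2.1a3" -> "alpha", "2.1b1" -> "beta",
    "2.1rc1" -> "release candidate", "2.1" -> "final"
    """
    m = re.search(r"(a|b|rc)(\d+)$", version)
    return STAGES[m.group(1)] if m else "final"

if __name__ == "__main__":
    for v in ["3.0a2", "3.0b1", "3.0rc1", "3.0"]:
        print(v, "->", release_stage(v))
```

Of course, this only tells you what the vendor *calls* the build - it says nothing about whether "beta" means "feature complete" or "still missing big chunks", which is exactly the question.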


I agree that you can't dump it in someone's lap totally unfinished, but I couldn't imagine dumping it in my customers' laps without them having an opportunity to tool around with it before it's officially released. What you're suggesting might work for systems that have few and very well defined features, but once you scale up even just a little, the possibilities of what your customers might do with it grow quickly. Look at LabVIEW: I'm glad they have betas, because, without infinite resources or infinite time, they can't possibly test all of the combinations of awesome things we're going to try to make it do.

Maybe we're comparing apples to oranges, or I'm missing your point. It's a thread hijack anyway, so I'm going to split it to a separate thread.

Well. I deal with automation machines and if it doesn't work there are penalty clauses. The clients are only interested in what day it will go into production and plan infrastructure, production schedules and throughput based on them being 100% operational on day 1. Deficient software is something desktop users have gotten used to and have been trained to expect. Software is the only discipline where it is expected to be crap when new. Imagine if an architect said "here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".


I'm curious how people decide whether they are doing alpha or beta testing? I've always considered it alpha testing if there are large chunks of functionality that have not been implemented, UI is unfinished, etc. Beta testing is when the software is mostly (> ~80%) feature complete, UI is mostly in place, etc. I've had others tell me they don't consider the software to be in beta testing until it is feature complete and all you're looking for is bugs. Thoughts?

https://en.wikipedia...ife_cycle#Alpha

Before having read it, I agreed with the feature-complete notion of a beta, which is primarily what separates alpha from beta in my mind.


LOL.

I'm curious how people decide whether they are doing alpha or beta testing? I've always considered it alpha testing if there are large chunks of functionality that have not been implemented, UI is unfinished, etc. Beta testing is when the software is mostly (> ~80%) feature complete, UI is mostly in place, etc. I've had others tell me they don't consider the software to be in beta testing until it is feature complete and all you're looking for is bugs. Thoughts?

Alpha and beta testing (for me) is internal only. Alpha testing will be feature complete and given to other programmers/engineers to break it on an informal basis. For beta testing, it will be given to test engineers and quality engineers for qualification (RFQ). Sometimes it will also be given to field engineers for demos or troubleshooting clients' equipment. Clients will never see the product until it is versioned and released.

Well. I deal with automation machines and if it doesn't work there are penalty clauses. The clients are only interested in what day it will go into production and plan infrastructure, production schedules and throughput based on them being 100% operational on day 1. Deficient software is something desktop users have gotten used to and have been trained to expect.

Ok, so we were trying to compare apples to oranges. 'nuff said

Software is the only discipline where it is expected to be crap when new.

That's not entirely true, but it's not entirely untrue either. I think that some new software is expected to be buggy, possibly because it hasn't gone through alphas or betas, and/or the market has forced an early release on a less-than-mature product. It's also one thing that makes software so awesome - changes (whether they're bug fixes or feature additions) can usually be made with a much faster turnaround than hardware. That's not always the case, but if we're talking about product platforms here (which we're not :) ), then it's a great way to go - especially if you have more than one customer: you can weigh feedback on feature requests and make strategic business decisions based on them, which can be super efficient during alpha and beta cycles, as opposed to waiting until full releases.

Imagine if an architect said "here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".

Again, apples and oranges. Imagine an iPhone user who wants an app that allows them to read their email: if you said to them "I've got an app that does 98% of what you want, but 1% of the time it'll take an extra 30 seconds to download attachments, so I'm not going to release it for another 3 years until it's perfect", the iPhone user goes and downloads another app.

Also, skyscrapers aren't ever perfect - they're good enough. And I'm pretty sure building owners are given tours of their buildings before they're completed :)

As for the type of systems you build, I agree with you - and we make similar-sounding systems too. Such things are decided at requirements gathering and design phases, and are usually locked down early. That said, we also have a suite of software products we use on these programmes, so it's a balance.

In summary, as Jeff Plotzke said during his 2010 NIWeek session: "Models can, and should, fit inside other models." :yes:

Alpha and beta testing (for me) is internal only. Alpha testing will be feature complete and given to other programmers/engineers to break it on an informal basis. For beta testing, it will be given to test engineers and quality engineers for qualification (RFQ). Sometimes it will also be given to field engineers for demos or troubleshooting clients' equipment. Clients will never see the product until it is versioned and released.

That's an admirable goal, and works extremely well in some industries. I sometimes find it useful to get feedback from end users during development, because, in the end, we want compliant systems, on time, on budget, that make our clients happy. That last item can be tricky if the first time they see it is the last time they see you :)

Anyway, it depends on the industry, technology, and client. All I'm trying to say is that deriding alpha and beta programmes completely doesn't make sense to me - every tool has a place.


Ok, so we were trying to compare apples to oranges. 'nuff said

Nope. It's software. It's comparing apples to apples. :yes:

That's not entirely true, but it's not entirely untrue either. I think that some new software is expected to be buggy, possibly because it hasn't gone through alphas or betas, and/or the market has forced an early release on a less-than-mature product. It's also one thing that makes software so awesome - changes (whether they're bug fixes or feature additions) can usually be made with a much faster turnaround than hardware.

The expectation of bugs in software is exactly what I was saying about being "trained". There is no excuse for buggy software apart from the fact that not enough time and resources have been spent on eliminating the bugs. That is the reason why there are faster turnarounds in software: people will accept defective software whereas they will not accept defective hardware.

That's not always the case, but if we're talking about product platforms here (which we're not :) ), then it's a great way to go - especially if you have more than one customer: you can weigh feedback on feature requests and make strategic business decisions based on them, which can be super efficient during alpha and beta cycles, as opposed to waiting until full releases.

Indeed. However, that can be in parallel with the development cycle where the customer's frame of reference is a solid, stable release (rather than a half-arsed attempt at one). If you operate an iterative life-cycle (or Agile as the youngsters call it :) ) then that feedback is factored in at the appropriate stages. The "Alpha" and "Beta" are not cycles in this case, but stages within the iteration that are prerequisites for release gates (RFQ, RFP et al.).

Again, apples and oranges. Imagine an iPhone user who wants an app that allows them to read their email: if you said to them "I've got an app that does 98% of what you want, but 1% of the time it'll take an extra 30 seconds to download attachments, so I'm not going to release it for another 3 years until it's perfect", the iPhone user goes and downloads another app.

Well. If it takes three years to fix something like that, then it's time for a career change! If you know there is a problem, then you can fix it. The problem is that it takes time/money to fix things and the culture that has grown up around software is that if they think they can get away with selling it (and it being accepted); then they will. If it wasn't acceptable they'd just fix it. I've never been an iPhone user (always had Androids). However, if the apps are written by any script kiddy with a compiler and given away free, then, basically, you get what you pay for.

Also, skyscrapers aren't ever perfect - they're good enough. And I'm pretty sure building owners are given tours of their buildings before they're completed :)

The phrase "Good enough" is exactly what I'm talking about. Good enough to get away with? Skyscrapers aren't perfect, but they are "fit for purpose", "obtain a level of quality", "comply with relevant standards", "fulfill the requirements" and don't fall down when you slam a door :D. The same cannot be said for a lot of software once released, let alone during alpha or beta testing.

As for the type of systems you build, I agree with you - and we make similar-sounding systems too. Such things are decided at requirements gathering and design phases, and are usually locked down early. That said, we also have a suite of software products we use on these programmes, so it's a balance.

What's the difference? A balance between what? Between quality and less quality? Software is software. The only difference is how much time a company is willing to spend on it. Please don't get me wrong. :worshippy: I'm not commenting on your products. Just the general software mentality that, at times, makes me despair.

In summary, as Jeff Plotzke said during his 2010 NIWeek session: "Models can, and should, fit inside other models." :yes:

I've no idea what that means :lol:

That's an admirable goal, and works extremely well in some industries. I sometimes find it useful to get feedback from end users during development, because, in the end, we want compliant systems, on time, on budget, that make our clients happy. That last item can be tricky if the first time they see it is the last time they see you :)

It's not a "goal". It is a "procedure" and the general plan works in all industries from food production to fighter jets. Why should software be any different? The "goal" is, in fact, that the first time they see it IS the last time they see me (with the exception of the restaurant or bar of course). That means the software works and I can move on to the next project.

Anyway, it depends on the industry, technology, and client. All I'm trying to say is that deriding alpha and beta programmes completely doesn't make sense to me - every tool has a place.

And I'm saying it doesn't matter what the industry or technology is. It matters very much who the client is and, by extension, what software companies can get away with supplying to the client. Public Alpha and Beta testing programmes (I'll make that discrimination for clarity in comparison with the aforementioned development cycles) are not a "tool" and are peculiar to software and software alone. They exist only to mitigate cost by exploiting free resource.

God help us the day we see the "Beta" kettle. :D


... that make our clients happy.

Happy clients? I have some unobtainium for sale...

Well. I deal with automation machines and if it doesn't work there are penalty clauses. The clients are only interested in what day it will go into production and plan infrastructure, production schedules and throughput based on them being 100% operational on day 1. Deficient software is something desktop users have gotten used to and have been trained to expect. Software is the only discipline where it is expected to be crap when new. Imagine if an architect said "here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".

My clients would like 100% operational on day 1, but 50 custom assembly, gauging and test stations with PC and PLC software all trying to dance together with operator interaction just doesn't give you that option (especially with decreasing time to market). The operator will do something that causes the programmer to stare, blink and go, "why would you ever do that?" All of the stations do go through debug and testing on our floor and the customer's floor, so we shake out most of the bugs, but it's impossible to get them all.

Nope. It's software. It's comparing apples to apples. :yes:

Nope, we're comparing products to services - apples and oranges :)

Indeed. However, that can be in parallel with the development cycle where the customer's frame of reference is a solid, stable release (rather than a half-arsed attempt at one). If you operate an iterative life-cycle (or Agile as the youngsters call it :) ) then that feedback is factored in at the appropriate stages. The "Alpha" and "Beta" are not cycles in this case, but stages within the iteration that are prerequisites for release gates (RFQ, RFP et al.).

I like to think that the Agile process (not that I've ever seen anyone implement a true Agile process FWIW, but there are plenty of people who will tell you that they do :P ) and a vee model with added alpha/beta stages track relatively closely.

Well. If it takes three years to fix something like that, then it's time for a career change! If you know there is a problem, then you can fix it. The problem is that it takes time/money to fix things and the culture that has grown up around software is that if they think they can get away with selling it (and it being accepted); then they will.

Agreed - I can't argue with that :) Unfortunately (and I'm not talking from any direct experience here) market forces can make or break companies in this situation, so there can be a business gamble here. I work in regulated industries with well structured processes, and customers who respect that, so there's not a lot of room for such shenanigans, but that doesn't mean it doesn't fit other fast-turnaround markets (like iPhone app stores, or, since you're an Android guy (as am I), Google Play).

If the apps are written by any script kiddy with a compiler and given away free, then, basically, you get what you pay for.

Sure - and I'd prefer an app that was written by a script kiddy who went through a formal alpha and beta over one who didn't. But again, that's a different market than I'm used to professionally - we have customers who pay for custom solutions, which means we sell services, not products. That said, some of those services come with products inside them. If we're only talking about services, then I'm more inclined (but not totally) to agree with you. If we're talking about products, then I'm all for alphas and betas. That's what I meant about models fitting inside each other (as a side note, sorry to harp on this, but if someone says they use the Agile model, they might use portions of it at some level in their process, but probably use a vee above it, and maybe an iterative below it - see what I'm getting at?)

The phrase "Good enough" is exactly what I'm talking about. Good enough to get away with? Skyscrapers aren't perfect, but they are "fit for purpose", "obtain a level of quality", "comply with relevant standards", "fulfill the requirements" and don't fall down when you slam a door :D. The same cannot be said for a lot of software once released, let alone during alpha or beta testing.

Rather than comparing a software app to a skyscraper, let's try something a little more apt: a software app to, say, a power supply. If I'm designing a power supply, I'd like to think I'd make a prototype before final release that I might get some of my key customers (whether they be internal and/or external) to play with and give me feedback on. I think that probably makes sense. And I think "alpha" and "beta" can be analogous to "prototype" in the product world.

I'm not commenting on your products. Just the general software mentality that, at times, makes me despair.

I should hope not - I don't think you've ever seen any of them - in alpha, beta or otherwise :D I agree that there's a trending mentality toward treating beta testers as free bug finders, and that, with the availability of easier release platforms (think AppStore and Play), the temptation to release software that clearly isn't ready, without marking it as such, just to get free bug testing is becoming rife. In that, I agree with you wholeheartedly. BUT I won't accept that that means all alpha and beta testing stages of products are immoral. Both sides need to be fully aware of the consequences, and choose to opt in to such programmes. I'm warning that (again, we're talking about products here, not services) ignoring early drop techniques can lead to software that meets the requirements, but leads to unhappy customers. We have several customers who jump at the chance to pay for an ECO to change requirements while we're developing (that's a whole other discussion), but it's only during prototype demos/betas that they have the opportunity to think those changes up. And then it's on them if they want to pay for said changes.

It's not a "goal". It is a "procedure" and the general plan works in all industries from food production to fighter jets.

Sorry, I should have been more clear: I was referring to services vs products.

The "goal" is, in fact, that the first time they see it IS the last time they see me.

Then I guess we philosophically disagree. I prefer to work closely with the customer during the project to make sure everything aligns to their expectations (if possible, of course :) ) through all stages of a project's execution. As I alluded to above, I prefer not to drop something in their lap that meets all of their initially defined requirements, but doesn't fit their needs.

Public Alpha and Beta testing programmes (I'll make that discrimination for clarity in comparison with the aforementioned development cycles) are not a "tool" and are peculiar to software and software alone. They exist only to mitigate cost by exploiting free resource.

And we circle around to the start: as long as both parties understand and agree to the terms of an alpha or beta testing programme, I see them as very powerful tools.

God help us the day we see the "Beta" kettle. :D

I see no problem with a "prototype" kettle :P

Happy clients? I have some unobtainium for sale...

I'm not saying it's easy, but I know it's not unobtainable. Well, maybe it is for some customers :D


I like to think that the Agile process (not that I've ever seen anyone implement a true Agile process FWIW, but there are plenty of people who will tell you that they do :P ) and a vee model with added alpha/beta stages track relatively closely.

Well. "Agile development" is more of a state of mind than a process. In the same respect as TQM, it encompasses many "methods". However, let's not get bogged down in semantics. I stated I use an iterative method which, simply put, consists of short cycles of requirements, development, test and release-to-quality, running in parallel with another iterative cycle (verification, test, release-to-customer/production). There's more to it than that, but that's the meat of it. The releases to customer/production are phased, so whilst all planned features are fully functional and tested, they are not the entire feature set. I will also add that "release-to-customer/production" doesn't necessarily mean that the customer gets the code, only that it is available to inspect/test/review at those milestones if they so choose.

With this in mind, when I talk about alpha and beta testing, they are the two "tests" in each of these iterative processes. So the customer (who may or may not be internal, e.g. another department) gets every opportunity to provide feedback throughout the lifecycle - just at points where we know it works. We don't rely on them to find bugs for us.

Feedback from customers is that they are very happy with the process. The release-to-customer/production milestones appear on their waterfall charts (they love M$ Project :) ) so they can easily track progress and have defined visit dates when either we go to them or they come to us. They also have clearly defined and demonstrable features for those dates.
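To make the gated-iteration idea above concrete, here's a toy sketch in Python. To be clear, the `Iteration` class, the specific gate names and their ordering are purely my illustration, not anyone's actual procedure - the point is only that release-to-customer is blocked until every internal test gate has passed:

```python
# Toy model of one iterative cycle with internal alpha/beta gates that must
# pass, in order, before a phased release-to-customer milestone.
from dataclasses import dataclass, field

@dataclass
class Iteration:
    features: list
    passed_gates: list = field(default_factory=list)

    # Illustrative gate order for a single iteration.
    GATES = ["requirements", "development", "alpha_test", "beta_test"]

    def pass_gate(self, gate: str) -> None:
        expected = self.GATES[len(self.passed_gates)]
        if gate != expected:
            raise RuntimeError(f"cannot pass {gate!r} before {expected!r}")
        self.passed_gates.append(gate)

    def ready_for_release(self) -> bool:
        # Release-to-customer only when every internal gate has passed:
        # the customer's frame of reference is always a tested build.
        return self.passed_gates == self.GATES

it = Iteration(features=["login", "report export"])
for g in Iteration.GATES:
    it.pass_gate(g)
print(it.ready_for_release())  # True
```

The key design point mirrored here is that "alpha" and "beta" are stages *inside* the iteration, not public programmes: trying to pass `beta_test` before `alpha_test` raises an error.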

Rather than comparing a software app to a skyscraper, let's try something a little more apt: a software app to, say, a power supply. If I'm designing a power supply, I'd like to think I'd make a prototype before final release that I might get some of my key customers (whether they be internal and/or external) to play with and give me feedback on. I think that probably makes sense. And I think "alpha" and "beta" can be analogous to "prototype" in the product world.

With this analogy, the alpha would be a PSU on a breadboard and the beta the first batch PCB that has been soldered a hundred times with no chassis. :D That's what prototypes in the real world are and that's exactly the same for software. The first production run would be where a customer might get a play.

Sorry, I should have been more clear: I was referring to services vs products.

More semantics. When a customer asks you "what can you make for me" he is asking for a product.

Then I guess we philosophically disagree. I prefer to work closely with the customer during the project to make sure everything aligns to thier expectations (if possible, of course :) ) though all stages of a project's execution. As I alluded above, I prefer not to drop something in their lap that meets all of their initially defined requirements, but doesn't fit thier needs.

I think actually we agree. I don't propose to just drop it in their lap 6 months after they asked for it. Just that when the job is done, it is done to the extent that he doesn't need to come back to me - doesn't need to phone, send emails, texts or carrier pigeons except to offer me another project because the last one went so well.

I see no problem with a "prototype" kettle :P

Just don't put water in it or you may find it an electrifying experience :)

You folks with your luxurious "written requirements" and "specifications"!

All my reqs and specs are transmitted verbally and telepathically.

Not a complaint - it's fun when the customer says, "Wow! I didn't think that could be done!"

Not always written. A lot of the time it is just meetings and I have to write the requirements then get them to agree to them. Amazing some of the u-turns people make when they see what they've asked for in black and white. Or, more specifically, what that glib request will cost them :)


Imagine if an architect said "here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".

I've never thought comparing software products with hardware or physical products was particularly relevant. Retail software products are nothing like skyscrapers. Imagine if I said to an architect, "I want you to design me a new skyscraper I can build anywhere, but I don't know where I'm going to put it or what environmental conditions it will be subjected to." Would you expect the architect to be successful?

If I built one copy of the skyscraper in Antarctica and another copy in the Sahara, would they both function equally well, or would there be defects requiring changes to the design? What if I built one in an earthquake zone, or a tsunami zone, or on a raft in the ocean, or on the moon? The desktop environment in which retail software products exist is much less constrained than anything an architect has to worry about.

And I'm saying it doesn't matter what the industry or technology is. It matters very much who the client is and, by extension, what software companies can get away with supplying to the client. Public Alpha and Beta testing programmes (I'll make that discrimination for clarity in comparison with the aforementioned development cycles) are not a "tool" and are peculiar to software and software alone. They exist only to mitigate cost by exploiting free resource.

Some companies may do public betas for that reason, but it's not universal in retail software and I really doubt any large software companies do it for that reason (except maybe Google with GMail and GoogleDocs). First of all, public beta testing isn't free. You need to hire beta coordinators to manage interactions with the beta testers, assemble feedback, nail down repro cases, distribute the information to the developers, build infrastructures for getting the software to customers, etc. True, beta testers typically do not get paid (though sometimes they get other forms of compensation), but that's very different than claiming beta testing exists to exploit a free resource.

Second, in my experience, public beta testing usually results in relatively few new bugs being filed. There were typically at least some new bugs, but not nearly as many as you would expect. Public beta testing of retail boxed software is not a very effective or efficient way to find new bugs. If companies were just interested in saving money they would skip public beta testing altogether. Why do they do it? Depends on what kind of software it is. Sometimes game companies will release betas to test play balance. All the betas I've been part of like to get usability information from customers.

They also specifically use beta testing as a way to check how the software works on a wide range of PC configurations. (Equivalent to putting the skyscraper in the Sahara or on the moon.) There is no way any software company can gather the resources required to test their software on all possible PC configurations. When I was at MS Hardware they had a testing lab that contained (to the best of my recollection) ~50 common computers for software testing. Some were prebuilts from Dell or other vendors, others were home-built using popular hardware components. Between all the different hardware combinations and the various driver versions for each piece of hardware, we still knew we were only covering a very small slice of the possible configurations.
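The configuration explosion described above is easy to quantify. A back-of-the-envelope sketch (the category names and counts below are invented for illustration; a real lab faces far more dimensions than this):

```python
from math import prod

# Invented hardware/software categories and option counts, purely to show
# how quickly a full-factorial test matrix grows.
options = {
    "cpu": 8,
    "gpu": 12,
    "gpu_driver": 20,
    "os_version": 6,
    "ram_config": 5,
}

# Every combination tested once: 8 * 12 * 20 * 6 * 5
full_factorial = prod(options.values())
print(f"{full_factorial:,} configurations")  # 57,600 configurations

# A ~50-machine lab covers only a sliver of that space.
coverage = 50 / full_factorial
print(f"lab coverage: {coverage:.3%}")
```

Even with these modest, made-up numbers, 50 machines cover well under 0.1% of the matrix - which is why a public beta's diversity of real-world machines is hard to replicate in-house.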


I've never thought comparing software products with hardware or physical products was particularly relevant.

Why is software so special? It is a tangible deliverable that can be measured and quantified. It's not as if it is like, say, a thought!

Retail software products are nothing like skyscrapers. Imagine if I said to an architect, "I want you to design me a new skyscraper I can build anywhere, but I don't know where I'm going to put it or what environmental conditions it will be subjected to." Would you expect the architect to be successful?

After consultation to refine the requirements. Yes.

If I built one copy of the skyscraper in Antarctica and another copy in the Sahara, would they both function equally well, or would there be defects requiring changes to the design? What if I built one in an earthquake zone, or a tsunami zone, or on a raft in the ocean, or on the moon? The desktop environment in which retail software products exist is much less constrained than anything an architect has to worry about.

I cannot believe you just wrote that. :lol:

This is a reductio ad absurdum argument. What is worse is that it is a reductio ad absurdum argument based on an analogy :)

Let me ask you this though. How much retail software only runs on Windows, or only on Android, or only on Mac? How much retail software written by anyone actually runs on all available operating systems (environments)? You could probably count those that run on more than one on your fingers (LabVIEW being one of them).

Some companies may do public betas for that reason, but it's not universal in retail software and I really doubt any large software companies do it for that reason (except maybe Google with GMail and GoogleDocs). First of all, public beta testing isn't free. You need to hire beta coordinators to manage interactions with the beta testers, assemble feedback, nail down repro cases, distribute the information to the developers, build infrastructures for getting the software to customers, etc. True, beta testers typically do not get paid (though sometimes they get other forms of compensation), but that's very different than claiming beta testing exists to exploit a free resource.

Large software companies only do anything for one reason and that is to reduce costs and make profit. I would wager very few (if any) companies hire for a special role of public beta test coordinator; it is usually just an extension of an existing employee's role - managers are easy to come by and large companies are full of them. The same goes for IT. So of course they exploit a free resource. They'd be stupid (in business terms) not to exploit someone's offer to spend time and effort in testing for no fee when they would otherwise have to spend a considerable amount on employing a department to do it.

Second, in my experience, public beta testing usually results in relatively few new bugs being filed. There were typically at least some new bugs, but not nearly as many as you would expect. Public beta testing of retail boxed software is not a very effective or efficient way to find new bugs. If companies were just interested in saving money they would skip public beta testing altogether. Why do they do it? Depends on what kind of software it is. Sometimes game companies will release betas to test play balance. All the betas I've been part of like to get usability information from customers.

They also specifically use beta testing as a way to check how the software works in a wide range of PC configurations. (Equivalent to putting the skyscraper in the Sahara or on the moon.) There is no way any software company can gather the resources required to test their software on all possible PC configurations. When I was at MS Hardware they had a testing lab that contained (to the best of my recollection) ~50 common computers for software testing. Some were prebuilts from Dell or other vendors, others were home built using popular hardware components. Between all the different hardware combinations and the various driver versions for each piece of hardware, we still knew we were only covering a very small slice of the possible configurations.

I don't subscribe to the "software is special and harder than quantum mechanics" school of thinking. I happen to think it is one of the easier disciplines with much less of the "discipline". If you are doing full factorial testing on different PCs then you a) don't have much confidence in your toolchains, b) don't have much confidence in your engineers and c) are expecting to produce crap.

Link to comment
Not always written. A lot of the time it is just meetings, and I have to write the requirements then get them to agree to them. It's amazing what u-turns people make when they see what they've asked for in black and white.

Right - we rarely use a customer's requirements document - they usually give us a "specification" that we write traceable requirements to, which we then have them agree to (so we're all on the same page) - and it's that requirements document that we test to.

PS: I'm still convinced we're talking about different things, but that horse is already dead and starting to smell.

Link to comment
Imagine if an architect said "here's your new skyscraper. There are bound to be structural defects, but live in it for a year with a hard-hat and we'll fix anything you find".

Actually, they do exactly that. It is very common for building contracts to include clauses to make the new building more livable post-opening. The building is functional, but not necessarily usable. And there is always the possibility of a leak in some window or something like that.

Software isn't special because it is harder. It's special because it is easier. Don't like the carpets in the new building? Expensive to replace, and lots of waste on top of developer time. Don't like the app theme? Only developer time. Might as well run a beta and see if you've got it right.

Link to comment

Why is software so special?

It's not that it's special, it's just different.

Would you expect the architect to be successful?

After consultation to refine the requirements. Yes.

Refine? Uh uh, that *is* the requirement (with respect to location.)

The desktop environment in which retail software products exist is much less constrained than any architect has to worry about.

I cannot believe you just wrote that. :lol:

This is a reductio ad absurdum argument. What is worse is that it is a reductio ad absurdum argument based on an analogy :)

Ahh... the ever-so-common fallacy of "appealing to the fallacy." (Dismissing an argument in its entirety based on identifying a logical fallacy, without regard to whether the argument relies on the fallacy. Or in the context in which it is usually employed, "I can't refute the argument so I'll claim it contains a logical fallacy and ignore it.")

I absolutely agree comparing software development to building a skyscraper is absurd. Why'd you do it in the first place?

How much retail software only runs on Windows, or only on Android, or only on Mac?

I wasn't referring to running on different operating systems. I was referring to the various hardware configurations, drivers, services, and all the other stuff that is different between computers even if they are running the same operating system. Operating systems have improved a lot in the last decade in providing a consistent environment for the software, but the environmental variables are still much less constrained than the environmental variables an architect has to deal with.

Large software companies only do anything for one reason, and that is to reduce costs and make profit... They'd be stupid (in business terms) not to exploit someone's offer to spend time and effort testing for free when they would have to spend a considerable amount employing a department to do it.

If beta testing were an effective alternative to having an in-house QA department, then I'd agree with you. It's not. It's not even close. I'll grant you that the overriding goal of all companies is to maximize profit (though not necessarily by reducing costs.) Releasing shoddy software isn't a good long term strategy for maximizing profits, and beta testing isn't a good way to find bugs. They'd be stupid (in business terms) to rely on beta testing for QA.

I would wager very few (if any) companies hire for the special role of public beta test coordinator; it is usually just an extension of an existing employee's role, and managers are easy to come by since large companies are full of them. The same goes for IT.

May I ask what you're basing your assertion on? I can't say what most large software companies do, but those beta programs I've been able to see from the inside require a lot of time, effort, and money. Beta Coordinator was an actual position. It was somebody's job title. It may have been filled by a contractor, but it was too big of a job to just add it to someone else's list of tasks.

I don't subscribe to the "software is special and harder than quantum mechanics" school of thinking.

I'm not claiming software is harder than quantum mechanics. There are some things that make it harder than structural engineering. In other ways it's probably easier.

One thing that makes it harder is customers' expectations. People intuitively accept constraints on physical objects. Nobody buys a Ford Focus and expects it to be suitable for all driving conditions they may encounter. What if there's 2 feet of snow on the ground? What if I want to tow a 4,000-pound trailer? What if I want to play around on some sand dunes? Nobody considers the car defective for not allowing them to use it in these conditions. Just seeing the car gives them a pretty good idea of what it can be used for.

People also expect software to be much more malleable than physical objects. Most people have a pretty good idea that modifying a car designed to use unleaded gasoline to be able to also use diesel fuel requires more than just drilling out the hole in the fuel tank so the diesel pump nozzle fits. Yet software customers ask for those kinds of changes all the time and expect them to be easy, because on the surface they look easy.

I happen to think it is one of the easier disciplines with much less of the "discipline".

Easier? Maybe. I suppose it depends on a person's particular talents. But it is a very new discipline and it is much more of a craft than a science. Engineering knowledge grows from failure. You try something new and when it fails you figure out why. Structural engineering has had thousands of years to figure out how to build things correctly. Software engineering has had about fifty. Structural engineering is heavily based on knowing the physical properties of materials and applying mathematical equations to predict what will happen when a design is subjected to different conditions. Software engineering hasn't developed to the point where we can do that consistently.

If you are doing full factorial testing on different PCs then you a) don't have much confidence in your toolchains, b) don't have much confidence in your engineers and c) are expecting to produce crap.

It's kind of irrelevant because nobody can do or expects to do full factorial testing. Regardless, any combinatorial testing is simply a recognition that d) the software must run correctly in an environment that is not well-defined and that the developer has very little control over.
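To put some rough numbers on why nobody does full factorial testing: even a handful of configuration dimensions multiplies out to more combinations than any lab can cover, which is why labs sample the space instead. A minimal sketch, using entirely hypothetical configuration dimensions (the real space has far more than four):

```python
import itertools
import random

# Hypothetical configuration dimensions -- illustrative only.
os_versions = ["WinXP", "Vista", "Win7"]
gpus = ["NVIDIA", "AMD", "Intel"]
drivers = ["v1.0", "v1.1", "v2.0"]
ram = ["2GB", "4GB", "8GB"]

# Full factorial: every combination of every dimension.
full = list(itertools.product(os_versions, gpus, drivers, ram))
print(len(full))  # 3 * 3 * 3 * 3 = 81 configurations

# A test lab with ~50 machines covers only a sample of the space;
# each new dimension (BIOS version, antivirus, ...) multiplies the total.
random.seed(0)
sample = random.sample(full, 10)
print(len(sample))  # 10 configurations actually tested
```

With just four dimensions of three options each, the full space is already 81 configurations; add a few more dimensions and it dwarfs any in-house lab, which is part of what a public beta's varied hardware buys you.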

Link to comment
