Agile (if the culture incentivises honesty) does have the benefit of feedback. Rather than a black box which "could" be done in a month, you can instead see that the team on average under-estimates by 10 days, has x stories left, so it'll likely be done in 2-3 months at this rate.
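A rough sketch of that forecast arithmetic, with purely hypothetical numbers (the velocity is measured from past sprints, so it already bakes in the team's habitual under-estimation):

    stories_left = 24                       # stories remaining in the backlog
    velocity = 4                            # stories actually finished per two-week sprint
    sprints_left = stories_left / velocity  # 6 sprints
    weeks_left = sprints_left * 2           # 12 weeks, i.e. roughly 3 months
    print(f"likely done in ~{weeks_left / 4:.0f} months at this rate")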
The downsides are that it opens the team up to feature creep, introduces a pile of weird buzzwords, and can be massively wasteful if the team doesn't want it and it's forced on them.
> Agile (if the culture incentivises honesty) does have the benefit of feedback.
Doesn't Waterfall incorporate feedback? In my memory and experience, it does.
My memory of learning Waterfall back in the early 90s is a bit hazy, but I distinctly remember that one of the advantages touted was that it catches mistakes early, because mistakes found earlier in the process of developing software are cheaper to fix than mistakes found later (no matter what the process is).
An error found when drawing up the requirements is orders of magnitude cheaper to fix than one found after piloting at the client.
As I remember it, Waterfall was taught (to me) as a way to avoid developing the wrong product, or a product that does not meet the requirements of the end-user (sound familiar?)
My SDLC textbooks from back then were filled with different ways to elicit requirements, because fixing broken requirements after the product was developed was (and probably still is) bloody expensive.
In the same way, fixing bugs during a test phase is a lot cheaper than after a deployment phase (hence various types of testing were introduced by the textbook).
Fixing bugs during a pilot phase is a lot cheaper than after a full deployment, hence piloting was in the textbook as well.
Agile aims to deliver features one sprint at a time. Waterfall aims to deliver the specified system as a whole or not at all. Which one works better is probably contextual, but I cannot see either one being unconditionally better than the other.
Yes, though as originally documented the Waterfall process didn’t have much to say about iteration. It was simply left to the reader to realize that they would be releasing version 2.0 of their software a year or two later, and that they could incorporate feedback from the users of version 1.0.
Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile. Certainly in that project fixing a bug after deployment would be very expensive, since it would probably require you to visit all the military bases where the missiles were deployed, disassemble them to some degree, and swap out a ROM chip. These days we can deploy a product with one command. If you find a bug tomorrow, you can fix it and run the command again. These days the only cost to fixing a bug after deployment is the revenue you lost due to the bug, and that might be minimal too.
> Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile.
Well, that fits with my belief that WF aims to "deliver the system as a whole or not at all". There's no point in delivering an MVP G2A missile system that is not complete.
> Certainly in that project fixing a bug after deployment would be very expensive, since it would probably require you to visit all the military bases where the missiles were deployed, disassemble them to some degree, and swap out a ROM chip. These days we can deploy a product with one command. If you find a bug tomorrow, you can fix it and run the command again. These days the only cost to fixing a bug after deployment is the revenue you lost due to the bug, and that might be minimal too.
To be sure, CI/CD pipelines make the fixing of bugs in the code cheap enough to simply deploy when you can. However, bugs in the specification aren't going to be cheaply fixed after deployment, and these are much more common[1] and harder to get correct than any other type of bug.
[1] I.e. the code does exactly what the programmer intended it to, but what the programmer intended it to do is different to what was needed.
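A toy illustration of that kind of bug (hypothetical Python, not from any real system): the code below does exactly what its author intended and would pass the author's tests, yet it is still wrong because the intent didn't match the need.

    # Spec as the programmer understood it: "discount orders over $100 by 10%".
    # Spec as actually needed: "discount orders of $100 or more by 10%".
    def discounted_total(total: float) -> float:
        if total > 100:  # faithful to the author's intent, wrong for the need
            return total * 0.9
        return total

    print(discounted_total(100.0))  # prints 100.0; the customer needed 90.0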
>> Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile.
>Well, that fits with my belief that WF aims to "deliver the system as a whole or not at all". There's no point in delivering an MVP G2A missile system that is not complete.
True! :)
Plus, the customer knew pretty much what they wanted from the start. Not much chance they watch the demo and then ask if it can be mounted to an airplane…
> To be sure, CI/CD pipelines make the fixing of bugs in the code cheap enough to simply deploy when you can. However, bugs in the specification aren't going to be cheaply fixed after deployment, and these are much more common[1] and harder to get correct than any other type of bug.
Yes, though in the agile model the idea is that the spec is just what the customer asked for two weeks ago (or whatever your sprint length is), after seeing how the program worked at that time. If there’s a misunderstanding and you correctly implemented the wrong thing, then the cost is at most the two weeks you spent on it.
You spend months dreaming up a design spec with plenty of timing diagrams, UML diagrams, classes, etc.
Then, it's reviewed, which means people read it and try to make comments on it, then everyone pats themselves on the back because it's been 'signed-off'.
Then you start actually implementing it and you quickly find out that, actually... And that's not even accounting for any changes in requirements that might have been requested in the meantime.
Feedback means actual, hard feedback, be it from the customer or reality.
Errors get more and more expensive to fix the further down the line they are found, and that's exactly why hard feedback (which is usually either actual tests or customer feedback) should be obtained ASAP, which can be achieved through iterations.
> You spend months dreaming up a design spec with plenty of timing diagrams, UML diagrams, classes, etc.
>
> Then, it's reviewed, which means people read it and try to make comments on it, then everyone pats themselves on the back because it's been 'signed-off'
I'm sure that happens, but I'm not commenting on what happens, I'm commenting on the Waterfall process as I remember it being taught in the 90s. What you say above is definitely not what was taught.
> Feedback means actual, hard feedback, be it from the customer or reality.
Yeah, which is what the Waterfall process that I remember advocated: that the end-user be involved at all times, that the design be refined iteratively and quickly, and that the requirements be carefully elicited and validated up front.
I'm not sure where this misconception of what Waterfall advocates came from, but you're not the only one with it.
It's not a misconception, it's the way it is, including in the paper that has been quoted in this thread.
For instance, "Step 2: document the design" advocates extensive documentation upfront.
Of course there is feedback, but the loop is very large and slow, and the paper emphasises that feedback ought "to be confined to successive steps", which, in addition, is problematic when requirements change mid-way.
Agile is trying to solve a real issue with the waterfall model in general, and especially when requirements are fluid.
If I read the paper correctly, the iterative approach is considered wasteful, and a two-stage waterfall is recommended instead: documentation and planning in the first stage, to find the weak points first, then execution of the plan. IMHO, it was designed for times when computer time was expensive.
I did a few projects in waterfall style 20+ years ago, but my memories are faint. Our PM used a UML modeling tool to model classes and relationships, then exported it into Java. We wrote a lot of useless documentation. I developed a good habit of writing documentation immediately, while memory is fresh, to avoid the pain of writing it later.
I don't get the point of Figure 4 in the .pdf. I see that and I hear "process, process, process": don't skip process even when it makes no sense.
You can actually tell that the methodology was developed for a different time period, when you would have to schedule time in order to run your program on the mainframe, instead of just hitting build on your machine.
The point of figure 4 is to admit that the neat step-to-step iterations in fig. 3 don't always happen. Sometimes testing reveals flaws that are bigger than the preceding coding step, and you have to back up to design or even requirements. But then, once you back up, you still have to go through the remaining steps without skipping them.
Ex: Testing reveals a corner case that was never accounted for in the design. You can't just re-code; you have to go back up and redesign, then code the new design, then retest.
Royce's recommendations to minimize these problems are
a) more and better design, and
b) more and better documentation
In contrast, the Agile approach is to try to slice the work into finer and finer tasks, so these same activities can span a few sprints. It works in the small, but loses the forest for the trees.
> No, by definition. Do you see a stream of water back to the beginning of waterfall in any waterfall on the planet? Nope.
I'm not referring to real waterfalls[1], I'm referring to the Waterfall Software Development Process, which is an iterative process that requires almost constant feedback.
[1] Real waterfalls certainly do have the water returning to the top of the fall, only it's not as a stream and it's not immediate :-)