Project: Quick & dirty

Dijkstra would not like it

Jérôme Beau
6 min read · Jun 25, 2020
“I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself “Dijkstra would not have liked this”, well, that would be enough immortality for me.” — Edsger W. Dijkstra, a leader in both the pursuit of simplicity and the abolition of the GOTO statement in programming.

Nine times out of ten, when you ask managers to pick among different implementation options, they answer:

The simpler one.

which is a shy way to say:

The quicker.

which is frustrating, as if they didn’t pay attention to the pros and cons you just described, as if the only criterion were speed. All they have in mind at that point is the next delivery date, and they just compare two short-term costs:

cost (quick) < cost (clean)

Which is usually true. However, this uses only the “quick” part of the equation. The full equation goes like this:

  cost (quickAndDirty) 
= cost (quick)
+ cost (dirty)

What is cost (dirty), exactly? It is the time developers will spend rewriting or refactoring the quick code into something maintainable, so that further development does not slow down.

However, as that cleanup operation implies rewriting (i.e. dumping the quick code and replacing it with the clean version), we can state that:

  cost (quickAndDirty) 
= cost (quick)
+ cost (clean)

and therefore:

cost (quickAndDirty) > cost (clean)
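The inequality is easy to check with hypothetical unit costs (the numbers below are invented for illustration; only the relation between them matters):

```python
# Hypothetical unit costs, chosen only to make the relation visible:
cost_quick = 1           # writing the quick version
cost_clean = 2           # writing the clean version directly
cost_dirty = cost_clean  # cleanup means dumping the quick code and rewriting it cleanly

cost_quick_and_dirty = cost_quick + cost_dirty

print(cost_quick_and_dirty > cost_clean)  # → True: 3 > 2
```

Whatever the exact figures, as long as cleanup amounts to rewriting, cost (quickAndDirty) exceeds cost (clean) by the wasted quick iteration.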

Impact over time

To illustrate this approach and alternatives, we built a model to simulate:

  • the cost (time spent) of development;
  • the technical debt, which is the sum of all quick code;
  • the codebase maintainability, i.e. the ability to read, understand, and change it;
  • the delivery pace, i.e. the number of features shipped without regressions.

Even when coded “cleanly” (the debt is zero here), the complexity of the software increases over time along with the code size (an accumulation of newly delivered features). Without regular refactoring, maintainability decreases as the code gets more and more complex, and the cost of adding new features to such poor code increases. The delivery curve flattens and, in the end, you cannot deliver anymore (logarithmic scales are used here to better show the evolution over time).
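The article’s model itself is not published; a minimal sketch of such a simulation, with entirely made-up parameters and cost ratios, could look like this:

```python
# Toy simulation of the four quantities tracked above. All parameters
# (costs, decay rates) are invented for illustration, not the author's model.

def simulate(iterations, dirty_ratio, cleanup=False, refactor_every=0):
    """Simulate development iterations.

    dirty_ratio    -- fraction of iterations done quick & dirty (0.0 to 1.0)
    cleanup        -- if True, each dirty iteration is cleaned up immediately
    refactor_every -- periodic refactoring interval (0 = never refactor)
    """
    debt = 0.0        # technical debt: sum of all quick code still in place
    complexity = 0.0  # grows with every shipped feature, even clean ones
    cost = 0.0        # total time spent
    shipped = 0.0     # features delivered without regressions
    for i in range(1, iterations + 1):
        dirty = (i * dirty_ratio) % 1 < dirty_ratio   # spread dirty iterations
        maintainability = 1.0 / (1.0 + debt + complexity)
        cost += 1.0 / maintainability                 # poor code slows the work
        shipped += maintainability                    # pace degrades with code health
        complexity += 0.1                             # every feature adds complexity
        if dirty:
            debt += 1.0                               # dirtying costs 1 unit of debt
            if cleanup:
                cost += 2.0                           # cleaning (=2) > dirtying (=1)
                debt -= 1.0
        if refactor_every and i % refactor_every == 0:
            cost += 3.0                               # refactoring is an investment
            complexity = max(0.0, complexity - 1.0)
    return cost, debt, shipped

for label, run in {
    "clean": simulate(50, 0.0),
    "quick & dirty": simulate(50, 1.0),
    "dirty + cleanup": simulate(50, 1.0, cleanup=True),
}.items():
    cost, debt, shipped = run
    print(f"{label:16s} cost={cost:7.1f} debt={debt:5.1f} shipped={shipped:6.2f}")
```

Running the three scenarios reproduces the qualitative shapes of the graphs: the all-dirty run accumulates debt and ships the least, while the cleanup run keeps debt at zero at the price of a higher total cost.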

Now let’s look at the worst-case scenario of doing only quick & dirty developments instead of clean ones:

As the technical debt increases, maintainability collapses in about half the time, down to a level where no delivery can happen anymore (the shipping curve flatlines) because the code has become too hard to maintain without breaking things. The cost-of-development curve steepens for the same reason: because of the poor-quality code, it takes longer and longer to produce new features without breaking existing ones.

So even with speed as the single criterion, we can see that repeated local speed (the “quick” part, with its “dirty” effects) impairs global speed (the delivery pace).

When to handle technical debt?

Actually, the cost of technical debt can only be charged to a subsequent development iteration (which makes it even less noticeable at the start). This can be the immediate next one:

  cost (quickNDirty1)
= cost (quick1)

  cost (cleanDev2)
= cost (dirty1)
+ cost (dev2)

In that scenario, systematic immediate cleanup of the dirty code makes things better:

With systematic cleanup, the debt never grows beyond one iteration. Maintainability gets jagged (because of each dirty-then-cleanup cycle) but stays at an acceptable level longer, allowing deliveries to continue longer as well. Cost is sometimes higher, though, as cleaning is more costly (=2) than dirtying (=1).

However, cleanup phases (accounted for as the “cost of dirty”) can also be delayed (they often are), deferred to some later iteration:

  cost (quickNDirty1)
= cost (quick1)

  cost (dev2)
= cost (dev2)

  cost (cleanDev3)
= cost (dirty1)
+ cost (dev3)

But each new piece of quick & dirty code written in the interim will add to the debt:

  cost (quickNDirty1)
= cost (quick1)

  cost (quickNDirty2)
= cost (quick2)

  cost (cleanDev3)
= cost (dirty1)
+ cost (dirty2)
+ cost (dev3)
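The bookkeeping in these equations can be made concrete with invented unit costs (quick = 1, dev = 1, each pending cleanup = 2; the values are arbitrary, only the accumulation matters):

```python
# Invented unit costs for the equations above:
QUICK, DEV, DIRTY = 1, 1, 2  # DIRTY = cost of one deferred cleanup

# Immediate cleanup: the debt of iteration 1 is paid in iteration 2.
cost_quickNDirty1 = QUICK
cost_cleanDev2 = DIRTY + DEV          # dirty1 + dev2

# Deferred cleanup, with another quick & dirty iteration in between:
cost_quickNDirty2 = QUICK
cost_cleanDev3 = DIRTY + DIRTY + DEV  # dirty1 + dirty2 + dev3

# The cleanup iteration grows with every deferral:
print(cost_cleanDev2, cost_cleanDev3)  # → 3 5
```

Each deferral piles one more cleanup onto the iteration that finally pays the debt, which is why that iteration keeps getting heavier.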

Now let’s test a foolish hypothesis: could letting the debt increase be beneficial in any way?

Not resolving all the debt makes it grow progressively. The cost saved by not cleaning up all the debt (quick dev) is more than offset by the bad consequences for maintainability, which collapses earlier, impairing the delivery capability more quickly.

This doesn’t seem to be a good solution, as letting the debt grow costs you more in the end (as any debt does). One could say that you pay “interest” in maintainability in this case.

Even clean code requires refactoring

But that is not enough. As we saw in the first graph, writing only “clean” code guarantees good delivery, but still at an increasing cost, because even clean code requires periodic refactoring so that new features can be integrated not just as additions (which add complexity) but merged into concepts shared by the whole codebase. Think of it as keeping a room tidy: as new furniture is brought in, you want to move or replace some pieces to keep the whole room practical.

Once again, even if refactoring is costly, it should be seen as beneficial to the whole project’s maintainability (and so its lifespan). Thus, in the long run, the cost of refactoring will be balanced by the increased maintainability:

Avoiding quick & dirty to avoid debt is not enough: another, “implicit” kind of debt comes with the increasing complexity of the product, and adding periodic refactorings mitigates the loss of maintainability while keeping a good delivery pace, without making costs worse.

Now, what if quick & dirty + cleanup (the other “no debt” approach) included such periodic refactoring as well?

Quick & dirty, if cleaned up, looks similar to the clean-code approach with refactorings included, but it is a bit more costly and loses maintainability earlier, because of the time lost in a subsequent cleanup iteration (instead of writing the clean version directly). As a result, delivery is less optimal.

It’s not only about code

Think about a codebase that was not cleaned up enough and consequently suffers from huge maintainability issues. Would you feel motivated to work on it? Probably not, or you might ask for a lot of money as compensation for your expected suffering. Depending on the project management’s stance on the quick & dirty approach, you would also put more or less hope in the prospect of things getting better, which could lead to even more frustration and even less motivation. No one wants to build on sand.

You would probably be motivated by rewriting a fresh new version of the app using the latest hyped framework, but possibly nervous about the enormous size of such a task.

In the end, both options (refactor or rewrite) will be bigger challenges than maintaining the existing app properly would have been.


Maintainability is key in project development, as it impacts both cost and delivery capability. Poor maintainability implies some unacknowledged effects, such as poor stability (i.e. more and more regressions as the code gets more and more obscure) and reduced developer productivity (coding gets harder and more stressful). It induces a vicious circle where developers are less and less motivated to maintain something that is bad and painful to work on.

Relieving time pressure is not enough to avoid the “quick and dirty” syndrome. You also have to build trust and confidence within the company.

In real life, “quick & dirty” iterations cannot be avoided, for both business reasons (typically, delivering a feature to close a deal) and developer laziness (it’s dirty but “it works”), but their effect on maintainability is detrimental.

As in any building activity (such as an entrepreneurial one, when you “build” a company), productivity cannot stay high without investment, and the only way to mitigate the maintainability damage of quick & dirty iterations is to compensate for them with “cleanup” iterations. However, because those additional cleanup iterations cost time, cleaned-up quick & dirty code can be summed up as “clean code in two iterations”.

However, another dimension of maintainability loss is the “natural” tendency of any code to get more complex (and so less maintainable) as it grows bigger and bigger (as a result of feature additions). That means that any “no debt” approach (either “quick & dirty + cleanup” or “clean”) requires periodic refactoring to keep maintainability at a reasonable level.

In the end, the amounts of quick & dirty and refactoring iterations will dictate the product’s life expectancy, that is, the time when the codebase will be deemed unmaintainable and delivery will stop. At that point, the only options will be either product freeze/discontinuation or a full rewrite. As the saying goes:

— Why refactor it, since we’ll rewrite it from scratch in two years?

— To avoid rewriting it from scratch in two years.

Another cause of full rewrites is framework hype and deprecation. On that front, as with the choice of going quick & dirty or not, the approach you choose will dictate how long your product can be expected to live.