
Software Development and Batch Size

I’ve been having a lot of conversations lately about batch size and how the choice of batch size impacts software development and release processes. Today I went looking for some other perspectives and found this post about optimization on the IBM website. The author provides a good summary of the benefits available from decreasing batch size in software development.

I have been using agile methodologies for quite a long time, and they are much better than the traditional waterfall model. In the latter, all development is done at once, before testing occurs. Defects are detected way after they were put in the code, which results in lengthy, ineffective debugging sessions. In agile methodologies, development is cut in small pieces, and each piece is tested before moving to the next one. [Faults] are detected when the new code is fresh in the memory of developers, leading to short and effective debugging. This seems in line with the idea of small batch sizes advocated by Reinertsen.

This is true in my experience. Small batches increase the quality of each software deliverable while decreasing the frequency of bad releases or production issues. Later in the article he points out one instance where large batch size appears to yield a better outcome: “in a manufacturing environment, it is better to assign work in progress to machines globally instead of dealing with them one at a time…”

He seems to be asking why optimizing an entire population as a batch (large batch size) works better than optimizing smaller populations (small batch size) in certain circumstances. In other words, why does software development favor small batches when manufacturing appears to favor large batches?

Input and Output Variance

In a factory setting where every machine accepts identical raw input and produces identical completed output, it may work better to optimize for large batches. The variance of the input and output is what makes software development different, but there are also manufacturing examples where small batches were made more effective. One notable example was when Toyota changed how they manufactured automobile parts that required custom dies. Reducing the changeover time from three days to three minutes allowed them to decrease their batch size and increase overall efficiency.
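
To make that economics concrete, here is a rough sketch using the classic economic lot-size formula, Q* = sqrt(2DS/H), which balances changeover (setup) cost against the cost of holding finished inventory. The demand and cost figures below are invented for illustration; they are not Toyota’s numbers, but they show why cutting changeover cost makes small batches the economical choice.

```python
from math import sqrt

def optimal_batch_size(demand_per_day, changeover_cost, holding_cost_per_item):
    """Classic economic lot size: Q* = sqrt(2 * D * S / H), where D is demand
    rate, S is changeover (setup) cost, and H is holding cost per item."""
    return sqrt(2 * demand_per_day * changeover_cost / holding_cost_per_item)

# Hypothetical numbers: 500 parts/day and $1/day holding cost per part.
# An expensive (three-day) changeover pushes toward very large batches...
print(optimal_batch_size(500, 3000, 1.0))  # ~1732 parts per batch

# ...while a cheap (three-minute) changeover makes small batches economical.
print(optimal_batch_size(500, 3, 1.0))     # ~55 parts per batch
```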

Software development is all about specialization and context switching

Nearly all software development takes a unique input and produces a unique output (high variance). I am aware of very few instances where a programmer routinely takes in a similar input, does some processing on it, and returns a normalized output. Given human limitations in context switching (multitasking) and the ramp-up time needed to become proficient with programming languages and platforms, smaller batches tend to work best for software development. Some organizations do focus on making their developers better at context switching and proficient with more technologies, but in my experience these initiatives often fail to deliver long-term benefit to the organization.
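
As a rough illustration of that limitation, the sketch below models a developer round-robining across several features in one-day slices, paying a fixed penalty for every context switch. The feature sizes and the penalty are made-up numbers, but the shape of the result is why working one small batch at a time tends to win.

```python
def time_to_first_delivery(feature_days, concurrent, switch_penalty_days=0.2):
    """Elapsed days until the first feature ships when `concurrent` features
    of `feature_days` each are worked in round-robin, one-day slices."""
    total_work = feature_days * concurrent
    # With round-robin slicing, every slice boundary is a switch to a
    # different feature, so overhead grows with the amount of interleaving.
    switches = total_work - 1 if concurrent > 1 else 0
    return total_work + switches * switch_penalty_days

# Hypothetical: five 10-day features.
print(time_to_first_delivery(10, concurrent=1))  # 10.0 days: one thing at a time
print(time_to_first_delivery(10, concurrent=5))  # ~59.8 days: everything at once
```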

Batch size and application integration

One argument I often hear against decreasing batch size is that integration testing isn’t possible with smaller batches. I acknowledge that many systems are made up of tightly coupled (and often hard-coded) components. Loose coupling has long been an effective pattern in software development; it facilitates smaller batches while increasing overall system resilience, and it also increases reuse and improves flexibility in composing larger systems. Integration happens most effectively when system components are loosely coupled.
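
As a minimal sketch of what that looks like in practice, the example below has an order service depend on a small interface rather than a concrete payment component. All of the names (PaymentGateway, OrderService, FakeGateway) are hypothetical, not taken from any particular system; the point is only that each side of the interface can be built, tested, and released in its own small batch.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The only thing OrderService knows about payments."""
    def charge(self, amount_cents: int) -> bool: ...

class OrderService:
    """Depends on the PaymentGateway interface, not a hard-coded component,
    so it can be integration-tested and released independently."""
    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway

    def place_order(self, amount_cents: int) -> str:
        return "confirmed" if self.gateway.charge(amount_cents) else "declined"

class FakeGateway:
    """Test stub that stands in for the real payment component."""
    def charge(self, amount_cents: int) -> bool:
        return amount_cents < 10_000

print(OrderService(FakeGateway()).place_order(2_500))   # confirmed
print(OrderService(FakeGateway()).place_order(25_000))  # declined
```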

Additional References

https://martinfowler.com/bliki/ActivityOriented.html: silos within organizations resist decreasing batch sizes and favor work that benefits the silo over work that benefits the business.

Organizing by activity gets in the way of lowering batch size of work that is handed-off between teams. A separate team of testers won’t accept one story at a time from a team of developers. They’d rather test a release worth of stories or at least all stories in a feature at a time. This lengthens the feedback loop, increases end-to-end cycle time and hurts overall responsiveness.

https://hbr.org/2012/05/six-myths-of-product-development: Fallacy number 2 addresses the idea that large batches improve the economics of the development process. In fact, the reverse is true in both manufacturing and software development.

By shrinking batch sizes, one company improved the efficiency of its product testing by 220% and decreased defects by 33%.

http://www.informit.com/articles/article.aspx?p=1833567&seqNum=3: In this post, the author of Continuous Delivery explains why smaller batches result in lower overall risk to the organization.

When we reduce batch size we can deploy more frequently, because reducing batch size drives down cycle time.
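
One way to see that relationship is Little’s Law: average cycle time equals work in process divided by throughput. The numbers below are hypothetical, but they show how shrinking the release batch shortens cycle time and therefore allows more frequent deploys.

```python
def cycle_time_days(batch_size_stories, throughput_stories_per_day):
    """Little's Law: average cycle time = work in process / throughput."""
    return batch_size_stories / throughput_stories_per_day

# A hypothetical team that finishes 2 stories per day:
print(cycle_time_days(40, 2))  # 20.0 days from start to deploy for a 40-story release
print(cycle_time_days(4, 2))   # 2.0 days when deploying 4 stories at a time
```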
