Railroads don't have a batch size of 1
Lean theory tells us that the optimal batch size is 1 [update: no it doesn't. The optimal batch size is often 1, especially in a manufacturing context.]
That is, we perform each unit of work on demand and it flows independently through the value stream.
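The lean argument behind small batches is an economic trade-off, which a minimal sketch can make concrete. This is the classic economic-batch-size model (not from the article itself, and the numbers are purely illustrative): per-item cost falls as a fixed transaction cost is amortised over a larger batch, but rises as holding/delay cost grows with batch size.

```python
import math

def cost_per_item(batch_size, transaction_cost=100.0, holding_cost=0.5):
    """Per-item cost: the amortised fixed cost of one release/shipment,
    plus an average holding (delay) cost that grows with batch size."""
    return transaction_cost / batch_size + holding_cost * batch_size / 2

# The minimum of this U-curve is the classic economic batch size:
#   Q* = sqrt(2 * transaction_cost / holding_cost)
optimal = math.sqrt(2 * 100.0 / 0.5)  # 20.0 with these illustrative numbers

# The optimum really is a minimum of the U-curve:
assert cost_per_item(optimal) <= cost_per_item(optimal - 1)
assert cost_per_item(optimal) <= cost_per_item(optimal + 1)
```

The point is that a batch size of 1 is only optimal when the transaction cost is near zero; cheap, automated releases pull the optimum towards 1, while expensive manual releases (or train crews) push it up.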
Certainly that is the case when you order one of my books, which are printed individually on demand and posted directly to you.
I am still learning about systems and flow. Some of you will know that I am a train nut: I like railways. So it puzzled me to read an article recently in Trains magazine about the trend to larger trains, which have increased by a factor of 100 over the last 150 years.
This increase has been enabled by technological improvements in wagon design, couplers, brakes, and multiple-unit train control, as well as enormous investment in longer passing sidings and double-track main line.
Now, a railroad's sole function is to move stuff, not create it, but the same is true of the steps in the Require-to-Deploy value stream after design and build, where our purpose is to move created value into the hands of the business.
The reason a railroad creates larger trains is clearly to reduce cost. It does this by reducing the size of train crews and, of course, by reducing the number of trains, consolidating them into very large batches. In the USA these "batches" get to be 5 kilometres long and weigh 30,000 tons.
Railroads are time-sensitive, like IT, yet by creating these enormous freight drags they can still meet their SLA, which is usually measured in days. Extremely time-sensitive freight - and obviously passengers - still move in smaller, faster, more frequent "batches", but even these don't transport individual people or parcels.
I can relate the large freight drags to the need in legacy organisations to design and co-ordinate releases of complex, interdependent systems. For most systems, the business has no need of, or expectation for, change more frequent than monthly or quarterly. In fact, monthly would be an improvement for most.
I have written before about the competitive frenzy and how it generates unnecessary haste. You may need to change web content within hours or days to respond to a competitor, but you seldom need to deploy an entirely new product in less than several months to remain competitive over the longer term. For government and large corporations, many systems move even more slowly, with minimal impact on the organisation.
We must be very careful about, and resistant to, the modern trend of introducing startup thinking into legacy organisations. Startups are on a helter-skelter scramble for survival and growth. Legacy enterprises have enormous momentum and stability: they can and should be planning five years ahead. And many enterprises are not subject to the mad competitive struggles of the retail sector.
Being able to deliver IT changes on a monthly cadence would be a fine standard for most systems in many enterprises. This is Horse DevOps, not Unicorn DevOps. The transition process in legacy systems is often highly manual and hard to automate:
- code merging
- environment provisioning
- code migration
- user testing
- training of users and support
- production deployment
By doing releases on a fixed cadence, the work of each cycle is planned and predictable. We bring the work to the teams, not the teams to the work; we release by features, not projects; we establish a velocity and resist overburdening the teams.
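A fixed cadence works like a railway timetable: a feature "catches the next train" rather than triggering its own release. A minimal sketch of that scheduling rule (the function name, anchor date, and 28-day cadence are all illustrative assumptions, not anything from the article):

```python
from datetime import date, timedelta

def next_release(ready: date, cadence_days: int = 28,
                 anchor: date = date(2024, 1, 1)) -> date:
    """A feature ships on the first scheduled release date on or
    after the day it is ready - it catches the next train."""
    elapsed = (ready - anchor).days
    cycles = -(-elapsed // cadence_days)  # ceiling division
    return anchor + timedelta(days=cycles * cadence_days)

# A feature ready mid-cycle waits for the next scheduled date:
print(next_release(date(2024, 1, 15)))  # 2024-01-29
```

The design point is that the timetable, not the feature, sets the date: teams plan work to the cadence instead of negotiating a release per project.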
We should remember that a batch size of one is a navigational star, not a destination. There are two kinds of DevOps, and Horse DevOps is not attempting to fly or travel in space.
For horses, the principle of "batch size of 1" reminds us to keep batch sizes small in certain contexts; it is not a target KPI. On the other hand, railroads remind us of the benefits of large batch sizes. I think horses need to keep both in mind, especially in the release process.
There is much I am still learning in this space, so I welcome discussion - either in the comments below, or via a link to your own thoughts.