Process improvement programs are a staff distraction

On page 10 of the introduction to this book the authors categorically state:

"A process improvement programme reduces the time the staff have for existing service duties, causing a decrease in service quality - exactly the opposite of intended programme goals. "

This is out of context and should include some cautionary text, such as "could cause". It is a plain fact that process improvement programs have their place in helping organizations define problems and judo those statements into opportunities for improvement. They might draw in resources occasionally required to 'man the pumps', but this resource commitment is made as a management decision consciously designed to be outweighed by the target benefit!

A clumsy statement at best. At worst it will alienate a whole community breast fed on ITIL Version 2's 'process improvement' focused strategy.


Plain fact?

This being a skeptical site, I have to ask:

"It is a plain fact that process improvement programs have their place in helping organizations define problems and judo those statements into opportunities for improvement."

And your evidence of this "plain fact" is what?

Customer experiences

It is my personal experience with our customers that a process-improvement-led strategy can result in major tangible benefits if the reason for the process improvement is supported by a targeted problem statement, thus mitigating the need for a wholesale process replacement and the consequent staff distraction. So plain fact means - plain fact to me, based upon experience. Process-improvement-led initiatives have their place; they just need to be carefully focused... stay skeptical... it's healthy, to an extent. Experience is a wonderful asset.

The Improvement Paradox

I believe ITIL is correct here on both the warnings and leanings. Most improvement initiatives rely on employees who perform the day-to-day work to both guide the improvement program and make the actual improvements.

It takes time for improvements to emerge; they are not instantaneous. Therefore, the first effect of an increase in improvement effort is a reduction in the time an employee can allocate to existing duties. In other words, the short-term effect of an improvement effort is a decline in output or performance.

As a direct result of this "air pocket", the pressure to work harder increases. The employees are then forced to cut back the time devoted to the improvement effort. This gets output/performance back on track but impedes the improvement effort. "Common sense" efforts to mitigate this effect (such as through more training) frequently serve to exacerbate the problem.
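The dynamic described above - improvement effort cutting into routine work now, while its payoff arrives only after a delay - can be made concrete with a toy simulation. Every number here (the 20% time share, the 8-week delay, the size of the productivity gain) is an illustrative assumption, not from the text; this is a sketch of the feedback structure, not a calibrated model.

```python
# Toy sketch of the "worse before better" improvement dynamic.
# All parameter values are invented for illustration.

def simulate(weeks=40, improvement_share=0.2, delay=8, start=4):
    """Split a fixed weekly time budget between routine work and
    improvement. Improvement effort raises productivity only after
    `delay` weeks, so output dips first (the "air pocket") and only
    later climbs above the old baseline."""
    productivity = 1.0
    pipeline = [0.0] * delay            # improvement effort still maturing
    outputs = []
    for week in range(weeks):
        effort = improvement_share if week >= start else 0.0
        work_time = 1.0 - effort        # time left for existing duties
        pipeline.append(effort)
        matured = pipeline.pop(0)       # effort invested `delay` weeks ago
        productivity += 0.05 * matured  # delayed payoff of improvement
        outputs.append(work_time * productivity)
    return outputs

out = simulate()
# out[0] is the pre-programme baseline; min(out) dips below it once the
# programme starts, and out[-1] ends above it after the gains mature.
```

Adding a "pressure" term that cuts `improvement_share` whenever output falls below baseline would reproduce the vicious cycle described next: the programme gets starved before its delayed gains ever arrive.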

This simple dynamic explains why, over the long term, most (yes, not some but most) improvement efforts come back to earth. It is not the only reason, but it is a very common one. Unfortunately, it is poorly recognized (and thus incorrectly handled) because:

1 - It is frequently swept under the category of "cultural and people challenges". A term equally reassuring and meaningless in that it excuses the user from determining root causes such as...

2 - ...the "plain fact" of the goodness of the process improvement program. The preponderance of the evidence suggests otherwise. I can cite the research but you may already know it: Your own web-site, for example, identifies research that 78% of ITIL projects fail. Now, the overall research doesn't say process is bad (quite the contrary) but it does give damning evidence that improvement programs are, by their inherent nature, intuitively challenging and should be treated with extreme caution.

ITILv3 does the right thing by acknowledging these situations.

Improving should be part of the job

In the Improvement Paradox there is an interesting notion: that the act of improving is something staff should do besides their daily work, instead of it being part of that work. So staff are told that they have to do extra work throughout an ITIL implementation initiative, and that it will get better in the end (which never happens). Since most ITIL initiatives also try to flood the poor staff with additional paperwork and bureaucracy, it seems so much effort for so little gain.

Improvement programs are challenging when you make improvement a program. When it is part of the day-to-day work to improve efficiency (first task: how to create time for further improvement), it becomes much easier. Understanding how staff work and helping them directly with improving is the task of the consultant - not spending all that time on 'cultural change' programs.


I've not seen it often in practice, but I like the principle of bootstrapping: allocate a certain time each week to increase efficiency, to buy time to increase efficiency, to...

For instance:

1) the Alligator Killer ("when you are up to your ass in alligators..."). Someone spends every Tuesday afternoon killing alligators: eliminating root-cause problems that otherwise nobody has time to deal with.

2) the Process Polisher. Someone else spends Tuesday afternoon studying process and looking for quick win improvements to eliminate bottlenecks, error generators or redundancies.

I remember an alligator killer I saw years ago in a bank. It was easy enough for his boss to free up one person for one afternoon a week. Just about every week he brought back a dead alligator. He was a young, keen, smart kid with good technical skills and a sharp mind for understanding the systems. His boss gave him a strong mandate to poke where he liked. He would develop hunches or he'd data-mine until he caught a sniff of some underlying problem, then he'd ferret it out and nail it. He loved the job and everyone loved his work. Every kill was that much less pressure on the team.

Spot on

You are both spot on. By treating the improvement effort as a dynamic process and actively managing the feedback and delays (a la the lifecycle, for example), differing techniques take on a more innovative power. You have subtly created a virtuous cycle instead of a vicious one.

mixing the metaphor

Actually this young guy should have been termed the "swamp drainer," if the metaphor is to be understood correctly.

Charles T. Betz

draining the swamp is CSI

Indeed, but it doesn't have quite the same cachet does it?

You could say draining the swamp is IT Operations, i.e. BAU, delivery of service. Or better still that draining the swamp is improving your environment, i.e. what V3 now calls CSI. Then alligators are problems that distract you from that, and incidents are being bitten in the ass.

I use it that way. I go on to position change management as the fence around the swamp that stops the alligators getting in. No point killin' 'em if they move in just as fast (or faster).

CSI = IT Operations = Saying No to Project Managers

My experience is that most IT organizations are in the running business. They are running to solve incidents and they are running projects. Always busy, always working hard, and never stopping to look at the real problem and solve it. No time. Busy.

That's why I like MOF: they introduced the Team Model and the notion that you have to separate tasks based on whether they are ad hoc and reactive (i.e. Support), project-planned, or repetitive and pro-active (i.e. Operations). I've been involved in implementing this Team Model, and specifically the Operations team, for the past few years. The implementation process is quite simple:
- Create a room with a lot of monitors and call it the Operations Control Center (or something similar)
- Create two roles: Monitoring Operator and Area Manager (That is the Microsoft approved term)
- Create a roster that makes sure that there are always two monitoring operators and one Area Manager.

You'll find that the monitoring operators will experience cold-turkey symptoms. They are so used to running that they cannot sit still and observe any more.

The Area Manager has the role of Alligator Killer or Swamp Drainer. Every day there is a different Area Manager, specific to a certain Area (or Domain) such as the Network, Storage or Directory Services. The Area Manager spends the afternoon scouting the manufacturers' websites, user fora, etc. for new fixes and new pieces of information to improve the performance of that specific part of the infrastructure (or applications). The Area Manager can create instructions for pro-active maintenance tasks, investigate trends, report on performance, etc.

And there is a structure for improvement that really works. The hard part is to get management to allow this to happen. The first days they will be very willing to try. But then that pesky project manager needs a system administrator now, and management is then inclined to give in. Project managers are useless when it comes to planning their projects and always need some extra resources at the last moment.

Mind the gap

It's all well and good to acknowledge the situations, but it doesn't leave people any better at avoiding the problem in the first place. From what I can tell, it's a function of gap management. There is a level that is "just right" for each organization. When an organization tries to take on a gap that is more than a healthy stretch, failures will surely follow. The hard part is that this is not something people are used to tracking.


As the cliché goes, “The first step is admitting you have a problem.” Dealing with process-improvement problems and producing sustainable change is indeed a complex challenge.

The routines offering short-term fixes to operational problems are termed “first-order problem solving.” The routines that produce higher levels of insight, and thus sustainable improvement, are “second-order problem solving.” This second capability is a rarity (e.g. look at the adoption of Problem Mgt) and so most organizations continue to find sustainable change/improvement elusive.

The key to enduring process improvement hinges on challenging the assumptions of the existing processes and their interactions. In other words, new ways must be developed to both understand and accomplish the work. There are effective metaroutines which can facilitate second-order problem solving to produce sustainable change and therefore continuous improvement.

A metaroutine is a problem-solving procedure/methodology to improve existing routines or create new ones. Properly designed and deployed metaroutines are mechanisms for enhancing second-order problem solving. Organizations default to first-order problem solving when a metaroutine is not in place.

The Shewhart/Deming cycle (PDCA), for example, is a simple metaroutine governing the use of statistical tools in TQM. While PDCA suffers from severe limitations, it is a useful first step in teaching organizations the value of metaroutines.

The DMAIC cycle can be thought of as a metaroutine patterned after the PDCA cycle. Like PDCA, DMAIC governs the use of statistical process control and other advanced statistical tools in order to achieve Six Sigma.
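The idea of a metaroutine - a procedure that governs how other routines get improved, rather than doing the work itself - can be sketched in code. This is purely illustrative: the function names, the toy "error rate" process, and the 10% target are all invented for the example, not drawn from ITIL, TQM or Six Sigma material.

```python
# Illustrative sketch of a metaroutine as a higher-order procedure
# driving a PDCA-style loop over an existing routine.

def pdca(do, check, act, cycles=5):
    """The Plan is embodied in `check` (the target); `do` runs the
    existing routine; `act` applies a corrective change. The metaroutine
    never performs the work itself -- it only steers the improvement."""
    measured = None
    for _ in range(cycles):
        result = do()                   # Do: run the routine
        ok, measured = check(result)    # Check: compare against target
        if ok:
            break
        act(measured)                   # Act: adjust the routine
    return measured

# Toy process: each corrective cycle halves the error rate.
state = {"error_rate": 0.4}
do = lambda: state["error_rate"]
check = lambda r: (r < 0.1, r)          # Plan: target is < 10% errors
def act(measured):
    state["error_rate"] = measured / 2  # corrective action

pdca(do, check, act)
# state["error_rate"] is now 0.05: three corrective cycles were needed.
```

The same skeleton fits DMAIC by swapping in Define/Measure steps before the loop; the point is only that the improvement logic sits above, not inside, the routine being improved.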

I imagine ITIL-specific metaroutines will be fully developed in the complementary material. The theoretical groundwork has been laid. For example:
- Service Operations: single/double-loop learning in Service Operations (explains why Problem Mgt is in SO rather than CSI)
- Service Transition: Closed-loop thinking and the DIKW hierarchy
- Service Strategy: the use of causal loop diagrams, the golden pony, systems thinking.

But the first step, of course, is admitting the problem.

After you admit you have a problem...

After admitting the problem you have to complete steps two and three..... Two: define the problem in the form of a 'problem statement', pointing back to the actual evidence, or offering a hypothetical or suspected set of indicators. Three: state the impact to each affected audience, preferably with linkage to performance measures they care about.

Strange how this was identified as a weakness in P-D-C-A as far back as 1948 by Carnegie Mellon University, and many feel it led to the evolution of statistics-based quality management - and Six Sigma. Six Sigma is a messenger. Don't shoot the messenger. The trick with Six Sigma is to know when to stop - as with any other analytical tool.....

While I am at it I might as well offer a few more of the subsequent steps:

- Conduct a control barrier analysis to verify all safety mechanisms were in place and active
- Cause analysis, perhaps leading to a root cause identification, but unlikely
- Solution identification, criteria-based selection and packaging
- Development of an action plan and desired results
- Open a change record.....

Little of this is in ITIL - even the new V3, and the placement of Problem Management in Operations instead of Improvement is a huge indicator...

Indicator of what?

There are many paternal claims on ITIL. But when I read the v2 books, it seems obvious who the real daddy is: TQM. ITILv2 reads like “TQM-lite” with a dose of BPR. So it isn’t surprising that, when asked what is wrong with or missing from ITIL, the narrative reads like a plea for “TQM-heavy.”

My suspicions are confirmed when I see insights such as "control barrier analysis", "cause analysis" and statistical control methods. Someone forgot to tell the ITSM community that, between 1993 and 1999, TQM fell from the third most commonly used business tool to the 14th, and declining fast. (The disaster called BPR speaks for itself.)

I’m not disappointed ITILv3 did not further embrace this legacy. TQM tools emerged during the age of mass production, where quality management meant reducing variations in material products. It worked fine in this realm where processes are static and contained stable and limited interactions. Well, it didn't work completely fine but you get the point.

Services, on the other hand, exhibit heterogeneity. Variances are in their nature. What two call center encounters, for example, are truly identical? The manner in which TQM quantifies success does not readily transpose to quantifying the success of service-related improvement programs. Probably why the TQM quantitative techniques were left out of ITILv2. Only the most rudimentary tools (e.g. fishbones) were left in.

Service processes are most often time-dependent, dynamic and contain at least moderate interactions with other processes. E.g.: A will cause B to occur if C is close to D, and X occurs before Y. It doesn’t make sense to use TQM tools (even fishbones) for these types of problems. The tools cannot tell if the corrective action worked or not, thereby breaking the PDCA model. An improvement program is therefore even less likely to succeed.

So I can't agree with your chiding a service management framework for not moving closer to TQM.

Problems are only one entry point...

I believe that I understand your point, but I don't believe that "admitting the problem" is step number one. Problem solving is only one entry point. There are other drivers that factor in and can be used as an appropriate starting point.

I think problem solving is way overrated and much of what gets done in the name of problem solving is actually problem creation in a new wrapper. Undoubtedly, there are way more problems than a given organization could hope to take on and see a meaningful resolution. In fact, many of them are distractors and aren't worthy of the time invested to find a solution.

In a given organization's stage of life, there are both things to resolve and opportunities to unfold that are completely appropriate. What are these? Why are they important? What is the compelling benefit? These are a few of the critical questions to get up on the table and wrestle with until you have satisfactory answers.

Unfortunately, most of the organizations I get the opportunity to work in don't have a sufficient level of discipline or organizational commitment to see the work through to completion. It's much easier to just "install your software and see what it can do for us" than to deal with the important questions.

Dealing with the important questions gets viewed as a "waste of time" and thus deferred (with lots of justification for why). It's about that point that I'll say:
"You can pay me now or pay me later -- either way you're going to pay... and don't complain about the price to me after you've made your choice."

I find myself agreeing with

I find myself agreeing with much of what you’ve said. Although the optimist in me believes we are slowly coming out of the current cycle of management short-sightedness in IT. These cycles always do:

While the A-bomb gets credit for ending the war, Japanese engineers knew they had been beaten by the American factories. So they were ready when Deming told them Japan could offset their lack of raw materials through investments in engineers and mathematicians to improve quality. Japan later returned the favor to the above-mentioned American factories.

As you said, pay me now or pay me later.

Experience is a wonderful asset...

... and a terrible teacher. You get to take the exam first and learn the lesson afterward. That can prove to be a very costly educational experience.

Skeptical Empiricist

When I encounter a self-proclaimed ITSM theorist or expert (there is no shortage these days), I'll listen patiently for the sage advice. Upon completion I'll ask, "How do you know?"

If the reply begins with something like, "Well, in my 20 years of experience...", the following quote has a nasty habit of coming to mind:

"But in all my experience, I have never been in any accident... of any sort worth speaking about. I have seen but one vessel in distress in all my years at sea. I never saw a wreck and never have been wrecked, nor was I ever in any predicament that threatened to end in disaster of any sort."

E. J. Smith, 1907, Captain, RMS Titanic

Alienating process improvement specialist

These statements may also alienate process improvement specialists using CMMI, Six Sigma, et cetera ad nauseam. Kudos to most of the comments -- most process improvement programs do degrade performance initially, and should be applied judiciously until the tradeoff/benefit-realization curve is understood by that organization. It will differ from that of other organizations.

Risking placing myself in the "my 20 years' experience" category, I have seen process improvement succeed and fail, and have played roles in both types of situation. Most failures come when the objective is to "be ITIL compliant", "be CMMI Level x", or "get the ISO certification." Every time -- EVERY time -- the focus is turned to such grand, nebulous goals, the investment in time and attention is distracted from solving real problems. The best successes have come when the organizations focused on real, empirical problems with the occasional glance at IEEE, ITIL, CMMI, or some other reference to remind them of various points. Honestly, the most convincing examples of process improvement were in organizations or teams that had more common sense than book dependency; I have seen a few that were almost fully compliant with CMMI Level 2 without ever having read the book! On the other hand, I have seen organizations working with standards for over five years and still only achieve expensive posturing.

The whole quote

While I understand and appreciate the comment & question, when I went back to read the whole thing, what struck me was that the comment by "Visitor" that

"ITILv3 does the right thing by acknowledging these situations" makes more sense when you can see the entire context, soooo...

The entire surrounding context reads:

"The phrase ‘People, Process, and Technology’ is a useful teaching tool. A closer examination, however, reveals complexities such as time delays, dependencies, constraints and compensating feedback effects. The following are observations in the real world:

-- A process improvement programme reduces the time the staff have for existing service duties, causing a decrease in service quality – exactly the opposite of intended programme goals. As quality falls, pressure to work harder increases. Pressured staff then cut back on improvement efforts.

-- Funding cuts affect service quality, which in turn diminishes demand for services. The reduced demand prompts yet more funding cuts.

-- Increase in service demand generates increases in operations staff. The ratio of experienced staff to new staff decreases. Less mentoring and coaching opportunities are available for the newcomers; quality of service suffers; demand for services slows; morale and productivity decrease, and staff are let go.

Apart from driving change through continual improvement, organizations must be prepared for rapid transitions and transformations driven by changes in an organization’s environment or internal situation. Changes may be driven by mergers, acquisitions, legislation, spin-offs, sourcing decisions, actions of competitors, technology innovations and shifts in customer preferences. Service management should respond effectively and efficiently. The approach to service management provided is useful for understanding the combined effects of management decisions, dependencies, actions and their consequences."



(I may just spring for the electronic copy of the books... Makes it easier than typing using the wrong (:-)) dictionary. :-))
