The Emperor has no clothes. Where is the evidence for ITIL?

This article has been podcast

Since this is a skeptical blog, it is high time we examined the evidence. Where is the evidence for the benefits of ITIL? There isn’t any. Not the kind of hard empirical evidence that would stand up in, say, clinical trials. There is more evidence for quack alternative medicines than there is for ITIL. There is certainly more solid evidence for the application of CMMI (or CMM in solutions development; see this presentation by SEI), an analogous methodology in a closely related area.

Granted there is some research around the benefits of aligning IT with the business but not around quantification of ROI and nothing (that I can find) specific to ITIL.

The itSMF themselves make a few outrageously unsubstantiated claims in An Introductory Overview of ITIL [version 2]:

  • Over 70% reduction in service downtime
  • ROI up by over 1000%
  • Savings of £100 million per annum
  • New product cycles reduced by 50%.

Send no money now!!

The Best Practice Research Unit is associated with the ITIL 3 refresh [Updated: It was. The site has disappeared]. After 20 years it is about time there was such a unit. It is just a shame this is not an initiative of either OGC or itSMF (at least itSMF USA is doing something, in fact several things focused on research).
The BPRU website explicitly recognised this problem.

Much of the material published on IT management, including IT service management, has been normative or prescriptive in flavour. Few rigorous, academic studies have been undertaken to evaluate how tools, techniques, methods and management approaches have been selected, adapted, implemented and measurable benefits achieved.
There is a danger that new approaches arise out of the practitioner community with little empirical validation.

“Few rigorous, academic studies”? Don’t be nice, Tony. The solitary piece of academic research I can find carries a bold and, I think, unproven title: “Evidence that use of the ITIL framework is effective”. It is from Dr B.C. Potgieter, Waikato Institute of Technology (New Zealand), J.H. Botha, Oxford Brookes University, and Dr C. Lew, Damelin International College.
It opens by saying “Very little academic material exists on ICT Service Management Best Practice…” and concludes its own research with:

We found that both customer satisfaction and operational performance improve as the activities in the ITIL framework increases. Increased use of the ITIL framework is therefore likely to result in improvements to customer satisfaction and operational performance. Although the study was limited to a single research site, claims made by executive management of the research site and OCG as to the contribution the ITIL framework seems to be justified. More definitive research delineating the nature of these “relationships” is however needed, especially regarding each process in the ITIL framework.

The evidence base is poor: “research site was a large service unit of ICT in a provincial government in South Africa during 2002/3.” One site. More importantly, the two things measured to support this brave conclusion were (1) customer satisfaction (of the three surveys they conducted, only the final one included management, so all we can say is that non-managerial staff were happier) and (2) “objective service improvement”, measured by “the number of calls logged at the Help Desk” because “we can rather safely conclude that the number of problems logged would be a good reflection of objective service levels”. I expect that last statement leaves this research with zero credibility with anyone who understands ITIL and ITSM. No cost/benefit analysis. Not a single valid objective metric. Sure, if you throw enough government money at anything and launch an aggressive enough PR campaign, you can make the users happier. That proves nothing. And the fact that calls to the Service Desk went down over an initial nine-month period would to me be a cause for concern, not celebration. But you can bet this paper will be quoted all over the place as evidence of the effectiveness of ITIL.

Pink Elephant have finally extended the number of anecdotal stories beyond the tired old Procter & Gamble, Caterpillar and the internationally famous Ontario Justice Enterprise. They now have a few more arbitrary statistics. I’ve been in vendorland and I have generated this kind of case study. These amount to no more than selective quotes from managers justifying their decision after the fact.

HP is one vendor putting numbers where their hype is, though this pertains to their Service Desk product, not ITIL:

IDC found that, for the companies surveyed, IT productivity increased by an average of 14% … contributing an average cost savings over three years of almost $4.2 million annually. When normalized for company size, these savings amounted to $17,235 per 100 users … Based on these savings, the three-year hard ROI … averaged 130%, giving an average payback period of 13.5 months.

University researchers! Are you listening? Those are results. Unfortunately, paid analysts doing surveys for a vendor are about the least useful sort of research. It is not subject to the same transparency, peer review or statistical rigour as academic research, and the results tend to be selectively quoted by the vendor.
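For what it's worth, the arithmetic behind figures like these is simple enough to sketch. The cost input below is hypothetical (the excerpt doesn't publish IDC's actual inputs); the point is only to show how a "three-year hard ROI" and a payback period are typically derived from annual savings and total cost:

```python
# Illustrative only: how "three-year hard ROI" and payback figures of the
# kind IDC quotes are typically computed. The $4.7M cost is an invented
# placeholder; the survey's real cost inputs are not in the excerpt.

def three_year_roi(annual_savings, total_cost, years=3):
    """ROI = (total gain - total cost) / total cost, as a percentage."""
    gain = annual_savings * years
    return 100 * (gain - total_cost) / total_cost

def payback_months(annual_savings, total_cost):
    """Months until cumulative savings cover the initial cost."""
    return 12 * total_cost / annual_savings

# Hypothetical example: $4.2M annual savings against a $4.7M total cost.
print(round(three_year_roi(4.2e6, 4.7e6)))    # prints 168
print(round(payback_months(4.2e6, 4.7e6), 1))  # prints 13.4
```

Even with simple formulas like these, the outputs depend entirely on a cost figure the vendor rarely discloses, which is part of why such survey numbers are hard to verify.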

But since we’ve started, here is another "sponsored survey":
“Did you make a business case before decision? (Base: 62 European firms): No 68%” TWO THIRDS had no business case!!!
And “Don’t know 11%”. Who on earth was answering the survey who didn’t know whether there was a business case?
But wait!! There’s more! “Did you observe the expected ROI? (Base: 20 European firms). No 50% Don’t know 30% Yes 20%.” Good lord! If less than a third built a business case, one would guess the ones that did represented a sample biased towards those who had a good case, and yet only ONE FIFTH of them achieved the expected ROI. This is best business practice? I think I need to go lie down.
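For the record, here is the arithmetic behind that rant, with the quoted survey percentages plugged in:

```python
# Checking the arithmetic on the quoted survey figures (base: 62 firms).
no, dont_know = 0.68, 0.11
yes = 1 - no - dont_know         # firms that did build a business case
assert abs(yes - 0.21) < 1e-9    # roughly a fifth, i.e. less than a third

# Of the smaller base (20 firms) asked about outcomes, only 20% observed
# the expected ROI - and that base is plausibly biased toward firms that
# had a good case to begin with.
observed_roi = 0.20
print(round(yes, 2), observed_roi)  # prints 0.21 0.2
```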

Before I do though, here is one more. If you are willing to make major decisions based on amateur research by vendors (as everyone adopting ITIL does), here is an interesting one to ponder:

In a survey carried out by Bruton of 400 sites, about half of the 125 organizations which were found to have adopted ITIL made no measured improvement in terms of their service performance or the rate at which they were able to close helpdesk calls. “Some helpdesks can way outperform a site that has adopted the best practices of ITIL,” said Bruton. “Best practice does not mean superior performance. It is beginning to sound that ITIL is the only way to go. It isn’t. It is only one way to go.”

A man after my own heart.

I would like to see some solid scientific research on:

  • Quantified cost/benefit analyses, across a statistically significant number and diversity of organisations, of adoption of ITIL vs. other BPR methodologies, or vs. a simple process review and reorganisation, or vs. implementation of a service desk product.
  • Quantified cost/benefit analyses of organisations that have done only ITIL, without concurrent Six Sigma, CMMI or other quality improvement programs.
  • Analysis of the proportion of organisations that would actually benefit through adoption of ITIL.

The really delicious irony in all this is ITIL’s own emphasis on the importance of a business case and ROI. But the facts are that few organisations even bother to examine the business case before embarking on ITIL; even fewer measure results; and the few that do are building their business case in the absence of any solid research to justify their estimates. The Emperor has no clothes.

Browse this site for more skeptical views on ITIL.


Visible Ops

Those interested in measurement of ITIL's effectiveness should check out The Visible Ops Handbook or, more broadly, the work of the IT Process Institute. (They're not vendor-aligned, and have done excellent work on benchmarking, assessing the benefits of good practice, etc.)

Some of ITIL's claims are more easily cashed out than others. Visible Ops focuses largely on Change, Config, Incident, and Problem Management. The link between process and business value is pretty straightforward in those realms, primarily because every incident ticket costs money, and the processes mentioned have a very direct ability to reduce the number of incidents that arise or to reduce the time and effort required for resolution.

Other ITIL claims are more difficult to cash out. Personally, I've done work with organizations on Service Catalog and Portfolio Management which has easily demonstrable financial return, primarily because we work directly with financials to guide our efforts. I'm not aware of any academic research reflecting such work. Similarly, Capacity Management can be readily tied to financial efficiencies associated with improved technology acquisition practices.

Are actionable service catalogs the silver bullet...?

With all due respect to Kevin, Gene and George, the Visible Ops approach is similar to Lean. It sets laudable goals of finding and eliminating waste and improving overall efficiency. It drives the process through problem then change and sets up a continuous (not continual) mechanism. I wish more IT folks would check Lean out - the success I have had in 2008 with my Lean Service Management program has been significant. Anyway, I wanted to speak to the catalog comment.

A word to the wise for all these ITSM projects that have set out to build a laundry list of artifacts and then replace their 'processes' one by one - STOP. The humble service request may well be your key to success. In many sites it represents 80% of workload drivers. Package everything that might consume IT resources in a request, including incidents. No need to separate them. Now we have a single prioritization and workload management option.

Next, interview your customers and slowly and carefully match their activities to the requests (I can explain how, but not here). Build catalogs based upon what customers/users DO. Focus on efficient processing of service requests, especially the specific ones that cause problems or are resource-intensive.

Best of all - employ workflow management tools to manage and automate these requests. Connect a portal, a catalog and request management and you are actually doing Lean Service Management with all the pull, flow and Lean Consumption/Provision buzzwords in play - oh, I think some folks call that an 'actionable service catalog'.

If you are attending any practitioner training - ask them how to do this - if they can't tell you over the phone, or don't know - save your money.

Where do you start - learn how to do problem management for real - don't reference ITIL - it sucks - use a healthcare view... Understand how to define a problem and its impact and translate it into an opportunity for improvement. Oh I am back at Lean again.

If you want to discuss this (Lean) more - drop me a line...

Value of ITIL probably impossible to measure

Being a religious agnostic I have a hard time taking anything "on faith" but, as others have mentioned, one probably has to do so with almost all management systems. This reminds me of the discussions as to what the "best" quality management system is, to which there is probably no answer.

The value of ITIL, and other management systems, is most likely impossible to measure and compare to other options. However, it is unlikely that there is truly any significant difference between one management system and another. The value comes from having a framework to guide your direction for improvement. Most likely any well designed framework will do.

To give an example from a project I'm currently working on: I'm working with the corporate HQ of a company that has grown through acquisitions and divestitures. Corporate is currently managing two acquired organizations, each hundreds of miles from the other and from corporate, while all other segments of the company are gone. One organization had its own internal IT while the other had contracted IT that was brought "in-house" following the acquisition.

There are no standard processes between the organizations, or between them and corporate. There are no standard SLAs, and ticket-handling control is so lax that any that did exist would be meaningless. Our task is to design processes for service operations that are consistent for the entire organization while trying to minimize the disruption to the organization.

While not considered a full ITIL implementation we are using ITIL as our guideline to developing these processes with an eye to a potential expansion of ITIL to the entire IT organization in the future. This means that our processes, while not necessarily complete, have to be consistent with the ITIL framework.

Is this an ITIL implementation when considering ROI?
If this fails to have a positive ROI in terms of $$$ does that make the project a failure? Or, are there other considerations and values that must be considered?
Is it really possible to consider all of the factors that this impacts when calculating the return on the effort?

And, as mentioned in a previous comment, the primary driver of this effort is the need to upgrade their service desk software to a new version before support for the old version is discontinued. In such a case is it possible to separate the return from the upgrade of tools from the better implementation of processes?

I would have to say that it is impossible to really determine the value of the effort in any really hard numbers. It just has to be taken on faith that putting more structure to the organization provides a return compared to the lack of structure currently in place. And, given that, it is just as impossible to compare one framework with another to determine which is better.

If an organization has a solid structure in place it will be far harder to show a positive return from changing that structure to one that is more ITIL oriented. If an organization has minimal structure, however, ITIL is a fine set of guidelines to use in implementing a structure and will almost certainly prove to be of benefit.

Just my $.20 (inflation, you know)

the world is full of people who want to believe

Hi Keith, and welcome. Thanks for contributing!

I agree with all you say. ITIL's value probably can never be measured to the final satisfaction of all (but we CAN measure much better than we do, which is why I applaud the proposed new itSMFI Journal).

My issue is that business cases are floated that quote all sorts of Crap Factoids about the positive returns of ITIL. And ITIL is sold as this magic pixie dust that makes IT work better.

People like you know better. Many don't. The world is full of people who want to believe. See the scene from Life of Brian [Brian of Nazareth, not Brian Johnson] with the followers of the sandal vs the gourd. Or many other scenes from that classic film ("How shall we **** off, oh Master?").

I'm a bit worried at those who would take Real ITSM seriously.

Different from any other "business cases"

But how are the ITIL business cases any different from the thousands of other business cases presented by, and to, MBAs to justify all of the other management systems out there?

Do you really think it can be clearly quantified that Six Sigma is far better than TQM (or vice versa)?

Do you think the numbers presented to show ROI on a quality control initiative are really any less blue sky than the same attempt for ITIL or any other ITSM initiative?

Those sorts of things are used to justify to the bean counters that an improvement will help. If improvement is needed I don't think it really matters how "real" the numbers are. If improvement isn't really needed, hopefully someone will call bovine feces on the salesman who's trying to pass off something that's not likely to show a significant return.

likely to show a significant return

Agreed, many business cases are dodgy. But there is some good data around quality management, at least in manufacturing. It may be that IT is inherently hard to measure, but if so I think that says more about our lack of sophistication in measurement techniques than about the fundamentals of the industry.

I do believe IT ideas are measurable, at least to the level of "likely to show a significant return". I don't think we even know that much about ITIL yet.

Why we may NEVER have evidence of ITIL

From "Diarmid" on the itSMFI forum:

Firstly, there is the issue of defining what you mean by "the implementation of ITIL". Technically it would be better to say "application" rather than "implementation". But the more serious point is how you define an organization as having applied ITIL. Do you evaluate their service management against some yardstick that says they have definitely embraced ITIL in all (or most of) its glory? Or do you accept them if they "talk ITIL" and claim to be using it?

Secondly, how do you distinguish between organizations that have "discovered" ITIL and used it to radically improve their service management and those that have discovered ITIL, looked at it, realized that basically that is what they already do and then embraced the ITIL terminology because they like it?

Thirdly, there is the chicken-and-egg situation. It could be argued that most organizations will move toward ITIL practices when they are being driven by change all around them. So they know they want to change, and they see that ITIL is a good way to go. At most you could say that ITIL was one enabler for the change, but quite likely ITIL will have been no more than the direction that their change took. It is also the case that organizational change will be going on anyway, in some organizations is always going on, and is often superficial. This can muddy the waters.

Finally, you will find practical difficulties in obtaining much useful data from more than a few organizations and you may have difficulty in determining how "typical" your sample is.

For the outcome of your research to have any value these are some of the issues that you will need to deal with.

It is my view that ITIL is essentially about process and control and can therefore be accommodated within a wide range of organizational models. Thus organizational change would rarely be essential although sometimes a logical consequence.

ITPI research: evidence for ITSM

ITPI has released some credible empirical research here:

In a well-designed survey of 341 organizations, they identify significant correlations between IT performance and ITSM best practices, but also find some surprising non-correlations. For example, simple Change oversight does not predict performance in their sample, but a strong Release process does. CMDB in and of itself does not predict success, but linking changes to CIs in a CMDB does (which I would argue in turn justifies the CMDB - it's all about what you do with it, not just having it).

The performance measures seem primarily operational (a full list is not available in the summary). That is, "performance" is defined in terms of uptime, successful releases without rollbacks, etc. - not in terms of business performance, IT portfolio alignment, etc.

Charles T. Betz

The problem of correlation

You do realize this study is based on inductive inferences?

It starts with observations and arrives at general conclusions. Since ITskeptic is a fan of _Zen and the Art of Motorcycle Maintenance_, I'll borrow an example:

If a motorcycle goes over a bump and the engine misfires, and then goes over another bump and the engine misfires, and then goes over another bump and the engine misfires, and then goes over a long smooth stretch of road and there is no misfiring, and then goes over a fourth bump and the engine misfires again, one can logically conclude that the misfiring is caused by the bumps.

While the study is a good first step, it stops too short. It's a shame, because it wouldn't have been that hard to integrate deductive inferences. Then it could be called credible.

Could "Visitor", who's

Could "Visitor", who's posted opposing the value of the ITPI research, identify him/herself? I assume the posts are all one person.


If it is who I think it is, they have a good reason not to. Anonymity is acceptable on this blog, though I'm happier if you tell me privately who you are and why. But ultimately it is a personal decision.

fair enough

*sigh* OK, that's fair. It would add something to the value of reading the discussion though ... I can interpret and understand better the arguments by yourself or Charles Betz, in the context of all your other writings. Not to discount anything as bias, but to get a better feel for where the writer is coming from. We are all human after all.

Inductive and deductive

This was such an odd and off the wall post that I began to wonder if my memory of research method had completely left me. Life being what it is, it took some weeks for me to get around to contacting my friend A.B., who is a tenured professor in the hard sciences at Duke University, published in top-tier peer-reviewed journals (including Science), and well qualified to comment on these matters. We established tonight that:

1) Determining correlation (as opposed to causation) can be a worthy research endeavor, meriting publication in first rank research journals. In his words, "if it were all limited to research with ironclad causality determined, the amount of scientific research published would drop dramatically. Furthermore, pure deduction really only comes into play in math, philosophy, and other such deterministic endeavors. The softer the science, the more controversial the deductive reasoning, to the point where in fields like the social sciences causality is almost impossible to rigorously prove." So, the ITPI research cannot prima facie be deemed "not credible" on the grounds that it "merely" shows correlation.
2) Privileging deductive over inductive is a strange bias, one that never enters into peer reviewed publication decisions in his experience. "Inductive" in this sense connotes empiricism, which as anyone familiar with scientific method knows is the flip side of deductive theory, and is theory's reality check. Scientific method 101, really. We were both reminded of a Robert Heinlein quote from (I believe) Time Enough for Love, in the journals of Lazarus Long, in which he argues that pure deductive reasoning is simply an exercise in tail-chasing, and people like Kant never emerged with anything more than what they went in with. Inductive reasoning, on the other hand, is where value lies.

Charles T. Betz

Here’s a published study

Here’s a published study based on induction:

“I have just completed a thorough statistical examination of the life of President Bush. For fifty-eight years, close to 21,000 observations, he did not die once. I can hence pronounce him as immortal, with a high-degree of statistical significance.”

While the statistics may be comforting, as science this is nonsense. The ITPI study enters into the same nonsense the moment it declares its research as hard evidence. Declaring it as observation is fine. Declaring it credible evidence is silly. Any publication in a first rank research journal would be wary of crossing the line between the two. Any professor worth his parchment would have pointed this out.

What about the organizations that adopted the preferred practices but did not achieve the performance gains? What were the performance profiles of the organizations before adopting the practices? Could the winners already possess certain cultural traits (e.g., dedicated staff, process rigor, etc.) which predicated success regardless of what practices they adopted?

To assert evidence, the study would *combine* inductive observations with proper methods. None of these are shown. Any assertions based on induction must be done very, very carefully.

The problem is we are trained to take advantage of the information that is lying in front of our eyes, ignoring the information we do not see. The scientific method is a more reliable form of induction because it adds formal requirements, one of which is using deduction to make predictions which can be tested to provide further insight. The ITPI's assertions, like many of the models bandied about the IT community, use the worst of the methods: lazy induction to make guesses, and then working backwards to rationalize the guesses.


Correlation vs. causality is all your arguments boil down to, with some peculiar distinction between "evidence" and "observation," and questionable insinuations as to what ITPI did or did not claim. No one is saying their current work is the last word.

You seem to be falling into the trap of "the best is the enemy of the good" in your unwarranted and perplexing attacks on them. At least they are trying, and their work as such is a credible first step upon which to build. Dismissing them out of hand is unreasonable.

Charles T. Betz

- No, you are confused.

- No, you are confused. Correlation is the measure of strength of the relationship between variables; induction is a *process of reasoning*. Correlation might have improved the study but, as it is set up, you cannot do correlation between categorical data.

- There is a world of difference between evidence and observation. It’s what separates science from theology.

- Nor have I made insinuations. ITPI’s own marketing and website proclaim the study as “hard evidence”, among other questionable assertions. I have nothing against the fine people at ITPI. I’m sure they are gentlemen and scholars. My secret hope is they go back to their data and correct the issues. Then I'd be the first to wave the study on high. That’s the way to respond - not applauding the effort simply because they tried. That serves no purpose save to spin up the cult and give the Skeptic the topic of his next blog.
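As an aside on the categorical-data point above: a Pearson correlation is indeed the wrong tool for categorical survey responses, but a chi-square test of independence is the standard alternative for asking whether two categorical variables are related. A minimal sketch, using only the standard library and a wholly invented contingency table (adopters vs. non-adopters against high vs. low performers):

```python
# Sketch: for categorical survey data (e.g. "practice adopted?" vs.
# "performance band"), a chi-square test of independence applies where a
# Pearson correlation does not. All counts below are invented.

def chi_square(table):
    """Chi-square statistic for a 2D contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented 2x2 table: adopters/non-adopters x high/low performers.
table = [[30, 10],
         [10, 30]]
print(chi_square(table))  # prints 20.0
```

The resulting statistic would then be compared against the chi-square distribution (here with one degree of freedom) for significance; the sketch stops at the statistic itself.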

not scientists

I agree. they are indeed gentlemen and scholars, but they are not scientists. A scientist would publish the hypothesis (deductive), the experimental design, the raw data, the double-blind controls, and the calculations to derive the results.

I'm rationalist enough to believe that only the scientific method produces "hard evidence", and even then only when it has been independently reproduced.

Where then is the acceptable research?

Not sure why you think I am confusing correlation with induction. Well aware of the distinction. We use inductive (and deductive) reasoning to generate hypotheses to explain observations that may include interesting correlations. Deduction is especially problematic in semantically imprecise fields like management "science," of which ITIL and ITSM are subsets. (Always beware of a "science" that uses the word "science" in its name.)

What started this all was your implication that inductive reasoning is somehow suspect or less worthy, which puts you in the camp of Hume & Popper, fine company if you want to keep it, but don't claim that there are no other approaches to scientific knowledge.

Your definition of observation versus evidence is polemic; evidence may be seen as the more abstract term, consisting of aggregated, codified, and interpreted observations but there is hardly a "world of difference" between the two, and my reading of the ITPI material is that it is quite a bit more than mere raw observation. And again, I have determined in my recent conversations with real scientists (published geologists and physicists) that interesting and non-obvious correlations collected through a consistent method may be significant and publishable in and of themselves, and if ITPI has compiled some, they deserve credit. Such effort is an acceptable contribution to the edifice of knowledge, especially given the paucity of anything useful for IT management. Is it a world-shaking, paradigm-setting effort? No. Does it deserve the out of hand rejection received here? No. Is it not "evidence" until independently reproduced? Not by the definitions of "evidence" I know.

But more to the point, where is the ITSM research that *is* acceptable in your eyes? Surely if ITPI's efforts are so impoverished and misguided, and doing credible research is so simple and straightforward, there must be solid examples *somewhere* that one of you can point to.

I *am* concerned about the steep charge for the ITPI research. This however reflects less on them, and more on the hidebound and unresponsive MIS academic community. Why can't we get this kind of work done without charging an arm and a leg for it?

Charles T. Betz

And exactly what knowledge is that?

"Not sure why you think I am confusing correlation with induction."

Because correlation has nothing to do with the flaws in the ITPI study. Based on your continued comments, I still think you are confused.

"What started this all was your implication that inductive reasoning is somehow suspect.."

Yes, a scientific study based solely on inductive inferences is suspect. It doesn't prove or predict anything. I stand by my statements.

A proper study attempts to isolate the impact some variables (independent variables) have on a given outcome (dependent variable). By carefully gathering data and testing hypotheses with precise tests, (by isolating the effect of the independent variables on the dependent variables) the drivers of performance can be determined.

What is needed is good data about the dependent variables. But if the data is already contaminated with delusions, it won’t do much good. When profits were up, Cisco was cited as evidence of the value of customer orientation. When sales plummeted a year later, Cisco was said to exhibit a “cavalier attitude toward customers”. Nothing changed within Cisco. It performed as it had always done. The problem was false attribution and confirmation bias, the sweet and tender traps of inductive inferences.

If ITPI wanted to study the relationship between change management and IT performance, the *last thing* you do is ask managers “How good is your change management?” Like the Cisco study, all you’ll get is an attribution based on performance (inductive inferences). You instead need to rely on measures that are *independent* of performance.
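The confounding problem described here is easy to demonstrate with a toy simulation. In the sketch below (all numbers invented), a hidden "maturity" factor drives both whether a shop adopts a practice and how well it performs; adoption itself has zero causal effect, yet a naive comparison of adopters against non-adopters shows a large gap:

```python
import random

# Toy simulation of confounding: a hidden factor ("organizational
# maturity", invented here) drives both whether a shop adopts a practice
# and how well it performs. Adoption has NO causal effect on performance,
# yet a naive comparison makes adopters look much better.

random.seed(42)  # deterministic for illustration

adopters, non_adopters = [], []
for _ in range(2000):
    maturity = random.random()                     # hidden confounder
    adopted = maturity > 0.5                       # mature shops tend to adopt
    performance = maturity + random.gauss(0, 0.1)  # driven by maturity only
    (adopters if adopted else non_adopters).append(performance)

mean = lambda xs: sum(xs) / len(xs)
gap = mean(adopters) - mean(non_adopters)
print(round(gap, 2))  # a large apparent "effect" with no causation behind it
```

Which is exactly why practice ratings and performance claims gathered from the same respondents in the same survey prove so little on their own.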

ITPI’s regression analysis only gives the study the illusion of credibility; lipstick on the pig. (The logic of performing linear regression on a human behavior is another serious problem, but I’ll ignore that for now.)

Any time you use words like “…practices that predict the highest levels of performance” (Page 1), that is a claim of science. It means if you do this, here’s what will happen. They might be correct, but given the flaws in the study, we really don’t know. You claim it provides knowledge. Exactly what sort of knowledge is that? Evidence, nope. Correlation, not really. Propaganda, probably.

The confusion between observation and evidence went out with the dark ages. It is misleading to build a general rule from observed facts. The act of observing an accused woman drowning does not afford evidence of witchcraft. The body of knowledge does *not* increase from a series of confirmatory observations. No matter how many years I observe an increase in home prices, it does not follow that prices are proven to go up. I’ve never observed OJ Simpson kill a single person. Would that confirm his innocence? Observation and evidence are not interchangeable. Confusing the two is called confirmation bias.

This is the same sort of silliness that begins with, "I have determined in my recent conversations with real scientists…" So what? As with tools, and fools, it's easy to find confirmation for anything.

"Surely if ITPI's efforts are so impoverished and misguided, and doing credible research is so simple and straightforward, there must be solid examples *somewhere* that one of you can point to."

Isn't that the real problem? There is such a desperation to latch onto anything with the facade of evidence that the experts don't feel compelled to do it right. It's been done in much tougher realms. It can be done here. No reason to settle. Of course, you may not like the conclusions.

Lessons of History

This reminds me of the tale of the infamous Trofim Lysenko, as told by Michael Crichton:

"Lysenko was a self-promoting peasant who, it was said, 'solved the problem of fertilizing the fields without fertilizers and minerals.' ...Lysenko's methods never faced a rigorous test..."

"[Lysenko] was especially skillful at denouncing his opponents. He used questionnaires from farmers to prove that [his methods] increased crop yields, and thus avoided any direct tests..."

"Lysenko and his theories dominated Russian biology. The result was famines that killed millions, and purges that sent hundreds of dissenting Soviet scientists to the gulags or the firing squads. Lysenko was aggressive in attacking genetics, which was finally banned... There was never any basis for Lysenko's ideas, yet he controlled Soviet research for thirty years. Lysenkoism ended in the 1960s, but Russian biology still has not entirely recovered from that era."

astrology, tarot, homeopathy, acupuncture, Lysenkoism, ITIL...

"...% of Fortune 500 companies reported that..."

"...% of CIOs confirm that their decision to adopt..."

"...with an estimated average annual saving of $..."

I've railed against this kind of research before.

Asking people how well something works has been used to justify astrology, tarot, homeopathy, acupuncture, Lysenkoism, ITIL...

The professor game

Phil Rosenzweig is a Professor at IMD in Lausanne, Switzerland. He earned his PhD from the University of Pennsylvania's Wharton School and spent six years on the faculty of Harvard Business School.

Frustrated by the nonsense of self-described "thought leaders" and their inability to tell the difference between good and bad business research, Prof. Rosenzweig put together a fine book called The Halo Effect.

In it he describes nine delusions that distort our understanding of business performance. (I count five of the nine delusions in the defense above.) It is a good book for the lay manager seeking to avoid errors in judgment and reach a better understanding of what drives organizational success.

Skeptical of the skepticism

OK, then let's get to cases. In the survey in question (as summarized), the "high performers" were not surveyed on their success, but just on the IT management practices they had in common (Change, Release, Config, etc.) -- merely a preparatory step (page 3). A much larger sample, including performers at all levels, was then surveyed to establish correlation between the management practices and the dependent variables. This seems to eliminate the "halo effect" problem.

"the *last thing* you do is ask managers “How good is your change management?” "

It does not seem to me that this is what they did. The dependent variables are quantitative (Page 7):

- Downtime minutes per month
- Security breaches automatically detected
- Release rollback rate
- Incidents fixed within SLA limits

"ITPI’s regression analysis only gives the study the illusion of credibility; lipstick on the pig. (The logic of performing linear regression on a human behavior is another serious problem, but I’ll ignore that for now.)"

I am really mystified by the problem here. We have a set of management practices emerging from the ITSM discourse and a number of companies that self-identify an affiliation with one or more elements from this set of practices. (I think it's to their credit that they bootstrapped the study by surveying the 11 companies for actual practice - the lazy way would have been to simply use the ITIL framework.) We also have objective, quantitative data from those companies' IT service management tooling, which is becoming mature enough (in part through the maturation of IT process and performance management) to allow apples-to-apples comparisons. Finally, we have "demographic" data which (I assume) covers things like industry verticals, size of company, etc. This all seems very straightforward from a survey research point of view.

I am still struggling with how asserting that the management practices (as the independent variables) are predictive of the performance metrics (as the presumed dependent variables) runs afoul of the scientific method in any way.
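The analysis being defended here, regressing quantitative performance metrics on practice-adoption indicators across surveyed organizations, can be sketched with synthetic data. Everything below (variable names, effect sizes, the noise level) is illustrative and made up for the sketch, not taken from the ITPI report:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 341  # size of the second-phase survey sample

# Independent variables: does the organization report adopting the practice? (0/1)
change_mgmt = rng.integers(0, 2, n)
release_mgmt = rng.integers(0, 2, n)

# Dependent variable: downtime minutes per month. We *build in* an effect of
# the practices so the regression has something to recover.
downtime = 200 - 60 * change_mgmt - 10 * release_mgmt + rng.normal(0, 30, n)

# Ordinary least squares: an intercept plus one coefficient per practice
X = np.column_stack([np.ones(n), change_mgmt, release_mgmt])
coef, _, _, _ = np.linalg.lstsq(X, downtime, rcond=None)
print(f"baseline={coef[0]:.0f} min, change effect={coef[1]:.0f}, release effect={coef[2]:.0f}")
```

Even when the coefficients come out clean, as they will with data constructed like this, the regression establishes association, not causal order; whether that distinction sinks the study is exactly what is being argued in this thread.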

There would certainly be opportunity to refine this work. You could propose for example that corporate revenue is really the true cause of both using systematic IT practices, and seeing strong operational performance. Reasonable followup question. So, do another survey and control for that. Combine that survey with the first one in a meta-analysis. Etc. No one report does it all. (Actually, I think they controlled for that one through their collection of demographic data.)

But asserting that the ITPI research project is not "credible" has simply not been proven yet to my satisfaction. It's really quite a remarkable and, I think, poorly considered assertion.

Charles T. Betz

A look at the 62-page report

A look at the 62-page report reveals:

Page 11: "We started with interviews of 11 organizations recognized for their data center operational excellence."

This is classic halo-effect. Why were these organizations "recognized" for their "operational excellence"? Is this the hypothesis? If so, it is already contaminated.

Page 11: “Interview data collection and analysis proceeded iteratively with early interviews being more open-ended, and later interviews being directed by five emerging themes of common practice.”

So the hypothesis (and test questions) emerged/changed as the data took shape? After the study began? I don't think this is quite in keeping with the scientific method.

Page 11: “We also asked about key measures they used to gauge success in these areas.”

So the survey responses are shaping the measures of performance? How is false attribution mitigated?

Page 21: (Actual survey question) “Which of the following best describes your definition of ‘change success rate’”?

So the survey responses are defining change success? How is confirmation bias kept from contaminating the data?

Why is a particular practice linked to performance? What is the evidence-based logic?

Or is performance based on the fact that managers “believe” it works, attributed to measures they have in common with other managers? Are there any tests for refutation, or are they all simply based on confirmation? The scientific method requires that these questions be asked.

The very act of interviews/surveys shapes results. People perform better when they know they are being observed. Ask any attorney. How is this kept from influencing the conclusions? For example, data before the practices were implemented, a control group, etc.

Very disconcerting is the lack of visibility on the variables *before* the time period covered by the data on the performance that these practices presumably predict. By using words like “predict”, the study is arguing causal order. How do we know?

Should I go on? There's more...

Not halo effect

I question your scholarship, and believe your own “confirmation bias” is evident in your selective quotes.

Bringing Rosenzweig into the argument was a case of a hammer seeking a nail. He is not critiquing operations research in his work; he is going after bigger fish (like Tom Peters) who are attempting to explain organizational financial performance – a much more dubious enterprise.

“This is classic halo-effect. Why were these organizations "recognized" for their "operational excellence"? Is this the hypothesis? If so, it is already contaminated…
the hypothesis (and test questions) emerged/changed as the data took shape? After the study began? I don't think this is quite in keeping with the scientific method…. the survey responses are shaping the measures of performance?”

You seem to be deliberately overlooking the fact that the survey was executed in two parts. First, they established the questions. That is where the iterative interview method was applied. The actual survey data gathering was not part of this first phase. When they got to the second phase of surveying the 341 organizations, the questions were fixed in form. This is the second time I have pointed this out.

Every survey needs a starting point. You may ask, “how did they identify the 11”? Provided with an answer, you could go on to question the assumptions behind that. And so on, endlessly denying the antecedent. They had to bootstrap the thing somehow.

”… the survey responses are defining change success? How is confirmation bias kept from contaminating the data?”

Your own anti-ITPI confirmation bias is showing through again here. While it is true that change success rate is also a dependent variable, asking the respondents how they define change success actually reduces confirmation bias. It decomposes the general assertion of “change success” into three less ambiguous factors:

(1) functional success (did what it was intended to do),
(2) timing success (done when it was supposed to be), and
(3) adherence to build procedure (done how it was supposed to be).

They established that the top performers consistently used the more rigorous definition.

”Why is a particular practice linked to performance? What is the evidence-based logic? …Or is performance based on the fact that managers “believe” it works, attributed to measures they have in common with other managers? Are there any tests for refutation, or are they all simply based on confirmation? The scientific method requires that these questions be asked"

Refutation is implied by non-correlation. One particularly disappointing aspect of the study for me personally was that it did not support using a CMDB to assess downstream impact as predictive of improved performance (there was a weak correlation, but it didn’t “make the cut”).

"The very act of interviews/surveys shapes results. People perform better when they know they are being observed. Ask any attorney. How is this kept from influencing the conclusions? For example, data before the practices were implemented, a control group, etc."

The Hawthorne effect is not applicable in this case, since there weren’t people in white coats with clipboards running around these companies’ operations distracting people. Summary indices of performance were gathered through one-time surveys.

Control data is notoriously difficult to come by in any but the hardest sciences. It’s an Achilles heel for operations research as well as for much of economics and social sciences. All one can do is accept the limitations. In ITSM, it would be tantamount to asking some company, “Can you stop doing Incident and Change Management for a while”? At this point, these practices are culturally embedded in IT, and no-one is going to abstain from them in organizations of a certain size. If that is your critique, you are essentially saying that no research is possible in this area.

"Very disconcerting is the lack of visibility on the variables *before* the time period covered by the data on the performance that these practices presumably predict. By using words like “predict”, the study is arguing causal order. How do we know?"

Again, a combination of the bootstrap problem and the law of available data. And I don't agree with your semantics: the word “predict” is a more tentative assertion than the word “cause.”

"Should I go on? There's more..."

I’m quite curious if you actually have something. Please continue. And I still don’t know what you meant by saying the work is not credible because it is based on inductive reasoning.

This whole debate continued here:

Charles T. Betz

Analysts produce crap

It is obviously time I published this post that I have been working on.

that's me firmly wedged on the fence

I have an issue with "research" in general, not ITPI in particular. I think that there are valid criticisms of just how scientific their work is, but I also think theirs is some of the most credible work around right now. I'd hate to give the impression otherwise.

On the other hand, I'd sure like to see more work done by independent non-commercial researchers in academic institutions using scientific rigour and openly published data. Having said that, some of the academic stuff is crap too (and it originated from Waikato, here in my homeland - OH THE SHAME!).

There! That's me firmly wedged on the fence.

Fair enough

Agree 100%, as stated previously, that this should be openly published. But I won't criticize anyone's business model until I have walked a mile in their moccasins.

Does your discomfort with "research" in the ITSM field center around the fact that so much of it is survey-based, by necessity, with all the abuses that surveys are prone to? Or does it center around the fact that most of it is directly industry funded? Is it method, or sponsorship, or both? Or something else entirely?

Charles T. Betz

My concerns with research

Hmm.. good question. All that.

My concerns are that research is

  • commissioned to prove a point, like cancer research paid for by the tobacco industry but with fewer observers ready to scream "foul"
  • created as a revenue generating exercise, therefore the results need to be useful, attention getting and self-serving (grow the market)
  • often anecdotal and opinion-based
  • often asked of the wrong person: "How brilliant were you..." "Did you make the right decision to..." "What ROI have you had from your spending..."
  • lacking transparency (and hence impossible to reproduce): what was the methodology? what questions were actually asked? how was the sample derived? what controls were there (generally none)? what were the raw results?
  • lacking peer review ('cept blogs like this). Where are the professional journals and conferences with real review boards?

Funny I just finished writing a post on that very topic which will show up on the blog here soon.

My concern is more

My concern is more fundamental. Research prompts beliefs, which prompt action. If research is done improperly then it produces half-truths that are partly right at times but flawed enough to get organizations in trouble. Because it arrives in a cloak of science and objectivity, these half-truths get repeated time and again.

"Process excellence is vital."
"The best organizations have the best people."
"Productivity through people."

When half-truths are treated as the whole truth, they cause severe damage to careers, companies, and employee quality of life.

Management is hard. It is difficult to lead and manage organizations; there are no simple, easy formulas. It requires devotion to learning the craft. Including better ways of thinking about business knowledge.

Or alternatively...

...Misfires cause bumps in the roads

That's exactly the sort of statistical syllogism the study falls into with statements of "hard evidence" and "predicting top levels of performance".

One of the reasons why "The Emperor has no clothes" - ROI

ROI is a fake, a fake term retrieved, not without purpose, from the financial business to explain nothing. I have worked in IT for a couple of years, about 15, and only started hearing about ROI in the past 3-4 years. Coincidentally, just when Linux started to emerge from darkness! Mere coincidence. And if I remember well, one of the first to pull ROI up was MS! The truth is that ROI is a measure created to explain why you pay for something that is not worth it. Not worth it because you have a cost-free alternative available, or because you don't have to prove that it really does what it was supposed to do (remind you of ITIL?). If you doubt it, please check :,1540,1999135,00.asp --> Very good ideas indeed.

So if you base one of your products on a fake, what are you producing indeed? Another fake!

Empirical benefits to ITIL framework adoption

Can you provide the empirical evidence that Six Sigma, Cobit or MOF, or CMM provide measurable added value?

Then we can use that same metric and apply it to ITIL and see what we come up with. Six Sigma hype is all the rage in the USA right now. ITIL has more CIO mind share than Cobit right now too, and with ITIL v3.0 MOF might drift away, but what I am getting at is "why not ITIL?" What better framework is there out there? How can you quantify that it is better? Based on experience I'd say ITIL adds value to IT Governance just as formal Project/Programme Management methodology adds value to efficient management of an enterprise, and for Application Development CMM is the way to go.

As the director I work for, here in USA, said: "we kicked out the Brits here a while ago...", in other words don't give me that English language ITIL spin, give me better insight into my production incidents and problems and how to reduce them. I'm using the ITIL Framework to do just that.

Regards from Chicago

performance graphing

In an internal company workshop, an ITIL author showed us quantitative models that showed the effect of each ITIL entity (process, function, cmdb and so on) on the overall performance of the IT organization. Generic and real-life examples.

What was fascinating was that he did this in real time with freely available desktop software. He modeled ITIL systems within a picture of larger business systems and demonstrated sensitivity analysis and how data, training, and process affected the behavior of the business over an extended period of time. Some made it better, others made it worse. It wasn’t always obvious.

The performance graph for an organization starting with Incident management, for instance, improved for a period of time and then fell into a downward trend. After Problem management was added to the model, the curve resembled more of an s-shape.

He then took the pony example from the strategy book and modeled it. He asked for recommendations from the group on how to fix it. Many suggestions made the performance worse. The right answers turned out to be simple but surprising. Now the CIO wants our organization modeled. The week before he was skeptical of ITIL.

ITIL and Efficient IT Performance in Public sector companies

Actually I admire all the comments you have all made; they were very fruitful to me. I have been preparing for a doctoral degree in business administration for almost a year at Prestige University in the Netherlands, and my thesis is about measuring whether ITIL works or not based on scientific evidence: “ITIL and Efficient IT Performance in Public sector companies”. While I was searching for resources over the internet I found this useful site with a fruitful article.
I have my questions, propositions and hypotheses, and I hope to get your useful comments about them (what should I add, remove or modify). I really want to measure whether ITIL works or not.

The study will be conducted on Telecom Egypt, which is the sole land-line telephone provider in Egypt. I am a GM at this company, responsible for the ITIL implementation, so here it is…

1.1. Problem to be studied

The long waiting times consumers go through when paying telephone bills, and the instability of IT systems in telecom public-sector companies.

1.2. Purpose of the proposed research project

Proposing the application of ITIL in order to mitigate the problems addressed above; that is, studying how ITIL might be associated with (a) the waiting time consumers suffer when paying their phone bills, and (b) the stability of IT systems in telecom public-sector companies.

ITIL promotes optimum use of people, process and technology, improving quality while reducing the total cost of IT services to the business (Global Knowledge, 2007).

Despite the phenomenal popularity of ITIL, there have been few academic attempts relating to ITIL adoption and implementation (Cater-Steel & Tan, 2005).

The lack of evidence that applying ITIL will improve the IT overall performance motivates this study.

In this regard, this study is centered around the possible impact ITIL Implementation has on IT overall performance in public sector companies examining the case of Telecom Egypt.

1.3.Major Research Questions

Will applying ITIL positively impact IT overall performance in Telecom Egypt in terms of consumers’ waiting time and IT systems’ stability?

1.4.Minor Research Questions

•Is the lack of efficient allocation of IT resources positively associated with the performance of IT departments in public-sector telecom companies?

•Does ITIL improve IT systems' resource utilization?

•What is the return on investment for ITIL implementation?

•Does ITIL reduce repair time?

•Does ITIL improve employee productivity due to reduced down times?

•Does ITIL address IT productivity by eliminating loosely defined, stand-alone processes, thereby reducing rework and redundant work?

•Does a direct correlation exist between customer satisfaction and the use of ITIL?

•Is customer satisfaction an indication of effective service provision?

•Will ITIL face resistance to organizational change?

1.5.Major Research Hypothesis

Applying ITIL positively impacts IT overall performance in Telecom Egypt in terms of consumers’ waiting time and IT systems’ stability.

1.6.Minor Hypotheses

•No significant technology upgrades in Telecom Egypt will impact performance within the next 3 years while ITIL is being implemented.
•No other improvement programs will be implemented concurrently, such as Six Sigma, CMMI or other quality-improvement programs.
•No relation between the strategic, tactical and operational levels.
•No defined rules or procedures for internal processes.
•No process arrangements between the IT department and other departments.
•Incomplete definition of the service level users can expect.

URL to access my thesis "ITIL and Efficient IT Performance "

For full access to my thesis "ITIL and Efficient IT Performance in Public sector companies"

Why not ITIL?

Why not ITIL? Because there isn't a business case for the project. I'm looking at a $10M ITIL project that went nowhere.

There may well be a good case for doing it, but decision makers should be aware that the case is not based on empirical evidence but rather on the same gut feel you express: "Based on experience I'd say ..." Nobody can prove that ITIL saves a cent.

So I'm not saying don't do ITIL. I'm just saying be aware that when you decide to, you are basing that decision on faith and instinct.

Quantifying ITIL

I'm a dinosaur in this business and, as others have said, ITIL is not new. Quantifying it varies from organization to organization based on its maturity level and, more superficially, its corporate culture.

I brought the concept of ITIL to my current organization (local government) over ten years ago, but only just recently has it begun as a project. It's been a long hard road so far and it isn't getting any easier.

However, one quantifier that's been clearly identified is that incompetence cannot be promoted as it had been in the past. Instead of blaming the users, the technologies or the projects themselves, ITIL has boxed in the human element and exposed where incompetence had been "hiding". This may sound odd, but from a management perspective, when a corporate culture has been more of an ostrich than an eagle, the people processes are forced to change in order to embrace ITIL in any form, even if only for the Help Desk.

ITIL is time consuming which for many translates to "cost". Yet for private sector it has become a "banner" to improve the bottom line and the only quantifier of any real value. For public sector, it has become the "governance" and accountability banner that politicians and tax payers are rallying behind. Public sector is about judiciously spending money and not shareholder dividends.

So do I debate for or against your assumption that ITIL has no clothes (value)? I believe that each organization finds value at whatever level or pain point being addressed. Is the value equal? Only they can say. But in my case, although painful, it has become a tool to quickly establish true IT leadership, accountability and competency.

ITIL might make a difference but we have almost zero evidence

Oooh I never said ITIL has no value: I just said that before embarking on an ITIL project we have almost zero evidence that it WILL have value.

More subtly, we also have no evidence that ITIL makes any more difference than any other shakeup of process, though once again it might.


Oooh I don't know

I use ITIL in my infrastructure and its working out just fine. What kind of evidence do you need?

[link removed as suspected link-spam, sorry]

Working out just fine

Amongst all the ITIL hype a message that gets forgotten is that in most organisations the true value of using ITIL is not the mythical "become a world class service management organisation by the end of the year" but to bring a way of working that is broken back under control to deliver a consistent service that is fit for people to use, and that they can trust. It is unexciting and dull and doesn't make front page news when you achieve it - but that's just fine and how it should be.

In theory the more broken your way of working is to start with the more benefit ITIL should deliver.

ITIL might make a difference...

That's because there are no tangible benefits for bringing ITIL in, unless you already have good metrics in place and are able to ascertain the cost of failure (Incidents) and cost of change (RFCs) to your organisation. Most of the groundwork in implementing ITIL is spent on gaining baseline metrics for further development. If that's not done... the implementation will eventually fail.


I won't hide my bias, my screen name pretty much signposts it anyway. However, you brought up the subject of baselines.

Of all the implementations of ITIL, did no-one take a baseline of incidents and other data before and after? Wouldn't that provide data to prove how wonderful ITIL is?

No-one took baseline data when they implemented it where I work, and they said they'd baseline it afterwards (for maintenance and improvements, I guess). How do you know if it's performing better, or worse, than it was before?

How does big business sign up for something that draws so much expense (this is just relative to my experience) due to Service Desk software, additional staff, staff training and further ongoing training, without having a clue how their implementation will perform? Like all the best snake-oil vendors do, they use the patter and spiel, and the 'customer' is so bewildered they don't question anything. That and a bit of "Well, if XYZ uses ITIL it must be good".

What's worse is that because ITIL is so woolly about how you implement it, how do you know what the ITIL claims about improvements are based on: ROI figures, maintenance levels, incident completions per month? Is it level 5 maturity? 4, 3, 2, 1... something in between, or some other arbitrary thing plucked out of the ether like a Victorian spiritualist gathering?

How do people in business assess the prospective performance and benefits of ITIL? If ITIL tells you XYZ will be better using their processes, how do you know the increase in benefit over what you do now is worth the finance? Isn't it great that ITIL doesn't tell you how to implement stuff because that way it can't tell you how much you need to spend!

the same circular reasoning as cults use

You can measure effectiveness with metrics like incident counts, but most metrics are not good measures. For example, a GOOD service desk process improvement would result in an INCREASE in incidents, at least initially, as people make more use of a better service.

So usually the baseline is an abstract figure called maturity. And maturity is nearly always measured against ITIL as the benchmark. The logic is: you rate poorly against ITIL, so you need ITIL to fix it. And lo and behold, after you implement ITIL, your ITIL maturity improves, so ITIL was the right solution and ITIL delivered.

This is the same circular reasoning as cults use: to measure you against their answer. It is also similar to the trick the Scientologists play when they accost people in the street and offer personality readings. Surprise, surprise: the readings come up with broken personalities that only they can fix.

Quantifying the ITIL benefits


I'm afraid I'm going to agree with you as well - but it's interesting that this topic is starting to be recognised. I've had this conversation with stalwarts of the service management industry and never got a straight answer - equally as a consultant I've had some difficult presentations with perceptive clients - it's hard to justify the cost of service management implementations unless they're on the back of other cost saving initiatives such as off shoring.

I suppose you can approach the ITIL argument in terms of maturity and quality of the IT industry - ITIL isn't so much about saving money as making IT a 'premium' brand with the whistles and bells that come with it. But as organisations drive to be ever leaner, it's not always an approach that's going to fly.

Keep posting on the blog - I'll be reading with interest.

Unkind words.

Thankyou for those kind words, Claire. I too have been less than comfortable in front of an audience: "skating on thin ice" re evidence.

Now surely someone has some unkind words?!!!! Nobody begs to differ? Ideas only move forward in an environment of robust debate.


I am with you. But IMHO you forgot the following:

There are not very many companies that have established ALL the ITIL processes (which, by the way, is something I would not do either). I know of just one survey targeting the question of which ITIL processes are established and in which order they were established. But that one (you might find it here; the text is in German, with a picture stating the order of adoption and the non-implemented processes) is rather old and based on a survey by a consulting company.

Most of the companies establishing ITIL processes (around 60% only do Service Desk, Incident Management and sometimes Configuration Management, anyhow, which is another estimate with no proven data base) do so because they want to be able to measure or prove the success of their IT processes ("Where is all that money going to?"). So most of the time there is no baseline data to compare before and after ITIL. And hey: we all know the mess with processes in IT, don't we?

IMHO the best reference for ITIL processes in IT is the rise in maturity on both the provider and the customer side. You have to make sure that you are talking the same language, think of all the processes necessary, and ensure that the same wording is used on both sides. ITIL can help with that.

ITIL is over-hyped at the moment (hype as defined by the Gartner Hype Cycle; we don't need to discuss that ;-)

Scientific Theory

Cite (,3800011701,39159368,00.htm):
Aidan Lawes, CEO of the IT Service Management Forum (ITSMF), says: "There is nothing else quite like ITIL. It is not a methodology per se - it is just advice and guidance. ITIL is not based on scientific theory, it is documented best practice. ITIL is widely used because it is commonsense but it is not a bible that should be followed blindly." It is not necessary to adopt all of the ITIL best practices; organisations should only choose those parts of ITIL that are most relevant for their own situation.

Pseudo Science

I went to an academic conference the other day on the subject of philosophy's role in exposing pseudo-science. Various criteria put forward for deciding something falls into this dubious category included:

- Anachronistic ideas that have already proven unworkable
- Appeal to myths
- A grab bag approach to evidence
- Irrefutable hypotheses
- Argument from spurious similarity
- Refusal to revise in the light of criticism

- Escape hatch hypotheses
- No self correction
- Focus on confirmation not refutation
- Excessive reliance on anecdotal evidence
- No peer review
- Absence of connectivity with other knowledge
- Use of impressive sounding jargon
- Failure to specify settings under which claims do not hold good

Thank goodness ITIL could never be accused of any of those things!

Applies to a good bit of IT

Sounds like they were describing quite a few IT management practices. The quickest way to spot an IT model or practice based on pseudo-science is simply to ask, "How do you know?"

It's a great form of entertainment.

decision to invest funds in its adoption should be evidence-based

The fact that ITIL itself is not based on scientific research is not the issue (in this blog entry), but the business decision to invest funds in its adoption should be. That is to say, I'm not looking for evidence to support why ITIL does something a particular way; I'm looking for evidence that doing it that way returns a benefit to the business (financial or other) sufficient to make adopting ITIL worthwhile.

To return to the CMM analogy: the CMM evidence that I respect is not evidence that 5 maturity levels is the right number; it is evidence that companies that moved from maturity 2 to 3 in software development saw an average x% reduction in errors and a y% reduction in costs.

Likewise the evidence does not have to be related to adopting all of ITIL. In fact I'd love to see some evidence relating to the benefits of adopting fragments of it, since that is what the majority of sites do.

raising maturity: it is a circular argument

Hooray. Thank you Holger, I was wondering when someone would actually enter into discussion. Sadly you didn't take a contrary position :-)

See my comments about raising maturity: it is a circular argument. Sure, doing ITIL will make you better at doing ITIL. That doesn't mean the money was well spent, and it doesn't measure whether it was or not.

With regard to only doing bits of ITIL, the empirical research could still measure the impact. In fact that would be another useful Ph.D. thesis for someone:
Is there an 80/20 rule? Does doing all 10 or 11 or 13 ITIL disciplines deliver much more benefit than just doing ServiceDesk-Incident-Problem-Change?

(One day I'll do a blog on how they are the only interesting bits of ITIL anyway)

Thank you for engaging. Anyone else?

P.S. Thank you for the definition of the GHC: I struggled to find one on Gartner's site. Should have thought of Wikipedia.

P.P.S. I like the way that survey you cited has asked what order they were done in: that is a new twist.
