Executing a megaproject to plan is hard. The Channel Tunnel, which connects the UK to France, had cost overruns of 80%; the Denver International Airport had cost overruns of 200%; the Suez Canal had cost overruns of a spectacular 1900%. Delivering a project on or under budget is so uncommon that Bent Flyvbjerg coined the iron law of megaprojects: "Over budget, over time, under benefits, over and over again."
In How Big Things Get Done: The Surprising Factors that Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything in Between, Bent Flyvbjerg and co-author Dan Gardner aim to offer a corrective to that iron law. They summarize decades of learning from megaproject management and offer straightforward heuristics and best practices that can be applied to projects at any scale, arguing that successful projects require us to identify and overcome our cognitive biases, relying instead on quality data to guide project planning. The book is underpinned by Bent’s deep research on megaproject management. He is an economic geographer, the first BT Professor at the University of Oxford, and VKR Professor at the IT University of Copenhagen. Bent recently joined Scope of Work's Members’ Reading Group for a conversation about project management and execution. What follows is an edited and condensed transcript of our discussion.
Hillary Predko: A throughline connecting so many megaprojects gone awry is tunnel boring machines. There are so many stories where a tunnel boring machine fails and gets stuck underground, pushing out the budget and timeline. It reminded me of the uniqueness bias you write about: everyone thinks that their tunnel will be different, but again and again, the same problems crop up.
Bent Flyvbjerg: It’s not only tunnels – with any project, people tend to think their project is unique. And shockingly, that’s built into the major project management associations’ definitions of “a project.” PMI, the Project Management Institute in the United States, defines a project as a unique venture. The same goes for APM, the Association for Project Management, its sister organization in the UK. “Unique” is a keyword in these definitions.
At some early stage, somebody sat down and decided that a project is something unique, and that has had all sorts of negative repercussions. If you think something is unique, you don't have much reason to look at other projects, right? You can't learn anything from others. So you stunt your learning right from the outset. With a definition like that, you encourage people not to learn, because each project is going to be unique. That's a huge mistake – it's just dead wrong. We prove it with data: there's lots to learn, and you don't have to be an engineer or project manager to understand this.
For example, everyone will insist that their children are unique. But at the same time, any person with any common sense will also say that when it comes to medical science, there is a lot to learn from the diseases of other children. If your kid were to get ill, you would immediately draw on this knowledge to get them healthy again. That's the way we need to think about projects.
HP: In the book, you mention a database of megaproject data that you maintain, and we're all very curious to learn more. How are you gathering this data? And how does it inform your work both as an academic and as a consultant?
BF: The most difficult part of the whole thing is to get valid and reliable data. I'm an economic geographer, a specialized type of economist who studies the economics of geography. Any economist will know that if you want to study unemployment, inflation rates, productivity, and so on, you just contact the national statistical office in your country. There are lots of data that you can draw down, and you'll be in the business of doing research immediately.
The data are there because each nation-state has made the point of collecting the data that they think are important for running the economy. But it turns out that nobody thought that data about projects was worth collecting – even though we are spending trillions of dollars on them around the world. So I found there was no standardized data collection for projects, and as a result, you have all sorts of low-quality professional work and research going on because there is no good information.
I realized this way back when I studied the first megaproject in Danish history, a tunnel and bridge between East and West Denmark that went very badly. I documented the cost overruns, delays, and so on, but when I wanted to compare them to other projects, the data just wasn't there. Nowhere on the planet could we find a large data set that would reliably answer the question of whether the cost overrun and the delay of the Great Belt Tunnel in Denmark were normal.
So I figured, okay, that's interesting – this is something that stimulates a scholar like myself. If there's a blank area on the map, something that has never been done before, I want to do it. That's when I started collecting data, and it took several people working around the world five years to collect the first 258 data points for transport infrastructure. And this was the largest database by far at the time: 258.
It showed that the cost overruns we had seen in Denmark were common worldwide. This was such a sensational finding that it got in the New York Times! That's not common for the kind of research that we are doing, but that encouraged me. I was thinking, “Hey, there's something here, and I want to answer more questions here. How big are the cost overruns? How big are the delays? Are we delivering the benefits that we are promising to deliver?”
You asked how we collect the data. Well, the first few hundred were arduous – we mined out the data nugget by nugget. It was hard work going into individual projects, getting the accounting out, and so on, to figure out what the real costs were compared to the estimated costs. Often the teams hadn't even done that themselves. You would be surprised to see how little collective memory there is on projects. Nobody cares to stop, look back, and ask whether they accomplished what was promised. People are just thinking about the present and the next phase.
At one stage, I was contacted by McKinsey, the big consultancy, and they wanted us to gather data on IT projects. They said, “Bent, we think that we have a way to get data more easily from our clients. We will offer to benchmark our clients’ projects if they give us their data.” So we made it into this kind of barter economy: we give something and we get something.
Luckily, quite a few companies agreed to do that and we got data. Of course, we would've been in deep trouble if nobody had wanted to do this, because we wouldn't have any data to do the benchmarking. We have used this methodology in 23 different areas – defense, mining, water, and energy projects – you name it, it's in there.
In consulting, when people ask us to help them, we say we'll do it, but only on the condition that we can use the data that we collect. We sign NDAs and make all the data anonymous, which is important because a lot of this stuff is business secrets or government secrets. So they don't want it out there, and nobody wants to look foolish. And a lot of these numbers do make people look foolish.
Alex Animashaun: I'm a mechanical engineer, and I work on automotive projects and things like that. Have you got any tips for how we could collect this kind of data to use for benchmarking moving forward?