
Learning from catastrophe

Three books reckon with technological complexity and the wicked problems it creates.

Illustration by Jinhwa Jang: a hand holding an alarm clock in black and white on one side, colorful chaos on the other.

The philosopher Karl Popper once argued that there are two kinds of problems in the world: clock problems and cloud problems. As the metaphor suggests, clock problems obey a certain logic. They are orderly and can be broken down and analyzed piece by piece. When a clock stops working, you’re able to take it apart, look for what’s wrong, and fix it. The fix may not be easy, but it’s achievable. Crucially, you know when you’ve solved the issue because the clock starts telling the time again. 

""
Wicked Problems: How to Engineer a Better World
Guru Madhavan
W.W. NORTON, 2024

Cloud problems offer no such assurances. They are inherently complex and unpredictable, and they usually have social, psychological, or political dimensions. Because of their dynamic, shape-shifting nature, trying to “fix” a cloud problem often ends up creating several new problems. For this reason, they don’t have a definitive “solved” state—only good and bad (or better and worse) outcomes. Trying to repair a broken-down car is a clock problem. Trying to solve traffic is a cloud problem.  

Engineers are renowned clock-problem solvers. They’re also notorious for treating every problem like a clock. Increasing specialization and cultural expectations play a role in this tendency. But so do engineers themselves, who are typically the ones who get to frame the problems they’re trying to solve in the first place. 

In his latest book, Wicked Problems, Guru Madhavan argues that the growing number of cloudy problems in our world demands a broader, more civic-minded approach to engineering. “Wickedness” is Madhavan’s way of characterizing what he calls “the cloudiest of problems.” It’s a nod to a now-famous coinage by Horst Rittel and Melvin Webber, professors at the University of California, Berkeley, who used the term “wicked” to describe complex social problems that resisted the rote scientific and engineering-based (i.e., clock-like) approaches that were invading their fields of design and urban planning back in the 1970s. 

Madhavan, who’s the senior director of programs at the National Academy of Engineering, is no stranger to wicked problems himself. He’s tackled such daunting examples as trying to make prescription drugs more affordable in the US and prioritizing development of new vaccines. But the book isn’t about his own work. Instead, Wicked Problems weaves together the story of a largely forgotten aviation engineer and inventor, Edwin A. Link, with case studies of man-made and natural disasters that Madhavan uses to explain how wicked problems take shape in society and how they might be tamed.

Link’s story, for those who don’t know it, is fascinating—he was responsible for building the first mechanical flight trainer, using parts from his family’s organ factory—and Madhavan gives a rich and detailed accounting. The challenges this inventor faced in the 1920s and ’30s—which included figuring out how tens of thousands of pilots could quickly and effectively be trained to fly without putting all of them up in the air (and in danger), as well as how to instill trust in “instrument flying” when pilots’ instincts frequently told them their instruments were wrong—were among the quintessential wicked problems of his time. 

Unfortunately, while Link’s biography and many of the interstitial chapters on disasters, like Boston’s Great Molasses Flood of 1919, are interesting and deeply researched, Wicked Problems suffers from some wicked structural choices. 

The book’s elaborate conceptual framework and hodgepodge of narratives feel both fussy and unnecessary, making a complex and nuanced topic even more difficult to grasp at times. In the prologue alone, readers must bounce from the concept of cloud problems to that of wicked problems, which get broken down into hard, soft, and messy problems, which are then reconstituted in different ways and linked to six attributes—efficiency, vagueness, vulnerability, safety, maintenance, and resilience—that, together, form what Madhavan calls a “concept of operations,” which is the primary organizational tool he uses to examine wicked problems.

It’s a lot—or at least enough to make you wonder whether a “systems engineering” approach was the correct lens through which to examine wickedness. It’s also unfortunate because Madhavan’s ultimate argument is an important one, particularly in an age of rampant solutionism and “one neat trick” approaches to complex problems. To effectively address a world full of wicked problems, he says, we’re going to need a more expansive and inclusive idea of what engineering is and who gets to participate in it.  

""
Rational Accidents: Reckoning with Catastrophic Technologies
John Downer
MIT PRESS, 2024

While John Downer would likely agree with that sentiment, his new book, Rational Accidents, makes a strong argument that there are hard limits to even the best and broadest engineering approaches. Similarly set in the world of aviation, Downer’s book explores a fundamental paradox at the heart of today’s civil aviation industry: the fact that flying is safer and more reliable than should technically be possible.

Jetliners are an example of what Downer calls a “catastrophic technology.” These are “complex technological systems that require extraordinary, and historically unprecedented, failure rates—of the order of hundreds of millions, or even billions, of operational hours between catastrophic failures.”

Take the average modern jetliner, with its 7 million components and 170 miles’ worth of wiring—an immensely complex system in and of itself. There were over 25,000 jetliners in regular service in 2014, according to Downer. Together, they averaged 100,000 flights every single day. Now consider that in 2017, no passenger-carrying commercial jetliner was involved in a fatal accident. Zero. That year, passenger totals reached 4 billion on close to 37 million flights. Yes, it was a record-setting year for the airline industry, safety-wise, but flying remains an almost unfathomably safe and reliable mode of transportation—even with Boeing’s deadly 737 Max crashes in 2018 and 2019 and the company’s ongoing troubles.

Downer, a professor of science and technology studies at the University of Bristol, does an excellent job in the first half of the book dismantling the idea that we can objectively recognize, understand, and therefore control all risk involved in such complex technologies. Using examples from well-known jetliner crashes, as well as from the Fukushima nuclear plant meltdown, he shows why there are simply too many scenarios and permutations of failure for us to assess or foresee such risks, even with today’s sophisticated modeling techniques and algorithmic assistance.

So how does the airline industry achieve its seemingly unachievable record of safety and reliability? It’s not regulation, Downer says. Instead, he points to three unique factors. First is the massive service experience the industry has amassed. Over the course of 70 years, manufacturers have built tens of thousands of jetliners, which have failed (and continue to fail) in all sorts of unpredictable ways. 

This deep and constantly growing data set, combined with the industry’s commitment to thoroughly investigating each and every failure, lets it generalize the lessons learned across the entire industry—the second key to understanding jetliner reliability. 

Finally, there’s what might be the most interesting and counterintuitive factor: Downer argues that the lack of innovation in jetliner design is an essential but overlooked part of the reliability record. The fact that the industry has been building what are essentially iterations of the same jetliner for 70 years ensures that lessons learned from failures are perpetually relevant as well as generalizable, he says. 

That extremely cautious relationship to change flies in the face of the innovate-or-die ethos that drives most technology companies today. And yet it allows the airline industry to learn from decades of failures and continue to chip away at the future “failure performance” of jetliners.

The bad news is that the lessons in jetliner reliability aren’t transferable to other catastrophic technologies. “It is an irony of modernity that the only catastrophic technology with which we have real experience, the jetliner, is highly unrepresentative, and yet it reifies a misleading perception of mastery over catastrophic technologies in general,” writes Downer.

For instance, to make nuclear reactors as reliable as jetliners, that industry would need to commit to one common reactor design, build tens of thousands of reactors, operate them for decades, suffer through thousands of catastrophes, slowly accumulate lessons and insights from those catastrophes, and then use them to refine that common reactor design.  

This obviously won’t happen. And yet “because we remain entranced by the promise of implausible reliability, and implausible certainty about that reliability, our appetite for innovation has outpaced our insight and humility,” writes Downer. With the age of catastrophic technologies still in its infancy, our continued survival may very well hinge not on innovating our way out of cloudy or wicked problems, but rather on recognizing, and respecting, what we don’t know and can probably never understand.  

If Wicked Problems and Rational Accidents are about the challenges and limits of trying to understand complex systems using objective science- and engineering-based methods, Georgina Voss’s new book, Systems Ultra, provides a refreshing alternative. Rather than dispassionately trying to map out or make sense of complex systems from the outside, Voss—a writer, artist, and researcher—uses her book to grapple with what they feel like, and ultimately what they mean, from the inside.

""
Systems Ultra: Making Sense of Technology in a Complex World
Georgina Voss
VERSO, 2024

“There is something rather wonderful about simply feeling our way through these enormous structures,” she writes before taking readers on a whirlwind tour of systems visible and unseen, corrupt and benign, ancient and new. Stops include the halls of hype at Las Vegas’s annual Consumer Electronics Show (“a hot mess of a Friday casual hellscape”), the “memetic gold mine” that was the container ship Ever Given and the global supply chain it broke when it got stuck in the Suez Canal, and the payment systems that undergird the porn industry. 

For Voss, systems are both structure and behavior. They are relational technologies that are “defined by their ability to scale and, perhaps more importantly, their peculiar relationship to scale.” She’s also keenly aware of the pitfalls of using an “experiential” approach to make sense of these large-scale systems. “Verbal attempts to neatly encapsulate what a system is can feel like a stoner monologue with pointed hand gestures (‘Have you ever thought about how electricity is, like, really big?’),” she writes. 

Nevertheless, her written attempts are a delight to read. Voss manages to skillfully unpack the power structures that make up, and reinforce, the large-scale systems we live in. Along the way, she also dispels many of the stories we’re told about their inscrutability and inevitability. That she does all this with humor, intelligence, and a boundless sense of curiosity makes Systems Ultra both a shining example of the “civic engagement as engineering” approach that Madhavan argues for in Wicked Problems, and proof that his argument is spot on. 

Bryan Gardiner is a writer based in Oakland, California.
