What the heck do we mean by “resilient design” anyway?

Michael Mehaffy, New Urban Network

We hear the word “resilience” bandied about a lot these days. Like the word “sustainability,” it’s a word that’s all too ripe for misuse and vagueness. But vagueness is the last thing we need if we want to make effective responses to our looming urban challenges.

The latest event that has triggered discussion of resilience is the Japanese tsunami. Like Hurricane Katrina, the earthquake in Haiti, and the other natural disasters of the last few years, this horrifying event has made us deeply aware of the fragility — the non-resilience — of our technologies, and especially of our settlement technologies. This is where the work of urban designers, and New Urbanists in particular, comes to bear.

A number of prominent New Urbanists came away from the work after Hurricane Katrina bitterly disappointed with the pace of reconstruction, and indeed concluded that our own planning efforts had failed. I think that’s the wrong lesson to draw. Certainly, horrendous mistakes were made, and the mistakes of government agencies (FEMA, DOTs, councils, etc.) were especially appalling.

But if there’s one thing that I hope characterizes New Urbanists, it’s the ability to learn from mistakes. That means learning from our own mistakes, and from those of others too. And it means we can learn how to correct the mistakes, no matter how daunting or frustrating. After all, these are not mysterious kinds of problems, but comprehensible malfunctions and, at least in that sense, correctable. They are structures that can be designed and re-designed, like any other.

I think we are in fact learning and making progress, and that’s painful and messy. But it always is.

There’s a great story about Thomas Edison being interviewed by a young newspaper reporter shortly before he discovered a successful filament for the light bulb. The reporter noted that Edison had tried 10,000 different candidates and none had worked. Didn’t he think it was time to give up, since he had failed so many times? “Failed? Young man, I haven’t failed — I’ve just found 10,000 ways that won’t work!”

Sometimes it’s failures that teach the most. Sometimes the name of the game is learning through experiment, and through seeming failure — adapting, adjusting, iterating, and making slow but steady progress.

We can see this same kind of learning process — the same “iterative adaptation” — happening in nature, and creating highly evolved, self-organized kinds of structures. These structures are often the very models of resilience. Here, I think, is the lesson that is key for us.

So what is meant by the term “resilience”? It’s usually defined as the ability of a system to absorb disruptions without tipping over into a new kind of order — one that can be disastrous for the organisms involved. A pond within a meadow might get colder and colder with only minimal disruption — but when it hits the “tipping point” of 32 degrees Fahrenheit, it “tips over” into a radically new state of order: namely, it freezes over. This new state can be catastrophic for the organisms involved, and for the ecosystem as a whole — unless they have already evolved resilience to the freezing condition. (That is, unless they have “learned,” so to speak.)

C.S. Holling, one of the pioneers of resilience theory, once made an important distinction between the kind of resilience that we humans often try to achieve — what he called “engineered resilience” — and the kind of resilience seen in natural systems, which he called “ecological resilience.” He noted that the latter is typically able to cope with far more complex and unpredictable kinds of events. Why is this?

The former is maintained under what are called “equilibrium” conditions — things are predictable, orderly, part of a “rational” scheme of things. But the latter has managed to cope with what physicists call “far from equilibrium” conditions, like the chaotic patterns of weather, or earthquakes.

The former kind of resilient system works very well within the parameters for which it is designed. But when it is embedded as a subset of the latter kind of system, the results can be disastrous. The trouble is that, in the end, all such sub-systems are subsets of nature itself, with its non-equilibrium conditions. All human systems are subsets of nature — industries, economies, cities. We can only pretend otherwise for so long, and then nature will catch up with us. And so it is doing, in the age of Peak Oil, Climate Change and other “slow converging catastrophes.”

Thus we have a need for much greater resilience, and greater sustainability.

As an example, we can look at the Fukushima nuclear reactors. They were masterfully designed to produce reliable, clean, safe energy for many years. That system was, in its own self-specified terms, resilient. As long as the conditions for which the system was designed were maintained, all was well.

But the trouble was, the system did not exist in a vacuum — it was embedded in a much larger set of natural systems. The critical factor in this case was the dynamic and non-linear system of tectonic plates, with their unpredictable patterns of strain and sudden movement. When one such unpredictable event occurred, it released a massive wall of water that damaged critical infrastructure within the reactor cooling systems. The result was a series of compounding disasters — giving us a classic example of the failure of ultimately non-resilient systems.

Unfortunately, this example is a good model of our entire global system of technology. We have engineered our wonders to function beautifully within their own assumed conditions of equilibrium. But when these subsystems are combined into larger systems, we enter the world of non-linear dynamics, complex interactions, chaotic behavior — and often, disastrous non-resilience. We can experience this disaster in an acute form, in events like Katrina, the Haitian earthquake, or the Japanese tsunami. Or we can experience it in chronic form, in the “long emergency” of a civilization that fuels its growth with unsustainable levels of waste and fragmentation.

The disaster is easy to see in the context of a natural event like a hurricane or earthquake. But the same chaos can result from our own technologies as they combine in unforeseen ways, with unintended consequences. We fail to learn from the structural characteristics of natural systems. Specifically, we fail to build in the capacity to adapt to unforeseen circumstances. We fail to provide diversity, redundancy, network overlaps. We fail to allow fine-grained adaptation, modification and “learning.” As a result, we set ourselves up to be vulnerable to unforeseen combinations that produce unintended consequences.

The recent mortgage meltdown was an example — a kind of “perfect storm” of interlocking domino effects that quickly led to global economic disaster. The fact that this crisis started in the sprawling American suburbs is no coincidence.

Our modern settlement technologies have their own problems with resilience, as most of us understand. In the case of the mortgage meltdown, the siting of houses out in fringe settlements, requiring excessive driving — and therefore excessive reliance on cheap gasoline — was a chief ingredient of the ensuing disaster.

This settlement pattern was part and parcel of the larger evolution of a very linear, extractive kind of technology, over-reliant on a segregated, rational design within equilibrium conditions. (Le Corbusier expressed this theory of sprawl very well in describing how “we shall use up tires, wear out road surfaces and gears, consume oil and gasoline, all of which will necessitate a great deal of work ... enough for all.”) As long as we had plenty of cheap fuel, and the environmental chickens did not come home to roost (e.g. climate change), we lived very well indeed. But this fueling of growth with waste was (and is) the very definition of unsustainable. It was, and is, non-resilient.

In New Orleans we talked of the disaster of the storm, and the disaster before the storm — the functionally segregated planning that fragmented and damaged the city. So our work was not simply to rebuild what existed in July 2005, but to build for the next century. The kind of resilient technology we sought, capable of coping with non-linear phenomena, could not be created with linear methods. The old top-down planning methods would not work, unless we understood how to apply them very strategically, as catalysts. In many cases this meant taking a very different approach.

In a way, it meant using a lot less Daniel Burnham, and a lot more Jane Jacobs.

I think this is a good lesson for New Urbanists. It doesn’t mean that we don’t use masterplans, or new design standards, or new codes, or any of the other tools in the NU toolkit. It does mean that we need to supplement those with capacity-building and asset-building tools. It means we need to think differently about our challenges.

In particular, we need tools to transform the interlocking “operating system” that fed the old patterns of decline, fragmentation and waste. This system produced perverse incentives, and allowed inefficient, non-resilient development to sprout up again in all the wrong places. We can readily see its elements — the incentives and disincentives that make bad projects pencil out and good ones die on the vine: zoning codes, hidden subsidies, bank (and Fed) lending rules, retail protocols, building codes, and much more. All of this has to be gone through and cleaned out, like a flooded basement full of muck.

And it must be replaced with processes that are more adaptable, more resilient, more supportive of growth.

What are the characteristics of the resilient structures that result? Once again, they are networks, not hierarchies — street networks, social networks, networks of economic activity and collaboration. They employ diversity, not monocultures — diversity of uses, of building ages, of sizes and types. They are fine-grained enough to allow adaptation, modification, easy local repair and “learning,” rather than being Big Dumb Exotic Structures. And they exhibit the hallmarks of self-organization: people doing things from the bottom up, in a way that achieves organized complexity.

Why is self-organization so important? Because it is, by definition, an adaptive process that incorporates the ability to continue adapting. It builds in capacity to “learn” and to progress toward greater efficiency and greater optimization — which makes it leaner and more able to weather variations in input. This makes it more inherently resilient.
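
To make the redundancy point from the last two paragraphs concrete, here is a minimal illustrative sketch (mine, not the article’s), assuming Python with the networkx library. It compares a strict hierarchy (a tree) with an overlapping grid, a rough stand-in for a street network, and checks how often each stays connected when a single node fails.

    # Rough sketch: why overlapping networks weather single failures better
    # than strict hierarchies. Assumes Python with the networkx package installed.
    import networkx as nx

    # A hierarchy: every node depends on a single path back to the root.
    tree = nx.balanced_tree(r=2, h=4)

    # A network with overlaps: a 5x5 grid, loosely analogous to a street grid.
    grid = nx.convert_node_labels_to_integers(nx.grid_2d_graph(5, 5))

    def survival_rate(graph):
        """Remove each node in turn; report how often the rest stays connected."""
        survived = 0
        for node in list(graph.nodes):
            g = graph.copy()
            g.remove_node(node)
            if nx.is_connected(g):
                survived += 1
        return survived / graph.number_of_nodes()

    print("tree survives single failures:", survival_rate(tree))   # roughly half
    print("grid survives single failures:", survival_rate(grid))   # all of them

The tree fragments whenever any internal node fails, while the grid, with its overlapping paths, stays connected after any single failure. That is the structural sense in which redundancy and network overlaps buy resilience.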

Informal settlements, such as slums, often exhibit remarkable levels of resilience. It is not a coincidence that they also exhibit the striking characteristics of self-organization. I think we can learn a lot from these self-organizing processes, and use them to achieve more resilient settlements in general. But of course, to do that, we have to understand how to catalyze the desirable forms of self-organization, because they don’t just happen. That often takes very strategic, limited top-down action.

Many informal settlements are marvels of self-organization, with efficient street networks, well-distributed retail and other hard-to-get urban qualities. But at the same time, many of them have intolerable sanitation, a lack of basic amenities, and dangerous levels of crime. The challenge is to design “just enough” to provide the needed resources — as our colleague Diarmaid Lawlor put it, speaking both literally and metaphorically, we need to provide “just the pipe.”

But while this "remodeling job" is urgent in places like the US Gulf Coast and Japan, it's really just as relevant everywhere else. We're all in some version of New Orleans.

What are the planning principles that can guide such an approach? Here, for our money, are seven that we suggest are critical:

A.   Plan Strategically and Catalytically. Funds are scarce even in the most developed countries, and the premium is on targeting resources to catalyze desired urban growth and limit undesired growth. But with careful planning, these limited resources can be very powerful. For example, leaders can target key roads, water, sewer and transport services to promote desirable patterns of growth. Careful spacing of these investments will support further bottom-up growth by the agents who do most of the development — the developers, builders and owners.

B.   Build Capacity. The most important job of planning is to empower the agents of development — the developers, builders and owners — to build in a coordinated way, and in a way that creates long-term and stable value for the community. This gives planners a very powerful tool: the inherent capacity of social self-organization. But it requires that planners provide targeted supportive resources. It also requires building the capacity of local organizations and partnerships.

C.   Assemble Toolkits. Planners need to offer a localized set of tools that work together in a coordinated way — what’s sometimes called “plug and play.” These tools must be available to be assembled as needed, and adapted to the unique needs of a specific place. The most important tools are the ones that facilitate desirable growth, particularly at the scale of the block and the plot. (For example, variances and lot line adjustments.) This is the critical “grain of adaptation” that allows healthy future growth over time.

D.   Change the “operating system.”  Every increment of growth is ultimately dependent on the outcome of a complex set of rules, incentives, protocols, laws, codes and interests. If consumers don’t want it, it won’t get built. But equally, if bankers won’t fund it, or regulators won’t permit it, or leaseholders won’t write waiver agreements for it, or any of a thousand other incentives and disincentives are not in place — it won’t be viable, and it won’t be built. The job of a planner is to identify the critical changes that must be made in this complex structure of incentives and disincentives, so as to support efficient and healthy urban growth. This is an important and hopeful new field of urban planning, first described by the pioneer Jane Jacobs, and later dubbed “economic gardening.”

E.   Re-use “Local DNA”. Novelty and “thinking outside the box” can be exciting, but can also lead to disastrous failures. Often the best solution to a local problem has already been found, and is already incorporated into local patterns. This evolved “Local DNA” represents a powerful resource for planners. But it must be found, documented and made available to local builders and developers.

F.   Protect Resources. In addition to the most obvious way of protecting resources — legislative fiat — planners need to use another, more catalytic method: engaging economic processes, and changing the mix of incentives and rewards that drive development. Again, this is the “operating system,” which often fails to reflect environmental costs. “Externalities” — costs and benefits that are not usually part of a project’s economics — need to be brought into the economic calculation. The complex structure of these incentives and costs must be coordinated to maximize efficiency and minimize unintended consequences.

G.   Maintain Justice. Local residents have a right to participate in acts of planning that will affect their own futures. At the same time, the process must (continually) ensure that justice is maintained for all residents, and not just those who happen to participate in the planning process. This includes the right to continued benefits from uncontaminated resources, like clean air and water, and healthy forests.

One of the tools to implement all these steps — documented in Steve Coyle’s new book, Sustainable and Resilient Communities [link: http://www.amazon.com/Sustainable-Resilient-Communities-Comprehensive-Regions/dp/0470536470] — is what we call the “Neighborhood Rebuilding Center” or “Neighborhood Renaissance Center.” These were features of New Urbanists’ plans for New Orleans and the Gulf Coast after Katrina. Essentially they were capacity-building distribution points for strategic resources: technical information, financial resources, design services, pattern books and pattern languages, peer-to-peer collaborations, open-source evaluations of contractors and other service providers, a community wiki, computer support, and a “research librarian” to help answer any questions that couldn’t be answered otherwise.

Is this kind of capacity-building approach too expensive? I think that’s a false question. The real question is this: isn’t the current business-as-usual approach — which amounts to throwing away a planet — the one that is far too expensive?

I think the answer is evident.

 
