For the first time in history, more than half the global population resides in cities, a share expected to rise above 70 percent by 2050. This dramatic growth in the world's urban population presents a grand challenge: how to ensure sustainable and socially responsible urban development. To tackle this challenge, city and regional planners have begun to incorporate methods from the social sciences, statistics, and public health into their work, moving the field from a design-driven approach toward more complex, data-driven methods. This shifting paradigm is exemplified by a new software tool—dubbed UrbanSim—being developed by Professor Paul Waddell of the Department of City and Regional Planning at UC Berkeley. By learning from the real world, this software promises to fundamentally change the way city planning decisions are made.
UrbanSim is an open-source simulation software that explores the interactions between land use, transportation, economics, and the environment. “The relationships between transportation patterns, housing markets, and the labor and real estate markets more generally are very important to consider,” explains Waddell. However, until now, planners have had few rational ways to account for these complex interactions in their designs. UrbanSim solves this problem with a sophisticated computational model that uses data from a number of sources to identify patterns of behavior and applies the information to predict outcomes of various planning scenarios.
Analysis begins by assembling economic data about a given region from county census data, transportation agencies, and surveys of households and businesses. These data are used to detect trends corresponding to specific situations: for example, the probability that a household will move within a year based on its financial circumstances. Based on these trends, UrbanSim attempts to learn the statistical patterns that underlie decision-making in individual households and businesses (collectively called “agents”) so that it can predict their future behavior. These patterns then determine the rules that each agent must follow during the simulation. Describing behavior in this way—a method called Monte Carlo simulation because of its probabilistic rules (see “Toolbox,” BSR Fall 2012)—allows the researchers to better capture the often erratic patterns of decision-making that we see in everyday life.
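The learning step can be illustrated with a toy example. The sketch below estimates the annual probability that a household moves, conditioned on an income bracket, from a handful of historical records. All data and names here are invented for illustration; UrbanSim's actual estimation uses far richer discrete-choice models fit to census and survey data.

```python
from collections import defaultdict

# Hypothetical historical records: (income_bracket, moved_within_year).
# Invented data for illustration only.
records = [
    ("low", True), ("low", False), ("low", True), ("low", False),
    ("mid", False), ("mid", False), ("mid", True), ("mid", False),
    ("high", False), ("high", False), ("high", False), ("high", True),
]

def estimate_move_probabilities(records):
    """Empirical P(move | income bracket) from observed outcomes."""
    moved = defaultdict(int)
    total = defaultdict(int)
    for bracket, did_move in records:
        total[bracket] += 1
        moved[bracket] += did_move
    return {bracket: moved[bracket] / total[bracket] for bracket in total}

probs = estimate_move_probabilities(records)
print(probs)  # {'low': 0.5, 'mid': 0.25, 'high': 0.25}
```

Probabilities like these become the statistical rules each simulated agent follows.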
To simulate a single agent’s actions, a random number is drawn and compared to the agent’s statistical rules. A corresponding behavior is then applied. For the example given above, if the random number is less than a certain value, the simulation predicts that the household will move. A new location for the household is then chosen by once again drawing a random number based on probable outcomes. Each set of rules is affected by the attributes of the agent, such as income, age, and number of children. Feedback effects, such as changes in real estate prices, are also taken into account. The whole cycle is repeated annually across a 30-year period and considers millions of agents acting simultaneously.
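The two-draw step described above can be sketched in a few lines. This is a minimal illustration, not UrbanSim's actual code; the function name, probabilities, and location weights are all invented, and in the real model the weights would themselves depend on prices, accessibility, and household attributes.

```python
import random

def simulate_household_year(p_move, location_weights, rng):
    """One annual Monte Carlo step for a single household.

    p_move: estimated probability the household moves this year.
    location_weights: dict of candidate location -> relative probability
        (fixed here for illustration).
    Returns the chosen location, or None if the household stays put.
    """
    # First draw: does the household move at all?
    if rng.random() >= p_move:
        return None
    # Second draw: pick a new location in proportion to its weight.
    locations = list(location_weights)
    weights = list(location_weights.values())
    return rng.choices(locations, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so the run is reproducible
weights = {"downtown": 0.2, "suburb": 0.5, "exurb": 0.3}
moves = [simulate_household_year(0.3, weights, rng) for _ in range(10_000)]
frac_moved = sum(m is not None for m in moves) / len(moves)
print(f"fraction moved: {frac_moved:.3f}")  # close to the input 0.30
```

Run over millions of households, and repeated year after year, draws like these accumulate into the region-wide patterns the planners examine.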
Once the model is validated for a given city by comparing its simulated results against actual historical outcomes, new scenarios can be simulated to aid decision-making. UrbanSim can allow planners to investigate the effects of potentially expensive or complex decisions like the widening of freeways, construction of new roads, introduction of new transit lines, or changes in their levels of service. “When these investments are made, they can change patterns of accessibility to transit, or influence new housing developments,” explains Waddell. Thus, a more informed planning decision can be made by first using UrbanSim to investigate the vast number of potential outcomes in each alternative scenario.
In 2002 a prototype version of UrbanSim debuted in the Eugene-Springfield area of Oregon, and updated versions have since been applied in Detroit, San Francisco, Seattle, Paris, Brussels, and Zurich. In Eugene-Springfield, the simulation was validated with data from 1980 to 1994, and yielded results that correlated well with actual observed outcomes in several categories including population, number of housing units, employment rates, and land value. For over 89 percent of the specified zones, the model correctly predicted the change in the number of households to within 200. For employment, the model correctly predicted the number of jobs to within 200 in about 76 percent of the zones, an impressive feat because employment rates are hard to predict due to rapidly changing economic conditions. These results are remarkable given that UrbanSim learned the behavioral patterns of this region simply by churning through years and years of data. They also suggest that UrbanSim is on the right track to predicting future outcomes based on hypothetical planning scenarios.
All this churning of data does come at a cost, as the intricate models run by UrbanSim require a lot of highly complex computation. In fact, it takes a few days to run a complete simulation of one scenario on a computer with 96 gigabytes of RAM. For this reason, Waddell is also working in collaboration with the computer science department at UC Berkeley to make UrbanSim more efficient. These efforts are part of the $200 million Big Data Research and Development Initiative set up by the Obama administration in 2012 to help solve world problems that rely on computationally intensive tasks.
This funding is well timed. Projects like UrbanSim are making it increasingly clear that computational methods can give insight into complex systems that would never be obvious to the human eye. Leveraging the power of computers will only grow more valuable as more data become available. As impressive as the simulations carried out by UrbanSim already are, the era of big data in urban planning is just beginning.
This article is part of the Spring 2013 issue.