
The Changing American Neighborhood: 4. The American Urban Neighborhood under Siege, 1950–1990

4 THE AMERICAN URBAN NEIGHBORHOOD UNDER SIEGE, 1950–1990

In the observation that closed the last chapter, Ray Suarez put his finger on an important part of the story. With little immigration since the early 1920s and the experience of World War II still vivid, the younger native-born generation was far more “American” than their immigrant parents or grandparents. They may have enjoyed the close ties and networks of the neighborhoods they grew up in, but they didn’t need them as much. They were comfortably at home in the wider American world and expected more from it than their parents or grandparents.

For many such people, the old neighborhood may have been comfortable, but it was also confining. Moreover, it was crowded, usually a little shabby, and had few homes for sale. People wanted larger homes, and they wanted new ones. The burgeoning single-family subdivisions that dotted the suburban landscape appealed far more than the central cities. Levittown, which rose from Long Island’s potato fields, was replicated on the outskirts of every major American city. From 1950 to 1960, St. Louis County, which surrounds the city of St. Louis on three sides (the other being the Mississippi River), almost doubled its housing stock, building nearly one hundred thousand new dwellings. As one mover told Suarez many years later, “people our age at that time wanted to buy houses, and there just weren’t any houses available in the city of St. Louis. So they all moved, and bought homes out in the county.… We bought our first home on the GI Bill, that’s the way everyone was going then.”1

Mortgages requiring little or no down payment, backed by the two key federal lending agencies, the Federal Housing Administration (FHA) and the Veterans Administration (VA), fueled the suburban exodus. That exodus, however, was largely limited to white families, partly because far more white urban families had the means to buy the new suburban houses but also because of overt racial discrimination, not only by developers but also by government through the rules imposed by the FHA and the VA. White flight and suburbanization were about race, but they were not only about race.

The 1950s marked the onset of what came to be known as “the urban crisis.” Its most visible manifestation was the flight of white families from central cities. White families began to leave cities in large numbers soon after the war, whether or not the Black population was growing in their city or neighborhood. As economist Leah Boustan has written, for “urban whites, most of whom never interacted with a Black family, leaving for the resource-rich suburbs was an economic calculus, one that was accelerated by the steady stream of poor migrants, both white and Black, into central cities.”2

Although Boston and Pittsburgh saw little Black in-migration during the 1950s, the white population in both cities dropped by over 15 percent between 1950 and 1960, more than in Chicago and Philadelphia although less than in St. Louis and Detroit, both of which lost nearly one-quarter of their white population during the same decade.3 All in all, thirty-two cities with populations over 100,000 saw their populations decline from their 1950 peak in the following decade, as shown in table 4.1. These thirty-two cities, which contained over 18 million people, or roughly 1 out of every 8 Americans, lost 1.2 million people. They were a roll call of urban America. Nearly all those who left were white.

Suburbanization tended to blur differences between different white ethnic and religious communities while widening the racial divide. The families who bought in the new subdivisions generally came from all white ethnic backgrounds, linked less by ethnic identity than by their shared identity as young couples with common aspirations to a distinct middle-class way of life. As William H. Whyte wrote at the time in The Organization Man, the suburbs “have become the second great melting pot.”4

The new suburbs were distinct neighborhoods in important respects, and the new suburbanites threw themselves with determination into the process of turning them into good neighborhoods. “There was something compulsive about it, something too intense to be natural,” Ehrenhalt writes. “Those people not only shared lawnmowers and cooked dinner for each other but joined and volunteered for clubs and social organizations with an energy that seems in retrospect to have bordered on the manic.”5 Eventually, as we will discuss in chapter 13, these suburbs of the 1950s would evolve into something very different.

Meanwhile, back in the cities, migration was turning into white flight. Continued Black in-migration was combined with sweeping changes to the physical fabric of older cities driven by urban renewal and the interstate highway system. Neither initiated the flight to the suburbs, but both combined to magnify its extent and exacerbate its effects well beyond what might have otherwise been the case.

The Federal Government and the Destruction of Urban Neighborhoods

The story of the federal role in the decline of the cities and the rise of the suburbs has been told elsewhere but is worth reexamining in light of its effect on urban neighborhoods.6 When the housing market collapsed during the Great Depression of the 1930s, the federal government created the FHA to revive mortgage lending by providing federally backed insurance to lenders. If a mortgage approved by the FHA went into default, the FHA would buy it from the lender, making the lender whole. The FHA was an elegant solution to the crisis of the time, as it established the thirty-year fixed-rate low–down payment home mortgage as the American norm and, at little expense to the public purse, led to a dramatic expansion of homeownership. It did so, however, in a racially discriminatory fashion.

In order to evaluate the potential risk of default of individual home mortgages, the FHA relied heavily on the work of economist Homer Hoyt, who became principal housing economist at the FHA soon after its founding in 1934. Given the pervasive racial and cultural assumptions of the time, it is unlikely that the FHA’s practices would have been radically different had Hoyt never existed. The central role he played and the intellectual firepower he brought to selling his ideas, however, makes him worthy of our attention.

Hoyt’s view of neighborhood change, which was reflected in FHA mortgage underwriting, can be summarized in three principles:

  1. Single-use neighborhoods are preferable to mixed-use neighborhoods. The segregation of land uses into separate use districts fostered by zoning was a positive value, as was low residential density. Thus, low-density suburban areas characterized by detached single-family homes were most favored.
  2. Newer neighborhoods are preferable to older neighborhoods. Reflecting neighborhood life cycle theory, Hoyt wrote that “it is well recognized that residential land values are at their peak when a neighborhood is new and that they gradually decline as the neighborhood reaches maturity and approaches old age.”7 Thus, the FHA made it far easier to obtain mortgage insurance on new compared to existing homes and all but impossible to obtain long-term financing for houses in need of rehabilitation in older neighborhoods.
  3. Homogeneous neighborhoods are preferable to diverse neighborhoods. Hoyt warned that it was in the “twilight zone, where members of different races lived together[,] that racial mixtures tend to have a depressing effect on land values.”8 Based on this principle, the FHA Underwriting Manual recommended against the introduction of “inharmonious racial or nationality groups” into existing neighborhoods or new developments, and encouraged the use of restrictive covenants in real estate deeds barring Blacks and often Jews and Asians from “invading” white neighborhoods.

While the federal government did not create racial segregation, through the FHA it “exhorted segregation and enshrined it as public policy.”9 Parenthetically, although FHA practices are often characterized as being based on the so-called redlining maps, those maps were prepared by a separate federal agency, the Home Owners’ Loan Corporation, and the two agencies had little to do with each other.

By following Hoyt’s principles, the FHA effectively denied older neighborhoods, particularly Black neighborhoods, adequate mortgage capital while making mortgages available to racially segregated developments in the emerging suburbs. After the war when Congress created the VA, which offered returning veterans mortgages with no down payment, the new agency adopted the FHA’s underwriting criteria, further extending the reach of Hoyt’s ideas.

As historian Kenneth Jackson sums it up, “the main beneficiary of the $119 billion in FHA mortgage insurance issued in the first four decades of FHA operation was suburbia, where almost half of all housing could claim FHA or VA financing in the 1950s and 1960s.”10 The behavior of private-sector lenders was much the same. In either case, it became far more difficult for people to qualify for mortgages in older mixed-use and more diverse urban neighborhoods. These practices continued overtly until President John F. Kennedy’s 1962 executive order barring racial discrimination in FHA lending, although many argue that they continued informally to varying degrees until enactment of the Fair Housing Act six years later and perhaps even after that. Their prevalence during these critical years in the postwar transformation of American life is a major reason for the magnitude of the wealth gap between white and African American households in the United States today.

The decline of cities had been a federal concern since the 1930s, when observers had first called attention to the beginnings of urban population loss.11 When prominent city planner Harland Bartholomew pointed out in 1940 that “decentralization of American cities has now reached the point where the main central city, at least, is in great jeopardy,” he was but one of many voices calling for a concerted federal strategy to address what were increasingly being seen as the two incipient crises of the city: the decline of the city as a center of business, commerce, and industry and the spread of slums and blight.12

The latter term, adapted from its historic use to describe plant diseases, came into widespread use in the 1930s and, like so much else in midcentury urban thinking in America, emerged from the work of the Chicago School. New Deal figures often drew explicit parallels between urban slum conditions and disease; as prominent New York City progressive Joseph McGoldrick put it, “we must cut out the whole cancer and not leave any diseased tissue.”13 The trope of urban blight was juxtaposed against the seductive modernist vision of cities by planners and architects such as Le Corbusier, whose 1920s Radiant City plans envisioned shiny new cities with housing, commerce, parkland, and highways each in their own rationally organized place with light and fresh air for all.

The economic gap between central cities and their suburbs grew wider and the calls for action more insistent after World War II. Those calls were answered with the urban renewal program, enacted in the 1949 Housing Act, with the dual goals of helping older cities reverse their decline and providing “a decent home and suitable living environment for every American family.” With its commitment to increase the supply of affordable housing and remove dilapidated slum housing, urban renewal had broad bipartisan support as well as the support of the nation’s African American leaders. Robert Weaver, later appointed as the first secretary of the Department of Housing and Urban Development (HUD), became “urban renewal’s most prominent advocate,” stating in one speech that “the major achievement of urban renewal is that it has restored hope for older cities and worn out neighborhoods.”14 As historian Steve Conn writes, “For this generation of liberals, urban renewal was an example of positive action from the federal government, the Civil Rights Act in bricks and mortar.”15

It was a generous program. In 1956, the federal government appropriated $3.23 billion for urban renewal, equal to roughly $30 billion in 2020 dollars, or roughly ten times recent annual appropriations for the Community Development Block Grant program. Urban renewal funds were difficult to turn down. The federal government provided two-thirds of the cost, while local governments could provide the other one-third in many ways, including staff salaries, land contributions, and public works projects.

The program was based, however, on a premise that was not only faulty but also in retrospect almost perverse: that urban decline could be halted by making the cities more like the suburbs. That meant clearing large tracts of land for large-scale development, whether downtown office buildings or public housing projects; demolishing thousands of “obsolete” buildings; and widening streets, laying out highways, and building parking garages to make the city more automobile-friendly. Reflecting the pervasive belief in neighborhood life cycles, the all but universal premise was that obsolete neighborhoods needed to be demolished and rebuilt from scratch.

Urban renewal respected no race, creed, or ethnicity. Many white ethnic neighborhoods faced the wrecking ball, such as Chicago’s Halsted Street and Boston’s West End, from which three thousand largely Italian American families were displaced. But African American neighborhoods were the hardest hit. Even in cities where the great majority of urban renewal’s victims were white, Black families were still disproportionately impacted. In 1960, “only” 32 percent of the families displaced by urban renewal in Boston were Black, but only 9 percent of that city’s total population was Black. In Baltimore, 89 percent of those displaced by urban renewal between 1951 and 1964 were Black.16 Black novelist James Baldwin summed up widely held feelings in one memorable sentence: “Urban renewal is Negro removal.”17

Suffering from the worst housing conditions, often strategically close to downtowns and with a largely disenfranchised population unable to challenge powerful political and business interests, neighborhoods of color became frequent targets of urban renewal. In 1957, three out of every four families displaced through urban renewal were Black or Puerto Rican.18 Overall, an estimated total of 500,000 households were displaced by the urban renewal program over its life, or 1.6 million to 2 million people.19 Of these, close to half were probably displaced from Black neighborhoods. In Chicago alone by 1966, 14,000 Black families had been displaced or were slated for relocation.20

The Mill Creek Valley urban renewal project in St. Louis was typical (figure 4.1). West of downtown, Mill Creek Valley in the 1950s was a long-standing Black community of nearly twenty thousand persons. Housing conditions were bad; a 1947 study had found that on one out of three blocks, all or nearly all the buildings were substandard. Many still relied on outdoor privies. The physical conditions of the area served as the city’s justification for urban renewal, although its central location was implicitly understood to be a key reason. The city received $23 million ($225 million in 2020 dollars) from the federal urban renewal program. Voters overwhelmingly approved a $10 million bond issue for the local share of the project cost. Demolition began in 1959, and by 1965 the last resident was removed.

While Mill Creek Valley looked like a “slum” to middle-class St. Louisans, it was in many ways the heart and soul of Black St. Louis. The area contained 839 businesses and institutions, including 42 churches and 13 hotels, many Black-owned, and a Negro Baseball League stadium seating ten thousand. The neighborhood was home to well-to-do professionals and businesspeople living side by side with working-class and poor households. Although much of its housing was seriously deficient and its residents were mostly poor, it was a real community and as much of a haven from the wider world as any immigrant enclave.21

Most civic leaders in St. Louis saw Mill Creek Valley not as a viable community but instead as an obsolescent area in need of modernization. They saw it as an embarrassment that greeted visitors to St. Louis when they arrived at Union Station and a blight that needed to be eradicated. And so it was. Most families ended up in segregated Black neighborhoods north of Delmar Boulevard, increasing the pressure on Black households already in those areas to move northward into then-white neighborhoods. For decades after the land was cleared, Mill Creek Valley was known as “Hiroshima Flats” for its vast expanses of vacant land. Even today most of the area remains a bleak landscape of 1970s and 1980s commercial and industrial buildings surrounded by parking lots, although, in an ironic twist, it also contains the campus of Harris-Stowe State University, a historically Black university.22

FIGURE 4.1.    St. Louis mayor Raymond Tucker (at right) and civic leader and bond issue chairman Sidney Maestre looking out over the Mill Creek Valley area slated for demolition, 1956

(Photograph courtesy of Missouri History Museum)

As urban renewal accelerated during the 1950s, it was joined by a program that arguably had even more pernicious effects on cities in general and urban neighborhoods in particular. The interstate highway program, which began construction in 1956 following enactment of the National Interstate and Defense Highways Act, was the single largest federal infrastructure investment in American history.23

The original plan called for about six thousand miles of the forty-one thousand–mile interstate system to pass through urban areas, dictating that highways would be cut through densely populated city neighborhoods, eminent domain would be used to take properties, and thousands of structures would be bulldozed to make way for ribbons of concrete.24 Low-income neighborhoods were often chosen as highway routes, in part to reduce land-acquisition costs and in part to complement the activities taking place simultaneously through urban renewal.25

With the federal government initially providing 90 percent of the cost of the new highways, states found it all but impossible to resist the program. Rights-of-way, sometimes far wider than needed, cut vast swaths through crowded neighborhoods. In Detroit, the right-of-way of the Chrysler Freeway, I-375, with service roads on either side, is 350 feet wide, greater than the length of a football field, only one-third of which was occupied by the highway itself. Barely a mile west of downtown Detroit, the intersection of I-96 and I-75 sprawls over forty acres, obliterating the equivalent of six to eight city blocks. Highways cut through many neighborhoods that had been spared by urban renewal. I-81 destroyed Syracuse’s largely Black 15th Ward, cutting the city’s heart in half, while I-375 obliterated what was left of Detroit’s Black Bottom after much of it had already disappeared through urban renewal (figures 4.2 and 4.3).

From Quiescence to Neighborhoodism

Poorly situated to challenge local elites’ strong support for these programs and with little experience of organized advocacy, urban neighborhoods initially responded to both urban renewal and the highway program with quiescence. As these programs continued to devour urban neighborhoods, however, the response shifted from quiescence to activism, ultimately leading to a national revolt against freeways and urban renewal. While it came too little and too late for most neighborhoods, the revolt presaged the neighborhood movement of the 1970s and the growth of today’s community development networks.

The first neighborhood revolt against highways erupted in San Francisco in 1956, where plans to cut a highway through the western part of the city triggered such heated neighborhood opposition that the Board of Supervisors ultimately voted to cancel the project outright.26 Challenges to urban expressways increased in the mid-1960s as expressway projects were blocked in New York City, Boston, Baltimore, New Orleans, and a host of other cities.27

Successful opposition to highway projects, however, was almost always the product of middle-class or business interests with access to the corridors of power. Jane Jacobs’s successful fight to block the cross-Manhattan expressway, which would have sliced through Greenwich Village and Lower Manhattan, mobilized an alliance of affluent professionals with robust political connections, while in the Bronx, lower-income Jewish residents protested Robert Moses’s Cross-Bronx Expressway to no avail. In New Orleans, affluent French Quarter residents and business leaders blocked the riverfront expressway, but an elevated highway nonetheless cut through the middle of Tremé, the heart of Black New Orleans.

FIGURE 4.2.    Hastings Street before highway construction in its 1940s heyday as the center of Detroit’s Black community

(Photograph courtesy of Detroit Historical Society)

FIGURE 4.3.    Hastings Street after construction of I-375 in the 1950s

(Photograph courtesy of Detroit Historical Society)

Fewer protests succeeded in derailing urban renewal projects than interstate highways, perhaps because middle-class and affluent neighborhoods were far less likely to lie in the path of urban renewal. The urban renewal program was ultimately done in not by grassroots pressure but by a growing elite consensus that the program, despite its great cost and devastating physical impact, had done little to revive central cities. By the time Robert Weaver became HUD secretary in 1965, he had long since ceased to be an advocate for the program. In the end, the differential outcomes of neighborhood revolts reflected race- and class-based power relations and political influence, an implicit contradiction that bedevils the neighborhood movement to this day.

The 1960s was an age of social ferment. Activism was part of the vocabulary of the time. It was also the decade when poverty became a public issue, leading to the war on poverty legislation of 1964. A lasting legacy of this era was the community development corporation (CDC), a nonprofit organization dedicated to improving a specific neighborhood or cluster of neighborhoods. CDCs grew in part out of the war on poverty, specifically the program known as the Community Action Program, which has been called “the largest, most systematic neighborhood organizing project ever tried.”28

The Community Action Program bypassed city governments, seen by many of the planners of the war on poverty as indifferent to low-income and minority neighborhoods, and provided funds instead directly to local community action agencies with a mandate to foster “maximum feasible participation” of the residents of the low-income neighborhoods where they were located. Community action agencies often became vehicles for Black or insurgent political activism, leading many mayors to feel that the federal government was funding their enemies. At their behest, Congress reined in the agencies with the 1967 Green Amendment, giving local governments the power to decide who would be eligible for community action funds. Many community action agencies eventually evolved into CDCs.

The initial impetus for CDCs came from Senator Robert Kennedy after his famous walking tour of Bedford-Stuyvesant in February 1966. Ten months later, he announced a plan to create CDCs that would “combine the best of community action with the best of the private enterprise system.”29 These corporations would focus on economic development, giving local residents and businesses a stake in and substantial control over their activities. Kennedy authored legislation that created the Special Impact Program within the war on poverty to provide grant support to CDCs, initially funded at $25 million (roughly $200 million in 2021 dollars). CDCs received direct federal funding through the Special Impact Program and successor programs until the late 1970s. Encouraged by federal and foundation support, particularly the Ford Foundation, the number of CDCs steadily grew during the late 1960s and the 1970s. A 1994 survey found that of 534 organizations nationally identified as CDCs at that point, 39 had begun in the 1960s and 172 in the 1970s.30 We will return to CDCs and their role in neighborhood change later in this chapter.

By the 1970s, a neighborhood movement with its roots in highway and urban renewal battles was reaching critical mass. “Neighborhood” gave resident empowerment an ideological dimension grounded in Catholic social theory and the neighborhood ethnic identity rhetoric of the time.31 Presenting itself as flowing from American traditions of local democracy, self-reliance, and community, the neighborhood movement promoted the idea that many, if not most, of society’s problems could be fixed by empowering neighborhoods. We call this ideology of neighborhood empowerment “neighborhoodism.”

From the very beginning, neighborhoodism suffered from deep and ultimately unresolvable internal contradictions. The rhetoric of neighborhood empowerment masked fundamental conflicts over race, ethnicity, and social class as well as over the relationship between neighborhoods and government. While arguably it was never a coherent movement, for a brief period, perhaps helped by its ambiguous rhetoric, empowering neighborhoods became a potent political symbol resonating with both sides of the national political divide.

In the 1976 presidential campaign, both Jimmy Carter and Gerald Ford stressed the preservation of urban neighborhoods, privileging local voluntary initiatives over top-down government programs. Once elected, President Carter appointed Monsignor Geno Baroni, a prominent proponent of neighborhoodism, as Assistant Secretary of HUD for Neighborhood Development, Consumer Affairs, and Regulatory Functions. Although Baroni helped move the Community Reinvestment Act into law later that year, most of his initiatives foundered on the harsh budgetary realities of the late 1970s. The same was true of the National Commission on Neighborhoods, charged by Congress in 1977 to “investigate the causes of neighborhood decline” and “recommend changes in public policy so that the federal government becomes more supportive of neighborhood stability.”32 Although the commission delivered a voluminous report with over two hundred recommendations, its deliberations were riven by internal conflict, and it had little or no effect on public policy. The remaining years of Carter’s term were dominated by other issues, and the presidency would soon pass into very different hands.

While it may have appeared that whites and Blacks, middle class and poor, liberals and conservatives, could work together under the big tent of neighborhoodism, the idea that neighborhood empowerment can solve deeply rooted societal problems was doomed by its contradictions. In Ronald Reagan’s 1980 presidential campaign, “neighborhood” was no more than gauzy rhetoric, part of the five-word litany “Family, Work, Neighborhood, Peace, Freedom” that had become an organizing theme of his speeches.33 Using neighborhoods to evoke an earlier era when people took care of each other locally without incursions from federal bureaucrats, Reagan turned the ambiguous symbol of neighborhood empowerment into a cudgel against federal programs.

While the brief shining moment of “neighborhoodism” as an ideological movement died with Reagan’s election, efforts to build stronger neighborhoods that emerged under its rubric continued under new policy paradigms. Before turning to those paradigms, however, we need to look at the changes in neighborhood conditions and the federal response to those changes that were taking place during the 1960s and 1970s.

Neighborhood Change during the “Urban Crisis” Years

Urban renewal and highways, along with suburbanization, further destabilized an already struggling urban organism. Overcrowded Black neighborhoods were bulldozed, their residents dispersed. With few suburban options available to them, Black families began to move into those parts of the cities that had previously been barred to them, often neighborhoods where much of the white population was already leaving or predisposed to leave.

Whether white flight was inevitable and the extent to which it was spurred or exacerbated by blockbusting and similar practices has long been debated. The outcome, however, is well known. Millions of white families picked up and left urban neighborhoods beginning in the 1950s with this trend accelerating through the 1970s, particularly after the urban riots of the 1960s (figure 4.4). By the time white flight slowed around 1980, Chicago had lost over half of its 1950 white population, or 1.6 million residents, and Detroit lost nearly three-quarters, or 1.1 million residents. Meanwhile, undermined by urban renewal, highway construction, and finally the outward movement of the Black middle class, the solid Black neighborhoods of the immediate postwar years became a thing of the past.

The rapid turnover of urban neighborhoods, compounded by the pernicious effects of blockbusting, was itself destabilizing, but a further destructive change was the creation of a seemingly permanent reservoir of vacant houses in cities such as Detroit and Philadelphia. While there are many reasons for the rise in vacant properties in the heart of America’s older cities, it all begins with a simple arithmetical reality: during the “urban crisis” years from 1950 to 1980, far more people left the cities than came in (figure 4.5). Leah Boustan calculated that “each black arrival led to 2.7 white departures.”34

Figure 4.4: A graph showing the percentage of change in the white population in seven cities for the 1950s, 1960s and 1970s, demonstrating that the percentage decline increased each decade relative to the preceding one.

FIGURE 4.4.    Percent change in white population in major cities by decade, 1950–1980

(Authors’ work based on decennial census data)

Cities in the 1950s were overcrowded, with few vacant homes available to buy or apartments to rent. By 1960, however, the construction of new housing had relieved the housing shortage, and an adequate supply of vacant homes was now available for home seekers. After 1960, continued net out-migration began to create a glut of urban housing. This was a new urban phenomenon. From the earliest years of American urban history through the 1950s, slum housing may have been dilapidated or unsafe but was always occupied. For the first time in American urban history, the excess of vacant urban housing became a major public issue.

Housing vacancy is a product of urban decline, but it is also a cause of decline, creating spillover effects that extend into the surrounding area and can trigger reinforcing loops of continuing abandonment. Strong links between vacant properties and negative neighborhood effects are well established. Two studies of vacant properties in Philadelphia, conducted nearly a decade apart, came to similar conclusions: the later study found that the presence of a vacant property could reduce the value of properties on the same block by up to 20 percent, while the earlier study found a reduction of $3,500 to $7,500.35

Vacant properties are also associated with crime, violence, and health problems. A recent study in Philadelphia found a strong relationship between the presence and number of vacant properties and reported aggravated assaults on the same block, with the risk of violence increasing as the number of vacant properties rose.36 A comprehensive review of research on health conditions found that vacant lots and abandoned buildings could negatively affect mental health and rates of chronic illness, sexually transmitted diseases, stunted brain and physical development in children, and unhealthy eating and exercise habits.37 Studies have also linked abandoned buildings and vacant lots to “the breakdown in social capital—crucial to a community’s ability to organize and advocate for itself.”38

Figure 4.5: A graph depicting relative Black in-migration and white out-migration in seven cities between 1940 and 1980, showing that in each city white out-migration significantly exceeded Black in-migration.

FIGURE 4.5.    Black in-migration and white out-migration in selected cities, 1940–1980

(Authors’ work based on decennial census data)

Although the urban renewal program was not formally abolished until 1974, it was fading by the mid-1960s, stung by public opposition and the growing evidence of its harmful effects and its failure to slow, let alone reverse, the tide of urban decline. With the creation of the new Department of Housing and Urban Development (HUD) in 1966, federal policy began to shift. In a reversal of the policies that guided the FHA for decades, Phillip Brownstein, the new undersecretary of HUD in charge of the FHA, informed FHA staff that “stimulating a flow of mortgage funds into the inner city, yes even into the slums, for the transfer of houses, for rehabilitation, and for new construction, is an FHA mission of the highest priority” and that a home loan application “should not be rejected simply because it involves poor people, or because it is in a portion of the city you have been accustomed to rejecting or red-lining for old-fashioned, arbitrary reasons.”39 From then through the early 1970s the federal government created an array of new programs to improve housing and neighborhood conditions, largely aimed at lower-income urban communities. The most important of these programs are shown in table 4.2.

Except for the Model Cities program and the modest Section 312 program, these initiatives were all means-tested programs designed either to improve housing conditions or to deliver social services to lower-income households. While those are important social policy goals, they were unlikely to improve neighborhoods. Means-tested housing projects, with some exceptions, have a poor track record of improving the neighborhoods in which they are built, while programs that raise the economic level of people living in distressed neighborhoods as often as not lead them to move out of those neighborhoods rather than improving the neighborhoods themselves.

Of these programs, the Section 235 program is most widely recognized to have done more harm than good to many urban neighborhoods. Few federal housing programs have had better intentions, poorer design, and worse execution. Although its goal of turning a million low-income families into homeowners was admirable, the program was initiated with little awareness of the many pitfalls of such a strategy and implemented by FHA offices under pressure for quick results and with little preparation for the job, often by the same people who only a few years earlier had been enforcing racial segregation in suburban subdivisions and denying loans in low-income communities of color. The program collapsed in the early 1970s under the weight of massive defaults and widespread misrepresentation and fraud by brokers, appraisers, contractors, and FHA officials, leaving behind a trail of nearly one hundred thousand empty houses and a host of destabilized neighborhoods.40 This outcome was far more than unfortunate, since a well-designed, well-executed program to expand lower-income homeownership at that point might well have stabilized many neighborhoods that subsequently declined.

The Model Cities program was meant to be very different. Seen by Weaver as the centerpiece of HUD’s new direction, it sought to link physical improvements, social services, antipoverty initiatives, and community participation in targeted neighborhoods in carefully selected cities. The program was hobbled from the beginning, however, by a mismatch between intentions and reality. It was initially envisioned as a narrowly targeted, carefully coordinated program to direct significant new HUD resources along with funds from existing programs in HUD and other federal agencies into neighborhoods in no more than 50 cities. Mayoral pressures, however, led to the program expanding from 50 to 150 neighborhoods, while a lukewarm Congress cut the program’s appropriation. Once begun, conflicts over strategy and priorities and the inherent conflict between the two pillars of community participation and centralized coordination of multiple programs magnified the disparity between the program’s ambitions and its limited reach. As two contemporary observers noted, “the contradictions between the strategies, inherent in the legislation, were not resolved in the regulations. Instead, they were crystallized, and the mixture of myth and truth in each strategy, as well as the contradictions, went unchallenged.”41 The Model Cities program was quietly closed down in 1974 after five years of operation.

The Model Cities program clearly failed in its larger goals of neighborhood transformation despite the possibility of individual success stories such as the South Bronx, where some credit the program with having jump-started that neighborhood’s revival.42 No systematic evaluation of the program’s outcomes was ever done, although a 1973 study commissioned by HUD found that while “Model Cities proposed to effect a significant change in the quality of life of selected American cities within the short span of five years[,] … [t]oday, some six years after initiation of the Model Cities program, it is clear that the goal of a significant improvement in the quality of life of selected urban neighborhoods has not yet been attained.”43 A recent University of Michigan student thesis compared the 1970–2000 trajectories of Model Cities neighborhoods in ten cities to surrounding neighborhoods, finding no meaningful difference.44 While no more than suggestive, this is consistent with our observations.

Given the Model Cities program’s limited resources and built-in conflicts and the powerful social and economic trends working against older cities and their neighborhoods, it may have been unreasonable to have expected significant results. The program, however, suffered from a more fundamental failure; namely, to the extent that it had an underlying basis in a theory of neighborhood change, that theory was fatally flawed. That flaw can be summed up succinctly. Rather than seeing neighborhoods as entities with their own distinct dynamics qua neighborhoods, the designers of the program saw them as simply the sum of the individual conditions of the people living in the neighborhood. In other words, they believed that if they could improve the conditions—health, education, housing, employment, and so forth—of the people living in the neighborhood, neighborhood revival would inevitably follow.

The Model Cities strategy was additive. For example, if improvements to the local public school and the construction of new low-income housing are each positive goods, then simultaneously providing both will be that much more beneficial, since not only will each deliver some benefit, but the (presumed) synergy between the two will also act as a multiplier of those benefits. While this may be true for individual beneficiaries, it is at best questionable in terms of its neighborhood impact. Neighborhood conditions are clearly affected by the social and economic condition of their residents, but they are far more than those conditions. The Model Cities program did not recognize any factors driving neighborhood change beyond the sum of the individual conditions of its residents. The role of market factors was not even considered, and while citizen participation was an important part of the process, it focused entirely on eliciting resident involvement in the design of housing and social programs and perhaps on energizing sluggish municipal bureaucracies rather than building community cohesion or social capital.

Those elements were not part of the program designers’ thinking, nor, as far as we can tell, were they raised by contemporary critics of the program. Neither protagonists nor critics possessed a theoretical framework or vocabulary that would have enabled them to understand neighborhood dynamics in ways that could inform strategies capable of altering neighborhood trajectories. Neither the work of the Chicago School nor that of Homer Hoyt nor neighborhood life cycle theory, with which many of those involved may have been familiar, was of any use. All were overwhelmingly deterministic, even fatalistic, in their premises, offering no insights about how to strengthen a struggling neighborhood by working with its residents rather than, as in the urban renewal model, clearing them out and replacing them with others.

In essence, the only model the planners of the Model Cities program had to work with was the social work model that grew out of the settlement house movement of the early twentieth century and from which the public housing advocates of the New Deal emerged. That model was based on the idea of individual uplift and on the premise that changes to individual conditions, or physical improvements to the environment, could change neighborhood-level social conditions, as epitomized by a widely distributed New Deal public housing poster (figure 4.6). It was not until the mid-1970s that new theories of neighborhood change began to emerge that offered a theoretical basis for community-based neighborhood revitalization.45

While the dominant urban narrative was one of decline amid growing poverty, unemployment, and vacant housing, a less visible yet parallel process of change was also taking place. Although white flight took a toll on the cities’ previously stable working-class and middle-class neighborhoods, they did not disappear. These neighborhoods, sometimes dubbed “middle neighborhoods,” continued to exist but were diminished in number and extent. Some were white ethnic neighborhoods such as The Hill, the Italian neighborhood in St. Louis. Often, however, they went through racial transformation but remained intact, as working-class and middle-class African American families seeking better housing conditions took advantage of the space left by white flight to leave the overcrowded, segregated Black ghettos and move into neighborhoods from which they had previously been excluded. These neighborhoods became and remained for decades thereafter Black neighborhoods of largely middle-class character. Such neighborhoods emerged in almost every large American city, including Lee-Harvard in Cleveland, Penrose in St. Louis, Overbrook in Philadelphia, and South Shore, the neighborhood where Michelle Obama grew up, in Chicago. They represent a far more important part of American neighborhood history than has been acknowledged. In chapter 11, we discuss their history and the daunting challenges they face today.

Morning in America and Continued Urban Decline

At the governmental level, the 1980s represented a fundamental change in course. While the erosion of government’s role as a would-be solver of social problems had already begun, it took on a far more prominent, ideologically charged role under Ronald Reagan. In his 1981 inaugural address Reagan delivered the famous line “Government is not the solution to our problem, government is the problem.” Under the previous Carter administration, the erosion of government activism had taken on an apologetic, almost sub-rosa character. To the extent that HUD evinced concern with urban conditions and neighborhood decline during the 1970s, that came to an end. The Reagan administration’s approach to urban policy, reflected in The President’s National Urban Policy Report of 1982, was that “urban America would improve and prosper only if the Reagan economic and federalism reforms succeeded. Thus, US urban policy, such as it is, exists only as derivative of these larger, more comprehensive domestic initiatives.”46 As Robert Beauregard writes, “through most of the 1980s the discourse on urban decline virtually disappeared. Dominant was revival, revitalization, renascence and rediscovery.”47

Figure 4.6: A New Deal poster showing a stylized figure of a man with a gun looming over apartment buildings and bearing the caption “Eliminate crime in the slums through housing.”

FIGURE 4.6.    New Deal poster: Housing as a driver of change in social conditions

(Source: Library of Congress)

Attention shifted from neighborhoods to downtowns and to a view of government as a facilitator for private investment rather than a force for social change. Intellectual ballast for that role was provided by Harvard political scientist Paul Peterson, who wrote in his influential 1981 book City Limits that “policies and programs can be said to be in the interest of cities whenever the policies or programs maintain or enhance the economic position, social prestige, or political power of the city, taken as a whole,” and that those policies should be “limited to those few which can plausibly be shown to be conducive to the community’s economic prosperity.”48

Fueled by generous depreciation allowances, investment flowed into downtowns after the end of the 1981–1982 recession. Glass-walled office buildings, malls, and waterfront festival marketplaces mixing retail, entertainment, and recreation, inspired by the Rouse Company’s successful projects in Boston and Baltimore, began to rise from the ground. Mayors, their cities still reeling from the fiscal shocks of the 1970s, became cheerleaders for developers rather than advocates for social and policy change.

The sight of gleaming new downtown towers and shopping malls obscured the fact that most of the nation’s older cities were still losing population. Despite the rhetoric of revival in the media discourse paralleling Reagan’s “morning in America” rhetoric, the overall trajectory of urban neighborhoods was still sharply downward, while early signs of decline were appearing in many of the suburbs that had been the promised land for so many only thirty years earlier. Despite the publicity given gentrification, a term that was popularized in the 1970s, it was vanishingly rare. Few neighborhoods that were already areas of concentrated poverty in the 1970s escaped poverty in the 1980s.

Crime and homicide rates increased during the 1980s—in some cities gradually and in others precipitously—in tandem with the crack epidemic. That increase was particularly great in Washington, D.C., as shown in figure 4.7, which also shows how significantly crime has dropped since the 1990s. Although crime increases affected entire cities, their effect was most palpable in high-poverty neighborhoods. As Patrick Sharkey, paraphrasing Elijah Anderson’s classic study of street life in Philadelphia’s ghetto, Code of the Street, wrote, “the dominant feature of public life in Philadelphia’s poorest neighborhoods was neither homelessness nor drug abuse nor prostitution; it was violence.”49 And as Ta-Nehisi Coates described in his memoir of growing up in Baltimore, “when crack hit Baltimore, civilization fell.”50

As Sharkey points out, “much of what we know about urban poverty is based on a set of classic studies carried out between the early 1980s and the mid-1990s, when violence was extremely high or rising quickly.”51 Coupled with the inevitable media attention, that violent era still conditions our perception of urban reality, although that reality has changed much for the better despite worrisome evidence of some reversal since the onset of the COVID-19 pandemic.52 The years of violence weakened the social networks that had previously existed in many low-income communities, further undermining Black families already destabilized by poverty. This led to stresses in many Black middle-class neighborhoods, rendering those neighborhoods less able to withstand the destructive pressures that would arise after 2000.

Figure 4.7: A graph of homicide rates in Washington, D.C., from 1960 to 2019 showing how they rose in the 1960s and again in the 1980s, peaking in the 1990s before dropping sharply from then to the 2010s.

FIGURE 4.7.    Homicide rates in Washington, D.C., 1960–2019

(Authors’ work based on Federal Bureau of Investigation Uniform Crime Reports data)

During the 1990s many things began to improve. The crack epidemic waned and with it much of the violence. The community development movement grew along with a new paradigm for neighborhood change, as we discuss in the next part of this chapter. More significantly, economic growth coupled with the expansion of the Earned Income Tax Credit raised the incomes of millions of lower-income families, while in many of America’s hardest-hit cities the income gap between white and Black households narrowed and poverty declined.

The urban renascence media euphoria of the early 1980s had given way by the 1990s to a more measured perspective, and the growing economic polarization and social disparities of the cities had begun to gain recognition. As the millennium neared, moreover, the beginnings of an actual “return to the city” that had been predicted since the early 1970s were becoming visible, harbingers of large-scale change that would become apparent in the next decade with powerful effects in many urban neighborhoods. The 1990s ended on a note of cautious optimism for the future of America’s older cities and their neighborhoods.

The Rise of a New Community Development System

Although federal funds were slashed under Reagan, CDCs continued to be established. The 1994 survey found that an additional 263 CDCs formed during the 1980s, more than doubling the total around the United States.53 More important than the absolute number of CDCs, though, was the emergence of a new multilayered community development support system. By the end of the 1980s, a collection of programs and initiatives had gelled into a new policy paradigm for neighborhood intervention, replacing the federally driven paradigm of the 1960s and 1970s. Instead of programs run by public agencies, the new policy paradigm created tools administered by different agencies that could be combined with each other by local actors in multiple ways.54 Three notable changes characterized the new neighborhood strategy framework: networked nonprofitization, privatization, and devolution. While this framework has shown its ability to tailor activities to local conditions and leverage private investment, its strengths come with many often-overlooked weaknesses. We will discuss this framework here and then pick up the discussion of CDCs and their role in neighborhood change later in chapter 9.

Networked Nonprofitization

Through the 1970s, neighborhood-based activities in American cities received direct government support that went to entities such as community action agencies created for the express purpose of receiving and spending federal funds and to other organizations, including many citywide or regional social service providers. The CDC, an independent neighborhood-based entity, was a newcomer to the neighborhood scene.

The 1980s saw CDCs not only take on the central role in neighborhood change but also become embedded in a national network, including the National Congress for Community Economic Development trade association and national and regional support organizations known as intermediaries. Given the simultaneous trends toward privatization and devolution, the intermediaries played and continue to play a key role in connecting community development organizations to one another and to critical resources while providing technical assistance and acting to varying degrees as a voice for community development organizations on the national scene.

The three principal national intermediaries are the Local Initiatives Support Corporation (LISC), Enterprise Community Partners, and NeighborWorks America. The first two, notably, were top-down initiatives driven by foundations and corporations. LISC was created in 1979 under the aegis of the Ford Foundation, while Enterprise Community Partners was established in 1982 by James Rouse, a wealthy shopping center developer, “to see that all low-income people have the opportunity for affordable housing and to move up and out of poverty.”55 NeighborWorks, by contrast, grew from local efforts, reflecting how community development has often advanced by sharing successful local innovations across the nation. The story of how it emerged is instructive.

Dorothy Richardson was a homeowner in Pittsburgh’s Central North Side neighborhood. In the mid-1960s, concerned about what she saw as her neighborhood’s decline, she began organizing her neighbors to put pressure on City Hall and on the city’s banks, which had largely written off her neighborhood. In 1968 she and her neighbors formed Neighborhood Housing Services (NHS) of Pittsburgh, raising $750,000 in grants, persuading local lenders to contribute to a loan fund for property improvements, and lobbying the city to enforce housing codes and improve neighborhood schools and parks. As described in a 1976 New York Times article, the NHS’s approach was “to contain and reverse neighborhood blight by throwing every known preservationist remedy into a small area on the edge of spreading deterioration.”56

The model that Richardson created through trial and error quickly caught on. In 1970 the Federal Home Loan Bank Board sponsored the Neighborhood Reinvestment Task Force, which promoted the NHS model of neighborhood-government-bank partnerships nationally. By 1975 there were forty-five NHS partnerships across the country, and in 1978 Congress created the Neighborhood Reinvestment Corporation, which became NeighborWorks in 2005, to support the burgeoning NHS network.

Intermediaries such as NeighborWorks work through local partnerships. LISC has established offices in thirty-five cities, where it supports local community development initiatives. NeighborWorks has 240 CDC partners in its NeighborWorks Network. Regional or local intermediaries also exist, of which one of the most notable is Cleveland Neighborhood Progress, funded by two major local foundations to complement the efforts of the national intermediaries. The central role of intermediaries reflects the pressures of the second element, privatization, in the neighborhood policy framework.

Privatization

Neighborhood policy prior to 1980 was largely a function of federal grant programs. To be sure, foundations played a role, most notably through the Ford Foundation’s 1961 Gray Areas program, which piloted many of the initiatives that became the War on Poverty. After 1980, however, the balance shifted to private actors. In some respects, this was the culmination of a long-term trend. Since the mid-1960s the public housing program had been de-emphasized in favor of programs through which low-income housing was built by for-profit or nonprofit developers, including the 1974 Section 8 program, under which low-income tenants received vouchers to live in privately owned rental housing. Still, all these programs were based on funds appropriated by Congress and administered by the federal government.

A major shift toward the privatization of public resources was the increasing substitution of tax incentives for direct grants, reflected most visibly in the Low-Income Housing Tax Credit (LIHTC) program, the single most important vehicle for developing affordable rental housing in the United States since its enactment in 1986. The LIHTC finances roughly 100,000 housing units each year, for a total of 3.23 million units by 2018.57 The LIHTC program raises private equity for the construction or rehabilitation of low-income rental housing by providing credits to wealthy investors and large corporations, which they can use to offset their federal tax obligations. The investors receive tax credits for ten years, while the housing must be occupied by low-income households for fifteen years.58

While LIHTC is a tool for building affordable housing rather than a neighborhood revitalization strategy, it has become widely used by CDCs in part because the program allows them to collect generous developer fees they can use to fund other operations. Their dependency on the program, however, has problematic implications. The design of the LIHTC program incentivizes siting developments in high-poverty neighborhoods, often areas with a surplus of private housing renting at levels similar to or even below the rents in LIHTC housing, while disincentivizing mixed-income developments.

From an institutional perspective, LIHTC is radically different from the traditional federal grant program. The LIHTC program exists not as an appropriation but as language in the Internal Revenue Code. Responsibility for allocating the credits is delegated to state housing agencies, which receive allocations based on their state’s population. Once a developer has received an allocation for a project from the state, the developer must find an investor to buy the credits by investing equity in the project. Since the equity that can be raised through sale of the credits is often not enough to cover the full cost, the developer may also have to raise additional grant or loan funds from lenders, foundations, or state and local governments. Perhaps the most important role of both LISC and Enterprise Community Partners as intermediaries has been that of matchmakers between nonprofit developers and large corporate buyers of LIHTC credits.
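The arithmetic behind this allocation-to-equity flow can be sketched with a simple calculation. All figures below (project cost, qualified basis, credit price) are hypothetical and invented for illustration; actual deals involve qualified-basis rules, basis boosts, and credit pricing that vary by market and year.

```python
# Hypothetical, simplified sketch of how a "9 percent" LIHTC deal raises
# equity. Every dollar figure here is invented for illustration only.

total_development_cost = 12_000_000   # hypothetical total project cost
qualified_basis = 10_000_000          # hypothetical portion eligible for credits
annual_credit_rate = 0.09             # the "9 percent" credit
credit_period_years = 10              # investors claim credits over ten years

annual_credits = qualified_basis * annual_credit_rate        # 900,000 per year
total_credits = annual_credits * credit_period_years         # 9,000,000 in all

# Investors buy the ten-year credit stream at a discount (a price per
# credit dollar), since the credits are claimed over time.
price_per_credit_dollar = 0.90        # hypothetical market price
equity_raised = total_credits * price_per_credit_dollar      # about 8,100,000

# The equity rarely covers the full development cost, so the developer
# must fill the remaining gap with loans, grants, or local subsidies.
financing_gap = total_development_cost - equity_raised       # about 3,900,000

print(f"Total credits:  ${total_credits:,.0f}")
print(f"Equity raised:  ${equity_raised:,.0f}")
print(f"Financing gap:  ${financing_gap:,.0f}")
```

The sketch also makes the efficiency critique discussed below concrete: the treasury forgoes the full face value of the credits, but only the discounted equity, less transaction fees, reaches the project.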

Federal tax credit programs have proliferated, as can be seen in table 4.3. All of these programs have two clear downsides. First, they are inherently less efficient: they cost more than direct grants in terms of federal treasury outlays relative to the amount of funds that reach the housing project or neighborhood improvement activity, since they must offer private investors enough profit to entice them to participate. Second, they are more complex and challenging to implement than grant programs. As a result, projects financed with tax credits are burdened not only with investor profits but also with large fees to the lawyers, accountants, and consultants who assemble the complex deals, as well as to the intermediaries themselves. Housing advocate Chester Hartman described these programs as “feeding the sparrows by feeding the horses.”59 In the final analysis, the best argument that can be made in favor of tax credits over grant appropriations is that they shelter the program from the annual appropriation process, keeping it going despite unfavorable shifts in political winds. While that is true enough and important to the program’s beneficiaries, it is questionable as a basis for public policy.

Until recently, however, each tax credit program was targeted to a specific and beneficial social or economic objective, and one can argue that despite weaknesses in these programs, their overall effect has generally been positive. One cannot say that of the most recent tax credit, the Opportunity Zone program, which was part of the Federal Tax Cuts and Jobs Act of 2017. Individuals and firms that invest in projects in Qualified Opportunity Zones, which are largely low-income areas but include some contiguous non–low-income areas, receive significant breaks on capital gains taxes, substantially increasing the return on their investment.60

The attractiveness of the Opportunity Zone program to investors is not in doubt. The benefit to the affected neighborhood is more questionable. While almost a third of the residents of opportunity zones are below the poverty level, many of the zones have clearly been designated by the states as places where significant investment opportunities exist and where the presence of low-income people is at most incidental, as in one zone in the heart of downtown Portland, Oregon, and an oceanfront zone in West Palm Beach, Florida, home to a superyacht marina.61 Moreover, there is no requirement in the law that the investment benefit low-income households; indeed, the law does not even require public reporting of Opportunity Zone investments. A program designed to direct investment into low-income communities with no accountability and no provisions to ensure that the communities and their residents get any benefit from the investment is hardly likely to benefit distressed neighborhoods or their residents.

Devolution

Finally, policymaking toward neighborhoods has also been decentralized. Devolution, the transfer of public-sector decision making from the federal government to states and localities, has become the norm. Categorical or project grants have been largely replaced by block grants, such as the Community Development Block Grant and HOME Partnerships, which cities receive based on a formula and which give local government discretion over how the funds are used within the broad parameters of federal law. While several grant programs were created by the Clinton administration, including Empowerment Zones and Homeownership Zones, they were one-shot programs rather than ongoing commitments. The only exception was HOPE VI, a program designed to dismantle distressed public housing projects and replace them with privately developed and operated mixed-income housing.

An argument can be made that the federal pullback has motivated state and local governments to increase their role. There is some truth to this. As federal funds have declined, state and local spending has increased, although it is hard to tell by how much since published data conflates funds originating with state and local government with federal pass-through funds. Some states have enacted low-income housing and historic tax credit programs designed to piggyback on the respective federal programs. State and local housing trust funds have also proliferated, collectively generating almost a billion dollars from dedicated sources of revenue in 2016.62 Although seemingly a large amount, in terms of the total demands on those trust funds it is barely more than trivial.

All three trends reflect the rightward shift in national politics and the demands of a neoliberal economic model. Block grants are popular with local elected officials of all political stripes. As David Erickson put it, “giving local communities the tools to help themselves is wildly popular with liberals and conservatives alike.”63 Reflecting on the great variety of tax credit projects around the nation, Erickson concluded that “the decentralized program showed how it could be all things to all people, which helps explain why so many members of Congress—across the political spectrum, from large and small cities—found it an easy program to defend and promote.”64 Nonprofitization also has distinct political advantages, as conservatives can support CDCs as examples of do-it-yourself local self-reliance preferable to government handouts.

Clearly, the new community development policy system has political advantages. But is it good policy? In many respects, the community development field has converted a political necessity into a policy virtue. Its advocates argue that the new decentralized tool-based policy system is more flexible and adaptive than any top-down policy process could ever be. By moving from government to “governance” (voluntary collaboration), they argue, the new system can meld the advantages of the public, private, and nonprofit sectors while ensuring equity through federal regulations requiring that programs target low-income households and disadvantaged neighborhoods. The private sector fosters efficiency and consumer responsiveness, while the nonprofit sector brings civic engagement and local knowledge, ensuring that projects reflect neighborhood needs and concerns. The transition to the new paradigm has been compared to the shift from routinized manufacturing to flexible specialization.65 Instead of a cookie-cutter, mass-production approach to housing, as in the large public housing projects of the 1950s and 1960s, the new system emphasizes customized production adapted to the specific needs of each community.

Figure 4.8: A graph showing how many different participants, including community development corporations, intermediaries, banks, foundations, and local governments, affect neighborhoods through the community development system.

FIGURE 4.8.    Participants in the community development system

This decentralized tool-based system, which has been called network governance, requires coordinating a dizzying array of actors (figure 4.8). It raises the collective action problem: that is, how to voluntarily gain the cooperation of multiple self-interested actors to achieve a common objective. In the absence of governmental coercion, advocates argue that the collective action problem can be solved by the accumulation of social capital, or trust.66 As each project brings together different actors who work together to achieve success, over time successful collaboration builds greater trust and enhanced collaboration across the entire system. Shared norms about what goals should be pursued and how the work should be done emerge, while social capital constrains self-regarding behavior and channels action in public-regarding directions.

That is an idealized vision, but the tool-based system of network governance does have strengths. If, as we argue, neighborhoods can only be understood in the context of local and regional conditions, then decentralization of decision making is a rational strategy. Network governance offers the promise of treating neighborhoods as dynamic, multifaceted systems. Conceptually, it is difficult to argue against local community development systems based on collaboration and trust. Of course, collaboration and trust are often harder to achieve and sustain in practice than in principle.

Faith in the virtues of a decentralized, tool-based housing and community development system has become widespread, often framed in contrast to top-down, inflexible, bureaucratic government programs, which are frequently caricatured as the product of insensitive bureaucracies forcing unwelcome programs down the throats of resisting communities. But network governance has significant weaknesses. Above all, it has added a new layer of inequality. Cities and neighborhoods with sophisticated local community development networks are in a strong position to take advantage of the array of policy tools, while cities and neighborhoods lacking those networks, more often smaller cities and poor neighborhoods, are generally left out in the cold. In chapter 9 we revisit CDCs along with other key agents of neighborhood change in today's cities, exploring how well the system actually works for residents.
