Chapter 6. Sanctioned Violence, Morality, and Cultural Evolution
The thesis that moral beliefs change with changes in the practice of socially-sanctioned violence has ramifications not only for the feasibility of the abolition of war, but also for theories of morality, human nature, and society more generally. This chapter explores some of those ramifications. The purpose is not so much to try to make a case for a particular view of the larger issues as to illuminate the thesis itself. For if the larger views which are implied by the thesis and tend to support it do not seem plausible, that would cast doubt on the thesis.
The material is organized in three main sections. Section 6.2 explores the relationship between the perceived utility of certain forms of violence and the degree of complexity of the cultures in which they are practiced. I argue that sanctioned forms of violence are associated with particular problems in a given form of social organization; and that the moral beliefs which justify given forms of socially-sanctioned violence concern not merely the actual or perceived utility of the practice as a means to certain ends, but the perceived significance of the ends themselves. The social goals that are perceived as vital in a given culture are always embedded in a world view which gives those goals value and meaning. World views, in turn, are both a product and a determinant of the forms of political, social, and economic organization.140
Section 6.3 presents an hypothesis about cultural evolution which explains the rise and potential demise of war and other forms of socially-sanctioned large-group violence as a function of priorities among human needs. The hypothesis is that, given limited life expectancy in simple hunter-gatherer societies, people accepted a coercive, hierarchical form of social organization which, by increasing the food supply, fostered population growth and longevity at the price of a reduction in individual dignity and autonomy and an increase in human-on-human violence; but once a certain degree of physical security had been achieved, priorities switched back to favoring egalitarian forms of social organization which foster dignity, autonomy, and peace, as well as physical security. This model helps reconcile the intuitively plausible but conflicting propositions that beliefs about acceptable forms of violence vary with culture and that there exists a universal moral objection to deliberate violence by some people against others. In addition, by underscoring that recognition of the dignity and autonomy of the individual is the main issue addressed by democratic forms of social organization, it supports the thesis that the spread of democratic institutions is likely to lead to the abolition of war.
Finally, in Section 6.4, I conclude with a brief discussion of the prospects for the abolition of all forms of socially-sanctioned violence.
6.2 The Pattern of Successive Forms of Socially-Sanctioned Violence
Two quite different issues are posed by the historical sequence of successive forms of socially-sanctioned violence, each highly correlated with a particular stage (or stages) in the emergence of increasingly large societies with increasingly complex forms of political and economic organization. The first issue concerns particulars: Why did specific forms of socially-sanctioned violence arise in many different places and times independently, in the context of specific forms of social organization (which themselves arose in different places and times independently)? The second issue is more general: The historical pattern of successive forms of institutionalized violence suggests that even if war ends, socially-sanctioned, institutionalized violence will never end; only its form will change. This inference is discussed in Section 6.4.
The Forms of Social Organization and the Forms of Sanctioned Violence
One interpretation of the most widely-practiced forms of socially-sanctioned violence and violation, which developed independently in various parts of the world, is that they emerged at points of vulnerability typical of the forms of social organization in which they arose—that is, around threats which a given form of organization did not protect against, or around the potential for self-interested human exploitation of mechanisms that provided a social benefit.
Ritual cannibalism: The frequency of ritual cannibalism among simple societies, and its rarity among more complex societies, is, I believe, related to the greater intensity of fears about “death and the regeneration of life” among simple societies, combined with the focus among such societies on food-gathering and eating as the predominant necessity and the most structured activity of life.141 In the great majority of cases, the purpose of rituals involving the ingestion of symbolic body parts of a deceased person, whether a relative who is mourned or a stranger who is feared, is to control the impact of the death on both the dead and the living.
Ritual cannibalism differs from most forms of socially-sanctioned violence in that the action need not involve violence toward a living being (although lethal violence is part of the ritual in most cases of exo-cannibalism): the traumatic event is, instead, the violation of the body (and, implicitly, the individual) inherent in treating the flesh of the deceased like food. The violation of human dignity entailed in such a process is underscored if we contrast ritual cannibal practices with modern medical practices in which the body parts of some individuals (the blood and some organs) are ‘consumed’ by others in the form of blood transfusions and organ transplants. These procedures, which involve the same body parts that are involved in cannibal rituals, do not appear to the modern mind as ‘cannibal’ practices for one reason: the ‘consumption’ of organs is not through the mouth and alimentary tract, like food, but through surgical means long associated with healing and wholeness.
In the simple societies with cannibal rituals, as much as the practitioners may have been loath to ingest some part of a dead human body, the act had a symbolic meaning which transcended its physical nature. In embracing the fearful, conquering it by making it intimately familiar, the practitioners inflicted as much violation and degradation on themselves as on their victims. By making themselves one with the victim, they eliminated or blurred the most terrifying aspect of death: the disappearance of the living into the unknown.
Later, as the scale and complexity of social organization grew, death must have become a less traumatic feature of life. Unlike the death of an individual in a group of 25 persons, the death of an individual in a society of many thousands or millions would affect those who knew the deceased but not the society as a whole. With growth in the scale, complexity, and anonymity of society, the shadow of death must have shrunk and, by comparison, much more energy must have been given to the trials and tribulations of life in a complex society. Once this happened, ritual cannibalism would no longer be needed as part of the routine of daily life as a means of vitiating the terror of death: the split between the personal and the public imposed by the large scale of social organization would have already eliminated death’s worst threat, which in simple societies would be the shrinking of the group below the scale of viability.
Religious human sacrifice: Human sacrifice as a form of socially-sanctioned violence is extremely rare among simple societies, which do not have anthropomorphic ‘gods’ that would require or receive sacrifices. (In place of gods, the spiritual concepts and origin myths of simple societies are generally organized around spirits that are immanent in the natural world, that is, in certain animals or plants, in the sky, the wind, the sun and moon, the earth, and so on. These spirits have great power, but they tend not to be omnipotent anthropomorphic beings like the later gods.)
Among the early agrarian states which created the first cities, built monumental architecture, and had pantheons of gods who controlled the vicissitudes of life, human sacrifice was universal. In most cases, the purpose of religious human sacrifice was to ensure the fertility of the land and the bounty of the crops, or to seek relief from a plague, drought, or other calamitous natural condition. We can infer from this correlation that the critical vulnerabilities of agrarian society were the susceptibility of farming to fail catastrophically in times of bad weather, causing devastating famines, and the susceptibility of concentrated populations to decimating diseases. Since the weather and plagues were believed to be inflicted or controlled by the gods, trying to keep the gods happy with gifts and sacrifices of society’s most valued possessions was the best that could be done to ward off these dangers. Like ritual cannibalism, human sacrifice was perceived as a necessary evil; but in a mirror-image reversal of the cannibal practitioners’ violation of themselves by embracing their worst nightmares, the victim of religious human sacrifice was endowed with godlike qualities, as was required for communication between man and god. In virtually every culture, the very act of being sacrificed made the victim a god. In addition, in many cultures, the individual being prepared for sacrifice was treated as having already become godlike, or as the most revered person in the community.
Slavery, war, and burial sacrifice: The spiritual or religious transforming quality of human sacrifice and ritual cannibal practices, and the limited degree of violence in each, distinguish those practices from the coinciding and later continuing practices of slavery and war, which involved massive violence and violation, and which did not involve any transforming symbolic meaning that bound victim and perpetrator together in a shared experience. The same can be said of the burial sacrifices of the wives, concubines, servants, soldiers, and other retainers of very highly-placed individuals, though on a smaller scale.142 Slavery and war were first institutionalized as large-scale, routinized instrumental means to social ends at about the same time that human sacrifice was introduced; and like human sacrifice, slavery and war both arose in places where the mobile subsistence economy had given way to fixed habitats, agriculture, the accumulation of wealth, a high degree of differentiation in the specialization of labor, and an extremely hierarchical form of social organization, with a king-god at the top, in some cases, and a mass of slaves at the bottom.
On the one hand, war and preparations for war must have been established as a means of defending territory and wealth from marauding bandits. Indeed, the hierarchical power structure of the early state, and, in particular, the extreme concentration of power in the hands of a divinely invested king, must have arisen at least in part specifically as a means of giving farmers an effective defense against thieves. On the other hand, the war-based conquests involved in the creation of virtually all of the early states suggest that the benefits of protection and economies of scale were only part of the motivation for early state formation: it is likely that the rest of the motivation lay in the desire for wealth and power among the chief warriors and rulers. Similarly, the establishment of large-scale institutionalized slavery cannot be explained away as an essentially defensive reaction to a pressing social vulnerability: on the contrary, the creation of institutionalized slavery must have been a direct and immediate perversion of the concept of the accumulation of wealth, in which human beings became an owned form of wealth and human slave labor a means to wealth.
Of the three practices introduced at the time of the earliest states, human sacrifice ended the soonest, at least as a socially-sanctioned practice. Slavery (or serfdom) and war, however, continued into modern times as means of securing and protecting wealth. Only when an individual-centered view of the world, according equal dignity and worth to every human being, made slavery and serfdom appalling did the legal institutions perpetuating them end.
In the case of war, many observers have argued that the nature of wealth and power has changed so much during the 20th century that territorially-based warfare is increasingly irrelevant to obtaining or protecting either one.143 Modern societies and their great wealth are vulnerable to many forms of destruction and disruption; but their complexity, vulnerability, vast lines of communication, and global economic interdependence make them difficult if not impossible to seize and exploit militarily. Slowly but steadily, the means of coercion and exploitation by some people over others have shifted from military to economic, contributing to the growing sense that apart from defense and humanitarian intervention, war is neither an effective nor an appropriate means to any end.
6.3 Directed Cultural Evolution and Priorities Among Human Needs
The ultimate question about war is, Why war? Why would human beings invent such an awful practice, and, having experienced its horrors, repeat it over and over again for thousands of years? This question is reciprocal to the main subject of this essay: Having invented and practiced war, how might we end it? Addressing these questions, this section suggests that the rise and potential demise of war can be understood as part of a larger pattern of cultural evolution directed toward meeting a range of basic human needs.144
Costs and Benefits of the Shift from Small Hunter-gatherer Groups to Large Warring States145
Some of the hunter-gatherer cultures that have survived into modern times have practices which resemble war in some respects: two groups prepare for an armed confrontation; they identify each other in denigrating terms, often referring to each other as subhuman or non-human; they engage in an armed confrontation; and they retreat. Most such practices, however, have other features that make them unlike war: the participants do not kill or injure one another; the confrontation consists mainly of displays of ferocity and shouting; and the confrontation ends as soon as any injury has occurred on either side (if it has not already ended).
The simplest hunter-gatherer bands are mobile groups typically comprising 15–25 individuals, making up several families or an extended family. They have no domesticated animals or horticulture; they carry everything they own when they abandon their base camp after the food in an area is depleted; and they have few personal possessions. This means that they do not own the wealth or other natural resources, such as land, that are typically the object of war. Warlike rituals and, in some cases, raids by one group against another have three main purposes, none of which requires the mass killing or control of territory that are the defining features of war: first, distancing and frightening the unknown ‘other’ (that is, the group living nearest at hand); second, acquiring soul stuff; and third, winning renown and prestige for the display of courage in a dangerous situation. Warfare by a group with a growing population, aimed at seizing a certain area by systematically killing or driving off those who are currently using it, is not unknown; but this true warfare generally does not occur until a culture has domesticated plants or animals and grown to comprise more than one local band (in which case, it may be called a tribe, or, if it is still larger and more complex, a chiefdom).
War was not institutionalized in the form we know until the development of early agricultural societies, in which people began to store food surpluses against future times of need, and thereby created wealth which others could steal. Later, as agrarian societies became larger and more complex, with large cities, specialization of labor, and the accumulation of wealth in the form of manufactured goods, the object of war turned from merely seizing food to seizing non-perishable forms of wealth, and, finally, seizing and incorporating into an existing polity the land and the people that generate the surplus wealth.
The correlation between the development of warfare as a social institution and the rise of large, complex societies capable of accumulating great wealth suggests the following hypothesis about the direction of cultural evolution: Economies which generated food surpluses and, through the specialized labor permitted by these surpluses, non-perishable forms of wealth must have offered some benefit to the mass of individuals at the bottom of their hierarchical structures—a benefit that was worth the loss of autonomy and the increase in sanctioned and non-sanctioned violence which they experienced. Because the price was steep, and because throughout the world population size and longevity have increased as a function of the growing complexity of society, it is reasonable to assume that the benefit was increased life expectancy for oneself, one’s spouse, and one’s children. This interpretation leans toward the ‘population push’ view of the rise of complex societies because it suggests that the desire for better survival conditions for more people lay behind the development of new forms of technology and social organization, which increased per capita productivity.146
One argument against this thesis is that the hunter-gatherer societies that survived into modern times generally gather enough food to meet the group’s needs in a few hours of work per person per week, leaving the remaining time for leisure activities. Life in these groups seems to have been healthy, happy, diverse, and fulfilling (at least until the introduction of previously unknown diseases by foreigners), suggesting that people may have been both materially and emotionally better off in hunter-gatherer groups than in later complex societies.
I speculate, however, that the hunter-gatherer groups which survived into modern times did so because they represented particularly successful adaptations to their ecological environments. In other cases, the gradual early expansion of the populations in one region or another must have pushed offshoot groups into borderline ecological environments—for example, where it was cold, water was in short supply, seasonal changes made the food supply unreliable, or disease was rampant. Such conditions would have made the hunting and gathering lifestyle much more stressful, with frequent hunger, malnutrition, or starvation, and low life expectancy. In those cases, the opportunity to secure a more predictable, less stressful, and longer life, with higher rates of survival into child-bearing years, would have been greatly desired.
There is substantial archaeological evidence to support the hypothesis that the desire to improve life expectancy was the motive for the development of agriculture and of large, complex societies. This evidence is reflected in the estimates of world population size and regional longevity shown in Table 6-1. Although all figures for population size before the last century, shown in part A, are tentative (and more speculative the further back they go), anthropologists, archaeologists, and demographers generally agree about the patterns of regional increase or decrease and the orders of magnitude involved. The estimates of longevity in the Eastern Mediterranean region from 30,000 BC to the present, shown in Table 6-1, part B, are based on the scientific analysis of skeletons excavated from areas where people of all ages were buried. For most of the sample years shown, 100 or more skeletons of each sex were available. Table 6-1, part C, gives life expectancies for European countries with comprehensive and carefully assessed written documents. The Mediterranean and European estimates are not directly comparable, because the Mediterranean data represent average age at death for those who had reached 15 (the beginning of child-bearing years), whereas the European estimates represent life expectancy at birth. If the Mediterranean data were rebased to life expectancy at birth, the longevity ages would fall by 5–10 years.
Table 6-1. Comparative historical estimates of world population size (in millions) and life expectancy, 40,000 BC to 1850 AD
Taken together, the estimates in Table 6-1 show the following overall relationship between stages of economic and social organization, on the one hand, and changes in population size and longevity, on the other. The first modern humans are estimated to have reached a population size of up to 5 million relatively early in the history of hunter-gatherers (35,000 years ago) and to have taken the next 30,000 years to double, reaching at most 10 million by the time of the introduction of agriculture. Life for early Homo sapiens sapiens must have been hard, since only half of all children survived to 15, and half of the remaining population was dead by about the age of 30. Disease and death in childbirth were important causes of early death. Until the medical advances of the last century, these factors kept life expectancy from rising higher than 50. Other factors which must have contributed to the low life expectancy among prehistoric hunter-gatherers were malnutrition (which lowered resistance to disease), starvation, wild animals (including poisonous snakes), severe weather, and accidents.
Though conditions may have been harsh, it is generally agreed that the low rate of overall population growth was due to relatively low birth rates as well as high death rates. Long breast-feeding led to ‘child spacing’ (intervals between pregnancies of 2–3 years), which was supplemented by abortion (induced in part by carrying infants while gathering food) and infanticide. In areas where survival was somewhat easier and the population did grow, population growth led to migration and dispersal, as new groups of 15–25 individuals broke off and moved further afield.
In some places, increased local population density did lead, though only gradually, to innovations that increased the productivity of a given area. These innovations included the earliest forms of horticulture, animal husbandry, and the coastal harvesting of fish and shellfish; and they were associated with the development of semi-sedentary groups of bands (tribes or chiefdoms).147
After the earliest settlements based on agriculture appeared in 8000–7000 BC, the world’s population grew by a factor of 5 in 2,000 years, from 10 million to 50 million (in absolute terms, 10 times the growth that occurred over the previous 30,000 years). During the early flowering of agriculture between 5000 BC and 1 AD, in the Middle East (including Egypt), the Indian subcontinent, and China, there was another fivefold increase in the world’s population, from 50 million to over 250 million.
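To make the contrast in growth rates concrete, the rounded estimates above imply roughly the following compound annual rates (a back-of-envelope illustration using the chapter’s own tentative figures, not independent data):

\[
r_{\text{forager}} \approx \left(\tfrac{10}{5}\right)^{1/30{,}000} - 1 \approx 0.002\% \text{ per year},
\qquad
r_{\text{agrarian}} \approx \left(\tfrac{50}{10}\right)^{1/2{,}000} - 1 \approx 0.08\% \text{ per year}.
\]

On these assumptions, the early agricultural era sustained population growth some thirty to forty times faster than the preceding hunter-gatherer era, although both rates remain minuscule by modern standards.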
During the same period (between 5000 BC and 600 BC), the quality of life for the average person in the Eastern Mediterranean improved so radically that life expectancy rose from 34 and 29 for men and women, respectively, to 45 and 36. Subsequently, however, this area suffered from over-population, urban crowding, lack of new conquered farm labor, a decline in nutrition, and rampant malaria and other diseases. In the first centuries AD, both the size of the population and life expectancy declined throughout the Eastern Mediterranean, from Rome to Carthage. Similarly, the Chinese population, which was the largest and densest in the world in 1 AD, suffered from disease, drought, and war, and declined over the period to 1000 AD. Over the same period, the emergence of the early agricultural city states of Central and South America brought substantial growth to the population there, which continued to grow until the 16th–17th centuries, when it was decimated by diseases introduced by Europeans.
In Europe, growth of industry and intensification of agriculture after the Renaissance led to a new spurt of population growth and, more important, a steady rise in life expectancy for the growing population. This is particularly clear in estimates for British peers and for the bourgeoisie of Geneva (for both of which unusually detailed and reliable figures are available), which show a remarkably close match in average life expectancies over the period from 1650 to 1850: both sets start in the low 30s, move by 1750 to the mid 40s, and by 1850 reach the high 50s. (Again, these figures are shifted down by 5–10 years compared with those for the Eastern Mediterranean because they include child mortality.) Sweden shows the same trend for its entire national population, lagged by about 100 years; for France, Britain, and the Netherlands, gains in life expectancy for the population as a whole lagged about 150 years behind those of the upper class. The differences for all of these populations horizontally across time and vertically across classes are largely attributable to reliable (lifelong) access to food in sufficient quantity and with sufficient diversity to meet essential nutritional needs. The important issue in modern Europe was not so much early death by starvation (although that did occur periodically throughout Europe), but weakened resistance to life-threatening diseases and injuries.
In sum, the introduction of large, hierarchically organized complex societies, in which a small minority of city-based rulers, priests, warriors, craftsmen, and servants were supported by a large majority of agricultural workers in the surrounding countryside, did create enough food to support both an increase in world population of between one and two orders of magnitude (from 10–50 million at the start to 300–800 million between 1 AD and 1750 AD), and an increase in average life expectancy at birth of 10–15 years, from the low to mid 20s to the high 30s. Within the boundaries of this general increase in quantity and quality of human life, there was, however, no steady growth trend in any part of the world. Instead, there were wide swings, in some cases enormous swings, in population size, and wide swings in life expectancy at different times in different regions, as a function of the decline or collapse of systems of agricultural intensification, due to pandemics, war, soil exhaustion, or political breakdown. Not until the industrial revolution in the 18th century did steady, substantially unbroken growth in both population size and life expectancy take hold in virtually all parts of the world. (Even now, setbacks can occur, as in the recent substantial decline in life expectancy in Russia, due to a combination of inadequate nutrition and the virtual collapse of the medical system.)
I have reviewed the evidence on population size and longevity at some length in order to underscore what I hypothesize was a powerful motivation for the mass of ordinary people at the bottom of the heap to put up with the violence, indignity, and inequity of hierarchically-structured agricultural and feudal societies and empires, instead of returning to (or remaining with) the simpler but more peaceful, dignified, and egalitarian lifestyle of the hunter-gatherers. This motivation was a better chance of survival and, on average, a better physical quality of life. It is likely that during periods of social collapse, as well as during plagues and famines, the quality of life and life expectancy in hierarchical societies were at least as bad as and sometimes worse than those of hunter-gatherers. But on the whole, this was not the case. In addition, the more cosmopolitan and diverse context of life in a large society would have been appealing even for those with very few resources. Markets, holidays, church services, and ceremonies would have all provided occasions for interaction with the larger society which replaced the small, insular groups of hunter-gatherers.
A Symmetrical Transition from Warring States to a Non-warring International System
My thesis about what may have originally made war acceptable as an institution has a corollary about what may make it unacceptable in the foreseeable future. The modern period has seen a transition very nearly opposite to the transition from small hunter-gatherer groups to large agricultural societies, in two important respects. First, over the last two centuries, in parallel with an unprecedented growth in world population size and extension of life expectancy, there has been the first fundamental change in social and political organization since the shift from ‘acephalous’ hunter-gatherer bands to hierarchical city states. The modern change, embodied in democratic institutions, has been toward asserting egalitarian values and establishing ways to increase the opportunity for the average person to influence decisions made on behalf of the entire society. The common thread in the political, social, and economic changes of the modern period is the rise in the perceived dignity and worth of the individual, at the expense of the hierarchically-structured world order of earlier times and, in particular, at the expense of the top ranks in that order. This has led to a devolution and dispersal of political power from the center to the periphery, reflected both in the establishment of popularly elected representative government within nation-states, and in the dismantling of colonial empires (and attitudes) among nation-states.
Second, the rise in the dignity and worth of the individual and the leveling of power, in turn, have been associated with a change in the role of violence in society. Hierarchically-structured polities have always been maintained through threats of the use of force by the ruling elites. The authority of the top ruler, which was generally very broad, was largely arbitrary: it was based not on merit or popular selection, but on inheritance backed up by wealth, with which the ruler purchased superior armed force to enforce his rule. A usurper with more powerful armed forces was generally recognized as the legitimate ruler as soon as he had demolished the fighting forces of his predecessor. This was not because people approved of armed take-over, but because the position of ruler was itself treated as a somewhat unpleasant necessity: someone has to do it, otherwise there will be anarchy and everyone will lose. The use of violence as a means to power was, thus, routinely practiced, recognized, and accepted.
The rise of an individual-centered view of the world has been associated with (and, I believe, caused) a declining tolerance for violence in general, and, specifically, the replacement of violence with verbal persuasion as the means of maintaining or changing a government and its policies.
Implicit in this interpretation of political and cultural developments over the past several centuries is a view of predominant individual priorities that determine the direction of cultural evolution. In contrast to the early change in social organization that introduced war—a change in which individuals sacrificed the autonomy, dignity, and non-violence of hunter-gatherer groups for the physical security and longevity offered by large agrarian states—the focus of contemporary social organization has been to move beyond mere physical security in ways that re-establish the long-lost conditions of autonomy, dignity, and non-violence.
Of course, these values are altered by the differences between the modern world and the small hunter-gatherer band or tribe. In small, self-sufficient bands, each individual’s autonomy and dignity and the group’s tendency to non-violence were practiced in a community in which each individual was well known (and probably related) to every other, and the welfare of each was of immediate concern to the others. In modern society, the situation is very nearly the reverse: the autonomy, dignity, and non-violence of each individual and of society as a whole exist within a framework in which people live at close quarters (in cities and towns) with innumerable strangers, and where the supporting roles of the community and the extended family are often modest. This difference in context gives the qualities of autonomy and dignity a different meaning. Earlier, the granting of autonomy and respect for the dignity of others were based on familiarity and love. Today, in towns where most people walk past or interact with scores or hundreds of strangers on a daily basis, recognizing the dignity and autonomy of others requires that each individual learn to apply the qualities of justice, fairness, tolerance, and reciprocity. Crime-ridden inner-city locales, the drug subculture, and the tendency of teenagers to rely on peers instead of parents for socialization all undermine this crucial learning process, producing uneven results.
In sum, in trying to account for the existence of destructive institutions, such as war and slavery, and to reconcile these institutions with our everyday sense of human nature, it is reasonable to infer that at a certain stage, people gave a higher priority to physical security and longevity than to autonomy, dignity, and non-violence; but that once a certain degree of security and life-length had been achieved, the quality of life was accorded an equal or greater priority, with stress on the qualities of dignity, autonomy, and freedom from war.
This view of the direction of cultural evolution in general, and of the causes of the rise and potential demise of war in particular, suggests that we should expect all forms of socially-sanctioned, institutionalized violence, not just war, to end in the foreseeable future. Similarly, so long as present trends continue, new forms of socially-sanctioned violence should not arise.148
The idea that the rekindling of egalitarian values and the creation of democratic institutions are likely to lead to the end of war has been challenged by scholars who point to the frequency with which democratic countries have been involved in war. It is, however, generally accepted that democracies tend not to go to war with each other; and this fact is central to the prospects for the spread of egalitarian values—specifically, the growth of respect for the dignity and worth of the individual—to lead to the end of war.
Since the American revolution, democratic nations have maintained a double standard regarding the use of force: they have supported the commitment to non-violence which is central to an egalitarian political system at home, while continuing to play the old game of power politics abroad. This suggests that the spread of democracy has not fostered commitment to defensive non-violence as a moral position on war, or else that this standard is not easily interpreted and applied in the international arena. The truth is, I believe, different and less discouraging to the prospects for the abolition of war.
In conflicts with non-democratic societies, the democracies have betrayed the standard of defensive non-violence in a way they are unlikely to do in conflicts with other democracies. In conflicts with non-democratic societies, the leaders of democratic societies have assumed that they are always on the defensive, and that their opponents are always the aggressors. Of course, political leaders always advance self-serving interpretations of questionable behavior in international politics; but in this case, objective aspects of the domestic politics and military alliances of the ‘communist’ countries have fostered this view. First, leaders and citizens in the democracies have assumed that governments that are oppressive at home will be prone to aggression abroad. Second, they have assumed that since non-democratic governments do not represent the will of their own people, trouncing them in an international conflict will help liberate an oppressed people—an action that can be interpreted as inherently defensive in nature. Third, during the bipolar era of the Cold War, political leaders in the democracies assumed that an overall policy of limiting the influence of the leading non-democratic nation, the former Soviet Union, justified giving armed support to non-democratic but capitalist governments or factions in smaller countries. The rationale was that nondefensive action on the local level—taking the side of repressive, autocratic governments—was an acceptable tactical move in the global strategic conflict with the Soviet Union, in which the democracies were (in their own view) on the defensive. Finally, democratic leaders assumed that governments which were undemocratic but capitalist were more likely to become democratic than those which were undemocratic and socialist.
From this set of a priori assumptions, democratic governments drew the conclusion that in any international or civil war, socialist governments and allies and clients of the former Soviet Union could reasonably be assumed to be the attackers and their opponents the defenders. For the large Western nations, particularly the United States, the effect of this blanket inference was to create a morally ambiguous public policy, in which the burden of decision on whether to use armed force in a given conflict rested on the old just war standard, while the public rhetoric justifying actual uses of armed force stressed the contemporary standard of defensive non-violence. The US debate on the Gulf War, held in Congress and in the press during the period between Iraq’s invasion of Kuwait (August 1990) and the US armed intervention to expel Iraqi troops (beginning in January 1991), provides an extraordinarily transparent view of the duality in Western grand strategy. In October 1990, President Bush floated the idea that the United States should use force to expel Iraq from Kuwait because an Iraqi stranglehold on Middle East oil would lead to economic setbacks in the United States and other Western nations, including a loss of jobs in the United States. This ‘just war’ reasoning was not popular with the public, however: in opinion polls, the majority of people expressed the view that economic costs would not justify armed intervention. As a result, by December 1990, the Bush administration had switched to an entirely different reason for intervention, one that was consistent with commitment to defensive non-violence and supported by a majority of the public: this was that to deter future cases of armed international aggression in the new and unstable post-Cold War world, the United States would have to take a principled stand against this early and blatant example. From the sequence of events, we can reasonably infer that protecting national economic interests was the primary motive for the use of force on the part of the decisionmakers; but taking a principled stand against international aggression, not advancing economic interests, was the primary motive for public acceptance of the use of force. The moral ambiguity that was common during the Cold War is now gradually giving way to the greater moral consistency to be expected in conflicts among democratic societies.
When both sides in a conflict have democratic governments, neither has a ready excuse to assume that all aggressiveness lies on one side and all defensiveness on the other. Given the democratic character of the opposing government, neither side’s leaders can justify war as fundamentally protective of the longer-term interests of the opponent’s people, or as essential to larger, longer-term goals for global peace and democracy. In place of the traditional ‘us and them’ mentality, the spread of democratic institutions creates an assumption of shared values, including the ultimate goal of developing an international society in which commitment to defensive non-violence is the rule. This assumption puts pressure on leaders on both sides in any conflict between democracies to identify non-violent means of conflict resolution.
Over the course of the 20th century, uses of armed force beyond national borders have been increasingly (albeit slowly) confined to situations that can be considered defensive or humanitarian. But as the standard of democratic commitment to non-violence has become stronger, the governments of the democracies have increasingly characterized ambiguous or self-interested uses of force as ‘defensive.’ The effect has been to corrupt the concept and language of ‘defense.’ The word ‘defense’ has become a portmanteau for any military action; used as an adjective, it routinely replaces the more neutral word ‘military,’ as in the phrase ‘defense budget.’ Ironically, even the name Department of Defense, which replaced ‘Department of War’ (a name in use since the country’s founding) at the close of World War II, was adopted at the very moment that the United States was becoming a great power, whose armed forces were intended for use mainly overseas, not for the defense of US territory.
Despite the corruption of the language of defense, the stringency with which the criterion of defensiveness constrains the use of force is likely to increase in direct proportion to the worldwide spread of democratic values and institutions. The governments of democratic societies, where most people believe that the use of force should be limited to defense, will tend not to initiate aggression. The constraint will be particularly powerful in conflicts with other democracies, where the populations on both sides will be extremely reluctant to support any government inclination to go to war. As democracy spreads and there are fewer conflicts whose treatment is amenable to the old just war standard, society’s sense of the relevance and appropriateness of this standard in any conflict will disappear.
6.4 The End of Socially-Sanctioned Forms of Violence
In illustrating the tremendous variability in socially-sanctioned forms of violence across time and culture, this essay documents successive forms of socially-sanctioned violence that stretch back for thousands of years, deep into prehistory. While appropriately conveying a sense that no particular form (such as war) is likely to be immutable, this list may also give the impression that for whatever reason, human societies have always been prone to invent and conduct sanctioned forms of group violence, and they are therefore likely to do so in future.
This argument is important not only in its general form, but also specifically in the context of the abolition of war, because for most people the idea that war cannot be abolished is probably only loosely and casually tied to particular obstacles to abolition. To the extent that this is so, references to ‘innate aggressiveness’ or other obstacles are equivalent to a throw-away phrase, such as ‘the human propensity for war, whatever that might be.’ In other words, for many people, the view that war cannot be abolished probably boils down to the following argument: ‘Since war or, if not war, some other form of group violence has been present throughout human history, sanctioned violence must not be susceptible to being abolished.’
This argument is based on an assumption contrary to modern thought: Since something has not happened in the past, it cannot happen in the future. Why would this contrary perspective be applied to war, when, in this century, people have done so many things which were previously not only undoable but virtually unthinkable, such as walking on the moon, flying, or receiving a new heart?
There are, I believe, three aspects of war (and, lurking behind war, of other not-yet-identified future forms of socially-sanctioned violence) which lend it a special aura of immutability: First, it is profoundly morally controversial, at once terrible and just. To say that war could end would mean to acknowledge or embrace the notion that war is (at least potentially) no longer controversial, no longer just, only terrible. People do not believe that war can end because they believe that some wars are just: this means that they do not whole-heartedly want it to end.
Second, war is haphazard. From time to time, regularities appear in the incidence of war, the conditions under which war breaks out, or the parties to war. But to the specialist as well as the lay person, war remains unpredictable. How can we expect an activity that is sporadic and unpredictable to end?
Third, war has long served as the apotheosis of human ills, as illustrated in the phrase ‘the scourge of war.’ For many people, the image suggested by the idea of ending war is probably something like creating a heaven on earth, a paradise in which there is no violence or iniquity and all people can live productive, happy, healthy lives. This is, of course, not the case. Ending war would not end any of life’s other mass woes: disease, poverty, starvation, malnutrition, illiteracy, greed, crime, unequal distribution of wealth, population explosion, environmental destruction, corruption, betrayal, and so on.
To make the idea of ending war conceivable, ‘war’ must be extricated from its larger-than-life symbolic meaning and scaled back to its ordinary, nasty self. If societies stopped a particular historically sanctioned practice of death and destruction, that would not mean that the individual human beings who make up those societies had become more altruistic, generous, truthful, rational, or far-seeing; it would mean only that societies had abandoned one more barbarous custom, along with the others abolished earlier.
This discussion constitutes part of my response to the view that because some form of socially-sanctioned violence has always been with us, some form always will be: specifically, I suggest that the claim itself, which entails no particular evidence or argument, represents nothing more than an amalgam of the various sources of resistance to thinking through the meaning and conditions of abolition.
In addition, however, there is a direct response to the concern about unending socially-sanctioned violence. The reason that we can expect to find that all forms of socially-sanctioned group violence are ending is suggested by my characterization of modern social organization as an effort to retrieve the autonomy, dignity, and freedom from violence that were put on the back burner, so to speak, during humanity’s long quest for physical security and longevity. If this characterization is correct, this effort is likely to produce a general rejection of all forms of socially-sanctioned violence as no longer justifiable or tolerable.
140 On this point see Habermas (1979) and Honneth and Joas (1988/1980).
141 This idea, in some form, is found in nearly all anthropological studies of the origins and significance of ritual cannibalism, and is illustrated by the titles of the following book-length works: The Mouth of Heaven: An Introduction to Kwakiutl Religious Thought (Goldman 1981); Cannibalisme et Immortalité: L’Enfant dans le Chaudron en Grèce Ancienne (Cannibalism and Immortality: The Child in the Cauldron in Ancient Greece) (Halm-Tisserant 1993); Vital Souls: Bororo Cosmology, Natural Symbolism, and Shamanism (Crocker 1985); and Divine Hunger: Cannibalism as a Cultural System (Sanday 1986). Death and the Regeneration of Life is the title of a collection of essays edited by Bloch and Parry (1982) which includes articles about ritual cannibalism and other mortuary rituals.
142 Not surprisingly, the number of victims in burial sacrifice was directly proportional to the wealth and power of the deceased: at one end of the spectrum, kings and emperors were buried with a huge retinue; at the other end, a wealthy farmer might be accompanied to the grave (or, in modern India, funeral pyre) by his wife.
143 See, for example, Mueller (1989), Luard (1986), and Keegan (1993).
144 In addition to Julian Steward (1955, 1977), this section draws on several general works on the theory of cultural evolution, that is, the theory of the features and causes of the development of successively larger and more complex societies over the course of human history. Approaching the subject from the perspectives of different schools of thought and different disciplines, these are: Boyd and Richerson (1985), Kroeber (1963), Lesser (1985), Bradie (1994), and Ingold (1988a, 1988b).
145 The discussion in this section of the differences between simple hunter-gatherer cultures and early agricultural states is based on three groups of scholarly sources: first, those which give overviews of prehistoric societies and modern-day hunter-gatherer societies, such as Clark (1969, 1970, 1977), Ingold et al. (1988a, 1988b), Leakey and Lewin (1977, 1979), Rouse (1972), and Wenke (1980); second, studies which review the factors involved in the rise of early states and the development of increasingly complex societies, notably Adams (1966), Fried (1967), Polgar (1975), Friedman and Rowlands (1978), Boserup (1981), Haas (1982), Sandars (1985), Haas et al. (1987), Johnson and Earle (1987), Cohen and Toland (1988), Kraus (1990), Upham (1990), and Lewellen (1992); and third, the ethnologies and histories of individual cultures and groups of cultures used as sources for Chapters 4 and 5 and the Appendix.
146 Ester Boserup’s seminal study Population and Technological Change (1981) makes a carefully argued case for the “population push” origin of larger and more complex societies. In a recent article, Netting (1990), while building on Boserup’s analysis, suggests that the rise of the early city states was less a product of the “intensification” of food production per worker through technological innovation than of the absolute size of the surrounding agricultural population in a given area. Like Boserup, Service stressed the benefits of early states as causes of their formation, including “increased production, a structure of redistribution which alleviates local disparities in resources..., and the rise of an...administrative structure which...would enhance and support the role of the chief, the court, and the priesthood, and ultimately would benefit the citizenry to a certain extent” (Yengoyan 1991).
147 Elman Service was a principal theorist of the characteristics of successively more complex forms of social organization, which he identified as band, tribe (a cluster of bands), chiefdom (a cluster of tribes), and primitive state (1971b). Many anthropologists have avoided the topic of the evolution of organizational scale and complexity on the grounds that it is hard to identify a set of well-defined features that are associated with any given form of organization between the small egalitarian band, with no hierarchically structured authority of any kind, and the complex early city-state. For the same reason, Service himself (1971a, p. 157) later settled on just three groupings, which he called egalitarian society, hierarchical society, and archaic civilization or classical empire.
148 On this point, two caveats must be added: First, the forms of institutionalized violence that arose in conjunction with new forms of organization in the past seem to have been located at weak points in the structure of society or the fabric of social thought, where sources of support were vulnerable to natural failure or human exploitation. Conceivably, new forms of institutionalized violence could arise in future in areas where the benefits of the new global interpenetration of finance, communications, and trade are susceptible to failure or interruption. Such a development would not necessarily negate the existence of a more general, longer-term trend toward the abolition of all forms of socially-sanctioned violence; it could merely stretch out the period over which the decline in tolerance for institutionalized violence takes place.
Second, there are signs that the over-arching project of the modern age—to make every family in every country healthy, wealthy, and wise, enfranchised, and violence-free—has not merely stalled but gone into reverse. The regressive trends include not merely a steadily rising rate of violent crime and a new surge of ethnic conflict, but also increased use of drugs; the rise of new global criminal networks for the acquisition and sale of drugs; the rise of unprotected early sex leading to growing rates of teenage pregnancy, venereal disease, and AIDS; declining national products and high unemployment rates in both rich and poor countries; and the conditions of anarchy or near-anarchy in many parts of the world.
The perspective presented in this essay does suggest that war could end in the foreseeable future despite these trends. The key point is that the likelihood of war is a function not of turmoil and fear, but of beliefs about the acceptability of war as a means of resolving problems, and the actual utility of war as a means of addressing the issues at hand. Apart from local failures of government, which invite local wars, none of the conditions described above is amenable to amelioration by resort to warfare. Going to war will not, for example, reduce teenage pregnancy, AIDS, or drug use, nor even restore employment and economic growth over the longer term.