Unfreezing the ice age: the truth about humanity’s deep past

Archaeological discoveries are shattering scholars’ long-held beliefs about how the earliest humans organised their societies – and hint at possibilities for our own

Tue 19 Oct 2021 01.00 EDT

In some ways, accounts of “human origins” play a similar role for us today as myth did for ancient Greeks or Polynesians. This is not to cast aspersions on the scientific rigour or value of these accounts. It is simply to observe that the two fulfil somewhat similar functions. If we think on a scale of, say, the last 3m years, there was, after all, a time when someone did have to light a fire, cook a meal or perform a marriage ceremony for the first time. We know these things happened. Still, we really don’t know how. It is very difficult to resist the temptation to make up stories about what might have happened: stories which necessarily reflect our own fears, desires, obsessions and concerns. As a result, such distant times can become a vast canvas for the working out of our collective fantasies.

Let’s take just one example. Back in the 1980s, there was a great deal of buzz about a “mitochondrial Eve”, the putative common ancestor of our entire species. Granted, no one was claiming to have actually found the physical remains of such an ancestor, but DNA sequencing demonstrated that such an Eve must have existed, perhaps as recently as 120,000 years ago. And while no one imagined we’d ever find Eve herself, the discovery of a variety of other fossil skulls in the Great Rift Valley in east Africa seemed to provide some idea of what Eve might have looked like and where she might have lived. While scientists continued debating the ins and outs, popular magazines were soon carrying stories about a modern counterpart to the Garden of Eden, the original incubator of humanity, the savanna-womb that gave life to us all.

Many of us probably still have something resembling this picture of human origins in our mind. More recent research, though, has shown it couldn’t possibly be accurate. In fact, biological anthropologists and geneticists are now converging on an entirely different picture. For most of our evolutionary history, we did indeed live in Africa – but not just the eastern savannas, as previously thought. Instead, our biological ancestors were distributed everywhere from Morocco to the Cape of Good Hope. Some of those populations remained isolated from one another for tens or even hundreds of thousands of years, cut off from their nearest relatives by deserts and rainforests. Strong regional traits developed, so that early human populations appear to have been far more physically diverse than modern humans. If we could travel back in time, this remote past would probably strike us as something more akin to a world inhabited by hobbits, giants and elves than anything we have direct experience of today, or in the more recent past.


Ancestral humans were not only quite different from one another; they also coexisted with smaller-brained, more ape-like species such as Homo naledi. What were these ancestral societies like? At this point, at least, we should be honest and admit that, for the most part, we don’t have the slightest idea. There’s only so much you can reconstruct from cranial remains and the occasional piece of knapped flint – which is basically all we have.

What we do know is that we are composite products of this original mosaic of human populations, which interacted with one another, interbred, drifted apart and came together mostly in ways we can still only guess at. It seems reasonable to assume that behaviours like mating and child-rearing practices, the presence or absence of dominance hierarchies or forms of language and proto-language must have varied at least as much as physical types, and probably far more.

Perhaps the only thing we can say with real certainty is that modern humans first appeared in Africa. When they began expanding out of Africa into Eurasia, they encountered other populations such as Neanderthals and Denisovans – less different, but still different – and these various groups interbred. Only after those other populations became extinct can we really begin talking about a single, human “us” inhabiting the planet. What all this brings home is just how radically different the social and physical world of our remote ancestors would have seemed to us – and this would have been true at least down to about 40,000BC. In other words, there is no “original” form of human society. Searching for one can only be a matter of myth-making.


Over recent decades, archaeological evidence has emerged that seems to completely defy our image of what scholars call the Upper Palaeolithic period (roughly 50,000-15,000BC). For a long time, it had been assumed that this was a world made up of tiny egalitarian forager bands. But the discovery of evidence of “princely” burials and grand communal buildings has undermined that image.

Rich hunter-gatherer burials have been found across much of western Eurasia, from the Dordogne to the Don. They include discoveries in rock shelters and open-air settlements. Some of the earliest come from sites like Sunghir in northern Russia and Dolni Vestonice in the Moravian basin, and date from between 34,000 and 26,000 years ago.

What we find here are not cemeteries but isolated burials of individuals or small groups, their bodies often placed in striking postures and decorated – in some cases, almost saturated – with ornaments. In the case of Sunghir that meant many thousands of beads, laboriously worked from mammoth ivory and fox teeth. Some of the most lavish costumes are from the conjoined burials of two boys, flanked by great lances made from straightened mammoth tusks.

Of similar antiquity is a group of cave burials unearthed on the coast of Liguria, near the border between Italy and France. Complete bodies of young or adult men were laid out in striking poses and covered in ornaments; one especially lavish interment is known to archaeologists as Il Principe (“the Prince”). Il Principe bears that name because he’s also buried with what looks to the modern eye like regalia: a flint sceptre, elk antler batons and an ornate headdress lovingly fashioned from perforated shells and deer teeth.

Another unexpected result of recent archaeological research, causing many to revise their view of prehistoric hunter-gatherers, is the appearance of monumental architecture. In Eurasia, the most famous examples are the stone temples of the Germus mountains, overlooking the Harran plain in south-east Turkey. In the 1990s, German archaeologists, working on the plain’s northern frontier, began uncovering extremely ancient remains at a place known locally as Gobekli Tepe. What they found has since come to be regarded as an evolutionary conundrum. The main source of puzzlement is a group of 20 megalithic enclosures, initially raised there around 9000BC, and then repeatedly modified over many centuries.

A megalithic enclosure at Gobekli Tepe in south-east Turkey

The enclosures at Gobekli Tepe are massive. They comprise great T-shaped pillars, some over 5 metres high and weighing up to 8 tonnes, which were hewn from the site’s limestone bedrock or nearby quarries. The pillars, at least 200 in total, were raised into sockets and linked by walls of rough stone. Each is a unique work of sculpture, carved with images from the world of dangerous carnivores and poisonous reptiles, as well as game species, waterfowl and small scavengers. Animal forms project from the rock in varying depths of relief: some hover coyly on the surface, others emerge boldly into three dimensions. These often nightmarish creatures follow divergent orientations, some marching to the horizon, others working their way down into the earth. In places, the pillar itself becomes a sort of standing body, with human-like limbs and clothing.

The creation of these remarkable buildings implies strictly coordinated activity on a really large scale. Who made them? While groups of humans not too far away had already begun cultivating crops at the time, to the best of our knowledge those who built Gobekli Tepe had not. Yes, they harvested and processed wild cereals and other plants in season, but there is no compelling reason to see them as “proto-farmers”, or to suggest they had any interest in orienting their livelihoods around the domestication of crops. Indeed, there was no particular reason why they should, given the availability of fruits, berries, nuts and edible wild fauna in their vicinity.

And while Gobekli Tepe has often been presented as an anomaly, there is in fact a great deal of evidence for monumental construction of different sorts among hunter-gatherers in earlier periods, extending back into the ice age.

In Europe, between 25,000 and 12,000 years ago, public works were already a feature of human habitation across an area reaching from Krakow to Kyiv. Research at the Russian site of Yudinovo suggests that “mammoth houses”, as they are often called, were not in fact dwellings at all, but monuments in the strict sense: carefully planned and constructed to commemorate the completion of a great mammoth hunt, using whatever durable parts remained once carcasses had been processed for their meat and hides. We are talking here about really staggering quantities of meat: for each structure (there were five at Yudinovo), there was enough mammoth to feed hundreds of people for around three months. Open-air settlements like Yudinovo, Mezhirich and Kostenki, where such mammoth monuments were erected, often became central places whose inhabitants exchanged amber, marine shells and animal pelts over impressive distances.

So what are we to make of all this evidence for princely burials, stone temples, mammoth monuments and bustling centres of trade and craft production, stretching back far into the ice age? What are they doing there, in a Palaeolithic world where – at least on some accounts – nothing much is ever supposed to have happened, and human societies can best be understood by analogy with troops of chimps or bonobos? Unsurprisingly, perhaps, some have responded by completely abandoning the idea of an egalitarian golden age, concluding instead that this must have been a society dominated by powerful leaders, even dynasties – and, therefore, that self-aggrandisement and coercive power have always been the enduring forces behind human social evolution. But this doesn’t really work either.

Evidence of institutional inequality in ice age societies, whether grand burials or monumental buildings, is sporadic. Richly costumed burials appear centuries, and often hundreds of miles, apart. Even if we put this down to the patchiness of the evidence, we still have to ask why the evidence is so patchy in the first place. After all, if any of these ice age “princes” had behaved like, say, bronze age (let alone Renaissance Italian) princes, we’d also be finding all the usual trappings of centralised power: fortifications, storehouses, palaces. Instead, over tens of thousands of years, we see monuments and magnificent burials, but little else to indicate the growth of ranked societies, let alone anything remotely resembling “states”.

To understand why the early record of human social life is patterned in this strange, staccato fashion we first have to do away with some lingering preconceptions about “primitive” mentalities.


In the late 19th and early 20th centuries, many in Europe and North America believed that “primitive” folk were not only incapable of political self-consciousness, they were not even capable of fully conscious thought on the individual level – or at least conscious thought worthy of the name. They argued that anyone classified as a “primitive” or “savage” operated with a “pre-logical mentality”, or lived in a mythological dreamworld. At best, they were mindless conformists, bound in the shackles of tradition; at worst, they were incapable of fully conscious, critical thought of any kind.

Nowadays, no reputable scholar would make such claims: everyone at least pays lip service to the psychic unity of mankind. But in practice, little has changed. Scholars still write as if those living in earlier stages of economic development, and especially those who are classified as “egalitarian”, can be treated as if they were literally all the same, living in some collective group-think: if human differences show up in any form – different “bands” being different from one another – it is only in the same way that bands of great apes might differ. Political self-consciousness among such people is seen as impossible.

And if certain hunter-gatherers turn out not to have been living perpetually in “bands” at all, but instead congregating to create grand landscape monuments, storing large quantities of preserved food and treating particular individuals like royalty, contemporary scholars are at best likely to place them in a new stage of development: they have moved up the scale from “simple” to “complex” hunter-gatherers, a step closer to agriculture and urban civilisation. But they are still caught in the same evolutionary straitjacket, their place in history defined by their mode of subsistence, and their role reduced to blindly enacting some abstract law of development which we understand but they do not. Certainly, it rarely occurs to anyone to ask what sort of worlds they thought they were trying to create.

Now, admittedly, this isn’t true of all scholars. Anthropologists who spend years talking to indigenous people in their own languages, and watching them argue with one another, tend to be well aware that even those who make their living hunting elephants or gathering lotus buds are just as sceptical, imaginative, thoughtful and capable of critical analysis as those who make their living by operating tractors, managing restaurants or chairing university departments.

French anthropologist Claude Levi-Strauss in the Brazilian Amazon, c1936

One of the few mid-20th-century anthropologists to take seriously the idea that early humans were our intellectual equals was Claude Levi-Strauss, who argued that mythological thought, rather than representing some sort of pre-logical haze, is better conceived as a kind of “neolithic science” as sophisticated as our own, just built on different principles. Less well known – but more relevant to the problems we are grappling with here – are some of his early writings on politics.

In 1944, Levi-Strauss published an essay about politics among the Nambikwara, a small population of part-time farmers, part-time foragers inhabiting a notoriously inhospitable stretch of savanna in north-west Mato Grosso, Brazil. The Nambikwara then had a reputation as extremely simple folk, given their very rudimentary material culture. For this reason, many treated them almost as a direct window on to the Palaeolithic. This, Levi-Strauss pointed out, was a mistake. People like the Nambikwara live in the shadow of the modern state, trading with farmers and city people and sometimes hiring themselves out as labourers. Some might even be descendants of runaways from cities or plantations.

For Levi-Strauss, what was especially instructive about the Nambikwara was that, for all that they were averse to competition, they did appoint chiefs to lead them. The very simplicity of the resulting arrangement, he felt, might expose “some basic functions” of political life that “remain hidden in more complex and elaborate systems of government”. Not only was the role of the chief socially and psychologically quite similar to that of a national politician or statesman in European society, he noted, it also attracted similar personality types: people who “unlike most of their companions, enjoy prestige for its own sake, feel a strong appeal to responsibility, and to whom the burden of public affairs brings its own reward”.

Modern politicians play the role of wheelers and dealers, brokering alliances or negotiating compromises between different constituencies or interest groups. In Nambikwara society this didn’t happen much, because there weren’t really many differences in wealth or status. However, chiefs did play an analogous role, brokering between two entirely different social and ethical systems, which existed at different times of year. During the rainy season, the Nambikwara occupied hilltop villages of several hundred people and practised horticulture; during the rest of the year they dispersed into small foraging bands. Chiefs made or lost their reputations by acting as heroic leaders during the “nomadic adventures” of the dry season, during which times they typically gave orders, resolved crises and behaved in what would at any other time be considered an unacceptably authoritarian manner. Then, in the rainy season, a time of much greater ease and abundance, they relied on those reputations to attract followers to settle around them in villages, where they employed only gentle persuasion and led by example to guide their followers in the construction of houses and tending of gardens. They cared for the sick and needy, mediated disputes and never imposed anything on anyone.

How should we think about these chiefs? They were not patriarchs, Levi-Strauss concluded; neither were they petty tyrants; and there was no sense in which they were invested with mystical powers. More than anything, they resembled modern politicians operating tiny embryonic welfare states, pooling resources and doling them out to those in need. What impressed Levi-Strauss above all was their political maturity. It was the chiefs’ skill in directing small bands of dry-season foragers, and in making snap decisions in crises (crossing a river, directing a hunt), that later qualified them to play the role of mediators and diplomats in the village plaza. And in doing so they were effectively moving back and forth, each year, between what evolutionary anthropologists insist on thinking of as totally different stages of social development: from hunter-gatherers to farmers and back again.

Nambikwara chiefs were in every sense self-conscious political actors, shifting between two different social systems with calm sophistication, all the while balancing a sense of personal ambition with the common good. What’s more, their flexibility and adaptability enabled them to take a distanced perspective on whichever system obtained at any given time.


Let’s return to those rich Upper Palaeolithic burials, so often interpreted as evidence for the emergence of “inequality”, or even hereditary nobility of some sort. For some odd reason, those who make such arguments never seem to notice that a quite remarkable number of these skeletons bear evidence of striking physical anomalies that could only have marked them out, clearly and dramatically, from their social surroundings. The adolescent boys in Sunghir and Dolni Vestonice had pronounced congenital disfigurements; other ancient burial sites have contained bodies that were unusually short or extremely tall.

It would be extremely surprising if this were a coincidence. In fact, it makes one wonder whether even those bodies, which appear from their skeletal remains to be anatomically typical, might have been equally striking in some other way; after all, an albino, for example, or an epileptic prophet would not be identifiable as such from the archaeological record. We can’t know much about the day-to-day lives of Palaeolithic individuals buried with rich grave goods, other than that they seem to have been as well fed and cared for as anybody else; but we can at least suggest they were seen as the ultimate individuals, about as different from their peers as it was possible to be.

A reconstruction of an Upper Palaeolithic mammoth hunter settlement at Dolni Vestonice in the Czech Republic

This suggests we might have to shelve any premature talk of the emergence of hereditary elites. For one thing, it seems very unlikely that Palaeolithic Europe produced a stratified elite that just happened to consist largely of hunchbacks, giants and dwarves. For another, we don’t know how much the treatment of such individuals after death had to do with their treatment in life. A further important point is that we are not dealing with a case of some people being buried with rich grave goods and others being buried with none. The very practice of burying bodies intact, and clothed, appears to have been exceptional in the Upper Palaeolithic. Most corpses were treated in completely different ways: de-fleshed, broken up, curated, or even processed into jewellery and artefacts. (In general, Palaeolithic people were clearly much more at home with human body parts than we are.)

The corpse in its complete and articulated form – and the clothed corpse even more so – was clearly something unusual and, one would presume, inherently strange. In many such cases, an effort was made to contain the bodies of the Upper Palaeolithic dead by covering them with heavy objects: mammoth scapulae, wooden planks, stones or tight bindings. Perhaps saturating them with such objects was an extension of these concerns about strangeness, celebrating but also containing something dangerous. This too makes sense. The ethnographic record abounds with examples of anomalous beings – human or otherwise – treated as exalted and dangerous; or one way in life, another in death.

Much here is speculation. There are any number of other interpretations that could be placed on the evidence – though the idea that these tombs mark the emergence of some sort of hereditary aristocracy seems the least likely of all. Those interred were extraordinary, “extreme” individuals. The way their corpses were decorated, displayed and buried marked them out as equally extraordinary in death. Anomalous in almost every respect, such burials can hardly be interpreted as proxies for social structure among the living. On the other hand, they clearly have something to do with all the contemporary evidence for music, sculpture, painting and complex architecture. What is one to make of them?


This is where seasonality comes into the picture. Almost all the ice age sites with extraordinary burials and monumental architecture were created by societies that lived a little like Levi-Strauss’s Nambikwara, dispersing into foraging bands at one time of year, gathering together in concentrated settlements at another. True, they didn’t gather to plant crops. Rather, the large Upper Palaeolithic sites are linked to migrations and seasonal hunting of game herds – woolly mammoth, steppe bison or reindeer – as well as cyclical fish-runs and nut harvests. This seems to be the explanation for those hubs of activity found in eastern Europe at places like Dolni Vestonice, where people took advantage of an abundance of wild resources to feast, engage in complex rituals and ambitious artistic projects, and trade minerals, marine shells and furs. In western Europe, equivalents would be the great rock shelters of the French Perigord and the Cantabrian coast, with their deep records of human activity, which similarly formed part of an annual round of seasonal congregation and dispersal.

Archaeology also shows that patterns of seasonal variation lie behind the monuments of Gobekli Tepe. Activities around the stone temples correspond with periods of annual superabundance, between midsummer and autumn, when large herds of gazelle descended on to the Harran plain. At such times, people also gathered at the site to process massive quantities of nuts and wild cereal grasses, making these into festive foods, which presumably fuelled the work of construction. There is some evidence to suggest that each of these great structures had a relatively short lifespan, culminating in an enormous feast, after which its walls were rapidly filled in with leftovers and other refuse: hierarchies raised to the sky, only to be swiftly torn down again. Ongoing research is likely to complicate this picture, but the overall pattern of seasonal congregation for festive labour seems well established.

Such oscillating patterns of life endured long after the invention of agriculture. They may be key to understanding the famous Neolithic monuments of Salisbury Plain in England, and not just because the arrangements of standing stones themselves seem to function (among other things) as giant calendars. Stonehenge, framing the midsummer sunrise and the midwinter sunset, is the most famous of these monuments. It turns out to have been the last in a long sequence of ceremonial structures, erected over the course of centuries in timber as well as stone, as people converged on the plain from remote corners of the British Isles at significant times of year. Careful excavation shows that many of these structures were dismantled just a few generations after their construction.

Children of the Nambikwara Sarare tribe in Mato Grosso state, Brazil

Still more striking, the people who built Stonehenge were not farmers, or not in the usual sense. They had once been; but the practice of erecting and dismantling grand monuments coincides with a period when the peoples of Britain, having adopted the Neolithic farming economy from continental Europe, appear to have turned their backs on at least one crucial aspect of it: they abandoned the cultivation of cereals and returned, from around 3300BC, to the collection of hazelnuts as their staple source of plant food. On the other hand, they kept hold of their domestic pigs and herds of cattle, feasting on them seasonally at nearby Durrington Walls, a prosperous town of some thousands of people – with its own Woodhenge – in winter, but largely empty and abandoned in summer.

All this is crucial because it’s hard to imagine how giving up agriculture could have been anything but a self-conscious decision. There is no evidence that one population displaced another, or that farmers were somehow overwhelmed by powerful foragers who forced them to abandon their crops. The Neolithic inhabitants of England appear to have taken the measure of cereal-farming and collectively decided that they preferred to live another way. We’ll never know how such a decision was made, but Stonehenge itself provides something of a hint since it is built of extremely large stones, some of which (the “bluestones”) were transported from as far away as Wales, while many of the cattle and pigs consumed at Durrington Walls were laboriously herded there from other distant locations.

In other words, and remarkable as it may seem, even in the third millennium BC coordination of some sort was clearly possible across large parts of the British Isles. If Stonehenge was a shrine to exalted founders of a ruling clan – as some archaeologists now argue – it seems likely that members of their lineage claimed significant, even cosmic roles by virtue of their involvement in such events. On the other hand, patterns of seasonal aggregation and dispersal raise another question: if there were kings and queens at Stonehenge, exactly what sort could they have been? After all, these would have been kings whose courts and kingdoms existed for only a few months of the year, and otherwise dispersed into small communities of nut gatherers and stock herders. If they possessed the means to marshal labour, pile up food resources and provision armies of year-round retainers, what sort of royalty would consciously elect not to do so?


Recall that for Levi-Strauss, there was a clear link between seasonal variations of social structure and a certain kind of political freedom. The fact that one structure applied in the rainy season and another in the dry allowed Nambikwara chiefs to view their own social arrangements at one remove: to see them as not simply “given”, in the natural order of things, but as something at least partially open to human intervention. The case of the British Neolithic – with its alternating phases of dispersal and monumental construction – indicates just how far such intervention could sometimes go.

The political implications of this are important, as Levi-Strauss noted. What the existence of similar seasonal patterns in the Palaeolithic suggests is that from the very beginning, or at least as far back as we can trace such things, human beings were self-consciously experimenting with different social possibilities.

It’s easy to see why scholars in the 1950s and 60s arguing for the existence of discrete stages of political organisation – successively: bands, tribes, chiefdoms, states – did not know what to do with Levi-Strauss’s observations. They held that the stages of political development mapped, at least very roughly, on to similar stages of economic development: hunter-gatherers, gardeners, farmers, industrial civilisation. It was confusing enough that people like the Nambikwara seemed to jump back and forth, over the course of the year, between economic categories. Other groups would appear to jump regularly from one end of the political spectrum to the other. In other words, they threw everything askew.

Seasonal dualism also throws into chaos more recent efforts at classifying hunter-gatherers into either “simple” or “complex” types of social organisation, since what have been identified as the features of “complexity” – territoriality, social ranks, material wealth or competitive display – appear during certain seasons of the year, only to be brushed aside in others by the exact same population. Admittedly, most professional anthropologists nowadays have come to recognise that these categories are hopelessly inadequate, but the main effect of this acknowledgment has just been to cause them to change the subject, or suggest that perhaps we shouldn’t really be thinking about the broad sweep of human history at all any more. Nobody has yet proposed an alternative.

Meanwhile, as we’ve seen, archaeological evidence is piling up to suggest that in the highly seasonal environments of the last ice age, our remote ancestors were behaving much like the Nambikwara. They shifted back and forth between alternative social arrangements, building monuments and then closing them down again, allowing the rise of authoritarian structures during certain times of year and then dismantling them. The same individual could experience life in what looks to us sometimes like a band, sometimes a tribe, and sometimes like something with at least some of the characteristics we now identify with states.

With such institutional flexibility comes the capacity to step outside the boundaries of any given structure and reflect; to make and unmake the political worlds we live in. If nothing else, this explains the “princes” and “princesses” of the last ice age, who appear to show up, in such magnificent isolation, like characters in some kind of fairytale or costume drama. If they reigned at all, then perhaps it was, like the ruling clans of Stonehenge, just for a season.


If human beings, through most of our history, have moved back and forth fluidly between different social arrangements, assembling and dismantling hierarchies on a regular basis, perhaps the question we should ask is: how did we get stuck? How did we lose that political self-consciousness, once so typical of our species? How did we come to treat eminence and subservience not as temporary expedients, or even the pomp and circumstance of some kind of grand seasonal theatre, but as inescapable elements of the human condition?

In truth, this flexibility, and potential for political self-consciousness, was never entirely lost. Seasonality is still with us – even if it is a pale shadow of its former self. In the Christian world, for instance, there is still the midwinter “holiday season” in which values and forms of organisation do, to a limited degree, reverse themselves: the same media and advertisers who for most of the year peddle rabid consumerist individualism suddenly start announcing that social relations are what’s really important, and that to give is better than to receive.

Among societies like the Inuit or the Kwakiutl of Canada’s Northwest Coast, times of seasonal congregation were also ritual seasons, almost entirely given over to dances, rites and dramas. Sometimes, these could involve creating temporary kings or even ritual police with real coercive powers. In other cases, they involved dissolving norms of hierarchy and propriety. In the European middle ages, saints’ days alternated between solemn pageants where all the elaborate ranks and hierarchies of feudal life were made manifest, and crazy carnivals in which everyone played at “turning the world upside down”. In carnival, women might rule over men and children be put in charge of government. Servants could demand work from their masters, ancestors could return from the dead, “carnival kings” could be crowned and then dethroned, giant monuments like wicker dragons built and set on fire, or all formal ranks might even disintegrate into one or other form of bacchanalian chaos.

What’s important about such festivals is that they kept the old spark of political self-consciousness alive. They allowed people to imagine that other arrangements are feasible, even for society as a whole, since it was always possible to fantasise about carnival bursting its seams and becoming the new reality. May Day came to be chosen as the date for the international workers’ holiday largely because so many British peasant revolts had historically begun on that riotous festival. Villagers who played at “turning the world upside down” would periodically decide they actually preferred the world upside down, and took measures to keep it that way.

Medieval peasants often found it much easier than medieval intellectuals to imagine a society of equals. Now, perhaps, we begin to understand why. Seasonal festivals may be a pale echo of older patterns of seasonal variation – but, for the last few thousand years of human history at least, they appear to have played much the same role in fostering political self-consciousness, and as laboratories of social possibility.

Adapted from The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow, published by Allen Lane. To order a copy, go to Guardian Bookshop
