Is history a matter of individual agency and action, or of finding and quantifying structures and patterns?
…
On a Louisiana cotton plantation owned by Bennet H Barrow, in 1841 and 1842, "a total of 160 lashes were administered, an average of 0.7 lashes per hand per year." So calculated the economists Robert Fogel and Stanley L Engerman, drawing on tables that the Louisiana historian Edwin Adams Davis had compiled from the owner's diaries.
The economists put forward this figure, among many others, in Time on the Cross: The Economics of American Negro Slavery (1974), a controversial revision of the historiography of slavery. The book was marketed to appeal to a wide audience, to assert scientific authority (the footnote-free volume was accompanied by a supplement rich in equations and tables), and to provoke controversy. Nothing less than the inevitability of the Civil War was at stake. Along the way, starting from the neoclassical model of a rational slaveholder and showing that the plantation business was profitable, the authors went so far as to write that:
Plantation owners tried to imbue slaves with a "Protestant" work ethic… Such an attitude could not be beaten into slaves. It had to be elicited.
They also minimized the sexual abuse of slaves and the separation of families.
Discussions about Time on the Cross have cast a long shadow over relations between economists and historians in the United States. Fogel and Engerman portrayed themselves, and are still heralded by many, as the heroes of a "new economic history" – also called "cliometrics" – that would use numbers and models to make history properly scientific. They argued that National Science Foundation grants, by funding teams of assistants, would allow them to definitively disprove the theses of earlier historians. Meanwhile, most historians in the 1970s rejected not only the book's assessment of slavery but its quantitative methods in general, on the grounds that they failed to take into account the complexity of human experience and action. This was, in fact, yet another episode in a decades-long war. Why should there be battles over numbers in history? Because quantitative methods have long been a sign and symbol of much more than arithmetic or computer proficiency. Disciplines often develop through wars along binary divides, and "quantitative vs qualitative" is a classic. This becomes a problem when each camp entrenches itself and stands still.
There were always exceptions, however: daring scholars who crossed the battlefield to explore or create new territories. Among the outraged academics who wrote hundreds of pages refuting Fogel and Engerman's assumptions and evidence was the social historian Herbert Gutman, an expert on African-American families who did not hesitate, in his own research, to translate original sources into numbers. Not just any number would do, however. As Gutman pointed out in Slavery and the Numbers Game (1975), it made no more sense to talk about receiving 0.7 lashes per year than it would to write that 99.998 per cent of Blacks living in the US were not lynched in 1889. Another way to look at the average of 0.7 was to estimate that the enslaved people on the plantation witnessed a whipping every 4.56 days – that of a man about once a week, that of a woman once every 12 days. So much for the lash as an occasional threat rather than a routine instrument of terror.
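Gutman's point – that the very same raw count supports opposite framings – can be checked in a few lines. This is a minimal sketch using only the figures quoted above (160 lashes over the two diary years, 0.7 per hand per year); the implied number of "hands" is our back-of-the-envelope inference, not a figure reported by either book:

```python
# The same data, two framings (figures from Barrow's diary as quoted above).
total_lashes = 160        # recorded over two years
years = 2
per_hand_per_year = 0.7   # Fogel and Engerman's average

# Fogel and Engerman's framing implies roughly this many enslaved workers:
implied_hands = total_lashes / (per_hand_per_year * years)

# Gutman's framing: how often did a whipping occur on the plantation?
days_between_whippings = (365 * years) / total_lashes

print(round(implied_hands))               # about 114 hands
print(round(days_between_whippings, 2))   # about 4.56 days between whippings
```

A tiny per-person average thus coexists with a whipping every four to five days, witnessed by the whole plantation – which is why the choice of denominator is itself an interpretive act.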
That there is more than one way to interpret the numbers may seem obvious, but it bears repeating at a time when, once again, historians who claim to emulate the supposedly "hard" sciences are able to win huge grants and hire armies of helpers. Sketching the history of the number wars may be a way to avoid, once again, pitting a hard, scientific, masculine, simplistic, materialistic quantification against an ambiguous, soft, feminine, complex, humanistic history – and a way toward less boring books and articles.
"The historian of tomorrow will be a programmer, or he will not exist," wrote Emmanuel Le Roy Ladurie in 1968. That was the heyday of a vogue for quantitative history shared across many countries and disciplines, each branded as "new": not only the "new economic history" but also the "new social history" and the "new political history." The novelty could be questioned: quantitative history had in fact been practiced in previous decades and, as early as 1903, the young French sociologist François Simiand had set the tone for the war of words. He publicly attacked Charles-Victor Langlois and Charles Seignobos, famous history professors and proponents of source criticism (then a new method, still a standard of the historical profession today).
Simiand denounced the three "idols of the tribe of historians" – "the political idol … the individual idol … [and the] chronological idol" – along with the historians' unscientific view of causality. In contrast, sociological methods, namely statistics, would allow historians to uncover stable, fundamental phenomena, particularly in economic history, rather than focusing on biographies of leaders or accounts of battles. This episode set the tone for the number wars. The numbers would be on the side of real science, bottom-up history, and/or long-term trends in material life. On the other side, taking aim at "that Bitch-goddess, quantification" in 1963, Carl Bridenbaugh, the president of the American Historical Association, spoke in defense of the "sense of individual men living and having their daily life" and of "understanding."
Bridenbaugh's speech was decidedly conservative, but the quantitatively oriented "new historians" held very different politics and theories, from Marxist to neoclassical. What they all shared was an international trend toward positivism in the social sciences. The "new social historians" focused on people, topics, and sources that mainstream historians had previously dismissed: they used tax registers, marriage records, and so on to document the full range of ordinary people's life experiences. But they often saw these ordinary people in the aggregate, as constrained by objective "social structures" of which they were not fully aware. The keyword "structures," whether used to refer to class struggle or to the typical size and arrangement of households, suggested that individual details did not matter much.
The "new historians" trusted 20th-century statistics and ranked the "reliability" of sources accordingly: the more massive and homogeneous, the more amenable to modern categories, the better. Other features of the sources were problems to be solved. In 1955, one of the pioneers of the "new political history," Walter Dean Burnham, published nearly 1,000 pages of data on presidential ballots from 1836 to 1892. Similar books were still being published in the 1980s.
Radical scholars were not satisfied with writing a history of the masses. They tried to recover individual voices
When the data were analyzed, it had to be done on mainframe computers located outside the history departments. Data production was even more labor-intensive, relying on an elaborate division of labor: research assistants to read the sources, as well as punch-card operators, cartographers, and other specialists. This type of highly hierarchical, collaborative research was expressly designed on the model of the "hard sciences." Many "new historians," as politically radical opponents of Fordism, could not fully approve of such an organization. Fogel and Engerman's critics spoke of the work of "helots" – the slaves of ancient Sparta – in historical "factories."
Some pioneering efforts of this type are still cited today: the University of Michigan study of the 1427 Florence census is a landmark in the history of the family. However, many datasets, or their analyses, were never completed, and data were often lost in the transition between successive generations of computer formats. More generally, questions arose about the returns on such investments.
Not coincidentally, the neoliberal economic policies of the 1980s reduced funding for historical research projects in most countries. Dissertations had to make sense as readable books if one wanted a tenure-track position. Quantification also seemed less impressive at a time when microcomputers allowed historians to process their own data: the ready availability of computing power diminished the value of methods adopted, in part, for the distinction they conferred.
Quantification in history departments did not just die quietly, however. It was also attacked for its excessive focus on structures and series. While the "new social history" had initially aimed to make ordinary (in the sense of non-famous) people legitimate objects of inquiry, radical scholars were no longer content with writing a history of the masses. They were increasingly interested in minorities, not just in the "average man," and they sought to recover individual and collective voices. Overall, most historians came to reject Marxism, economic determinism, structuralism, and quantification, favoring instead the individual, narrative, and cultural or political dimensions of history – and seeing all of these as antagonistic to quantification.
From the early 1990s onward, quantitative methods courses for historians mostly disappeared from universities. Those that survive in some places are often disconnected from the rest of the curriculum, akin to foreign-language courses intended for use with a specific set of sources, as Margo Anderson observed in 2007. Economic history is now mostly written in economics departments, where it is taken for granted that one can reconstruct estimates of gross domestic product for periods when there were no official statistics and, indeed, no nation-states. Meanwhile, most members of history departments regard such efforts as anachronistic and pointless (why would past GDPs matter anyway?) – if they even know they exist. There is no generally agreed banner for non-quantitative history: it is simply a fact of academic life. Lawrence Stone heralded a "revival of narrative" in 1979, but, from the perspective of history departments, the war of numbers faded into memory for lack of serious opponents.
For many historians today, the use of computers or numbers instead evokes the "digital humanities," a phrase popularized in the mid-2000s, or, in the past decade, "big data." Most bearers of these flags blithely ignore the existence of the "new historians" of the 1960s: they prefer to present their efforts as completely original, and they look not to sociology or economics but to computer science or physics for inspiration. Yet they have reinvented many of the old slogans: "retooling" history to work more like the "hard sciences," with more money, more teamwork, and more objectivity. Fifty years after Le Roy Ladurie, the historian-programmer is back, with a vengeance. When the new wave does know about the first one, it assumes that the earlier effort failed only because it came too early: better computers and more digitized data will now ensure success.
Funding agencies seem to share this view. Since the 2000s, grants for anything "digital" or "big" have multiplied, leading to massive hiring of temporary staff for data entry. Some digital humanities projects once again claim to render all previous scholarship obsolete, channeling vast resources without producing original historical results. The Venice Time Machine project, launched in 2013, aimed to create "a multidimensional model of Venice and its evolution covering a period of more than 1,000 years" by digitizing kilometers of archives. It would undoubtedly advance research in computer science, particularly in character recognition, but the benefits for historical knowledge were far from clear. Perhaps predictably, the project ran into serious problems and was put on hold in 2019, but an eight-year "phase two" is now being touted.
Meanwhile, The History Manifesto (2014) by David Armitage and Jo Guldi went so far as to criticize many historians' undue interest in archives, individual agency, and the political stakes of identities. Within the new wave, the duo adopted the most belligerent rhetoric, backed by high-profile publicity – something like a new Fogel and Engerman. In an essay for Aeon, commenting on the specialization of non-data-based historians, they asked: "Why not throw all these introspective but highly competent monographs and journal articles into a humanities bonfire?"
As historical data becomes available online, many analyze it without questioning its origin and construction
It is striking that much of the criticism leveled against the first wave of "big science" history around 1980 could be repeated today without any change: rich funding, mathematical sophistication, exhaustive data-gathering – but very few new ideas about the past. Those who admire, say, Google Ngrams without questioning what exactly is in Google Books are, to us, not much different from those who once admired deflated wage series without questioning their sources.
"Digital humanities" projects face the same pitfalls as the old "new histories" of the 1960s, when some practitioners felt that new techniques for storing and analyzing data made source criticism optional. The new vogue for bigness often rests on the naive old idea that many biases will cancel each other out. Archaeology, as well as ancient and medieval history, offers many cautionary tales in this regard. For example, Søren Michael Sindbæk, a pioneer in the careful network analysis of archaeological and narrative medieval sources, wrote that "'big data' is rarely good." The statement concluded his experiment with a large, heterogeneous data warehouse on maritime networks: a network visualization of these data revealed major patterns in archaeological knowledge and ignorance – a useful result in its own right, but not to be confused with patterns in the medieval networks themselves. Yet, as historical data become more readily available online, many economists, physicists, and others analyze them without questioning their origin and construction.
We like the historian Mateusz Fafinski's warning that historical data "is not your familiar kitty. She's a toothy tiger who will eat you and your village of data scientists for breakfast if you don't treat her with respect." But we are not sure that the current balance of power between disciplines allows for this kind of retribution. Non-quantitative historians are not likely to win the war for interdisciplinary grants anytime soon, nor will historians prevent practitioners from other disciplines from misinterpreting data. Yet leaving the battlefield, and finding solace in those (financially) besieged corners of history departments that have forgotten the squabbles over methods, would not satisfy us either. Outright wars over methods have been counterproductive whenever they entrenched dull research standards on either side of the divide. But there have always been exceptions – quieter hybridizations that actually produced new ideas. These are the ones we want to lift out of the shadows cast by more belligerent scholarship.
While others touted so-called digital revolutions, many colleagues devised productive ways to bridge the gap between humanistic and social-scientific approaches. They created varied alliances between source criticism and situated knowledge on the one hand and quantitative, formal, or data-centered methods on the other. The examples turn up in unexpected places – which is perhaps why they never merged into a school or sub-discipline. As early as 1978, Cissie Fairchilds used statistical techniques for an in-depth analysis of a supposedly small sample of records relating to illegitimate births in late-18th-century France. Her research is notable for its carefully discussed relabeling of relationships and social backgrounds that, in her sources, had been categorized according to the legal terms of the time. Her goal was to bring her quantification as close as the sources allowed to the lived experiences and voices of women.
In 2017, the art historian Yael Rice published a study of 16th-century Mughal court manuscripts. These contain not only beautiful illuminations but also names documenting the collaboration behind the images – for example, between a designer and a colorist. Rice used these attributions to uncover the workings of workshops that had left few other traces. She found a steady rotation of collaborations that could explain how a unified style of court painting developed.
Two women historians thus discovered, through careful and inventive readings of their sources, things that their predecessors had been unable to see. Some of their results came through methods not taught in most graduate programs in history (statistical tests for Fairchilds, network analysis for Rice), methods that no historian could have implemented on a personal computer before the late 1990s. Yet they did the work themselves – and it was only one part of their research. Nothing required them to stop challenging categories, to forget individual agency, or to set aside aesthetic questions. Quantitative techniques were just one tool among others that helped them do their work as humanists.
In this world, populated by diverse colleagues who produce home-made recipes and bespoke formalizations, we feel politically and scientifically at home – much more so than in the world of "retooled" historians. Besides, what happens here is less boring. Using quantitative methods to prove the importance of already well-known structures is like flogging a dead horse with a highly sophisticated whip. Why spend so much time and effort collecting, measuring, and classifying if the results are predictable? There are, however, other, more satisfying ways of measuring.
Counting always requires a decision to regard two items as equivalent or different from each other
It is possible to use quantification at the micro scale, even to assess the exceptionality of individuals and to discuss their experience. What calls for quantification is the density of information and the determination to deal with it systematically – not an ambition to grasp whole societies directly. In the 1980s, some Italian microhistorians explored the meanings of Edoardo Grendi's phrase "the exceptional normal" using quantitative techniques, and others have done so since. For example, in 2012 Paul Ocobock used the account of a Kenyan boy beaten by an Asian man to frame his examination of corporal punishment in colonial Kenya – but wove it together with a quantitative study of court and prison records, showing which dimensions of the boy's experience were ordinary (his age and gender) and which were exceptional (his assailant's Asian origin). The data need not consist of long, homogeneous series concerning a single phenomenon. Complex sets of trajectories and interactions are more interesting: patterns can be discovered in them thanks to techniques such as multiple correspondence analysis, network analysis, or sequence analysis.
Counting always requires a decision to regard two items as equivalent to or different from each other. Quantification relies on categorical data. But this categorization need not be performed in a formal, ahistorical way, or remain blind to the complexities of lived identities. Assigning categories such as "type of occupation," gender, or religion to persons on the basis of historical sources is always a scientifically difficult and politically charged enterprise – whether the aim is to produce rates or to write a narrative. Some quantifiers take this task seriously. A prerequisite is accepting the complexities, biases, and silences of our sources.
Quantifiers usually talk about "cleaning" data, implying that heterogeneity in the sources is a problem to be solved – and that solving it is a secondary chore. For us, on the contrary, producing data from sources, and creating categories that do not erase all complications, is not just the longest and most demanding stage of research; it is also the most interesting. We teach historians to value outliers and oddities as well as missing data – in short, dirty data, stored in complex spreadsheets. And we teach techniques of analysis developed in, for example, intersectional scholarship (a concept coined in feminist thought): it is important to show that inequalities do not simply add up, and exploring how inequalities interact is, arguably, one of the most fascinating tasks in social history.
In short, alternative quantifiers share the microhistorian Giovanni Levi's determination to avoid both a passive approach to data and sources and the uninspired positivism associated with "assertive, authoritarian forms of discourse." They do not want to use adverbs such as "often" or "generally" without the support of accurate figures, but they use formalization as an aid to intuition rather than as mere confirmation.
Even when the numbers seem to definitively disprove conventional wisdom, the research should not end there. Where did the conventional wisdom (of historians or of historical actors) come from in the first place, if the data so easily disprove it? For example, Geoff Cunfer showed that the "Dust Bowl" was not directly caused, as is generally believed, by the indiscriminate plowing of the prairies but, rather, by drought, which had often produced similar dust storms in the late 19th century. He did not stop there, however: he went on to explain how the lack of data on past storms, along with the artworks representing the "Dust Bowl," had shaped previous interpretations.
Numbers, when used as tools rather than fetishes, allow for defamiliarization, comparison, and oblique readings of sources: they can be part of an experimental practice of history that is playful, in the sense of not being boring, yet attentive to ethics. Rather than limiting intuition and creativity, quantitative methods can stimulate them. Turning historical sources into data need not impoverish them or erase lived experiences. Quantification, or "digital history," simply offers new recipe books that, among other things, allow us to see sources differently and encourage new interpretations. These recipes are good at demanding clear questions and explicit categorizations, preventing us from sweeping widely shared experiences and embarrassing exceptions under our interpretive rugs. They are certainly not the ultimate weapon that some still hope for.
Cover photo: Children at the New York Foundling Hospital doing their exercises, 1899. Courtesy of the Museum of the City of New York/Getty
Source: aeon.co