If citing, please kindly acknowledge copyright © Penelope J. Corfield (2019)
The terminology, derived from Charles Darwin,1 is hardly elegant. Yet it highlights rival polarities in the intellectual cast of mind. ‘Lumpers’ seek to assemble fragments of knowledge into one big picture, while ‘splitters’ see instead complication upon complication. An earlier permutation of that dichotomy was popularised by Isaiah Berlin. In The Hedgehog and the Fox (1953), he distinguished between brainy foxes, who know many things, and intellectual hedgehogs, who apparently know one big thing.2


These animalian embodiments of modes of thought derive from a fragmentary dictum of the classical Greek poet Archilochus; and they remain more fanciful than convincing. It’s not self-evident that a hedgehog’s mentality is really so overwhelmingly single-minded.3 Nor is it clear that the reverse syndrome applies particularly to foxes, which have a reputation for craft and guile.4 To make his point with reference to human thinkers, Berlin instanced the Russian novelist Leo Tolstoy as a classic ‘hedgehog’. Really? The small and prickly hedgehog hardly seems a good proxy for a grandly sweeping thinker like Tolstoy.

Those objections to Berlin’s categories, incidentally, are good examples of hostile ‘splitting’. They quibble and contradict. Sweeping generalisations are rejected. Such objections recall a dictum in a Poul Anderson sci-fi novella, when one character states gravely: ‘I have yet to see any problem, which, when you looked at it in the right way, did not become still more complicated’.5

Arguments between aggregators/generalisers and disaggregators/sceptics, which occur in many subjects, have been particularly high-profile among historians. The lumping/splitting dichotomy was recycled in 1975 by the American J.H. Hexter.6 He accused the Marxist Christopher Hill not only of ‘lumping’ but, even worse, of deploying historical evidence selectively, to bolster a partisan interpretation. Hill replied relatively tersely.7 He rejected the charge that he did not play fair with the sources. But he proudly accepted that, through his research, he sought to find and explain meanings in history. The polarities of lumping/splitting were plain for all to see.

Historical ‘lumpers’ argue that all analysis depends upon some degree of sorting/processing/generalising, applied to disparate information. Merely itemising date after date, or fact after fact ad infinitum, would not tell anyone anything. On those dreadful occasions when lecturers do actually proceed by listing minute details one by one (for example, going through events year by year), the audience’s frustration very quickly becomes apparent.

So ‘lumpers’ like big broad interpretations. And they tend to write big bold studies, with clear long-term trends. Karl Marx’s brief yet panoramic survey of world history, compressed into nine pages of The Communist Manifesto, was a classic piece of ‘lumping’.8 In the twentieth century, the British Marxist historian E.P. Thompson was another ‘lumper’ who sought the big picture, although he could be a combative ‘splitter’ about the faults of others.9

‘Splitters’ conversely point out that, if big broad-brush interpretations were reliably apparent, they would have been discovered and accepted by now. The continuing debates between historians in every generation indicate that grand generalisations are perpetually under attack. The progression of the subject relies upon a healthy dose of disaggregation alongside aggregation. ‘Splitters’ therefore produce accounts rich in detail, complications and diversities, propounding singular rather than universal meanings, and stressing contingency over grand trends.

Sometimes critics of historical generalisations are so angry and acerbic that they appear merely negative and destructive. Yet one of the twentieth century’s most impressive historical ‘splitters’, F.J. ‘Jack’ Fisher, was socially a witty and genial man. Intellectually, however, he was widely feared for his razor-sharp and trenchant demolitions of any given historical analysis. Indeed, his super-critical cast of mind had the effect of limiting his own written output to a handful of brilliant interpretative essays rather than a ‘big book’.10 (Fisher was my research supervisor. His most caustic remark to me came after reading a draft chapter: ‘There is nothing wrong with this, other than a female desire to tell all and an Oxbridge desire to tell it chronologically.’ Ouch! Fisher was not anti-woman, although he was critical of Oxbridge, where I’d taken my first degree. But he used this formulation to grab my attention – and it certainly did.)

Among research historians today, the temperamental/intellectual cast of mind often inclines them to ‘splitting’, partly because many simplistic generalisations about history in public circulation call out for contradiction or complication. Of course, the precise distribution around the norm remains unknown. These days, I would guesstimate that the profession divides into roughly 45% ‘lumpers’, seeking big grand overviews, and 55% ‘splitters’, stressing detail, diversity and contingency. The classification, however, does depend partly on the occasion and type of output, since single-person expositions on TV and radio encourage generalisations, while round-tables and panels thrive on disagreement, where splitters can come into their own.

Moreover, there are not only personal variations, depending upon circumstance, but also major oscillations in intellectual fashions within the discipline. In the later twentieth century, for example, there was a growing, though not universal, suspicion of so-called Grand Narratives (big through-time interpretations).11 The high tide of the sceptical trend known as ‘revisionism’ challenged many old generalisations and easy assumptions. Revisionists did not constitute one single school of thought. Many did favour conservative interpretations of history, but, as remains apparent today, there was and is more than one form of conservatism. That said, revisionists were generally agreed in rejecting both left-wing Marxist conflict models of revolutionary change via class struggles and liberal Whiggish linear models of evolving Progress via spreading education, constitutional rights and so forth.12

Yet the alignments were never simple (a splitterish comment from myself). Thus J.H. Hexter was a ‘splitter’ when confronting Marxists like Hill. But he was a ‘lumper’ when propounding his own Whig view of history as a process of evolving Freedom. So Hexter’s later strictures on revisionism were as fierce as was his earlier critique of Hill.13

Ideally, most research historians probably seek to find a judicious balance between ‘lumping’/‘splitting’. There is scope both for generalisations and for qualifications. After all, there is diversity within the human experience and within the cosmos. Yet there are also common themes, deep patterns, and detectable trends.

Ultimately, however, the dichotomous choice between either ‘lumping’ or ‘splitting’ is a completely false option, when pursued to its limits. Human thought, in all the disciplines, depends upon a continuous process of building/qualifying/pulling down/rebuilding/requalifying, and so on endlessly, with both generalisations and detailed qualifications. An analysis built upon And+And+And+And+And would become too airy and generalised to have realistic meaning. Just as a formulation based upon But+But+But+But+But would keep negating its own negations. So, yes. Individually, it’s worth thinking about one’s own cast of mind and intellectual inclinations. (I personally enjoy both lumping and splitting, including criticising various outworn terminologies for historical periodisation.)14 Furthermore, self-knowledge allows personal scope to make auto-adjustments, if deemed desirable. And then, better still, to weld the best features of ‘lumping’ and ‘splitting’ into original thought. And+But+And+Eureka.


1 Charles Darwin in a letter dated August 1857: ‘It is good to have hair-splitters and lumpers’: see Darwin Correspondence Letter 2130.

2 I. Berlin, The Hedgehog and the Fox: An Essay on Tolstoy’s View of History (1953).

3 For hedgehogs, now an endangered species, see S. Coulthard, The Hedgehog Handbook (2018). If the species were to have one big message for humans today, it would no doubt be: ‘Stop destroying our habitat and support the Hedgehog Preservation Society’.

4 M. Berman, Fox Tales and Folklore (2002).

5 From P. Anderson, Call Me Joe (1957).

6 J.H. Hexter, ‘The Burden of Proof: The Historical Method of Christopher Hill’, Times Literary Supplement, 25 Oct. 1975, repr. in J.H. Hexter, On Historians: Reappraisals of Some of the Makers of Modern History (1979), pp. 227-51.

7 For Hill’s rebuttal, see The Times Literary Supplement, 7 Nov. 1975, p. 1333.

8 K. Marx and F. Engels, The Manifesto of the Communist Party (1848), Section I: ‘Bourgeois and Proletarians’, in D. McLellan (ed.), Karl Marx: Selected Writings (Oxford, 1977), pp. 222-31.

9 Among many overviews, see e.g. C. Efstathiou, E.P. Thompson: A Twentieth-Century Romantic (2015); P.J. Corfield, E.P. Thompson, Historian: An Appreciation (1993; 2018), on the PJC website as CorfieldPdf45.

10 See P.J. Corfield, F.J. Fisher (1908-88) and the Dialectic of Economic History (1990; 2018), on the PJC website as CorfieldPdf46.

11 See esp. J-F. Lyotard, The Postmodern Condition: A Report on Knowledge (Paris, 1979; in Eng. transl. 1984), p. 7, which detected ‘an incredulity toward meta-narratives’; and further discussions in G.K. Browning, Lyotard and the End of Grand Narratives (Cardiff, 2000); and A. Munslow, Narrative and History (2018). Earlier Lawrence Stone, a classic historian ‘lumper’, had detected a return to narrative styles of exposition: see L. Stone, ‘The Revival of Narrative: Reflections on a New Old History’, Past & Present, 85 (1979), pp. 3-24. But in this essay Stone was detecting a decline in social-scientific styles of History-writing – not a return to old-style Grand Narratives.

12 Revisionism is sufficiently variegated to have avoided summary within one big study. But different debates are surveyed in L. Labedz (ed.), Revisionism: Essays on the History of Marxist Ideas (1962); J.M. Maddox, Hiroshima in History: The Myths of Revisionism (1974; 2011); L. Brenner, The Iron Wall: Zionist Revisionism from Jabotinsky to Shamir (1984); E. Longley, The Living Stream: Literature and Revisionism in Ireland (Newcastle upon Tyne, 1994); and M. Haynes and J. Wolfreys (eds), History and Revolution: Refuting Revisionism (2007).

13 J.H. Hexter (1910-96) founded in 1986 the Center for the History of Freedom at Washington University, USA, where he was Professor of the History of Freedom, and launched The Making of Modern Freedom series. For his views on revisionism, see J.H. Hexter, ‘Historiographical Perspectives: The Early Stuarts and Parliaments – Old Hat and the Nouvelle Vague’, Parliamentary History, 1 (1982), pp. 181-215; and analysis in W.H. Dray, ‘J.H. Hexter, Neo-Whiggism and Early Stuart Historiography’, History & Theory, 26 (1987), pp. 133-49.

14 See e.g. P.J. Corfield, ‘Primevalism: Saluting a Renamed Prehistory’, in A. Baysal, E.L. Baysal and S. Souvatzi (eds), Time and History in Prehistory (2019), pp. 265-82; and P.J. Corfield, ‘POST-Medievalism/ Modernity/ Postmodernity?’ Rethinking History, 14 (2010), pp. 379-404; also on the PJC website as CorfieldPdf20.




Joining a public committee of any kind? Before getting enmeshed in the details, I recommend studying the rulebook. Why on earth? Such advice seems arcane, indeed positively nerdy. But I have a good reason for this recommendation. Framework rules are the hallmark of a constitutionalist culture.

Fig.1 The handsome front cover of the first edition of Robert’s Rules of Order (1876): these model rules, based upon the practices of the US Congress, remain widely adopted across the USA, their updating being undertaken by the Robert’s Rules Association, most recently in 2011.

Once, many years ago, I was nominated by the London education authority – then in the form of the Inner London Education Authority or ILEA – onto a charitable trust in Battersea, where I live. I accepted, not with wild enthusiasm, but from a sense of civic duty. The Trust was tiny and then did not have much money. It was rumoured that a former treasurer in the 1930s had absconded with all the spare cash. But anyway in the early 1970s the Trust was pottering along and did not seem likely to be controversial.

My experience as a Trustee was, however, both depressing and frustrating. The Trust was then named Sir Walter St. John’s Trust; and it exists today in an updated and expanded guise as the Sir Walter St. John’s Educational Charity. It was founded in 1700 by Battersea’s local Lord of the Manor, after whom it is named. In the 1970s, the Trust didn’t do much business at all. The only recurrent item on the agenda was the question of what to do about a Victorian memorial window which lacked a home. The fate of the Bogle Smith Window (as it was known) had its faintly comic side. Surely somewhere could be found to locate it, within one or other of the two local state-sector grammar schools, for which the Trust was ground landowner? But soon the humour of wasting hours of debate on a homeless window palled.

I also found it irksome to be treated throughout with deep suspicion and resentment by most of my fellow Trustees. They were Old Boys from the two schools in question: Battersea Grammar School and Sir Walter St. John School. All the Trust business was conducted with outward calm. There were no rows between the large majority of Old Boys and the two women appointed by the ILEA. My fellow ILEA-nominee hardly ever attended; and said nothing, when she did. Yet we were treated with an unforgiving hostility, which I found surprising and annoying. A degree of misogyny was not unusual; yet often the stereotypical ‘good old boys’ were personally rather charming to women (‘the ladies, God bless ’em’) even while deploring their intrusion into public business.

But no, these Old Boys were not charming, or even affable. And their hostile attitude was not caused purely by misogyny. It was politics. They hated the Labour-run ILEA and therefore the two ILEA appointees on the Trust. It was a foretaste of arguments to come. By the late 1970s, the Conservatives in London, led by Councillors in Wandsworth (which includes Battersea) were gunning for the ILEA. And in 1990 it was indeed abolished by the Thatcher government.

More than that, the Old Boys on the Trust were ready to fight to prevent their beloved grammar schools from going comprehensive. (And in the event both schools later left the public sector to avoid that ‘fate’). So the Old Boys’ passion for their cause was understandable and, from their point of view, righteous. However, there was no good reason to translate ideological differences into such persistently rude and snubbing behaviour.

Here’s where the rulebook came into play. I was so irked by their attitude – and especially by the behaviour of the Trust’s Chair – that I resolved to nominate an alternative person for his position at the next Annual General Meeting. I wouldn’t have the votes to win; but I could publicly record my disapprobation. The months passed. More than a year passed. I requested to know the date of the Annual General Meeting. To a man, the Old Boys assured me that they never held such things, with something of a lofty laugh and sneer at my naivety. In reply, I argued firmly that all properly constituted civic bodies had to hold such events. They scoffed. ‘Well, please may I see the Trust’s standing orders?’ I requested, in order to check. In united confidence, the Old Boys told me that they had none and needed none. We had reached an impasse.

At this point, the veteran committee clerk, who mainly took no notice of the detailed discussions, began to look a bit anxious. He was evidently stung by the assertion that the Trust operated under no rules. After some wrangling, it was agreed that the clerk should investigate. Had I but known it, I could have cheered, or even jeered, at that point, because I never saw any of the Old Boys again.

Several weeks after this meeting, I received through the post a copy of the Trust’s Standing Orders. They looked as though they had been typed in the late nineteenth century on an ancient typewriter. Nonetheless, the first point was crystal clear: all members of the Trust should be given a copy of the standing orders upon appointment. I was instantly cheered. But there was more, much more. Of course, there had to be an Annual General Meeting, when the Chair and officers were to be elected. And, prior to that, all members of the Trust had to be validly appointed, via an array of different constitutional mechanisms.

An accompanying letter informed me that the only two members of the Trust who were correctly appointed were the two ILEA nominees. I had more than won my point. It turned out that over the years the Old Boys had devised a system of co-options for membership among friends, which was constitutionally invalid. They were operating as an ad hoc private club, not as a public body. Their positions were automatically terminated; and they never reappeared.

In due course, the vacancies were filled by the various nominating bodies; and the Trust resumed its very minimal amount of business. Later, into the 1980s, the Trust did have some key decisions to make, about the future of the two schools. I heard that its sessions became quite heated politically. That news was not surprising to me, as I already knew how high feelings could run on such issues. These days, the Trust does have funds, from the eventual sale of the schools, and is now an active educational charity.

Personally, I declined to be renominated, once my first term of service on the Trust was done. I had wasted too much time on fruitless and unpleasant meetings. However, I did learn about the importance of the rulebook. Not that I believe in rigid adherence to rules and regulations. Often, there’s an excellent case for flexibility. But the flexibility should operate around a set of framework rules which are generally agreed and upheld between all parties.

Rulebooks are to be found everywhere in public life in constitutionalist societies. Parliaments have their own. Army regiments too. So do professional societies, church associations, trade unions, school boards, and public businesses. And many private clubs and organisations find them equally useful as well. Without a set of agreed conventions for the conduct of business and the constitution of authority, there’s no way of stopping arbitrary decisions – and arbitrary systems can eventually slide into dictatorships.

As it happens, the Old Boys on the Sir Walter St. John Trust were behaving only improperly, not evilly. I always regretted the fact that they simply disappeared from the meetings. They should at least have been thanked for their care for the Bogle Smith Window. And I would have enjoyed the chance to say, mildly but explicitly: ‘I told you so!’

Goodness knows what happened to these men in later years. I guess that they continued to meet as a group of friends, with a great new theme for huffing and puffing at the awfulness of modern womanhood, especially the Labour-voting ones. If they did pause to think, they might have realised that, had they been personally more pleasant to the intruders into their group, then there would have been no immediate challenge to their position. I certainly had no idea that my request to see the standing orders would lead to such an outcome.

Needless to say, the course of history does not hinge upon this story. I personally, however, learned three lasting lessons. Check to see what civic tasks involve before accepting them. Remain personally affable to all with whom you have public dealings, even if you disagree politically. And if you do join a civic organisation, always study the relevant rulebook. ‘I tried to tell them so!’ all those years ago – and I’m doing it now in writing. Moreover, the last of those three points is highly relevant today, when the US President and US Congress are locking horns over the interpretation of the US constitutional rulebook. May the rule of law prevail – and no prizes for guessing which side I think best supports that!



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2018)

History is a subject that deals in ‘thinking long’. The human capacity to think beyond the immediate instant is one of our species’ most defining characteristics. Of course, we live in every passing moment. But we also cast our minds, retrospectively and prospectively, along the thought-lines of Time, as we mull over the past and try to anticipate the future. It’s called ‘thinking long’.

Studying History (indicating the field of study with a capital H) is one key way to cultivate this capacity. Broadly speaking, historians focus upon the effects of unfolding Time. In detail, they usually specialise upon some special historical period or theme. Yet everything is potentially open to their investigations.

Sometimes indeed the name of ‘History’ is invoked as if it constitutes an all-seeing recording angel. So a controversial individual in the public eye, fearing that his or her reputation is under a cloud, may proudly assert that ‘History will be my judge’. Quite a few have made such claims. They express a blend of defiance and optimism. Google ‘History will justify me’ and a range of politicians, starting with Fidel Castro in 1963, come into view. However, there’s no guarantee that the long-term verdicts will be kinder than any short-term criticisms.

True, there are individuals whose reputations have risen dramatically over the centuries. The poet, painter and engraver William Blake (1757-1827), virtually unknown in his own lifetime, is a pre-eminent example. Yet the process can happen in reverse. So there are plenty of people, much praised at the start of their careers, whose reputations have subsequently nose-dived and continue that way. For example, some recent British Prime Ministers may fall into that category. Only Time (and the disputatious historians) will tell.

Fig. 1 William Blake’s Recording Angel has about him a faint air of an impish magician as he points to the last judgment. If this task were given to historians, there would be a panel of them, arguing amongst themselves.

In general, needless to say, those studying the subject of History do not define their tasks in such lofty or angelic terms. Their discipline is distinctly terrestrial and Time-bound. It is prone to continual revision and also to protracted debates, which may be renewed across generations. There’s no guarantee of unanimity. One old academic anecdote imagines the departmental head answering the phone with the majestic words: ‘History speaking’.1 These days, however, callers are likely to get no more than a tinny recorded message from a harassed administrator. And academic historians in the UK today are themselves being harried not to announce god-like verdicts but to publish quickly, in order to produce the required number of ‘units of output’ (in the assessors’ unlovely jargon) in a required span of time.

Nonetheless, because the remit of History is potentially so vast, practitioners and students have unlimited choices. As already noted, anything that has happened within unfolding Time is potentially grist to the mill. The subject resembles an exploding galaxy – or, rather, like the cosmos, the sum of many exploding galaxies.

Tempted by that analogy, some practitioners of Big History (a long-span approach to History which means what it says) do take the entire universe as their remit, while others stick merely to the history of Planet Earth.2 Either way, such grand approaches are undeniably exciting. They require historians to incorporate perspectives from a dazzling range of other disciplines (like astro-physics) which also study the fate of the cosmos. Thus Big History is one approach to the subject which very consciously encourages people to ‘think long’. Its analysis needs careful treatment to avoid being too sweeping and too schematic chronologically, as the millennia rush past. But, in conjunction with shorter in-depth studies, Big History gives advanced students a definite sense of temporal sweep.

Meanwhile, it’s also possible to produce longitudinal studies that cover one impersonal theme, without having to embrace everything. Thus there are stimulating general histories of the weather,3 as well as more detailed histories of weather forecasting, and/or of changing human attitudes to weather. Another overarching strand studies the history of all the different branches of knowledge that have been devised by humans. One of my favourites in this genre is entitled: From Five Fingers to Infinity.4 It’s a probing history of mathematics. Expert practitioners in this field usually stress that their subject is entirely ahistorical. Nonetheless, the fascinating evolution of mathematics throughout the human past to become one globally-adopted (non-verbal) language of communication should, in my view, be a theme to be incorporated into all advanced courses. Such a move would encourage debates over past changes and potential future developments too.

Overall, however, the great majority of historians and their courses in History take a closer focus than the entire span of unfolding Time. And it’s right that the subject should combine in-depth studies alongside longitudinal surveys. The conjunction of the two provides a mixture of perspectives that help to render intelligible the human past. Does that latter phrase suffice as a summary definition?5 Most historians would claim to study the human past rather than the entire cosmos.

Yet actually that common phrase does need further refinement. Some aspects of the human past – the evolving human body, for example, or human genetics – are delegated for study by specialist biologists, anatomists, geneticists, and so forth. So it’s clearer to say that most historians focus primarily upon the past of human societies in the round (i.e. including everything from politics to religion, from war to economics, from illness to health, etc., etc.). And that suffices as a definition, provided that insights from adjacent disciplines are freely incorporated into their accounts, wherever relevant. For example, big cross-generational studies by geneticists are throwing dramatic new light upon the history of human migration around the globe and also of intermarriage within the complex range of human species and the so-called separate ‘races’ within them.6 Their evidence amply demonstrates the power of longitudinal studies for unlocking both historical and current trends.

The upshot is that the subject of History can cover everything within the cosmos; that it usually concentrates upon the past of human societies, viewed in the round; and that it encourages the essential human capacity for thinking long. For that reason, it’s a study for everyone. And since all people themselves constitute living histories, they all have a head-start in thinking through Time.7

1 I’ve heard this story recounted of a formidable female Head of History at the former Bedford College, London University; and the joke is also associated with Professor Welch, the unimpressive senior historian in Kingsley Amis’s Lucky Jim: A Novel (1953), although upon a quick rereading today I can’t find the exact reference.

2 For details, see the website of the Big History international learned society (founded 2010). My own study of Time and the Shape of History (2007) is another example of Big History, which, however, proceeds not chronologically but thematically.

3 E.g. E. Durschmied, The Weather Factor: How Nature has Changed History (2000); L. Lee, Blame It on the Rain: How the Weather has Changed History (New York, 2009).

4 F.J. Swetz (ed.), From Five Fingers to Infinity: A Journey through the History of Mathematics (Chicago, 1994).

5 For meditations on this theme, see variously E.H. Carr, What is History? (Cambridge, 1961; and many later edns); M. Bloch, The Historian’s Craft (in French, 1949; in English transl. 1953); B. Southgate, Why Bother with History? Ancient, Modern and Postmodern Motivations (Harlow, 2000); J. Tosh (ed.), Historians on History: An Anthology (2000; 2017); J. Black and D.M. MacRaild, Studying History (Basingstoke, 2007); H.P.R. Finberg (ed.), Approaches to History: A Symposium (2016).

6 See esp. L.L. Cavalli-Sforza and F. Cavalli-Sforza, The Great Human Diasporas: The History of Diversity and Evolution, transl. by S. Thomas (Reading, Mass., 1995); D. Reich, Who We Are and How We Got Here: Ancient DNA and the New Science of the Human Past (Oxford, 2018).

7 P.J. Corfield, ‘All People are Living Histories: Which is why History Matters’. A conversation-piece for those who ask: Why Study History? (2008), in London University’s Institute of Historical Research Project, Making History: The Discipline in Perspective; also available on the PJC website as Pdf1.



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2018)
Historians, who study the past, don’t undertake this exercise from some vantage point outside Time. They, like everyone else, live within an unfolding temporality. That’s very fundamental. Thus it’s axiomatic that historians, like their subjects of study, are all equally Time-bound.1

Nor do historians undertake the study of the past in one single moment in time. Postmodernist critics of historical studies sometimes write as though historical sources are culled once only from an archive and then adopted uncritically. The implied research process is one of plucking choice flowers and then pressing them into a scrap-book to some pre-set design.

On such grounds, critics of the discipline highlight the potential flaws in all historical studies. Sources from the past are biased, fallible and scrappy. Historians in their retrospective analysis are also biased, fallible and sometimes scrappy. And historical writings are literary creations only just short of pure fiction.2

Historians should welcome this dose of scepticism – always a useful corrective. Yet they entirely reject the proposition that trying to understand bygone eras is either impossible or worthless. Rebuttals to postmodernist scepticism have been expressed theoretically;3 and also directly, via pertinent case studies which cut through the myths and ‘fake news’ which often surround controversial events in history.4

When at work, historians should never take their myriad source materials literally and uncritically. Evidence is constantly sought, interrogated, checked, cross-checked, compared and contrasted, as required for each particular research theme. The net is thrown widely or narrowly, again depending upon the subject. Everything is a potential source, from archival documents to art, architecture and artefacts, and through the gamut to witness statements and zoological exhibits. Visual materials can be incorporated either as primary sources in their own right, or as supporting documentation. Information may be mapped and/or tabulated and/or statistically interrogated. Digitised records allow the easy selection of specific cases and/or the not-so-easy processing of mass data.

As a result, researching and writing history is a slow through-Time process – sometimes tediously so. It takes at least four years, from a standing start, to produce a big specialist, ground-breaking study of 100,000 words on a previously un-studied (or under-studied) historical topic. The exercise demands a high-level synthesis of many diverse sources, running to hundreds or even thousands. Hence the methodology is characteristically much more than a ‘reading’ of one or two key texts – although, depending upon the theme, at times a close reading of a few core documents (as in the history of political ideas) is essential too.

Mulling over meanings is an important part of the process too. History as a discipline encourages a constant thinking and rethinking, with sustained creative and intellectual input. It requires knowledge of the state of the discipline – and a close familiarity with earlier work in the chosen field of study. Best practice therefore enjoins writing, planning and revising as the project unfolds. For historical studies, ‘writing through’ is integral, rather than waiting until all the hard research graft is done and then ‘writing up’.5

The whole process is arduous and exciting, in almost equal measure. It’s constantly subject to debate and criticism from peer groups at seminars and conferences. And, crucially too, historians are invited to specify not only their own methodologies but also their own biases/assumptions/framework thoughts. This latter exercise is known as ‘self-reflexivity’. It’s often completed at the end of a project, although it’s then inserted near the start of the resultant book or essay. And that’s because writing serves to crystallise and refine (or sometimes to reject) the broad preliminary ideas, which are continually tested by the evidence.

One classic example of seriously through-Time writing comes from the historian Edward Gibbon. The first volume of his Decline & Fall of the Roman Empire appeared in February 1776. The sixth and final volume followed in 1788. According to his autobiographical account, the gestation of his study dated from 1764. He was then sitting in the Forum at Rome, listening to Catholic monks singing vespers on the Capitol. The conjunction of ancient ruins and later religious commitments prompted his core theme, which controversially deplored the role of Christianity in the ending of Rome’s great empire. Hence the ‘present’ moments in which Gibbon researched, cogitated and wrote stretched over more than twenty years. When he penned the last words of the last volume, he recorded a sensation of joy – followed by melancholy that his massive project was done.6 (Its fame and the consequent controversies continue today; and form part of the history of history.)

1 For this basic point, see PJC, ‘People Sometimes Say “We Don’t Learn from the Past” – and Why that Statement is Completely Absurd’, BLOG/91 (July 2018), to which this BLOG/92 is a companion-piece.

2 See e.g. K. Jenkins, ReThinking History (1991); idem (ed.), The Postmodern History Reader (1997); C.G. Brown, Postmodernism for Historians (Harlow, 2005); A. Munslow, The Future of History (Basingstoke, 2010).

3 J. Appleby, L. Hunt and M. Jacob, Telling the Truth about History (New York, 1994); R. Evans, In Defence of History (1997); J. Tosh (ed.), Historians on History (Harlow, 2000); A. Brundage, Going to the Sources: A Guide to Historical Research and Writing (Hoboken, NJ., 2017).

4 H. Shudo, The Nanking Massacre: Fact versus Fiction – A Historian’s Quest for the Truth, transl. S. Shuppan (Tokyo, 2005); Vera Schwarcz, Bridge across Broken Time: Chinese and Jewish Cultural Memory (New Haven, 1998).

5 PJC, ‘Writing Through a Big Research Project, not Writing Up’, BLOG/60 (Dec.2015); PJC, ‘How I Write as a Historian’, BLOG/88 (April 2018).

6 R. Porter, Gibbon: Making History (1989); D.P. Womersley, Gibbon and the ‘Watchmen of the Holy City’: The Historian and his Reputation, 1776-1815 (Oxford, 2002).



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2018)

People sometimes say, dogmatically but absurdly: ‘We don’t learn from the Past’. Oh really? So what do humans learn from, then? We don’t learn from the Future, which has yet to unfold. We do learn in and from the Present. Yet every moment of ‘Now’ constitutes an infinitesimal micro-instant of an unfolding process. The Present is an unstable time-period, which is constantly morphing, nano-second by nano-second, into the Past. Humans don’t have time, in that split-second of ‘Now’, to comprehend and assimilate everything. As a result, we have, unavoidably, to learn from what has gone before: our own and others’ experiences, which are summed as everything before ‘Now’: the Past.

It’s worth reprising the status of those temporal categories. The Future, which has not yet unfolded, is not known or knowable in its entirety. That’s a definitional quality which springs from the unidirectional nature of Time. It does not mean that the Future is either entirely unknown or entirely unknowable. As an impending temporal state, it may beckon, suggest, portend. Humans are enabled to have considerable information and expectations about many significant aspects of the Future. For example, it’s clear from past experience that all living creatures will, sooner or later, die in their current corporeal form. We additionally know that tomorrow will come after today, because that is how we habitually define diurnal progression within unilinear Time. We also confidently expect that in the future two plus two will continue to equal four; and that all the corroborated laws of physics will still apply.

And we undertake calculations, based upon past data, which provide the basis for Future predictions or estimates. For example, actuarial tables, showing age-related life expectancy, indicate group probabilities, though not absolute certainties. Or, to take a different example, we know, from expert observation and calculation, that Halley’s Comet is forecast to return into sight from Earth in mid-2061. (Its last appearance was in 1986; its orbital period averages some seventy-six years.) Many, though not all, people alive today will be able to tell whether that astronomical prediction turns out to be correct or not. And there’s every likelihood that it will be.

Commemorating a successful prediction,
in the light of past experience:
a special token struck in South America in 2010 to celebrate
the predicted return to view from Planet Earth
of Halley’s Comet,
whose periodicity was first calculated by Edmond Halley (1656-1742)

Yet all this (and much more) useful information about the Future is, entirely unsurprisingly, drawn from past experience, observations and calculations. As a result, humans can use the Past to illuminate and to plan for the Future, without being able to foretell it with anything like total precision.

So how about learning from the Present? It’s live, immediate, encircling, inescapably ‘real’. We all learn in our own present times – and sometimes illumination may come in a flash of understanding. One example, as Biblically recounted, is the conversion of St Paul, who in his unregenerate days was named Saul: ‘And as he journeyed, he came near Damascus; and suddenly there shined round about him a light from heaven. And he fell to the earth, and heard a voice saying unto him, “Saul, Saul, why persecutest thou me?”’1 His eyes were temporarily blinded; but spiritually he was enlightened. Before then, Saul was one of the Christians’ chief persecutors, ‘breathing out threatening and slaughter’.2 Perhaps a psychologist might suggest that his intense hostility concealed some unexpressed fascination with Christianity. Nonetheless, there was no apparent preparation, so the ‘Damascene conversion’ which turned Saul into St Paul remains the classic expression of an instant change of heart. But then he had to rethink and grow into his new role, working with those he had been attempting to expunge.

A secular case of sudden illumination appears in the fiction of Jane Austen. In Emma (1815), the protagonist, a socially confident would-be match-maker, has remained in ignorance of her own heart. She encourages her young and humble protégé, Harriet Smith, to fancy herself in love. They enjoy the prospect of romance. Then Emma suddenly learns precisely who is the object of Harriet’s affections. The result is wonderfully described.3 Emma sits in silence for several moments, in a fixed attitude, contemplating the unpleasant news:

Why was it so much worse that Harriet should be in love with Mr Knightley, than with Frank Churchill? Why was the evil so dreadfully increased by Harriet’s having some hope of a return? It darted through her, with the speed of an arrow, that Mr Knightley must marry no one but herself!

I remember first reading this novel, as a teenager, when I was as surprised as Emma at this development. Since then, I’ve reread the story many times; and I can now see the prior clues which Austen scatters through the story to alert more worldly-wise readers that George Knightley and Emma Woodhouse are a socially and personally compatible couple, acting in concert long before they both (separately) realise their true feelings. It’s a well drawn example of people learning from the past whilst ‘wising up’ in a single moment. Emma then undertakes some mortifying retrospection as she gauges her own past errors and blindness. But she is capable of learning from experience. She does; and so, rather more artlessly, does Harriet. It’s a comedy of trial-and-error as the path to wisdom.

As those examples suggest, the relationship of learning with Time is in fact a very interesting and complex one. Humans learn in their own present moments. Yet the process of learning and education as a whole has to be a through-Time endeavour. A flash of illumination needs to be mentally consolidated and ‘owned’. Otherwise it is just one of those bright ideas which can come and as quickly go. Effective learning thus entails making oneself familiar with a subject by repetition, cogitation, debating, and lots of practice. Such through-Time application applies whether people are learning physical or intellectual skills or both. The role of perspiration, as well as inspiration, is the stuff of many mottoes: ‘practice makes perfect’; ‘if at first you don’t succeed, try and try again’; ‘stick at it’; ‘never stop learning’; ‘trudge another mile’; ‘learn from experience’.

Indeed, the entire corpus of knowledge and experience that humans have assembled over many generations is far too huge to be assimilated in an instant. (It’s actually too huge for any one individual to master. So we have to specialise and share).

So that brings the discussion back to the Past. It stretches back through Time and onwards until ‘Now’. Of course, we learn from it. Needless to say, it doesn’t follow that people always agree on messages from former times, or act wisely in the light of such information. Hence when people say: ‘We don’t learn from the Past’, they probably mean that it does not deliver one guiding message, on which everyone agrees. And that’s right. It doesn’t and there isn’t.

One further pertinent point: there are rumbling arguments around the question – is the Past alive or dead? (With a hostile implication in the sub-text that nothing can really be learned from a dead and vanished Past.) But that’s not a helpful binary. In other words, it’s a silly question. Some elements of the past have conclusively gone, while many others persist through time.4 To take just a few examples, the human genome was not invented this morning; human languages have evolved over countless generations; and the laws of physics apply throughout.

Above all, therefore, the integral meshing between Past and Present means that we, individual humans, have also come from the Past. It’s in us as well as, metaphorically speaking, behind us. Thinking of Time as running along a pathway or flowing like a river is a common human conception of temporality. Alternatively, the Past might be envisaged as ‘above’, ‘below’, ‘in front’, ‘behind’, or ‘nowhere specific’. The metaphor doesn’t really matter, as long as we realise that the Past pervades everything, including ourselves.

1 Holy Bible, Acts 9: 3-4.

2 Ibid, 9:1.

3 J. Austen, Emma: A Novel (1815), ed. R. Blythe (Harmondsworth, 1969), p. 398.

4 P.J. Corfield, ‘Is the Past Dead or Alive? And the Snares of Such Binary Questions’, BLOG/62 (Feb.2016).



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2017)

Speakers and writers constantly adopt and play with new words and usages, even while the deep grammatical structures of language evolve, if at all, only very slowly. I remember an English class at school, when I was aged about twelve or thirteen, in which we were challenged to invent new words. The winning neologism was ‘puridence’. It meant: by pure coincidence. Hence, one could say ‘I walked along the pavement, puridence I slipped and fell on a banana skin’. The winner was my class-mate Audrey Turner, who has probably forgotten. (I wonder whether anyone else remembers this moment?)


Fig.1 Slip Man Black Banana:
‘Puridence I slipped and fell on a banana skin’

Another new word, invented by my partner Tony Belton on 26 October 2013, is ‘wrongaplomb’. It refers to someone who is habitually in error but always with total aplomb. It’s a great word, which immediately summons to my mind the person for whom the term was invented. But again, I expect that Tony has also forgotten. (He has). New words arrive and are shed with great ease. This is one which came and went, except for the fact that I noted it down.

No wonder that dictionary compilers find it a struggle to keep abreast. The English language, as a Germanic tongue hybridised by its conjunction with Norman French, already has a huge vocabulary, to which additions are constantly made. One optimistic proposal in the Gentleman’s Magazine in 1788 hoped to keep a check upon the process in Britain, by establishing a person or committee to devise new words for every possible contingency.1 But real-life inventions and borrowings in all living languages were (and remain) far too frequent, spontaneous and diffuse for such a system to work. The Académie française (founded 1635), which is France’s official authority on the French language, knows very well the perennial tensions between established norms and innovations.2 The ‘Immortels’, as the 40 academicians are termed, have a tricky task as they try to decide for eternity. Consequently, a prudent convention ensures that the Académie’s rulings are advisory but not binding.

For my part, I love encountering new words and guessing whether they will survive or fail. In that spirit, I have invented three of my own. The first is ‘plurilogue’. I coined this term at an academic seminar in January 2016 and then put it into a BLOG.3 It refers to multi-lateral communications across space (not so difficult in these days of easy international messaging) and through time. In particular, it evokes the way that later generations of historians constantly debate with their precursors. ‘Dialogue’ doesn’t work to explain such communications. Dead historians can’t answer back. But ‘plurilogue’ covers the multiplicity of exchanges, between living historians, and with the legacy of ideas from earlier generations.

Will the term last? I think so. Having invented it, I then decided to google (a recently-arrived verb). To my surprise, I discovered that there already is an on-line international journal of that name. It has been running since 2011. It features reviews in philosophy and political science. My initial response was to find the prior use annoying. On the other hand, that’s a selfish view. No one owns a language. Better to think that ‘plurilogue’ is a word whose time has come. Its multiple coinages are a sign of its relevance. Humans do communicate across time and space; and not just in dialogue. So ‘plurilogue’ has a tolerable chance of lasting, especially as it’s institutionalised in a journal title.

A second term that I coined and published in 2007 is ‘diachromesh’.4 It defines the way that humans (and everything in the cosmos for good measure) are integrally situated in an unfolding through-Time, also known as the very long term or ‘diachronic’. That latter word is itself relatively unusual. But it has some currency among historians and archaeologists.

The ‘diachronic’ is the counterpart of the ‘synchronic’ (the immediate fleeting moment). Hence my comment that: ‘the synchronic is always in the diachronic – in that every short-term moment contributes to a much longer term’. Equally, the conjunction operates the other way round. ‘The diachronic is always in the synchronic – in that long-term frameworks always inform the passing moment as well’.5 Therefore it follows that, just as we can refer to synchromesh gear changes, operating together in a single moment of time, so it’s relevant to think of diachromesh, effortlessly meshing each single moment into the very long-term.6

So far so good. Is diachromesh liable to last? I can’t find a journal with that name. However, the word is in circulation. Google it and see. The references are few and far between. But! For example, in an essay on the evolution of the urban high street, architectural analyst Sam Griffiths writes: ‘The spatial configuration of the grid is reticulated in space and time, a materialisation of Corfield’s (2007) “diachromesh”.’7


Fig.3 Guildhall Clock on Guildford High Street, marking each synchronic moment since 1683 in an urban high street, diachromeshed within its own space and time.

Lastly, I also offered the word ‘trialectics’ in 2007. Instead of cosmic history as composed of binary forces, I envisage a dynamic threefold process of continuity (persistence), gradual change (momentum) and macro-change (turbulence).8 For me, these interlocking dimensions are as integral to Time as are the standard three dimensions of Space.

Be that as it may, I was then staggered to find that the term had a pre-history, of which I was hitherto oblivious. Try web searches for trialectics in logic; ecology; and spatial theories, such as Edward Soja’s planning concept of Thirdspace.9 Again, however, it would seem that this is a word whose time has come. The fact that ‘trialectics’ is subject to a range of nuanced meanings is not a particular problem, since that happens to so many words. The core of the idea is to discard the binary of dialectics. Enough of either/or. Of point/counter-point; or thesis/antithesis. Instead, there are triple dimensions in play.

Coining new words is part of the trialectical processes that keep languages going through time. They rely upon deep continuities, whilst experiencing gradual changes – and, at the same time, facing/absorbing/rejecting the shock of the new. Luckily there is already a name for the grand outcome of this temporal mix of continuity/micro-change/macro-change. It’s called History.

1 S.I. Tucker, Protean Shape: A Study in Eighteenth-Century Vocabulary and Usage (1967), p. 104.


3 P.J. Corfield, ‘Does the Study of History “Progress” – and How does Plurilogue Help?’, BLOG/61 (Jan. 2016),

4 P.J. Corfield, Time and the Shape of History (2007), p. xv.

5 Ibid.

6 This assumption differs from that of a small minority of physicists and philosophers who view Time as broken, each moment sundered from the next. See e.g. J. Barbour, The End of Time: The Next Revolution in our Understanding of the Universe (1999). I might call this interpretation a case of ‘wrongaplomb’.

7 S. Griffiths, ‘The High Street as a Morphological Event’, in L. Vaughan (ed.), Suburban Urbanities: Suburbs and the Life of the High Street (2015), p. 45.

8 Corfield, Time and Shape of History, pp. 122-3, 211-16, 231, 248, 249. See also idem, ‘Time and the Historians in the Age of Relativity’, in A.C.T. Geppert and T. Kössler (eds), Obsession der Gegenwart: Zeit im 20. Jahrhundert/ Concepts of Time in the Twentieth Century (Geschichte und Gesellschaft: Sonderheft, 25, Göttingen, 2015), pp. 71-91; also available on




If citing, please kindly acknowledge copyright © Penelope J. Corfield (2017)

Well, why not? Why can’t we think about Space without Time? It’s been tried before. A persistent, though small, minority of philosophers and physicists deny the ‘reality’ of Time.1 True, they have not yet made much headway in winning the arguments. But it’s an intriguing challenge.

Space is so manifestly here and now. Look around at people, buildings, trees, clouds, the sun, the sky, the stars … And, after all, what is Time? There is no agreed definition from physicists. No simple (or even complex) formula to announce that T = whatever? Why can’t we just banish it? Think of the advantages. No Time … so no hurry to finish an essay to a temporal deadline which does not ‘really’ exist. No Time … so no need to worry about getting older as the years unfold in a temporal sequence which isn’t ‘really’ happening. In the 1980s and 1990s – a time of intellectual doubt in some Western left-leaning philosophical circles – a determined onslaught upon the concept of Time was attempted by Jacques Derrida (1930-2004). He became the high-priest of temporal rejectionism. His cause could be registered somewhere under the postmodernist banner, since postmodernist thought was very hostile to the idea of history as a subject of study, viewing it as endlessly malleable and subjective. That attitude was close to Derrida’s attitude to temporality, although not all postmodernist thinkers endorsed Derrida’s theories.2 His brand of ultra-subjective linguistic analysis, termed ‘Deconstruction’, sounded, as dramatist Yasmina Reza jokes in Art,3 as though it was a tough technique straight out of an engineering manual. In fact, it allowed for an endless play of subjective meanings.

For Derrida, Time was/is a purely ‘metaphysical’ concept – and he clearly did not intend that description as a compliment. Instead, he evoked an atemporal spatiality, named khōra (borrowing a term from Plato). This timeless state, which pervades the cosmos, is supposed to act both as a receptor and as a germinator of meanings. It is an eternal Present, into which all apparent temporality is absorbed.4 Any interim thoughts or feelings about Time on the part of humans would relate purely to a subjective illusion. Its meanings would, of course, have validity for them, but not necessarily for others.

So how should we think of this all-encompassing khōra? What would Space be like without Time? When asked in 1986, Derrida boldly sketched an image of khōra as a sort of sieve-like receptacle (see Fig.1).5 It was physical and tangible. Yet it was also intended to be fluid and open. Thus the receptacle would simultaneously catch, make and filter all the meanings of the world. The following extract from an explanatory letter by Derrida by no means recounts the full complexity of the concept, but gives some of the flavour:6

I propose then […] a gilded metallic object (there is gold in the passage from [Plato’s] Timaeus on the khōra […]), to be planted obliquely in the earth. Neither vertical, nor horizontal, an extremely solid frame that would resemble at once a web, a sieve, or a grill (grid) and a stringed musical instrument (piano, harp, lyre?): strings, stringed instrument, vocal chord, etc. As a grill, grid, etc., it would have a certain relationship with the filter (a telescope, or a photographic acid bath, or a machine, which has fallen from the sky, having photographed or X-rayed – filtered – an aerial view). …

Fig. 1 (L) Derrida’s 1986 sketch of Spatiality without Time, also (R) rendered more schematically
© Centre Canadien d’Architecture/
Canadian Centre for Architecture, Montreal.

In 1987, the cerebral American architect Peter Eisenman (1932- ), whose stark works are often described as ‘deconstructive’, launched into dialogue with Derrida. They discussed giving architectural specificity to Derrida’s khōra in a public garden in Paris.8 One cannot but admire Eisenman’s daring, given the nebulousness of the key concept. Anyway, the plan (see Fig. 2) was not realised. Perhaps there was, after all, something too metaphysical in Derrida’s own vision. Moreover, the installation, if erected, would have soon shown signs of ageing: losing its gilt, weathering, acquiring moss as well as perhaps graffiti – in other words, exhibiting the handiwork of the allegedly banished Time.

Fig.2 Model of Choral Works by Peter Eisenman
© Eisenman Architects. New York

So the saga took seriously the idea of banishing Time but couldn’t do it. The very words, which Derrida enjoyed deconstructing into fragmentary components, can surely convey multiple potential messages. Yet they do so in consecutive sequences, whether spoken or written, which unfold their meanings through Time.

In fact, ever since Einstein’s conceptual break-through with his theories of Relativity, we should be thinking about Time and Space as integrally linked in one continuum. Hermann Minkowski, Einstein’s intellectual ally and former tutor, made that clear: ‘Henceforth Space by itself, and Time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality’. In practice, it’s taken the world one hundred years post-Einstein to internalise the view that propositions about Time refer to Space and vice versa. Thus had Derrida managed to abolish temporality, he would have abolished spatiality along with it. It also means that scientists should not be seeking a formula for Time alone but rather for Space-Time: S-T = whatever?

Lastly, if we do want a physical monument to either Space or Time, there’s no need for a special trip to Paris. We need only look around us. The unfolding Space-Time, in which we all live, looks exactly like the entire cosmos, or, in a detailed segment of the whole, like our local home: Planet Earth.

Fig.3 View of Planet Earth from Space

1 For anti-Time, see J. Barbour, The End of Time: The Next Revolution in Our Understanding of the Universe (1999), esp. pp. 324-5. And the reverse in R. Healey, ‘Can Physics Coherently Deny the Reality of Time?’ in C. Callender (ed.), Time, Reality and Experience (Cambridge, 2002), pp. 293-316.

2 B. Stocker, Derrida on Deconstruction (2006); A. Weiner and S.M. Wortham (eds), Encountering Derrida: Legacies and Futures of Deconstruction (2007).

3 Line of dialogue from play by Y. Reza, Art (1994).

4 D. Wood, The Deconstruction of Time (Evanstown, Ill., 2001), pp. 260-1, 269, 270-3; J. Hodge, Derrida on Time (2007); pp. ix-x, 196-203, 205-6, 213-14.

5 R. Wilken, ‘Diagrammatology’, Electronic Book Review, 2007-05-09 (2007):

6 Letter from Derrida to Peter Eisenman, 30 May 1986, as cited in N. Leach (ed.), Rethinking Architecture: A Reader in Cultural Theory (1997), pp. 342-3. See also for formal diagram based on Derrida’s sketch, G. Bennington and J. Derrida, Jacques Derrida (1993), p. 406.

7 A.E. Taylor, A Commentary on Plato’s Timaeus (Oxford, 1928).

8 J. Derrida and P. Eisenman, Chora L Works, ed. J. Kipnis and T. Leeser (New York, 1997).

9 Cited in P.J. Corfield, Time and the Shape of History (2007), p. 9.



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2016)

How will history interpret the views of millions of Tory voters who voted Leave in the 2016 referendum on the EU? It’s a good question that merits further attention. Since June, many commentators have defined the motivations of the Labour supporters who voted Leave – 37 per cent of all those who voted Labour in 2015¹ – as an angry rejection of the status quo by the socially and economically ‘left behind’. These electors have justified concerns about the impact of globalisation in eroding traditional industries and of immigration in undercutting working-class earnings. It’s a perception specifically acknowledged by the new PM Theresa May. At the Conservative Party Conference on 5 October 2016 she promised to remedy past injustices with the following words: ‘That means tackling unfairness and injustice, and shifting the balance of Britain decisively in favour of ordinary working-class people’.2

It’s a significant political ambition, albeit complicated somewhat by the fact that a majority of Labour voters in 2015 (63%) actually voted for Remain. May was clearly trying to shift the post-Referendum Conservative Party closer to the centre ground. And it’s a long time since any front-line British political leader spoke so plainly about social class, let alone about the workers.

But Theresa May’s pledge strangely omits to mention the rebellious Tory Leavers. After all, the majority of the national vote against the EU in 2016 came from the 58% of voters who had voted Conservative in the General Election of 2015. They voted for Leave in opposition to their then party leader and his official party policy. In the aftermath of the Referendum, many known Labour supporters, such as myself, were roundly scolded by pro-EU friends for the Labour Party’s alleged ‘failure’ to deliver the vote for Remain. But surely such wrath should have been directed even more urgently to Conservative supporters?

Either way, the Referendum vote made clear once again a basic truth that all door-step canvassers quickly discover. Electors are not so easily led. They don’t do just what their leaders or party activists tell them. Politics would be much easier (from the point of view of Westminster politicians) if they did. That brute reality was discovered all over again by David Cameron in June 2016. In simple party-political terms, the greatest ‘failure’ to deliver was indubitably that of the Conservatives. Cameron could possibly have stayed as PM had his own side remained united, even if defeated. But he quit politics, because he lost to the votes of very many Conservative rank-and-file, in alliance with UKIP and a section of Labour voters. It was ultimately the scale of grass-roots Tory hostility which killed both his career and his reputation as a lucky ‘winner’ on whom fortune smiles.

Divisions within political parties are far from new. Schematically considered, Labour in the twentieth century drew ideas, activists and votes from reform-minded voters from the professional middle class and skilled working class.3 That alliance is now seriously frayed, as is well known.

So what about the Conservatives? Their inner tensions are also hard to escape. They are already the stuff of debates in A-level Politics courses. Tory divisions are typically seen as a gulf between neo-liberal ‘modernisers’ (Cameron and Co) and ‘traditionalist’ Tory paternalists (anti-EU backbenchers). For a while, especially in the 1980s, there were also a number of self-made men (and a few women) from working-class backgrounds, who agreed politically with the ‘modernisers’, even if socially they were not fully accepted by them. It remains unclear, however, why such divisions emerged in the first place and then proved too ingrained for party discipline to eradicate.

Viewed broadly and schematically, the Conservatives in the twentieth century can be seen as a party drawing ideas, leadership and activists from an alliance of aristocrats/plutocrats with middle-class supporters, especially among the commercial middle class – all being buttressed by the long-time endorsement of a considerable, though variable, working-class vote. Common enemies, to weld these strands together, appear in the form of ‘socialism’, high taxes, and excessive state regulation.

Today, the upper-class component of Toryism typically features a number of socially grand individuals from landed and titled backgrounds. David Cameron, who is a 5th cousin of the Queen, seems a classic example. However, he also has a cosmopolitan banking and commercial ancestry, making him a plutocrat as much as an aristocrat. In that, he is characteristic of the big international financial and business interests, which are generally well served by Conservative governments. However, appeals and warnings from the political and economic establishment cut no ice with many ‘ordinary’ Tory members.

Why so? There’s a widening gap between the very wealthy and the rest. The Conservative Leave vote was predominantly based in rural and provincial England and Wales. (Scotland and Northern Ireland have different agendas, reflecting their different histories.) The farming communities were vocally hostile to regulation from Brussels. And, above all, the middle-aged and older middle-class voters in England’s many small and medium-sized towns were adamantly opposed to the EU and, implicitly, to recent trends in the nation’s own economic affairs.

Tory Leavers tend to be elderly conservatives with a small ‘c’ as well as a large ‘C’. They have a strong sense of English patriotism, fostered by war-time memories and postwar 1950s culture. They may not be in dire financial straits. But they did not prosper notably in the pre-crisis banking boom. And now the commercial middle classes, typified by shopkeepers and small businessmen, do not like hollowed-out town centres, where shops are closed or closing. They don’t like small businesses collapsing through competition from discount supermarkets or on-line sales. They regret the winnowing of local post-offices, pubs, and (in the case of village residents) rural bus services. They don’t like the loss of small-town status in the shadow of expanding metropolitan centres. They don’t like bankers and they hate large corporate pay bonuses, which continue in times of poor performance as well as in booms. Along with everyone else, they deplore the super-rich tax-avoiders, whether institutional or individual.

Plus, there is the issue of immigration, which puts a personal face on impersonal global trends of mobile capital and labour. Tory-Leavers are worried about the scale of recent immigration into Britain (though tolerant of Britons emigrating to foreign climes). It is true that many middle-class families benefit from the cheap food and services (notably within the National Health Service) provided by abundant labour. But sincere fears are expressed that too many ‘foreigners’ will change the nation’s character as well as increase demand for social welfare, which middle-class tax-payers have to fund.7

A proportion of Tory Leavers may be outright ethnicist (racist). Some may hate or reject those who look and sound different. But many Leavers are personally tolerant – and indeed a proportion of Tory Leavers are themselves descendants of immigrant families. They depict the problem as one of numbers and of social disruption rather than of ethnic origin per se.

Theresa May represents these Tory-Leavers far more easily than David Cameron ever did. She is the meritocratic daughter of a middle-ranking Anglican clergyman, who came from an upwardly mobile family of carpenters and builders. Some of her female ancestors worked as servants (not very surprisingly, since domestic service was a major source of employment for unmarried young women in the prewar economy).8 As a result, her family background means that she can say that she ‘feels the pain’ of her party activists with tolerable plausibility.

Nevertheless, May won’t find it easy to respond simultaneously to all these Leave grievances. To help the working class in the North-East and South Wales, she will need lots more state expenditure, especially when EU subsidies are ended. Yet middle-class voters are not going to like that. They are stalwart citizens who do pay their taxes, if without great enthusiasm. They rightly resent the super-rich individuals and international businesses whose tax avoidance schemes (whether legal, borderline legal, or illegal) result in an increased tax burden for the rest. But it will take considerable time and massive concerted action from governments around the world to get seriously to grips with that problem. In the meantime, there remain too many contradictory grievances in need of relief at home.

Overall, the Tory-Leavers’ general disillusionment with the British economic and political establishment indicates how far the global march of inequality is not only widening the chronic gulf between super-rich and poor but is also producing a sense of alienation between the super-rich and the middle strata of society. That’s historically new – and challenging both for the Conservative Party in particular and for British society in general. Among those feeling excluded, the mood is one of resentment, matched with defiant pride. ‘Brussels’, with its inflated costs, trans-national rhetoric, and persistent ‘interference’ in British affairs, is the first enemy target for such passions. Little wonder that, across provincial England in June 2016, the battle-cry of ‘Let’s Take Back Control’ proved so appealing.

Fig.1 Slogan projected onto White Cliffs of Dover
by Vote Leave Cross-Party Campaign Group
(June 2016).

1 See


3 What’s in a name? In US politics, the skilled and unskilled workers who broadly constitute this very large section of society are known as ‘middle class’, via a process of language inflation.

4 See A. Windscheffel, Popular Conservatism in Imperial London, 1868-1906 (Woodbridge, 2007); and M. Pugh, ‘Popular Conservatism in Britain: Continuity and Change, 1880-1987’, Journal of British Studies, 27 (1988), pp. 254-82.

5 Queen Elizabeth II is descended from the Duke of Kent, the younger brother of monarchs George IV and William IV. William IV had no legitimate offspring but his sixth illegitimate child (with the celebrated actor Dorothea Jordan) was an ancestor of Enid Agnes Maud Levita, David Cameron’s paternal grandmother.

6 One of Cameron’s great-great-grandfathers was Emile Levita, a German Jewish financier and banker, who became a British citizen in 1871. Another great-great-grandfather, Alexander Geddes, made a fortune in the Chicago grain trade in the 1880s.

7 This sort of issue encouraged a proportion of Conservative activists to join the United Kingdom Independence Party (UKIP), which drew support from both Left and Right.




To download Monthly Blog 71 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2016)

Talking of taking a long time, it took centuries for women to break the grip of traditional patriarchies. How did women manage it? In a nutshell, the historical answer was (is) that literacy provided the key, education was the long-term enabler, and the power of persuasion, exercised by both men and women, slowly turned that key.

But let’s step back for a moment to consider why the campaign was a slow one. The answer was that it was combating profound cultural traditions. There was not one single model for the rule of men. Instead, there were countless variants of male predominance which were taken absolutely for granted. The relative subordination of women seemed to be firmly established by history, economics, family relationships, biology, theology, and state power. How to break through such a combination?

The first answer, historically, was not by attacking men. That was both bad tactics and bad ideology. It raised men’s hackles, lost support for the women’s cause, and drove a wedge between fellow-humans. Thus, while there has been (is still) much male misogyny or entrenched prejudice against women, any rival strand of female misandry or systematic hostility to men has always been much weaker as a cultural tradition. It lacks the force of affronted majesty which is still expressed in contemporary misogyny, as in anonymous comments on social media.

Certainly, for many ‘lords of creation’, who espoused traditional views, the first counter-claims on behalf of women came as a deep shock. The immediate reaction was incredulous laughter. Women who spoke out on behalf of women’s rights were caricatured as bitter, frustrated old maids. A further male response was to conjure up images of the ‘vagina dentata’ – the toothed vagina of mythology. It hinted at fear of sex and/or castration anxiety. And it certainly dashed women from any maternal pedestal: their nurturing breasts being negatived by the biting fanny.

Pablo Picasso, Femme (1930).

Accordingly, one hostile male counter-attack was to denounce feminists as no more than envious man-haters. If feminists then resisted that identification, they were pushed onto the defensive. And any denials were taken as further proof of their cunningly hidden hostility.

Historically, however, the campaigns for women’s rights were rarely presented as anti-men in intention or actuality. After all, a considerable number of men were feminists from the start, just as a certain proportion of women, as well as men, were opposed. Such complications can be seen in the suffrage campaigns of the later Victorian and Edwardian periods. Active alongside leading suffragettes were men like George Lansbury, who in 1912 resigned as Labour MP for Bow & Bromley, to stand in a by-election on a platform of votes for women. (He lost to an opponent whose slogan was ‘No Petticoat Government’.)

Meanwhile, prominent among the opponents of the suffragettes were ladies like the educational reformer Mary Augusta Ward, who wrote novels under her married name as Mrs Humphry Ward.1 She chaired the Women’s National Anti-Suffrage League (1908-10), before it amalgamated with the Men’s National League. Yet Ward did at least consider that local government was not beyond the scope of female participation.

Such intricate cross-currents explain why the process of change was historically slow and uneven. Women in fact glided into public view, initially under the radar, through the mechanism of female literacy and then through women’s writings. In the late sixteenth century, English girls first began to take up their pens in some numbers. In well-to-do households, they learned from their brothers’ tutors or from their fathers. Protestant teachings particularly favoured the spread of basic literacy, so that true Christians could read and study the Bible, which had just been translated into the vernacular. Indeed, as Eales notes, the wives and daughters of clergymen were amongst England’s first cohorts of literary ladies.2 Their achievements were not seen as revolutionary (except in the eyes of a few nervous conservatives). Education, it was believed, would make these women better wives and mothers, as well as better Christians. They were not campaigning for the vote. But they were exercising their God-given brainpower.

Young ladies in an eighteenth-century library, being instructed by a demure governess, under a bust of Sappho – a legendary symbol of female literary creativity.

As time elapsed, however, the diffusion of female literacy proved to be the thin end of a large wedge. Girls did indeed have brainpower – in some cases exceeding that of their brothers. Why therefore should they not have access to regular education? Given that the value of Reason was becoming ever more culturally and philosophically stressed, it seemed wise for society to utilise all its resources. That indeed was the punchiest argument later used by the feminist John Stuart Mill in his celebrated essay on The Subjection of Women (1869). Fully educating the female half of the population would have the effect, he explained, of ‘doubling the mass of mental faculties available for the higher service of humanity’. Not only society collectively but also women and men individually would gain immeasurably by accessing fresh intellectual capital.3

Practical reasoning had already become appreciated at the level of the household. Throughout the eighteenth century, more and more young women were being instructed in basic literacy skills.4 These were useful as well as polite accomplishments. One anonymous text in 1739, in the name of ‘Sophia’ [the spirit of Reason], coolly drew some logical conclusions. In an urbanising and commercialising society, work was decreasingly dependent upon brute force – and increasingly reliant upon brainpower. Hence there was/is no reason why women, with the power of Reason, should not contribute alongside men. Why should there not be female lawyers, judges, doctors, scientists, University teachers, Mayors, magistrates, politicians – or even army generals and admirals?5 After all, physical strength had long ceased to be the prime qualification for military leadership. Indeed, mere force conferred no basis for either moral or political superiority. ‘Otherwise brutes would deserve pre-eminence’.6

Title-page of Woman not Inferior to Man (1739).
There was no inevitable chain of historical progression. But, once women took up the pen, there slowly followed successive campaigns for female education, female access to the professions, female access to the franchise, female access to boardrooms, as well as (still continuing) full female participation in government, and (on the horizon) access to the highest echelons of the churches and armed forces. In the very long run, the thin wedge is working. Nonetheless, it remains wise for feminists of all stripes to argue their case with sweet reason, as there are still dark fears to allay.

1 B. Harrison, Separate Spheres: The Opposition to Women’s Suffrage in Britain (1978; 2013); J. Sutherland, Mrs Humphry Ward: Eminent Victorian, Pre-Eminent Edwardian (Oxford, 1990).

2 J. Eales, ‘Female Literacy and the Social Identity of the Clergy Family in the Seventeenth Century’, Archaeologia Cantiana, 133 (2013), pp. 67-81.

3 J.S. Mill, The Subjection of Women (1869; in Everyman edn, 1929), pp. 298-9.

4 By 1801, all women in Britain’s upper and middle classes were literate, and literacy was also spreading amongst lower-class women, especially in the growing towns.

5 Anon., Woman not Inferior to Man, by Sophia, a Person of Quality (1739), pp. 36, 38, 48.

6 Ibid., p. 51.



To download Monthly Blog 65 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2016)

Is the past dead or alive? Posing such a binary question insists upon choice; but the options constitute a false dichotomy. Nonetheless, the death of the past is often proclaimed. This BLOG examines the arguments for and against; and highlights the snares of binary thinking.

Firstly, the past, dead or alive? The ‘death of the past’ is a common, possibly reassuring notion. If you have forgotten the History dates learned at school, then don’t worry, you are in good company. Most people have. In the USA there is a sad debate entitled: ‘Is History history?’ There is at least one book entitled The Death of the Past.1 In fact, that particular study laments that people forget far too much. Nonetheless, emphatic phrases circulate in popular culture. ‘Never look back. The past is dead and buried’. ‘The bad (or good) Old Days have gone’. Something or other is irrevocably past – rendering it ‘as dead as the proverbial dodo’, which was last reliably sighted in Mauritius in 1662.

Illus. 1: The Dodo by F.W. Frohawk,
from L.W. Rothschild’s Extinct Birds (1907).

At the same time, however, there’s a rival strand of thought, which asserts that the past is very much alive. The most famous and often quoted claim to that effect comes from William Faulkner, writing in the American Deep South in 1951, where memories and resentments from Civil War times have far from disappeared. ‘The past is never dead’, he wrote. ‘It’s not even past’.2

Another strong statement to that effect came from Karl Marx in 1851/2. He thundered at the unpastness of the past. Revolutionary activism was constantly hampered by old thinking and old ideas: ‘The tradition of all the dead generations weighs like a nightmare upon the brain of the living’.3

Opposition to old thinking was accordingly expressed by many later Communist leaders. The ‘new’ was good and revolutionary. Antiquity was the dangerous foe. Chairman Mao’s campaign against the ‘Four Olds’ – Old Customs, Old Culture, Old Habits, Old Ideas – was a striking example, at the time of his intended Cultural Revolution in 1966.4 Yet the fact that various traditional aspects of Chinese life still persist today indicates the difficulty of uprooting very deeply embedded social attitudes, even when using the resources of a totalitarian state.

For historians, meanwhile, it’s best to reject over-simplified choices. Many things in the past (both material and intangible) have died or come to an end. Yet far from everything has shared the same fate. Ideas, languages, cultures, religions persist through Time, incorporating changes alongside continuities; biological traits evolve over immensely long periods; the structure of the cosmos unfolds over many billennia (an emergent neologism) within a measurable framework.

Hence there’s nothing like a rigid divide between past and present. They are separated by no more than a nano-second between NOW and the immediate nano-second before NOW, so that legacies/contributions from the past infuse every moment as it is lived.

Secondly, thinking in terms of binary alternatives: Having to choose between bad/old/dead versus good/new/alive is a classic example of binary thought. It is an approach commonly cultivated by activists, for example in revolutionary or apocalyptic religious movements. Are you with the great cause or against it? Such attitudes can be psychologically powerful in binding groups together.

Binaries can also be useful when assessing the strength and weakness of an argument or a proposed course of action. As bimanual creatures, we can consider the pros and cons, using the formula ‘on the one hand’ … ‘on the other hand’. Indeed, when making a case, it’s always helpful to understand the arguments against your own. That way, when facing a fundamental critic, you are prepared. (Binary options also provide a good way to bully a witness on oath: Come on, answer, Yes or No! Yet the truthful reply might be ‘Somewhat’ or ‘Maybe’.)

It’s even been argued that some human societies are intrinsically binary in their deepest thought patterns. Russian culture is one that has been historically so identified.5 Hence binary switching may have helped to familiarise the population with the country’s dramatic twentieth-century lurches from Tsarism to Communism and, later, back to a different form of oligarchic Democracy. (Do today’s Russians agree; or perhaps, agree somewhat?)

Either way, there is no doubt that binary thought, like binary notation, has its uses. But studying History requires the capacity to grapple with complexity alongside simplicity. Is the past dead or alive? The answer is both and neither. It falls within the embrace of ever-stable ever-fluid Time, which lives and dies simultaneously.

1 J.H. Plumb, The Death of the Past (1969; reissued Harmondsworth, 1973; Basingstoke, 2003).

2 W. Faulkner, Requiem for a Nun (1951), Act 1, sc. 3.

3 K. Marx, The Eighteenth Brumaire of Louis Napoleon (1851/2), in D. McClellan (ed.), Karl Marx: Selected Writings (Oxford, 1977), p. 300.

4 P. Clark, The Chinese Cultural Revolution: A History (Cambridge, 2008); M. Gao, The Battle for China’s Past: Mao and the Cultural Revolution (2008).

5 Y.M. Lotman and B.A. Uspensky, ‘Binary Models in the Dynamics of Russian Culture’, in A.D. and A.S. Nakhimovsky (eds), The Semiotics of Russian Cultural History (Ithaca, NY, 1985), pp. 30-66.



To download Monthly Blog 62 please click here