If citing, please kindly acknowledge copyright © Penelope J. Corfield (2015)

The ‘temporal turn’ is a grand phrase to name the current political and intellectual return to interpreting things explicitly within the very long term, otherwise known as history. It’s a new trend, which is gathering pace – and it’s an excellent one too. The name is borrowed from a phrase popularised by the American philosopher Richard Rorty in 1967. He then wrote of the ‘linguistic turn’ in twentieth-century philosophy, when fresh attention was paid to language as a factor significantly influencing or even determining meanings, rather than just conveying thought.1

Since then, an array of other analytical ‘turns’ have been announced. But none have had the same resonance – until now. The serious study of history and historical trends had not, of course, disappeared. So the ‘temporal turn’ is not news to historians. But let’s hope that it becomes a confirmed and sustained development. The ‘linguistic turn’ certainly had its merits. Much was learned about the power of language to frame and convey meaning at any given point in time. Yet the ‘linguistic turn’ was eventually overdone. Analysis of the synchronic moment was excessively privileged over the study of long-term (diachronic) history.

Such an outcome, however, proved detrimental to both perspectives, which are intertwined: ‘The synchronic is always in the diachronic’, just as ‘the diachronic is always in the synchronic’. Life is not composed just of self-contained instantaneous moments. They are linked seamlessly together. Just as well-functioning gears mesh together seamlessly in the present in synchro-mesh, so the past meshes seamlessly with the present and future in diachro-mesh. As a result, it’s really not possible to divorce analytically ‘after’ from ‘before’. While some elements of the past can be properly defined as dead and gone, plenty of others persist through time.

One example of lengthy but not eternal continuity is the human genome. It’s analysed by geneticists as composed of three billion chemical bases of DNA (deoxyribonucleic acid), which contain the biological instructions to make a human. As such, the human genome frames our collective and individual genetic make-up, providing a core pattern plus individual variability. And its longevity matches that of our species.2

Living History: The Human Genome – DNA split.


Recalling the genome’s long past and immediate present provides a reminder that studying the past (whether via biology or history or any other longitudinal subject) does not require a dualistic choice between either change or continuity. They are intertwined, like History and Geography, or Time and Space.

So the ‘temporal turn’ is welcome. And there are multiplying signs of its arrival, across many disciplines. Within the study of history, micro-histories are being balanced by new macro-histories. And the macro- can be very elongated indeed. Some diachronic studies now start with the origins of human society, others with the origins of Planet Earth, while others start with the origins of the cosmos.3

In practice, it’s far from easy to research and to teach on such a wide canvas, but the International Big History Association (founded in 2010) advises and encourages practitioners. Some of us (myself included) smile slightly at its terminology, which has a hint of Toad of Toad Hall: ‘My history is bigger than your history’. Other terms of art are ‘Deep Time’ or, with thanks to Fernand Braudel, the ‘longue durée’. But the name is not the most important point. The history of the long-term is indeed big; and it’s good that it’s returning to a range of new agendas, in everything from zoology to art.4

Lastly, why is this trend happening now? There are three big reasons, which, separately, would have had great impact – and in conjunction are commanding. But it took a combination of macro-crises to overcome ‘presentism’ and the quest for instant gratification, which is strongly entrenched in consumer culture. Nonetheless, external circumstances are forcing a rethink. One inescapable factor is climate change, especially in the context of demographic pressure and ecological degradation. This great topic for our time requires an understanding of past and present science, future prognostications, and current politics. Historians can contribute by studying how past communities have coped with ecological changes, both for good and for ill.5 Accordingly, David Armitage and Jo Guldi have just produced a stirring trumpet-blast, calling for historians to be included in all long-term planning teams organised by governments and international bodies.6

A second factor is the heightened global confrontation over a range of political and religious issues in the twenty-first century. The 2001 attack upon New York’s Twin Towers came as a huge surprise as well as a disaster. It triggered new calls, for eminently practical reasons, to comprehend the roots of conflict and the historic prospects of any countervailing forces of cooperation. Instant power-plays without a diachronic perspective have failed badly. Thus a hubristic assertion in 2004 by a senior American policy-maker that ‘We’re an empire now and, when we act, we create our own reality’, proved to be dangerously wrong.7 History has a habit of biting back – and it is still biting all the protagonists in numerous conflicts around the globe. These all call for diachronic assessment. They haven’t happened out of the blue. And they can’t be addressed cluelessly.

Thirdly, fresh thought is required in response to the unexpected 2008/9 global economic recession, whose ramifications are still unfolding. Knowledge of synchronic structures, networks, and meanings will explain only so much. The origins, treatment and prognosis of the crisis need analysis in long-term context. A sign of the times can be seen in campaigns by some economists and many students to revamp the study of economics. That subject has since the 1970s become highly technocratic, focused upon a neo-classical model, with a strictly quantitative methodology. It might be termed a structuralist or synchronic economics. Yet there are now calls to debate moral values as well as statistical assessments. And to re-incorporate the (wrongly) underrated insights of diachronic economic history.8

So the ‘temporal turn’ is very welcome. It is quietly killing the anti-history philosophy of post-modernism, which flourished in the later twentieth century.9 At last, here is an intellectual trend which historians can welcome wholeheartedly.

1 R. Rorty (ed.), The Linguistic Turn: Recent Essays in Philosophical Method (Chicago, 1967).

2 P.J. Corfield, Time and the Shape of History (London, 2007), p. xv.

3 Examples among a burgeoning field include D. Christian, Maps of Time: An Introduction to Big History (Berkeley, Calif., 2004); and C.S. Brown, Big History: From the Big Bang to the Present (New York, 2007).

4 C. Ross, The Past is the Present; It’s the Future Too: The Temporal Turn in Contemporary Art (London, 2014).

5 See M. Levene et al. (eds), History at the End of the World? History, Climate Change and the Possibility of Closure (Penrith, 2010); or J.L. Brooke, Climate Change and the Course of Global History: A Rough History (New York, 2014).

6 J. Guldi and D. Armitage, The History Manifesto (Cambridge, 2014).

7 Attributed to Karl Rove, George Bush’s Deputy Chief of Staff (2004-7). See M. Danner, ‘Words in a Time of War: On Rhetoric, Truth and Power’, in A. Szántó (ed.), What Orwell Didn’t Know: Propaganda and the New Face of American Politics (New York, 2007), p. 17.

8 See e.g. D. North, The Economic Crisis and the Return of History (Oak Park, Mich., 2011); T. Piketty, Capital in the Twenty-First Century, transl. by A. Goldhammer (Cambridge, Mass., 2014), pp. 31-3, 573-7; J. Madrick, Seven Bad Ideas: How Mainstream Economists have Damaged America and the World (New York, 2014); and students’ calls for reform, led by Manchester University’s Post-Crash Economics Society.

9 For more, see P.J. Corfield, ‘History and the Temporal Turn: Returning to Causes, Effects and Diachronic Trends’, in J-F. Dunyach (ed.), Périodisations de l’histoire des mondes Britanniques: reflectures critiques (forthcoming Paris, 2015).



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

It really wasn’t done – for centuries. Women, respectable women especially, did not speak in public from public platforms. They do sometimes, anachronistically, in period films. So the script-writer of The Duchess (dir: Saul Dibb, 2008) decided that the famous eighteenth-century Duchess of Devonshire (played by Keira Knightley) should indicate her political commitment to the Whig reform cause by speaking at the public hustings for the 1784 Westminster election.

But the scene falls as flat as a pancake. That’s no doubt partly because it never happened, giving the script-writer no historical documentation from which to work. The film is good at revealing the extent to which, as an aristocratic woman in the public eye, the Duchess is constrained by her social position. And then suddenly, she appears on a public balcony in her furs and feathers, delivering an impassioned election speech in favour of democracy to the London masses. There’s no sensation. No shock. There’s not even an angry husband, ordering her to desist. [See Fig.1a]

However, the script-writer knows, from evidence discussed in other scenes, that the Duchess was heavily satirised for her political affiliations. In 1784 she undertook the much milder action of canvassing in the Westminster constituency. She was young, charming, rich, high-ranking and a leader of fashion. Yet even she could not get away with it. She was socially pilloried in graphic prints which accused her of lewdly selling kisses to brutish plebeians for votes (see Fig.1b). Not only did the Duchess never venture publicly into politics again, but nor did other high-born ladies. They stuck to behind-the-scenes roles as political hostesses – not without influence, but not in the censorious public eye.

Fig.1a (L) The Duchess of Devonshire as imagined (2008) on the Westminster hustings Fig.1b (R) The Duchess as satirised in 1784 for canvassing the Westminster electors, in a print entitled ‘A New Way to Secure a Majority’

The reasons for this self-effacement were deeply rooted in Christian tradition. Women were seen as domestic helpmeets. They were expected to be modest, docile and, in public, silent. After all, St Paul enjoined that: ‘Let your women keep silence in the churches; for it is not permitted unto them to speak. But they are commanded to be under obedience … And if they will learn anything, let them ask their husbands at home.’1 And he further explained: ‘I suffer not a woman to teach, nor to usurp authority over the man, but to be in silence. For Adam was first formed, then Eve.’2 Christian feminist scholars today debate St Paul’s own personal attitudes. But the point was not so much his original intention as the meanings internalised by his followers over time. Women, formed from ‘Adam’s rib’, were subordinate beings. Like children, they should be ‘seen but not heard’.

This social convention began to dissipate only slowly in the later nineteenth century, with the campaign for the female franchise. As a result, it is hard to find any major speeches by a British woman on a public platform (especially an outdoor public platform), before the twentieth century. Queen Elizabeth’s speech to her troops at Tilbury docks (August 1588) is the one great exception; and that famous event was legitimated not just by her royal status but by fears of imminent invasion at the time of the Armada.

Of course, there were daring women who did sometimes break with convention. Particularly in times of social tension and political upheaval, there was greater scope for direct action. It was not uncommon for women preachers, often from lower-class backgrounds, to emerge in radical religious movements, such as in the 1640s. If the spirit moved someone to ‘bear witness’, a sincere belief in divine calling could override the Pauline proscription. So early Methodism, which stressed the teachings of the heart, saw many women lay preachers playing an independent role in the 1780s and 1790s.3 One of them was Elizabeth Tomlinson. She became aunt by marriage to the novelist George Eliot, who later drew a highly sympathetic pen-portrait of a Methodist female evangelist in the form of Dinah Morris in Adam Bede (1859). However, the novel ends with Dinah’s withdrawal from public preaching. And the same happened in many real-life cases as nineteenth-century Methodism became more institutionalised and conservative.4

Nonetheless, radical religion and politics remained possible outlets for women speakers. John Wesley himself had expressed the view that treating women only as ‘agreeable playthings’ constituted ‘the deepest unkindness … horrid cruelty … mere Turkish barbarity’.5 By the later nineteenth century, with the spread of literacy and further education, increasing numbers of women began to reject the subordinate role. It was still notable, however, that a number of doughty feminists in the early days of the suffragette campaigns continued to express trepidation at speaking on public platforms. One who had no qualms was Charlotte Despard, shown in Fig.2 addressing a mass meeting in Trafalgar Square. She was, however, an exceptional person, emboldened not only by her Anglo-Irish upper-crust background but also, by the 1930s, by her venerable age, forceful personality and long political experience.6

Fig.2 Charlotte Despard at the age of 89, speaking at an anti-fascist rally in Trafalgar Square, 12 June 1933. Photo: James Jarché. © Daily Herald Archive, 1983-5236/11073

One reason for the continuing trepidation was that the art of public speaking does not depend solely on the nerve of the speaker. Successful oratory depends upon an unstated but very real reciprocity. The audience has to be prepared to listen and to respond. If those present are unwilling, then the result can be anything from hostile shouting, jeers, catcalls, obscenities, the throwing of missiles – or simply turning away. Social conventions, in other words, are policed not so much by law (though it may contribute) but by widely-shared conventional beliefs.

Before the twentieth century, the only example known to me of a real-life young woman who spoke publicly at a political rally occurred at the Norwich Guildhall in 1794. The orator was Amelia Alderson (later Opie), the daughter of a respected local physician and a social star among the radical intelligentsia. Her speech was reported in a private letter by a disapproving (if reluctantly admiring) older female witness, Sarah Scott.7 Scott herself was the author of Millennium Hall (1762), which advocated an elegant female-only community as a means of helping women to escape from domestic subordination. But even a proto-feminist like Scott disapproved of Alderson’s actions. Hence getting both men and women to accept female public speaking remains essential to achieve equality on the soap-box – and (a long-running good cause still not fully resolved today) in the pulpit. Down with biblical literalism! Speak up, everyone, and listen too!

1 Holy Bible, St Paul 1 Corinthians, 14: 34-35.

2 Holy Bible, 1 Timothy, 2: 12-13.

3 See D. Valenze, Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England (Princeton, 1985).

4 P.J. Corfield, Power and the Professions in Britain, 1700-1850 (1995), pp. 105-8.

5 See John Wesley’s Sermon 98: On Visiting the Sick (1786), sect. III, 7: ‘There is neither male nor female in Christ Jesus’.

6 For Charlotte Despard, née French (1844-1939), see M. Mulvihill, Charlotte Despard: A Biography (1989).

7 J. Spencer, ‘Introduction’, in Sarah Scott, Millennium Hall (1762), ed. J. Spencer (1986), pp. ix-x, citing R. Blunt (ed.), Mrs Montagu, ‘Queen of the Blues’: Her Letters and Friendships from 1762 to 1800 (1923), Vol. 2, p. 304.



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

Not everyone shakes hands. But those who do are expressing an egalitarian relationship. As a form of greeting, the handshake differs completely in meaning from the bow or curtsey, which display deference from the ‘lowly’ to those on ‘high’.1 In one Jane Austen novel, a fearlessly ‘modern’ young woman extends her hand to a young man at a crowded party. Of course, it is Marianne Dashwood, the embodiment of ‘sensibility’. She has just re-encountered the errant Willoughby, long after he has ended their unofficial courtship. Marianne immediately holds out her hand, claiming him as an intimate friend. But he avoids her gesture. Marianne then exclaims ‘in a voice of the greatest emotion: “Good God! Willoughby, what is the meaning of this? … Will you not shake hands with me?”’. He cannot avoid doing so, but drops her hand quickly. After a few short exchanges, Willoughby then leaves ‘with a slight bow’.2  He has dropped her. Their body language says it all.

There is a particular poignancy in this scene. In this era, men and women who were not related to one another would not ordinarily touch hands as a form of greeting. But, of course, lovers might do so. No wonder that a mere touch was so powerful when it was so rare. (And it retains its appeal today in romantic mythology and countless pop songs: I Wanna Hold Your Hand!)3  Shakespeare, as ever, knew the scene. Romeo understands the intimacy implied when he takes Juliet’s hand in a dance, as does she: ‘And palm to palm is holy palmers’ kiss’.4

Even more definitively, a couple would touch hands in a marriage ceremony (even allowing for the many varieties of ritual associated with weddings).5 The wording was clear. ‘Taking someone’s hand in marriage’ is an ultimate symbol of good faith, along with the exchange of rings, which remain visible on the hand. These are public signs of personal commitment. An earlier poetic expression also offered an endgame variant, in the form of a final handshake. Michael Drayton’s Sonnet LXI (1594), which starts ‘Since there’s no help, come let us kiss and part’, invites the parting lovers to: ‘shake hands for ever, cancel all our vows’.

At the same time, a close handshake also has a set of commercial connotations. When two traders agree upon a contract, they may indicate the same by a handshake. However unequal they may be in wealth and commercial status, for the purposes of the deal they are equals, both pledging to fulfil the bargain. It constitutes a ‘gentleman’s agreement’ – upheld by personal honour. The same etiquette applies in making a bet.

Hence reneging upon a wager or deal sealed with a personal handshake is viewed as particularly heinous. The loser may even litigate for redress. Today the American Sports World News reports rumours that Charles Wang, the majority owner of the New York Islanders ice-hockey team, is being sued for $10 million by hedge-fund manager Andrew Barroway. Wang’s crime? He had allegedly reneged on a handshake pact to sell his Islanders franchise to Barroway.6

Typically, a handshake is a brief and routine affair, usually but not invariably with the right hand. True, there are variants. The prolonged handshake plus a clasp of the recipient’s upper arm by the shaker’s other hand is a gesture of special warmth – stereotypically undertaken by gregarious American politicians.7

Or there is the Masonic handshake. It gives a secret signal, allowing members of a secret society to identify one another. Apparently, there are many variants of the Masonic handshake, denoting differences in rank within the organisation. That information is rather depressing, since the handshake is, in principle, egalitarian. Nonetheless, it shows the potential for stylistic variation, from the firm muscular grip to the fleeting touch-and-drop.

Variations in styles of shaking hands are here caricatured, as two gentlemen almost dance their mutual greetings (consulted 11 Oct. 2014).

Gradually, routine British styles of greeting began to incorporate the handshake. It was most common among civilian men of similar middle-class standing. By contrast, the toffs stuck with their traditional bowing and curtseying. Meanwhile, hand-shaking was rare among workers in ‘dirty’ trades and industries, because people in unavoidably grimy jobs usually tried to contain rather than to spread the dirt. The emblem of two clasped hands nonetheless appeared proudly on various trade union banners, as a pledge of solidarity.

The advent of the social handshake was thus not uniform across all periods and classes. But it could be found, between close male friends, in Britain from at least Shakespeare’s time. Yet its subsequent spread has taken a long time a-coming. For example, in 1828 the anonymous author of A Critique of the Follies and Vices of the Age was still expressing displeasure at the new popularity of the handshake, including between men and women.8

One reason for some snobbish hostility, among polite society in Britain, was the association of this custom with the republican USA, where its usage became increasingly common after American independence. There were also connotations of support for the hand-shaking citizens of republican France from 1793 onwards. English visitors to the USA like the novelist and social commentator Frances Trollope thus waxed somewhat critical of the local mores. In 1832, she deplored the habit of hand-shaking between both sexes and all classes (albeit excluding the non-free). For her, this form of greeting was too bodily intimate, especially as ‘the near approach of the gentleman [ironically] was always redolent of whiskey and tobacco’.9

Ultimately, however, the snobs were routed. Old-style bowing and curtseying has generally disappeared, although hat wearers may still doff their hats to ladies. However, the twentieth century also produced another twist in the tale. Just as the hand-shake was becoming quite widely adopted in Britain by the 1970s, it was suddenly challenged by a new custom, imported from overseas. It is the continental kiss, in the form of a light clasp of the upper arms and a peck on the cheek (or, for the physically fastidious, an air-kiss). Such a manoeuvre would give good scope to a later Marianne Dashwood, who might grip an errant Willoughby in order to kiss him warmly. Nonetheless, be warned: whatever the greeting style, body language always provides ways of signalling the rejection as well as the offering of friendship.

1 See P.J. Corfield, previous monthly BLOG 45 ‘Doffing One’s Hat’. And for fuller discussion, see PJC, ‘Dress for Deference & Dissent: Hats and the Decline of Hat Honour’, Costume: Journal of the Costume Society, 23 (1989), pp. 64-79; also transl. in K. Gerteis (ed.), Zum Wandel von Zeremoniell und Gesellschaftsritualen: Aufklärung, 6 (1991), pp. 5-18; and posted on PJC personal website as Pdf/8.

2  J. Austen, Sense and Sensibility (1st pub. London, 1811): chapter 28.

3 The Beatles (1963).

4  W. Shakespeare, Romeo and Juliet (written mid 1590s; 1597), Act 1, sc. 5. A palmer was a successful pilgrim, returning from the Holy Land bearing palms as a sign that the journey had been achieved.

5  A traditional ritual of ‘hand-fasting’, announcing a solemn public engagement, has also been updated for use today in pagan marriage ceremonies.

6 Sports World News on-line, 12 Aug. 2014, consulted 11 Oct. 2014.

7  See e.g. John Travolta’s film portrayal of a notably touchy-feely American presidential candidate, based upon Bill Clinton, in Primary Colors (dir. Mike Nichols, 1998).

8  Anon., Something New on Men and Manners: A Critique of the Follies and Vices of the Age … (Hailsham, Sussex, 1828), p. 174.

9 F. Trollope, Domestic Manners of the Americans (1832), ed. R. Mullen (Oxford, 1984), p. 83.



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

TV’s Pride and Prejudice (1995) provided many memorable images, not least Colin Firth as Mr Darcy diving into a pool to emerge reborn as a feeling, empathetic human being. This transformation gains extra impact when contrasted with the intense formality of his general deportment. When, after some months of absence, Darcy and Bingley re-enter the Bennet family home at Longbourn, they bow deeply in unison, whilst Mrs Bennet and all her daughters rise as one and bend their heads in synchronised response. Audiences may well sigh, admiringly or critically according to taste. What a contrast with our own casual manners. It satisfies a sense that the past must have been different – like a ‘foreign country’, in a much-cited phrase from L.P. Hartley.1

But did people in Georgian polite society actually greet each other like that on a day-to-day basis? There is good evidence for the required formality (and dullness) of Hanoverian court life on ceremonial occasions. A fashionable ball or high society dinner might also require exceptional courtesies. But ordinary life, even among the elite of Britain’s landed aristocrats and commercial plutocrats, was not lived strictly according to the etiquette books.

Instead, the eighteenth century saw an attenuation of the lavish old-style formalities, which were known as ‘hat honour’. In theory, men when meeting their social superiors made a deep bow, removing their headgear with a visible flourish. Gentlemen greeting a ‘lady’ would also remove their hats with a courteous nod. For women, the comparable requirement was the low curtsey from the ‘inferior’ to the ‘superior’. Those who held their heads highest (and remained hatted, in the case of men) were the more socially elevated, since lowering the head always signalled deference. This understanding underpins the custom of addressing monarchs as ‘Your Highness’.

Illustration 1 ‘The Hopes of the Family’ (1799) shows a young man being interviewed for University admission. A don presides, wearing his mortar board, whilst the nervous applicant and his eager father, an old-fashioned country gentleman, have both doffed their hats, which they carry under their arms. An undergraduate in his gown looks on nonchalantly, his hands in pockets. Yet he too remains bare-headed in the presence of a senior member of his College. Only the applicant’s mother, who is subject to the different rules of etiquette for women, covers her head with a rustic bonnet.


Illus 1: A gentle satire by Henry William Bunbury, entitled The Hopes of the Family (1799) – © The Wellcome Library.

In accordance with this etiquette, King Charles I on trial before Parliament in 1648 wore a high black hat throughout the proceedings. It was a signal that, as the head of state, he would not uncover for any lower authority. The answer of his republican opponents was radical. Charles I was found guilty of warfare against his own people, as a ‘tyrant, traitor and murderer’. He was decapitated, beheading the old power structure very literally and publicly.

After the Restoration of the monarchy in 1660, there was some return to the old formalities (or at least hopes of the same). For example, in October 1661 the naval official and MP Samuel Pepys recorded his displeasure at what he considered to be the undue pride of his manservant, who kept his hat on in the house.2 Pepys expected deference from his ‘inferiors’, whilst being ready to accord it to his own ‘superiors’. But it was not always easy to judge. In July 1663, Pepys worried that he might have offended the Duke of York, by not uncovering when the two men were walking in sight of each other in St James’s Park.3 It was a tricky decision. Failure to doff one’s hat when close at hand would be rude, yet uncovering from too far away would seem merely servile.

Over the very long term, however, all these formalities began to attenuate. With the advent of brick buildings and roaring coal-fires, the habitual wearing of hats indoors generally disappeared – mob-caps and night-caps excepted. And in public, the old gestures continued but in an attenuated form. With commercial growth came the advent of many people of middling status. It was hard for them to calculate the precise gradations of status between one individual and another. The old-style mannerisms were also too slow for a fast-moving and urbanising world.

As a result, between men the deep bow began to change into a nod of the head. The elaborate flourish of the hat gradually turned into a quick lifting or pulling. And the respectful long tug of the forelock, on the part of those too poor to have any headgear, turned into a briefer touch to the head.4

A notable example of the abbreviation of hat honour was the codification of the military salute. It was impractical for rank-and-file soldiers to remove their headgear whenever encountering their officers. On the other hand, military discipline required the respecting of ranks. The answer was a symbolic gesture. ‘Inferiors’ greeted their ‘superiors’ by touching the hand to the head. Different regiments evolved their own traditions. Only in 1917 (well into World War I) did the British army decide that all salutes should be given right-handedly.

Meanwhile, the female greeting in the form of a low curtsey, holding out the dress, also evolved into a briefer bob or half-curtsey. It was expected from all lower-status women when meeting ‘superiors’. But hat honour was confined to men. On public occasions, women retained their hats, bonnets and feathers. Even in church, they did not copy men in baring their heads but respected St Paul’s Biblical dictum that it was not ‘comely’ for women to pray to God uncovered.5

These etiquette rules delight TV- and film-makers. In reality, however, the conventions were always in evolution. Rules were broken and/or fudged, as well as followed. Moreover, by the later eighteenth-century in Britain a new form of interpersonal greeting had arrived. It was the egalitarian hand-shake. Jane Austen’s characters not only bowed and curtsied to each other. They also, in certain circumstances, shook hands. In one Austen novel, a fearlessly ‘modern’ young woman extends her hand to shake that of a young man at a public assembly. Anyone know the reference? Answer follows in next month’s BLOG on Handshaking.

1 L.P. Hartley, The Go-Between (1953), p. 1: ‘the past is a foreign country – they do things differently there’.

2 R. Latham and W. Matthews (eds), The Diary of Samuel Pepys, Vol. II: 1661 (1970), p. 199.

3 Ibid., Vol. IV: 1663 (1971), p. 252.

4 P.J. Corfield, ‘Dress for Deference & Dissent: Hats and the Decline of Hat Honour’, Costume: Journal of the Costume Society, 23 (1989), pp. 64-79; also transl. in K. Gerteis (ed.), Zum Wandel von Zeremoniell und Gesellschaftsritualen: Aufklärung, 6 (1991), pp. 5-18. Also posted on PJC personal website as Pdf/8.

5 Holy Bible, 1 Corinthians, 11:13.



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

What does it take for individuals to disappear from recorded history? Most people manage it. How is it done? The first answer is to die young. That deed has been achieved by far too many historic humans, especially in eras of highly infectious diseases. Any death before the age of (say) 21 erases immense quantities of potential ability.

After all, how many child prodigies or Wunderkinder have there been? Very few whose fame has outlasted the immediate fuss in their own day. A number of chess-masters and mathematicians have shown dramatic early abilities. But the prodigy of all prodigies is Wolfgang Amadeus Mozart, who began composing at the age of five and continued prolifically for the remaining thirty years of his life. His music is now more famous and more widely performed than it ever was in his own day. Mozart is, however, very much the exception – and his specialist field, music, is also distinctive in its ability to appeal across time and cultures.

A second way of avoiding the attentions of history is to live and die before the invention of writing. Multiple generations of humans did that, so that all details of their lifestyles, as inferred by archaeologists and palaeo-anthropologists, pertain to the generality rather than to individuals. Oblivion is particularly guaranteed when corpses have been cremated or have been buried in conditions that lead to total decay.

As it happens, a number of frozen, embalmed, or bog-mummified bodies from pre-literate times have survived for many thousands of years. Scholars can then study their way of life and death in unparalleled and fascinating detail. One example is Ötzi the Iceman, found in a high glacier on the Italian/Austrian border in 1991, and now on dignified display in the South Tyrol Museum of Archaeology at Bolzano, Italy. His clothing and weaponry reveal much about the technological abilities of Alpine hunters from over five thousand years ago, just as his bodily remains are informative about his diet, health, death, and genetic inheritance.1 Nonetheless, the world-view of individuals like Ötzi is a matter of inference only. And the time-survivors from pre-literate eras are very few.2

Ötzi the Iceman, over 5000 years old but initially thought to be a recent cadaver when discovered in 1991:
now in the South Tyrol Museum of Archaeology, Bolzano, Italy.

The third way of avoiding historical attention is to live a quiet and secluded life, whether willingly or unwillingly. Most people in every generation constitute the rank-and-file of history. Their deeds might well be important, especially collectively. Yet they remain unknown individually. That oblivion applies especially to those who remain illiterate, even if they live in an era when reading and writing are known.

‘Full many a flower is born to blush unseen/ And waste its sweetness on the desert air’, as Thomas Gray put it eloquently in 1751 (in context, talking about humans, not horticulture).3 One might take his elegiac observation to constitute an oblique call for universal education (though he didn’t). Yet even in eras of widening or general literacy, it remains difficult for every viewpoint to be recorded and to survive. In nineteenth-century Britain, when more people than ever were writing personal letters, diaries and autobiographies, those who did so remained a minority. And most of their intimate communications, especially if unpublished, have been lost or destroyed.

Of course, past people were also known by many other forms of surviving evidence. The current vogue in historical studies (in which I participate) is to encourage the analysis of all possible data about as many individuals as possible, whether ‘high’ or ‘lowly’, by making the information available and searchable on-line.4 Nonetheless, historians, however determined and assiduous, cannot recover everybody. Nor can they make all recovered information meaningful. Sometimes past data is too fragmented or cryptic to have great resonance. It can also be difficult to link imperfect items of information together, with attendant risks: on the one hand, of making false linkages and, on the other hand, of missing real ones.

Moreover, there are still many people, even in well documented eras, whose lives left very little evidence. They were the unknowns who, in George Eliot’s much-quoted passage at the end of Middlemarch (1871/2): ‘lived faithfully a hidden life, and rest in unvisited tombs’.5 She did not intend to slight such blushing violets. On the contrary, Eliot hailed their quiet importance. ‘The growing good of the world is partly dependent on unhistoric acts’, she concluded. A realist might add that the same is true of the ‘bad of the world’ too. But again many lives remain hidden from historic record, even if the long-term impact of their collective actions and inactions has not.

Finally, there is concealment. Plenty of people then and now have reasons for hiding evidence – for example, pertaining to illegitimacy, adultery, addiction, crime, criminal conviction, or being on the losing side in warfare. And many people will have succeeded, despite the best efforts of subsequent scholar-sleuths. Today, however, those seeking to erase their public footprint face an uphill task. The replicating powers of the electronic media mean that evidence removed from one set of files returns, unbidden, in other versions or lurks in distant master files. ‘Delete’ does not mean absolute deletion.

Concluding the saga of The Mayor of Casterbridge (1886), the bipolar anti-hero Michael Henchard seeks to become a non-person after his death, leaving a savage will demanding ‘That I be not buried in consecrated ground & That no sexton be asked to toll the bell … & That no flowers be planted on my grave & That no man remember me’.6

Non-Person © (2014)

Today: yes, people can still be forgotten; or even fall through the administrative cracks and become a non-person. But to disappear from the record entirely is far from easy. Future historians of on-line societies are going to face the problems not of evidential dearth but of massive electronic glut. Still, don’t stop writing BLOGs, tweets, texts, emails, letters, books, graffiti. If we can’t disappear from the record, then everyone – whether famous, infamous, or unknown – can take action and ‘bear witness’.

1 For Ötzi, see:

2 See P.V. Glob, The Bog People: Iron-Age Man Preserved, transl. R. Bruce-Mitford (London, 1969); D.R. Brothwell, The Bog Man and the Archaeology of People (London, 1986).

3 T. Gray, ‘Elegy Written in a Country Churchyard’ (1751), lines 55-6.

4 See e.g. Proceedings of the Old Bailey Online, 1674-1913:; London Lives, 1690-1800:; Clergy of the Church of England Database, 1540-1835:; London Electoral History, 1700-1850:

5 G. Eliot [Mary Ann Evans], Middlemarch: A Study of Provincial Life (1871/2), ed. W.J. Harvey (Harmondsworth, 1969), p. 896.

6 T. Hardy, The Mayor of Casterbridge: The Life and Death of a Man of Character (1886), ed. K. Wilson (London, 2003), p. 321.

For further discussion, see

To read other discussion-points, please click here

To download Monthly Blog 41 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

What does it take to get a long-surviving reputation? The answer, rather obviously, is somehow to get access to a means of endurance through time. To hitch a lift with history.

People in sports and the performing arts, before the advent of electronic storage/ replay media, have an intrinsic problem. Their prowess is known at the time but is notoriously difficult to recapture later. The French actor Sarah Bernhardt (1844-1923), playing Hamlet on stage when she was well into her 70s and sporting an artificial limb after a leg amputation, remains an inspiration for all public performers, whatever their field.1  Yet performance glamour, even in legend, still fades fast.

Bernhardt in the 1880s as a romantic Hamlet

What helps to keep a reputation well burnished is an organisation that outlasts an individual. A memorable preacher like John Wesley, the founder of Methodism, impressed many different audiences, as he spoke at open-air and private meetings across eighteenth-century Britain. Admirers said that his gaze seemed to pick out each person individually. Having heard Wesley in 1739, one John Nelson, who later became a fellow Methodist preacher, recorded that effect: ‘I thought his whole discourse was aimed at me’.2

Yet there were plenty of celebrated preachers in Georgian Britain. What made Wesley’s reputation survive was not only his assiduous self-chronicling, via his journals and letters, but also the new religious organisation that he founded. Of course, the Methodist church was dedicated to spreading his ideas and methods for saving Christian souls, not to the enshrining of the founder’s own reputation. It did, however, forward Wesley’s legacy into successive generations, albeit with various changes over time. Indeed, for true longevity, a religious movement (or a political cause, come to that) has to have permanent values that outlast its own era but equally a capacity for adaptation.

There are some interesting examples of small, often millenarian, cults which survive clandestinely for centuries. England’s Muggletonians, named after the London tailor Lodowicke Muggleton, were a case in point. Originating during the mid-seventeenth-century civil wars, the small Protestant sect never recruited publicly and never grew to any size. But the sect lasted in secrecy from 1652 to 1979 – a staggering trajectory. It seems that the clue lay in a shared excitement of cultish secrecy and a sense of special salvation, in the expectation of the imminent end of the world. Muggleton himself was unimportant. And finally the movement’s secret magic failed to remain transmissible.3

In fact, the longer that causes survive, the greater the scope for the imprint of very many different personalities, different social demands, different institutional roles, and diverse, often conflicting, interpretations of the core theology. Throughout these processes, the original founders tend quickly to become ideal-types of mythic status, rather than actual individuals. It is their beliefs and symbolism, rather than their personalities, that live.

As well as beliefs and organisation, another reputation-preserver is the achievement of impressive deeds, whether for good or ill. Notorious and famous people alike often become national or communal myths, adapted by later generations to fit later circumstances. Picking through controversies about the roles of such outstanding figures is part of the work of historians, seeking to offer not anodyne but judicious verdicts on those ‘world-historical individuals’ (to use Hegel’s phrase) whose actions crystallise great historical moments or forces. They embody elements of history larger than themselves.

Hegel himself had witnessed one such giant personality, in the form of the Emperor Napoleon. It was just after the battle of Jena (1806), when the previously feared Prussian army had been routed by the French. The small figure of Napoleon rode past Hegel, who wrote: ‘It is indeed a wonderful sensation to see such an individual, who, concentrated here at a single point, astride a horse, reaches out over the world and masters it’.4

(L) The academic philosopher G.W.F. Hegel (1770-1831) and (R) the man of action, Emperor Napoleon (1769-1821), both present at Jena in October 1806

The means by which Napoleon’s posthumous reputation has survived are interesting in themselves. He did not found a long-lasting dynasty, so neither family piety nor institutionalised authority could help. He was, of course, deposed and exiled, dividing French opinion both then and later. Nonetheless, Napoleon left numerous enduring things, such as codes of law; systems of measurement; structures of government; and many physical monuments. One such was Paris’s Jena Bridge, built to celebrate the victorious battle.

Monuments, if sufficiently durable, can certainly long outlast individuals. They effortlessly bear diachronic witness to fame. Yet, at the same time, monuments can crumble or be destroyed. Or, even if surviving, they can outlast the entire culture that built them. Today a visitor to Egypt may admire the pyramids, without knowing the names of the pharaohs they commemorated, let alone anything specific about them. Shelley caught that aspect of vanished grandeur well, in his poem to the ruined statue of Ozymandias: the quondam ‘king of kings’, lost and unknown in the desert sands.6

So lastly what about words? They can outlast individuals and even cultures, provided that they are kept in a transmissible format. Even lost languages can be later deciphered, although experts have not yet cracked the ancient codes from Harappa in the Punjab.7  Words, especially in printed or nowadays digital format, have immense potential for endurance. Not only are they open to reinterpretation over time; but, via their messages, later generations can commune mentally with earlier ones.

In Jena, the passing Napoleon (then aged 37) was unaware of the watching academic (then aged 36), who was formulating his ideas about revolutionary historical changes through conflict. Yet, through the endurance of his later publications, Hegel, who was unknown in 1806, has now become the second notable personage who was present at the scene. Indeed, via his influence upon Karl Marx, it could even be argued that the German philosopher has become the historically more important figure of those two individuals in Jena on 13 October 1806. On the other hand, Marx’s impact, having been immensely significant in the twentieth century, is also fast fading.

Who from the nineteenth century will be the most famous in another century’s time? Napoleon? Hegel? Marx? (Shelley’s Ozymandias?) Time not only ravages but provides the supreme test.

1  R. Gottlieb, Sarah: The Life of Sarah Bernhardt (New Haven, 2010).

2 R.P. Heitzenrater, ‘John Wesley’s Principles and Practice of Preaching’, Methodist History, 37 (1999), p. 106. See also R. Hattersley, A Brand from the Burning: The Life of John Wesley (London, 2002).

3  W. Lamont, Last Witnesses: The Muggletonian History, 1652-1979 (Aldershot, 2006); C. Hill, B. Reay and W. Lamont, The World of the Muggletonians (London, 1983); E.P. Thompson, Witness against the Beast: William Blake and the Moral Law (Cambridge, 1993).

4 G.W.F. Hegel to F.I. Niethammer, 13 Oct. 1806, in C. Butler (ed.), The Letters: Georg Wilhelm Friedrich Hegel, 1770-1831 (Bloomington, 1984); also transcribed in, 2005.


6  P.B. Shelley (1792-1822), Ozymandias (1818).

7 For debates over the language or communication system in the ancient Indus Valley culture, see:

For further discussion, see

To read other discussion-points, please click here

To download Monthly Blog 40 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

A growing number of historians, myself included, want students to study long-term narratives as well as in-depth courses.1 More on (say) the peopling of Britain since Celtic times alongside (say) life in Roman Britain or (say) medicine in Victorian times or (say) the ordinary soldier’s experiences in the trenches of World War I. We do in-depth courses very well. But long-term studies are also vital to provide frameworks.2

Put into more abstract terms, we need more diachronic (long-term) analysis, alongside synchronic (short-term) immersion. These approaches, furthermore, do not have to be diametrically opposed. Courses, like books, can do both.

That was my aim in an undergraduate programme, devised at Royal Holloway, London University.3  It studied the long and the short of one specific period. The choice fell upon early nineteenth-century British history, because it’s well documented and relatively near in time. In that way, the diachronic aftermath is not too lengthy for students to assess within a finite course of study.

Integral to the course requirements were two long essays, both on the same topic X. There were no restrictions, other than analytical feasibility. X could be a real person; a fictional or semi-fictionalised person (like Dick Turpin);4 an event; a place; or anything that lends itself to both synchronic and diachronic analysis. Students chose their own, with advice as required. One essay of the pair then focused upon X’s reputation in his/her/its own day; the other upon X’s long-term impact/reputation in subsequent years.

There was also an examination attached to the course. One section of the paper contained traditional exam questions; the second just one compulsory question on the chosen topic X. Setting that proved a good challenge for the tutor, thinking of ways to compare and contrast short- and long-term reputations. And of course, the compulsory question could not allow a simple regurgitation of the coursework essays; and it had to be equally answerable by all candidates.

Most students decided to examine famous individuals, both worthies and unworthies: Beau Brummell; Mad Jack Mytton; Queen Caroline; Charles Dickens; Sir Robert Peel; Earl Grey; the Duke of Wellington; Harriette Wilson; Lord Byron; Mary Shelley; Ada Lovelace; Charles Darwin; Harriet Martineau; Robert Stephenson; Michael Faraday; Augustus Pugin; Elizabeth Gaskell; Thomas Arnold; Mary Seacole; to name only a few. Leading politicians and literary figures tended to be the first choices. A recent book shows what can be done in the case of the risen (and rising still further) star of Jane Austen.5 In addition, a minority preferred big events, such as the Battle of Waterloo; or the Great Exhibition. None in fact chose a place or building; but it could be done, provided the focus is kept sharp (the Palace of Westminster, not ‘London’).

Studying contemporary reputations encouraged a focus upon newspaper reports, pamphlets, letters, public commemorations, and so forth. In general, students assumed that synchronic reputation would be comparatively easy to research. Yet they were often surprised to find that initial responses to X were confused. It takes time for reputations to become fixed. In particular, where the personage X had a long life, there might well be significant fluctuations during his or her lifetime. The radical John Thelwall, for example, was notorious in 1794, when on trial for high treason, yet largely forgotten at his death in 1834.6

By contrast, students often began by feeling fussed and unhappy about studying X’s diachronic reputation. There were no immediate textbooks to offer guidance. Nonetheless, they often found that studying long-term changes was good fun, because more off-the-wall. The web is particularly helpful, as wikipedia often lists references to X in film(s), TV, literature, song(s) and popular culture. Of course, all wiki-leads need to be double-checked. There are plenty of errors and omissions out there.

Nonetheless, for someone wishing to study the long-term reputation of (say) Beau Brummell (1778-1840), wikipedia offers extensive leads, providing many references to Brummell in art, literature, song, film, and sundry stylistic products making use of his name, as well as a bibliography.7

Beau Brummell (1778-1840) from L to R: as seen in his own day; as subject of enquiry for Virginia Woolf (1882-1941); and as portrayed by Stewart Granger in Curtis Bernhardt’s film (1954).

Plus it is crucial to go beyond wikipedia. For example, a search for relevant publications would reveal an unlisted offering. In 1925, Virginia Woolf, no less, published a short tract on Beau Brummell.8 The student is thus challenged to explore what the Bloomsbury intellectual found of interest in the Regency Dandy. Of course, the tutor/ examiner also has to do some basic checks, to ensure that candidates don’t miss the obvious. On the other hand, surprise finds, unanticipated by all parties, proved part of the intellectual fun.

Lastly, the exercise encourages reflections upon posthumous reputations. People in the performing arts and sports, politicians, journalists, celebrities, military men, and notorious criminals are strong candidates for contemporary fame followed by subsequent oblivion, unless rescued by some special factor. In the case of the minor horse-thief Dick Turpin, he was catapulted from conflicted memory in the eighteenth century into dashing highwayman by the novel Rookwood (1834). That fictional boost gave his romantic myth another 100 years before starting to fade again.

Conversely, a tiny minority can go from obscurity in their lifetime to later global renown. But it depends crucially upon their achievements being transmissible to successive generations. The artist and poet William Blake (1757-1827) is a rare and cherished example. Students working on the long-and-the-short of the early nineteenth century were challenged to find another contemporary with such a dramatic posthumous trajectory. They couldn’t.

But they and I enjoyed the quest and discovery of unlikely reactions, like Virginia Woolf dallying with Beau Brummell. It provided a new way of thinking about the long-term – not just in terms of grand trends (‘progress’; ‘economic stages’) but by way of cultural borrowings and transmutations between generations. When and why? There are trends but no infallible rules.

1 ‘Teaching History’s Big Pictures: Including Continuity as well as Change’, Teaching History: Journal of the Historical Association, 136 (Sept. 2009), pp. 53-9; and PJC website Pdf/3.

2 My own answers in P.J. Corfield, Time and the Shape of History (2007).

3 RH History Course HS2246: From Rakes to Respectability? Conflict and Consensus in Britain 1815-51 (content now modified).

4 Well shown by J. Sharpe, Dick Turpin: The Myth of the Highwayman (London, 2004).

5 C. Harman, Jane’s Fame: How Jane Austen Conquered the World (Edinburgh, 2009).

6 Two PJC essays on John Thelwall (1764-1834) are available in PJC website, Pdf/14 and Pdf/22.

7 See

8 See V. Woolf, Beau Brummell (1925; reissued by Folcroft Library, 1972); and

For further discussion, see

To read other discussion-points, please click here

To download Monthly Blog 39 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2012)

It was fascinating to meet with twenty-three others on a humid June afternoon to debate what might appear to be abstruse questions of Law & Historical Periodisation. We were attending a special conference at Birkbeck College, London University – an institution (founded in 1823 as the London Mechanics Institute) committed as always to extending the boundaries of knowledge. The participants came from the disciplines of law, history, philosophy, and literary studies. And many were students, including, laudably, some interested undergraduates who were attending in the vacation.

At stake was not the question of whether we can generalise about different and separate periods of the past. Obviously we can and must to some extent. Even the most determined advocate of history as ‘one and indivisible’ has to accept some sub-divisions for operative purposes, whether in terms of days, years, centuries or millennia.

But the questions really coalesce around temporal ‘stages’, such as the ‘mediaeval’ era. Are such concepts relevant and helpful? Is history rightly divided into successive stages? And do they follow in regular sequence in different countries, even if at different times? Or is there a danger of reifying these epochs – turning them into something more substantive and distinctive than was actually the case?

Studies like H.O. Taylor’s The Medieval Mind (1919 and many later edns), Benedicta Ward’s Miracles and the Medieval Mind (1982), William Manchester’s The Medieval Mind and the Renaissance (Boston, 1992), and Stephen Currie’s Miracles, Saints and Superstition: The Medieval Mind (2006), all imply that there were common properties to the mind-sets of millions of Europeans who lived between (roughly) the fifth-century fall of Rome and the fifteenth-century discovery of the New World – and that these mindsets differed sharply from the ‘modern mind’. Yet are these historians justified in choosing this formula within their titles? Or partly justified? Or absolutely misleading? Are there common features within human consciousness and experiences that refute these periodic cut-off points? Do we want to go to the other end of the spectrum, to endorse the view of those Evolutionary Psychologists who aver that human mentalities have not changed since the Stone Age? Forever he, whether Tarzan, Baldric or Kevin? Forever she, whether Jane, Elwisia or Tracey?

Two papers by Kathleen Davis (University of Rhode Island) and Peter Fitzpatrick (Birkbeck College) formed the core of the conference, both focusing upon the culture of jurisprudence and its standard definition of the medieval. Both gave stimulating critiques of conventional legal assumptions, based upon stark dichotomies. In bare summary, the ‘medieval’ is supposed to be Christianised, feudal, and customary, while the ‘modern’ is supposedly secular, rights-based, and centred around the sovereign state. For good measure, the former is by implication backward and oppressive, while the latter is progressive and enlightened. Yet the long history of legal pluralism goes against any such dichotomy in practice. Historians like Helen Cam, who in 1941 wrote What of Medieval England is Alive in England Today? would have rejoiced at these papers, and at the sharp questions from the conference participants.

For my part, I was asked to give a final summary, based upon my position as a critic of all simple stage theories of history.1 My first point was to stress again how difficult it is to rethink periodisation, because so many cardinal assumptions are built not only into academic language but also into academic structures. Many specialists name themselves after their periods – as ‘medievalists’, ‘modernists’ or whatever. Those who call themselves just ‘historians’ are seen as too vague – or suffering from folie de grandeur. There are mutterings about the fate of Arnold Toynbee, once hailed as the twentieth-century’s greatest historian-philosopher – now virtually forgotten. Academic posts within departments of History and Literary Studies are generally defined by timespans. So are examination papers; many academic journals; many conferences; and so forth. Publishers in particular, who pay great attention to book titles, often endorse traditional nomenclature and stage divisions.

True, there are now increasing calls for change. My second point therefore highlights the new diversity. Conferences and seminars are held not only across disciplinary boundaries but also across epochal divisions. An increasing number of books are published with unusual start and end dates; and the variety of dates attached to the traditional periods continues to multiply, often confusingly. In addition, some scholars now study ‘big’ (long-term) history from the start of the world, or at least from the start of human history. Their approaches do not always manage to avoid traditional schema but the aim is to encourage a new diachronic sweep. And other pressures for change are coming from scholars in new fields of history, such as women’s history or (not the same thing) the history of sexuality.

Shedding the old period terminology is mentally liberating. So the Italian historian Massimo Montanari, previously a ‘medievalist’, wrote in 1994 of the happiness that followed his discarding of all the labels of ‘ancient’, ‘medieval’ and ‘modern’: ‘In the end, I felt freed as from a restrictive and artificial scaffolding …’2

Lastly, then, what of the future? The aim is not to replace one set of period terms and dates with another. Any rival set will run into the same difficulties of detecting precise cut-off points and the risk of stereotyping the different cultures and societies on either side of a period boundary. It is another example of dichotomous thinking, which glosses over the complexities of the past. Above all, all stage theories fail to incorporate the elements of deep continuity within history (see my November 2010 discussion-point).

We need a new way of thinking about the intertwining of persistence and change within history. It is chiefly a matter of understanding. But it will also entail a change of language. I don’t personally endorse the Foucauldian view that language actually determines consciousness. For me, primacy in the relationship is the other way round. A changing consciousness can ultimately change language. Yet I do recognise the confining effects of existing concepts and terminology upon patterns of thought. Such an impact is another example of the power of continuity. With several bounds, however, historians can become free. With a new language, we can talk about epochs and continuities, intertwined and interacting in often changing ways. It’s fun to try and also fun to try to convince others. Medievalists, arise. You have nothing to lose but an old name, which survives through inertia. There are more than three steps between ancient – middle – modern, even in European history – let alone around the world. Try a different name to shake the stereotypes. And tell the lawyers too.

1 P.J. Corfield, Time and the Shape of History (2007); and P.J. Corfield, ‘POST-Medievalism/Modernity/Postmodernity?’, Rethinking History, 14/3 (Sept. 2010), pp. 379-404; also available on publishers’ website Taylor & Francis; and personal website

2 M. Montanari, The Culture of Food, transl. C. Ipsen (Oxford, 1994), p. xii.

For further discussion, see

To read other discussion-points, please click here

To download Monthly Blog 21 please click here


If citing, please kindly acknowledge copyright © Penelope J. Corfield (2012)

Try living without it. In healthy humans, memory works non-stop from birth to death. That means that it can work, unprompted, for over a century. Memory automatically tells us who we are (short of mental illness or accident). It simultaneously supplies us with our personal back-story and locates us within a broad framework picture of the world in which we have lived to date. Our capacity to think through time, and to remember things that happened long ago, constitutes a major characteristic of what it means to be human.

As such, the power of memory is an ancient, not to say primaeval, capacity. It’s entwined with consciousness. But it also operates at instinctual levels, as in muscle memory. With its multiple resources, memory is notably multi-layered. It can be cultivated consciously. A host of mnemonic systems, some very ancient, offer techniques to help the mind in storing and retrieving a huge ragbag of ideas and information.1 Giulio Camillo’s beautiful Theatre of Memory (shown in Fig. 1) is but one example.2 It’s a nice imaginary prospect of the inside of the human cranium.
Alongside conscious efforts of memory cultivation, many framework recollections – such as knowledge of one’s native language – are usually accumulated unwittingly and almost effortlessly. Deep memory systems constitute a form of long-term storage. With their aid, people who are suffering from progressive memory loss often continue to speak grammatically for a long way into their illness. Or, strikingly, songs learned in childhood, aided by the wordless mnemonic power of rhythm and music, may remain in the repertoire of the seriously memory-impaired even after regular speech has long gone.

Given its primaeval origins, the human capacity to remember notably predates the invention of calendars. Such time-measuring and time-referencing devices are the products, not the first framers, of memory. As a result, we don’t habitually remember by reference to precise dates and times, with the exception of special events or consciously learned information. Nor do we retain everything. Forgetting selectively is as much a human capacity as remembering. Too much and we’d suffer from information overload.

The combination of remembering and forgetting, both individually and collectively, has some significant implications. Not only does memory fade but, unkindly, it also plays tricks. Details that we think we remember with great confidence can turn out to be false. My own deceitful memory has just given me a shock, which I’ve taken to heart since I pride myself on my powers of recollection. One of my clear recollections of the student protests in 1969 (which I wrote about in my January discussion-piece) has turned out to be erroneous, at least in one significant detail. At a lunch-time protest meeting at Bedford College in 1969 or early 1970, an ardent young postgraduate urged those present to capture the Principal’s office today, in order to overthrow capitalism tomorrow. I am certain that the event took place and that the speech was greeted with cheers (and some silent scepticism – mine included).

However, my memory has over time fabricated an erroneous identity for the speaker. I met the person in question last week – now a Labour peer in the House of Lords – and reminded her of the episode, expecting some shared laughter at the ambitious scope of youthful ideals. But she did not attend Bedford College nor had she ever visited it. Moreover, she had always shared my critique of the student utopianism of the later 1960s. I was wrong on a central point, which I’d convinced myself was correct. Could I even be sure that the protest meeting took place at all? Collapse of stout party – myself.
And I am not alone. Discovering faults in memory is a common experience. It’s a salutary warning not to be too cocky. Had I been relying upon my unchecked memory when speaking in the witness box, this central error would have discredited my entire evidence. Falsus in uno, falsus in omnibus, as the Roman legal tag has it: wrong in one thing, wrong in all. In fact, the dictum is exaggerated. Errors in some areas may be counter-balanced by truths elsewhere. Nonetheless, I have drawn one personal conclusion from my mortifying discovery. If I’m ever again invited to give testimony on oath or in an on-the-record interview, I will do my homework thoroughly beforehand.

A second lesson is that human gossip and chatter are an essential part of the process of checking and cross-checking memories. Such retrospective discussions (‘She said … ; and then I said … ; and then she replied …’) often seem rambling and inconsequential. They are, however, consolidating the stuff of memory. This process works for communities as well as for individuals. Indeed, talking, taking stock, and remembering together is helpful, particularly after experiences of disaster which should not be forgotten in silence. Vera Schwarcz’s powerful study Bridge across Broken Time makes that point in its title.3 Memory, with all its faults, allows for the possibility of understanding the past and overcoming traumas. Conversely, the negative effects of buried memories for starkly dislocated communities reverberate through successive generations.

So my final point: here come the historians. The fallibility of unvarnished memory encouraged the first production of memory aids, such as written and numerical records, and calendrical calculations. And over time humans have generated an immeasurable cornucopia of data and documentation, which is far beyond the capacity of any individual mind to store. It is now a collective resource. Historians don’t replicate human memory. Indeed, they share its fallibilities. But, collectively, they join the task of storing, cross-checking, correcting, ordering, and evaluating a past that goes beyond individual memory.

1  For a stirring analysis, ranging from classical Greece to the European Renaissance, consult the classic by Frances A. Yates, The Art of Memory (1966). A recent contribution to the current vogue for memory is provided by Joshua Foer, Moonwalking with Einstein: The Art and Science of Remembering Everything (2011).

2  For the philosopher Giulio Camillo (c.1480-1544), see K. Robinson, A Search for the Source of the Whirlpool of Artifice: The Cosmology of Giulio Camillo (Edinburgh, 2006).

3  Vera Schwarcz, Bridge across Broken Time: Chinese and Jewish Cultural Memory (New Haven, 1998).



If citing, please kindly acknowledge copyright © Penelope J. Corfield (2011)

People often imagine that class barriers were more rigid in the past, notwithstanding historical fluctuations in social attitudes. As a result, it is commonly assumed that cross-class marriages were especially rare. Yet matters were never so simple. Among the many individuals in the past who had sexual relationships across class boundaries (a comparatively frequent occurrence), there were always some who were bold enough to marry across them.

One case, among several aristocratic examples from the eighteenth century, was the marriage of the 5th Earl of Berkeley to Mary Cole, the daughter of a Gloucester butcher. She made a dignified wife, living down the social sneers. The Berkeleys began to live together in 1785 and did not marry publicly until 1796, although the Earl claimed that there had been an earlier ceremony.
This confusion led to a succession dispute. Eventually, the sons born before the public wedding were debarred from inheriting the title, which went to their legitimate younger brother. Here the difficulty was not the mother’s comparatively ‘lowly’ status but the status of the parental marriage. It affected the succession to a noble title, which entitled its holder to attend the House of Lords. But the debarred older siblings did not become social outcasts. Two of the technically illegitimate sons, born before the public marriage, went on to become MPs in the House of Commons, while the legitimate 6th Earl modestly declined to take his seat as a legislator.

Another example, this time from the nineteenth century, was that of Sir Harry Fetherstonhaugh. He was the wealthy owner of Uppark House (Sussex), who in 1825 married for the first time, aged 70. His bride was the 21-year-old Mary Ann Bullock, his dairymaid’s assistant. She inherited his estate, surviving him for many years. Everything at Uppark was kept as it was in Sir Harry’s day. The estate then went to her unmarried sister who, as ‘her leddyship’ in her very old age, appeared to epitomise the old landed society – so much did outcomes triumph over origins. The young H.G. Wells, whose mother was housekeeper at Uppark, mused accordingly:1

In that English countryside of my boyhood every human being had a ‘place’. It belonged to you from your birth like the colour of your eyes, it was inextricably your destiny. Above you were your betters, below you were your inferiors…

The social conventions, within such a hierarchy, did allow for some mobility. High-ranking men raised their wives to a matching status, which gave aristocrats some room for manoeuvre. Against that, noble families generally did their best to ensure that heirs to grand titles did not run away with someone entirely ‘unsuitable’.

A tabulation of the first-marriage choices of 826 English peers, made between 1600 and 1800, showed that, in sober reality, most (73 percent) chose a bride from an equally or nearly equally titled background.2 The homogeneity of the elite was generally preserved.

Interestingly, however, just over one quarter (27 percent) of these English peers – a far from negligible proportion – were more socially venturesome. Their wives from ‘lower’ social backgrounds tended to be daughters of professional men or of merchants. In particular, a splendid commercial fortune made an ideal bridal dowry; and, in such circumstances, aristocratic families found themselves willing to accept theoretically humbler connections with businessmen ‘in trade’.

Marriages like that of Sir Harry were ‘outliers’ in terms of the social distance between bride and groom. But his matrimonial decision to leap over conventions of social distance was not unique.

For women of high rank, meanwhile, things were more complicated. By marrying ‘down’, they lost social status; and their offspring, however well connected on the mother’s side, took their ‘lower’ social rank from the father.

Nonetheless, it was far from unknown for high-born women to flout convention. In particular, wealthy widows might follow their own choice in a second marriage, having followed convention in the first. One notable example was Hester Lynch Salusbury, from a Welsh landowning family. She married, firstly, Henry Thrale, a wealthy brewer, with whom she had 12 children, and then in 1784 – three years after Thrale’s death – Gabriele Piozzi, an Italian music teacher and a Catholic to boot.3

Scandal ensued. Her children were affronted. And Dr Johnson, a frequent house-guest at the Thrales’ Streatham mansion, was decidedly not amused. Undaunted, Hester Lynch Piozzi and her husband retired to her estates in north Wales, where they lived in a specially built Palladian villa, Brynbella.
So little was damage done to the family’s long-term status that her (estranged) eldest daughter married a Viscount. Furthermore, the Piozzis’ adopted son, an Italian nephew of Gabriele Piozzi, inherited the Salusbury estates, taking the compound name Sir John Salusbury Piozzi Salusbury.

If, after the initial fuss, the partners in a cross-class union lived respectably enough, the wider society tended sooner or later to condone the ‘mésalliance’. Feelings were soothed by respect for marriage as an institution. And the wider social stability was ultimately served by absorbing such dynastic shocks rather than by highlighting them.

Little wonder that many a novel dilated on the excitements and tensions of matrimonial choice. Not only was there the challenge of finding a satisfactory partner among social peer-groups but there was always some lurking potential for an unconventional match instead of a conventional union.

Such possibilities – complete with hazards – applied at all levels of society. In the early twentieth century, the family of D.H. Lawrence epitomised a different set of cross-class tensions. His father was a scarcely literate miner from Eastwood, near Nottingham, while his mother was a former assistant teacher with strong literary interests, who disdained the local dialect, and prided herself on her ‘good old burgher family’. From the start, they were ill-assorted.
In his youth, D.H. Lawrence was his mother’s partisan and despised his father as feckless and ‘common’. Later, however, he switched his theoretical allegiance. Lawrence felt that his mother’s puritan gentility had warped him. Instead, he yearned for his father’s male sensuousness and frank hedonism, though the father and son never became close.4

Out of such tensions came Lawrence’s preoccupation with man/woman conflict and with unorthodox sex and love. His parents’ strife was also more than mirrored in his own turbulent relationship with Frieda von Richthofen, the daughter of a Silesian aristocrat, who was, when they met, married to a respected Nottingham University professor.

Initial social distance between a married couple could lend enchantment – or the reverse. Cross-class relationships have been frequent enough for there to have been many cases, successful and unsuccessful alike. Later generations always underestimate their number. But we should not ignore the potential for cultural punch (positive or negative) when couples from different backgrounds marry, even in times when class barriers are less than rigid. Nor should we underestimate society’s long-term ability to absorb such shocks, which would have to happen in great numbers before a classless society might be achieved.

1 H.G. Wells, Tono-Bungay (1909; in 1994 edn), pp. 10-11. For more about the Fe(a)therstonhaugh marriage and the context of Sussex landowning society, see A. Warner, ‘Finding the Aristocracy, 1780-1880: A Case Study of Rural Sussex’ (unpub. typescript, 2011; copyright A. Warner, who can be contacted via PJC).

2 Figures calculated from data in J. Cannon, Aristocratic Century: The Peerage of Eighteenth-Century England (Cambridge, 1984), p. 85: Table 20. Note that the social status of each bride is derived from the rank of her father, so possibly obscuring a more variegated background in terms of her maternal inheritance.

3 Details of their courtship and Hester Thrale’s meditations on their disparities in rank are available on the website:

4 R. Aldington, Portrait of a Genius but …: The Life of D.H. Lawrence (1950), pp. 3-5, 8-9, 13, 15, 334.
