MONTHLY BLOG 46, THE HISTORY OF THE HAND-SHAKE

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

Not everyone shakes hands. But those who do are expressing an egalitarian relationship. As a form of greeting, the handshake differs completely in meaning from the bow or curtsey, which display deference from the ‘lowly’ to those on ‘high’. In one Jane Austen novel, a fearlessly ‘modern’ young woman extends her hand to a young man at a crowded party. Of course, it is Marianne Dashwood, the embodiment of ‘sensibility’. She has just re-encountered the errant Willoughby, long after he has ended their unofficial courtship. Marianne immediately holds out her hand, claiming him as an intimate friend. But he avoids her gesture. Marianne then exclaims ‘in a voice of the greatest emotion: “Good God! Willoughby, what is the meaning of this? … Will you not shake hands with me?”’. He cannot avoid doing so, but drops her hand quickly. After a few short exchanges, Willoughby then leaves ‘with a slight bow’.2  He has dropped her. Their body language says it all.

There is a particular poignancy in this scene. In this era, men and women who were not related to one another would not ordinarily touch hands as a form of greeting. But, of course, lovers might do so. No wonder that a mere touch was so powerful when it was so rare. (And it retains its appeal today in romantic mythology and countless pop songs: I Wanna Hold Your Hand!)3  Shakespeare, as ever, had known the scene. Romeo understands the intimacy implied when he takes Juliet’s hand in a dance, as does she: ‘And palm to palm is holy palmers’ kiss’.4

Even more definitively, a couple would touch hands in a marriage ceremony (even allowing for the many varieties of ritual associated with weddings).5 The wording was clear. ‘Taking someone’s hand in marriage’ is an ultimate symbol of good faith, along with the exchange of rings, which remain visible on the hand. These are public signs of personal commitment. An earlier poetic expression also offered an endgame variant, in the form of a final handshake. Michael Drayton’s Sonnet LXI (1594), which starts ‘Since there’s no help, come let us kiss and part’, invites the parting lovers to: ‘shake hands for ever, cancel all our vows’.

At the same time, a close handshake also has a set of commercial connotations. When two traders agree upon a contract, they may indicate the same by a handshake. However unequal they may be in wealth and commercial status, for the purposes of the deal they are equals, both pledging to fulfil the bargain. It constitutes a ‘gentleman’s agreement’ – upheld by personal honour. The same etiquette applies in making a bet.

Hence reneging upon a wager or deal sealed with a personal handshake is viewed as particularly heinous. The loser may even litigate for redress. Today the American Sports World News reports rumours that Charles Wang, the majority owner of the New York Islanders ice-hockey team, is being sued for $10 million by hedge-fund manager Andrew Barroway. Wang’s crime? He had allegedly reneged on a handshake pact to sell his Islanders franchise to Barroway.6

Typically, a handshake is a brief and routine affair, usually but not invariably with the right hand. True, there are variants. The prolonged handshake plus a clasp of the recipient’s upper arm by the shaker’s other hand is a gesture of special warmth – stereotypically undertaken by gregarious American politicians.7

Or there is the Masonic handshake. It gives a secret signal, allowing members of a secret society to identify one another. Apparently, there are many variants of the Masonic handshake, denoting differences in rank within the organisation. That information is rather depressing, since the handshake is, in principle, egalitarian. Nonetheless, it shows the potential for stylistic variation, from the firm muscular grip to the fleeting touch-and-drop.

Variations in styles of shaking hands are here caricatured, as two gentlemen almost dance their mutual greetings: from www.etiquipedia.blogspot.co.uk/2013/10, consulted 11 Oct. 2014.

Gradually, routine British styles of greeting began to incorporate the handshake. It was most common among civilian men of similar middle-class standing. By contrast, the toffs stuck with their traditional bowing and curtseying. Meanwhile, hand-shaking was rare among workers in ‘dirty’ trades and industries, because people in unavoidably grimy jobs usually tried to contain rather than to spread the dirt. The emblem of two clasped hands nonetheless appeared proudly on various trade union banners, as a pledge of solidarity.

The advent of the social handshake was thus not uniform across all periods and classes. But it could be found, between close male friends, in Britain from at least Shakespeare’s time. Yet its subsequent spread has taken a long time a-coming. For example, in 1828 the anonymous author of A Critique of the Follies and Vices of the Age was still expressing displeasure at the new popularity of the handshake, including between men and women.8

One reason for some snobbish hostility, among polite society in Britain, was the association of this custom with the republican USA, where its usage became increasingly common after American independence. There were also connotations of support for the hand-shaking citizens of republican France from 1793 onwards. English visitors to the USA like the novelist and social commentator Frances Trollope thus waxed somewhat critical of the local mores. In 1832, she deplored the habit of hand-shaking between both sexes and all classes (albeit excluding the non-free). For her, this form of greeting was too bodily intimate, especially as ‘the near approach of the gentleman [ironically] was always redolent of whiskey and tobacco’.9

Ultimately, however, the snobs were routed. Old-style bowing and curtseying has generally disappeared, although hat wearers may still doff their hats to ladies. However, the twentieth century also produced another twist in the tale. Just as the hand-shake was becoming quite widely adopted in Britain by the 1970s, it was suddenly challenged by a new custom, imported from overseas. It is the continental kiss, in the form of a light clasp of the upper arms and a peck on the cheek (or, for the physically fastidious, an air-kiss). Such a manoeuvre would give good scope to a later Marianne Dashwood, who might grip an errant Willoughby in order to kiss him warmly. Nonetheless, be warned: whatever the greeting style, body language always provides ways of signalling the rejection as well as the offering of friendship.

1  See P.J. Corfield, previous monthly BLOG 45 ‘Doffing One’s Hat’. And for fuller discussion, see PJC, ‘Dress for Deference & Dissent: Hats and the Decline of Hat Honour’, Costume: Journal of the Costume Society, 23 (1989), pp. 64-79; also transl. in K.Gerteis (ed.), Zum Wandel von Zeremoniell und Gesellschaftsritualen: Aufklärung, 6 (1991), pp. 5-18; and posted on PJC personal website as Pdf/8.

2  J. Austen, Sense and Sensibility (1st pub. London, 1811): chapter 28.

3 The Beatles (1963).

4  W. Shakespeare, Romeo and Juliet (written mid 1590s; 1597), Act 1, sc. 5. A palmer was a successful pilgrim, returning from the Holy Land bearing palms as a sign that the journey had been achieved.

5  A traditional ritual of ‘hand-fasting’, announcing a solemn public engagement, has also been updated for use today in pagan marriage ceremonies.

6  Sports World News on-line, 12 Aug. 2014, at www.sportsworldnews.com/articles, consulted 11 Oct. 2014.

7  See e.g. John Travolta’s film portrayal of a notably touchy-feely American presidential candidate, based upon Bill Clinton, in Primary Colors (dir. Mike Nichols, 1998).

8  Anon., Something New on Men and Manners: A Critique of the Follies and Vices of the Age … (Hailsham, Sussex, 1828), p. 174.

9  F. Trollope, Domestic Manners of the Americans (1832), ed. R. Mullen (Oxford, 1984), p. 83.


MONTHLY BLOG 45, DOFFING ONE’S HAT

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

TV’s Pride and Prejudice (1995) provided many memorable images, not least Colin Firth as Mr Darcy diving into a pool to emerge reborn as a feeling, empathetic human being. This transformation gains extra impact when contrasted with the intense formality of his general deportment. When, after some months of absence, Darcy and Bingley re-enter the Bennet family home at Longbourn, they bow deeply in unison, whilst Mrs Bennet and all her daughters rise as one and bend their heads in synchronised response. Audiences may well sigh, admiringly or critically according to taste. What a contrast with our own casual manners. It satisfies a sense that the past must have been different – like a ‘foreign country’, in a much-cited phrase from L.P. Hartley.1

But did people in Georgian polite society actually greet each other like that on a day-to-day basis? There is good evidence for the required formality (and dullness) of Hanoverian court life on ceremonial occasions. A fashionable ball or high society dinner might also require exceptional courtesies. But ordinary life, even among the elite of Britain’s landed aristocrats and commercial plutocrats, was not lived strictly according to the etiquette books.

Instead, the eighteenth century saw an attenuation of the lavish old-style formalities, which were known as ‘hat honour’. In theory, men when meeting their social superiors made a deep bow, removing their headgear with a visible flourish. Gentlemen greeting a ‘lady’ would also remove their hats with a courteous nod. For women, the comparable requirement was the low curtsey from the ‘inferior’ to the ‘superior’. Those who held their heads highest (and remained hatted, in the case of men) were the more socially elevated, since lowering the head always signalled deference. This understanding underpins the custom of addressing monarchs as ‘Your Highness’.

Illustration 1 ‘The Hopes of the Family’ (1799) shows a young man being interviewed for University admission. A don presides, wearing his mortar board, whilst the nervous applicant and his eager father, an old-fashioned country gentleman, have both doffed their hats, which they carry under their arms. An undergraduate in his gown looks on nonchalantly, his hands in pockets. Yet he too remains bare-headed in the presence of a senior member of his College. Only the applicant’s mother, who is subject to the different rules of etiquette for women, covers her head with a rustic bonnet.


Illus 1: A gentle satire by Henry William Bunbury, entitled The Hopes of the Family (1799) – © The Wellcome Library.

In accordance with this etiquette, King Charles I on trial before Parliament in 1648 wore a high black hat throughout the proceedings. It was a signal that, as the head of state, he would not uncover for any lower authority. The answer of his republican opponents was radical. Charles I was found guilty of warfare against his own people, as a ‘tyrant, traitor and murderer’. He was decapitated, beheading the old power structure very literally and publicly.

After the Restoration of the monarchy in 1660, there was some return to the old formalities (or at least hopes of the same). For example, in October 1661 the naval official and MP Samuel Pepys recorded his displeasure at what he considered to be the undue pride of his manservant, who kept his hat on in the house.2 Pepys expected deference from his ‘inferiors’, whilst being ready to accord it to his own ‘superiors’. But it was not always easy to judge. In July 1663, Pepys worried that he may have offended the Duke of York, by not uncovering when the two men were walking in sight of each other in St James’s Park.3 It was a tricky decision. Failure to doff one’s hat when close at hand would be rude, yet uncovering from too far away would seem merely servile.

Over the very long term, however, all these formalities began to attenuate. With the advent of brick buildings and roaring coal-fires, the habitual wearing of hats indoors generally disappeared – mob-caps and night-caps excepted. And in public, the old gestures continued, but in abbreviated form. With commercial growth came the advent of many people of middling status. It was hard for them to calculate the precise gradations of status between one individual and another. The old-style mannerisms were also too slow for a fast-moving and urbanising world.

As a result, between men the deep bow began to change into a nod of the head. The elaborate flourish of the hat gradually turned into a quick lifting or pulling. And the respectful long tug of the forelock, on the part of those too poor to have any headgear, turned into a briefer touch to the head.4

A notable example of the abbreviation of hat honour was the codification of the military salute. It was impractical for rank-and-file soldiers to remove their headgear whenever encountering their officers. On the other hand, military discipline required the respecting of ranks. The answer was a symbolic gesture. ‘Inferiors’ greeted their ‘superiors’ by touching the hand to the head. Different regiments evolved their own traditions. Only in 1917 (well into World War I) did the British army decide that all salutes should be given right-handedly.

Meanwhile, the female greeting in the form of a low curtsey, holding out the dress, also evolved into a briefer bob or half-curtsey. It was expected from all lower-status women when meeting ‘superiors’. But hat honour was confined to men. On public occasions, women retained their hats, bonnets and feathers. Even in church, they did not copy men in baring their heads but respected St Paul’s Biblical dictum that it was not ‘comely’ for women to pray to God uncovered.5

These etiquette rules delight TV- and film-makers. In reality, however, the conventions were always in evolution. Rules were broken and/or fudged, as well as followed. Moreover, by the later eighteenth century in Britain a new form of interpersonal greeting had arrived. It was the egalitarian hand-shake. Jane Austen’s characters not only bowed and curtsied to each other. They also, in certain circumstances, shook hands. In one Austen novel, a fearlessly ‘modern’ young woman extends her hand to shake that of a young man at a public assembly. Anyone know the reference? Answer follows in next month’s BLOG on Handshaking.

1 L.P. Hartley, The Go-Between (1953), p. 1: ‘the past is a foreign country – they do things differently there’.

2 R. Latham and W. Matthews (eds), The Diary of Samuel Pepys, Vol. II: 1661 (1970), p. 199.

3 Ibid., Vol. IV: 1663 (1971), p. 252.

4 P.J. Corfield, ‘Dress for Deference & Dissent: Hats and the Decline of Hat Honour’, Costume: Journal of the Costume Society, 23 (1989), pp. 64-79; also transl. in K.Gerteis (ed.), Zum Wandel von Zeremoniell und Gesellschaftsritualen: Aufklärung, 6 (1991), pp. 5-18. Also posted on PJC personal website as Pdf/8.

5 Holy Bible, 1 Corinthians, 11:13.


MONTHLY BLOG 41, HISTORICAL REPUTATIONS: DISAPPEARING FROM HISTORY

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

What does it take for individuals to disappear from recorded history? Most people manage it. How is it done? The first answer is to die young. That deed has been achieved by far too many historic humans, especially in eras of highly infectious diseases. Any death before the age of (say) 21 erases immense quantities of potential ability.

After all, how many child prodigies or Wunderkinder have there been? Very few whose fame has outlasted the immediate fuss in their own day. A number of chess-masters and mathematicians have shown dramatic early abilities. But the prodigy of all prodigies is Wolfgang Amadeus Mozart, who began composing at the age of five and continued prolifically for the remaining thirty years of his life. His music is now more famous and more widely performed than it ever was in his own day. Mozart is, however, very much the exception – and his specialist field, music, is also distinctive in its ability to appeal across time and cultures.

A second way of avoiding the attentions of history is to live and die before the invention of writing. Multiple generations of humans did that, so that all details of their lifestyles, as inferred by archaeologists and palaeo-anthropologists, pertain to the generality rather than to individuals. Oblivion is particularly guaranteed when corpses have been cremated or have been buried in conditions that lead to total decay.

As it happens, a number of frozen, embalmed, or bog-mummified bodies from pre-literate times have survived for many thousands of years. Scholars can then study their way of life and death in unparalleled and fascinating detail. One example is Ötzi the Iceman, found in a high glacier on the Italian/Austrian border in 1991, and now on dignified display in the South Tyrol Museum of Archaeology at Bolzano, Italy. His clothing and weaponry reveal much about the technological abilities of Alpine hunters from over five thousand years ago, just as his bodily remains are informative about his diet, health, death, and genetic inheritance.1 Nonetheless, the world-view of individuals like Ötzi is a matter of inference only. And the number of time-survivors from pre-literate eras is very few.2

Ötzi the Iceman, over 5000 years old but initially thought to be a recent cadaver when discovered in 1991:
now in the South Tyrol Museum of Archaeology, Bolzano, Italy.

The third way of avoiding historical attention is to live a quiet and secluded life, whether willingly or unwillingly. Most people in every generation constitute the rank-and-file of history. Their deeds might well be important, especially collectively. Yet they remain unknown individually. That oblivion applies especially to those who remain illiterate, even if they live in an era when reading and writing are known.

‘Full many a flower is born to blush unseen/ And waste its sweetness on the desert air’, as Thomas Gray put it eloquently in 1751 (in context, talking about humans, not horticulture).3 One might take his elegiac observation to constitute an oblique call for universal education (though he didn’t). Yet even in eras of widening or general literacy, it remains difficult for every viewpoint to be recorded and to survive. In nineteenth-century Britain, when more people than ever were writing personal letters, diaries and autobiographies, those who did so remained a minority. And most of their intimate communications, especially if unpublished, have been lost or destroyed.

Of course, past people were also known by many other forms of surviving evidence. The current vogue in historical studies (in which I participate) is to encourage the analysis of all possible data about as many as possible individuals, whether ‘high’ or ‘lowly’, by making the information available and searchable on-line.4 Nonetheless, historians, however determined and assiduous, cannot recover everybody. Nor can they make all recovered information meaningful. Sometimes past data is too fragmented or cryptic to have great resonance. It can also be difficult to link imperfect items of information together, with attendant risks: on the one hand, of making false linkages and, on the other hand, of missing real ones.

Moreover, there are still many people, even in well documented eras, whose lives left very little evidence. They were the unknowns who, in George Eliot’s much-quoted passage at the end of Middlemarch (1871/2): ‘lived faithfully a hidden life, and rest in unvisited tombs’.5 She did not intend to slight such blushing violets. On the contrary, Eliot hailed their quiet importance. ‘The growing good of the world is partly dependent on unhistoric acts’, she concluded. A realist might add that the same is true of the ‘bad of the world’ too. But again many lives remain hidden from historic record, even if the long-term impact of their collective actions and inactions has not.

Finally, there is concealment. Plenty of people then and now have reasons for hiding evidence – for example, pertaining to illegitimacy, adultery, addiction, crime, criminal conviction, or being on the losing side in warfare. And many people will have succeeded, despite the best efforts of subsequent scholar-sleuths. Today, however, those seeking to erase their public footprint face an uphill task. The replicating powers of the electronic media mean that evidence removed from one set of files returns, unbidden, in other versions or lurks in distant master files. ‘Delete’ does not mean absolute deletion.

Concluding the saga of The Mayor of Casterbridge (1886), the bipolar anti-hero Michael Henchard seeks to become a non-person after his death, leaving a savage will demanding ‘That I be not buried in consecrated ground & That no sexton be asked to toll the bell … & That no flowers be planted on my grave & That no man remember me’.6

Non-Person © www.idam365.com (2014)

Today: yes, people can still be forgotten; or even fall through the administrative cracks and become a non-person. But to disappear from the record entirely is far from easy. Future historians of on-line societies are going to face the problems not of evidential dearth but of massive electronic glut. Still, don’t stop writing BLOGs, tweets, texts, emails, letters, books, graffiti. If we can’t disappear from the record, then everyone – whether famous, infamous, or unknown – can take action and ‘bear witness’.

1 For Ötzi, see: http://en.wikipedia.org/wiki/%C3%96tzi.

2 See P.V. Glob, The Bog People: Iron-Age Man Preserved, transl. R. Bruce-Mitford (London, 1969); D.R. Brothwell, The Bog Man and the Archaeology of People (London, 1986).

3 T. Gray, ‘Elegy Written in a Country Churchyard’ (1751), lines 55-6.

4 See e.g. Proceedings of the Old Bailey Online, 1674-1913: www.oldbaileyonline.org; London Lives, 1690-1800: www.londonlives.org; Clergy of the Church of England Database, 1540-1835: www.theclergydatabase.org; London Electoral History, 1700-1850: www.londonelectoralhistory.com.

5 G. Eliot [Mary Ann Evans], Middlemarch: A Study of Provincial Life (1871/2), ed. W.J. Harvey (Harmondsworth, 1969), p. 896.

6 T. Hardy, The Mayor of Casterbridge: The Life and Death of a Man of Character (1886), ed. K. Wilson (London, 2003), p. 321.


MONTHLY BLOG 40, HISTORICAL REPUTATIONS THROUGH TIME

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

What does it take to get a long-surviving reputation? The answer, rather obviously, is somehow to get access to a means of endurance through time. To hitch a lift with history.

People in sports and the performing arts, before the advent of electronic storage/ replay media, have an intrinsic problem. Their prowess is known at the time but is notoriously difficult to recapture later. The French actor Sarah Bernhardt (1844-1923), playing Hamlet on stage when she was well into her 70s and sporting an artificial limb after a leg amputation, remains an inspiration for all public performers, whatever their field.1  Yet performance glamour, even in legend, still fades fast.

Bernhardt in the 1880s as a romantic Hamlet.

What helps to keep a reputation well burnished is an organisation that outlasts an individual. A memorable preacher like John Wesley, the founder of Methodism, impressed many different audiences, as he spoke at open-air and private meetings across eighteenth-century Britain. Admirers said that his gaze seemed to pick out each person individually. Having heard Wesley in 1739, one John Nelson, who later became a fellow Methodist preacher, recorded that effect: ‘I thought his whole discourse was aimed at me’.2

Yet there were plenty of celebrated preachers in Georgian Britain. What made Wesley’s reputation survive was not only his assiduous self-chronicling, via his journals and letters, but also the new religious organisation that he founded. Of course, the Methodist church was dedicated to spreading his ideas and methods for saving Christian souls, not to the enshrining of the founder’s own reputation. It did, however, forward Wesley’s legacy into successive generations, albeit with various changes over time. Indeed, for true longevity, a religious movement (or a political cause, come to that) has to have permanent values that outlast its own era but equally a capacity for adaptation.

There are some interesting examples of small, often millenarian, cults which survive clandestinely for centuries. England’s Muggletonians, named after the London tailor Lodovicke Muggleton, were a case in point. Originating during the mid-seventeenth-century civil wars, the small Protestant sect never recruited publicly and never grew to any size.  But the sect lasted in secrecy from 1652 to 1979 – a staggering trajectory. It seems that the clue was a shared excitement of cultish secrecy and a sense of special salvation, in the expectation of the imminent end of the world. Muggleton himself was unimportant. And finally the movement’s secret magic failed to remain transmissible.3

In fact, the longer that causes survive, the greater the scope for the imprint of very many different personalities, different social demands, different institutional roles, and diverse, often conflicting, interpretations of the core theology. Throughout these processes, the original founders tend quickly to become ideal-types of mythic status, rather than actual individuals. It is their beliefs and symbolism, rather than their personalities, that live.

As well as beliefs and organisation, another reputation-preserver is the achievement of impressive deeds, whether for good or ill. Notorious and famous people alike often become national or communal myths, adapted by later generations to fit later circumstances. Picking through controversies about the roles of such outstanding figures is part of the work of historians, seeking to offer not anodyne but judicious verdicts on those ‘world-historical individuals’ (to use Hegel’s phrase) whose actions crystallise great historical moments or forces. They embody elements of history larger than themselves.

Hegel himself had witnessed one such giant personality, in the form of the Emperor Napoleon. It was just after the battle of Jena (1806), when the previously feared Prussian army had been routed by the French. The small figure of Napoleon rode past Hegel, who wrote: ‘It is indeed a wonderful sensation to see such an individual, who, concentrated here at a single point, astride a horse, reaches out over the world and masters it’.4

(L) The academic philosopher G.W.F. Hegel (1770-1831) and (R) the man of action, Emperor Napoleon (1769-1821), both present at Jena in October 1806.

The means by which Napoleon’s posthumous reputation has survived are interesting in themselves. He did not found a long-lasting dynasty, so neither family piety nor institutionalised authority could help. He was, of course, deposed and exiled, dividing French opinion both then and later. Nonetheless, Napoleon left numerous enduring things, such as codes of law; systems of measurement; structures of government; and many physical monuments. One such was Paris’s Jena Bridge, built to celebrate the victorious battle.5

Monuments, if sufficiently durable, can certainly long outlast individuals. They effortlessly bear diachronic witness to fame. Yet, at the same time, monuments can crumble or be destroyed. Or, even if surviving, they can outlast the entire culture that built them. Today a visitor to Egypt may admire the pyramids, without knowing the names of the pharaohs they commemorated, let alone anything specific about them. Shelley caught that aspect of vanished grandeur well, in his poem to the ruined statue of Ozymandias: the quondam ‘king of kings’, lost and unknown in the desert sands.6

So lastly what about words? They can outlast individuals and even cultures, provided that they are kept in a transmissible format. Even lost languages can be later deciphered, although experts have not yet cracked the ancient codes from Harappa in the Punjab.7  Words, especially in printed or nowadays digital format, have immense potential for endurance. Not only are they open to reinterpretation over time; but, via their messages, later generations can commune mentally with earlier ones.

In Jena, the passing Napoleon (then aged 37) was unaware of the watching academic (then aged 36), who was formulating his ideas about revolutionary historical changes through conflict. Yet, through the endurance of his later publications, Hegel, who was unknown in 1806, has now become the second notable personage who was present at the scene. Indeed, via his influence upon Karl Marx, it could even be argued that the German philosopher has become the historically more important figure of those two individuals in Jena on 13 October 1806. On the other hand, Marx’s impact, having been immensely significant in the twentieth century, is also fast fading.

Who from the nineteenth century will be the most famous in another century’s time? Napoleon? Hegel? Marx? (Shelley’s Ozymandias?) Time not only ravages but provides the supreme test.

1  R. Gottlieb, Sarah: The Life of Sarah Bernhardt (New Haven, 2010).

2  R.P. Heitzenrater, ‘John Wesley’s Principles and Practice of Preaching’, Methodist History, 37 (1999), p. 106. See also R. Hattersley, A Brand from the Burning: The Life of John Wesley (London, 2002).

3  W. Lamont, Last Witnesses: The Muggletonian History, 1652-1979 (Aldershot, 2006); C. Hill, B. Reay and W. Lamont, The World of the Muggletonians (London, 1983); E.P. Thompson, Witness against the Beast: William Blake and the Moral Law (Cambridge, 1993).

4  G.W.F. Hegel to F.I. Niethammer, 13 Oct. 1806, in C. Butler (ed.), The Letters: Georg Wilhelm Friedrich Hegel, 1770-1831 (Bloomington, 1984); also transcribed in www.Marxists.org, 2005.

5  See http://napoleon-monuments.eu/Napoleon1er.

6  P.B. Shelley (1792-1822), Ozymandias (1818).

7  For debates over the language or communication system in the ancient Indus Valley culture, see: http://en.wikipedia.org/


MONTHLY BLOG 39, STUDYING THE LONG AND THE SHORT OF HISTORY

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

A growing number of historians, myself included, want students to study long-term narratives as well as in-depth courses.1 More on (say) the peopling of Britain since Celtic times alongside (say) life in Roman Britain or (say) medicine in Victorian times or (say) the ordinary soldier’s experiences in the trenches of World War I. We do in-depth courses very well. But long-term studies are also vital to provide frameworks.2

Put into more abstract terms, we need more diachronic (long-term) analysis, alongside synchronic (short-term) immersion. These approaches, furthermore, do not have to be diametrically opposed. Courses, like books, can do both.

That was my aim in an undergraduate programme, devised at Royal Holloway, London University.3  It studied the long and the short of one specific period. The choice fell upon early nineteenth-century British history, because it’s well documented and relatively near in time. In that way, the diachronic aftermath is not too lengthy for students to assess within a finite course of study.

Integral to the course requirements were two long essays, both on the same topic X. There were no restrictions, other than analytical feasibility. X could be a real person; a fictional or semi-fictionalised person (like Dick Turpin);4  an event; a place; or anything that lends itself to both synchronic and diachronic analysis. Students chose their own, with advice as required. One essay of the pair then focused upon X’s reputation in his/her/its own day; the other upon X’s long-term impact/reputation in subsequent years.

There was also an examination attached to the course. One section of the paper contained traditional exam questions; the second just one compulsory question on the chosen topic X. Setting that proved a good challenge for the tutor, thinking of ways to compare and contrast short- and long-term reputations. And of course, the compulsory question could not allow a simple regurgitation of the coursework essays; and it had to be equally answerable by all candidates.

Most students decided to examine famous individuals, both worthies and unworthies: Beau Brummell; Mad Jack Mytton; Queen Caroline; Charles Dickens; Sir Robert Peel; Earl Grey; the Duke of Wellington; Harriette Wilson; Lord Byron; Mary Shelley; Ada Lovelace; Charles Darwin; Harriet Martineau; Robert Stephenson; Michael Faraday; Augustus Pugin; Elizabeth Gaskell; Thomas Arnold; Mary Seacole; to name only a few. Leading politicians and literary figures tended to be the first choices. A recent book shows what can be done in the case of the risen (and rising still further) star of Jane Austen.5 In addition, a minority preferred big events, such as the Battle of Waterloo or the Great Exhibition. None in fact chose a place or building; but it could be done, provided the focus is kept sharp (the Palace of Westminster, not ‘London’).

Studying contemporary reputations encouraged a focus upon newspaper reports, pamphlets, letters, public commemorations, and so forth. In general, students assumed that synchronic reputation would be comparatively easy to research. Yet they were often surprised to find that initial responses to X were confused. It takes time for reputations to become fixed. In particular, where the personage X had a long life, there might well be significant fluctuations during his or her lifetime. The radical John Thelwall, for example, was notorious in 1794, when on trial for high treason, yet largely forgotten at his death in 1834.6

By contrast, students often began by feeling fussed and unhappy about studying X’s diachronic reputation. There were no immediate textbooks to offer guidance. Nonetheless, they often found that studying long-term changes was good fun, because it was more off-the-wall. The web is particularly helpful, as wikipedia often lists references to X in film(s), TV, literature, song(s) and popular culture. Of course, all wiki-leads need to be double-checked. There are plenty of errors and omissions out there.

Nonetheless, for someone wishing to study the long-term reputation of (say) Beau Brummell (1778-1840), wikipedia offers extensive leads, providing many references to Brummell in art, literature, song, film, and sundry stylistic products making use of his name, as well as a bibliography.7

Beau Brummell (1778-1840), from L to R: as seen in his own day; as subject of enquiry for Virginia Woolf (1882-1941); and as portrayed by Stewart Granger in Curtis Bernhardt’s film (1954).

Plus it is crucial to go beyond wikipedia. For example, a search for relevant publications would reveal an unlisted offering. In 1925, Virginia Woolf, no less, published a short tract on Beau Brummell.8 The student is thus challenged to explore what the Bloomsbury intellectual found of interest in the Regency Dandy. Of course, the tutor/examiner also has to do some basic checks, to ensure that candidates don’t miss the obvious. On the other hand, surprise finds, unanticipated by all parties, proved part of the intellectual fun.

Lastly, the exercise encourages reflections upon posthumous reputations. People in the performing arts and sports, politicians, journalists, celebrities, military men, and notorious criminals are strong candidates for contemporary fame followed by subsequent oblivion, unless rescued by some special factor. The minor horse-thief Dick Turpin, for example, was catapulted from conflicted memory in the eighteenth century into a dashing highwayman by the novel Rookwood (1834). That fictional boost gave his romantic myth another 100 years before it began to fade again.

Conversely, a tiny minority can go from obscurity in their lifetime to later global renown. But it depends crucially upon their achievements being transmissible to successive generations. The artist and poet William Blake (1757-1827) is a rare and cherished example. Students working on the long-and-the-short of the early nineteenth century were challenged to find another contemporary with such a dramatic posthumous trajectory. They couldn’t.

But they and I enjoyed the quest and discovery of unlikely reactions, like Virginia Woolf dallying with Beau Brummell. It provided a new way of thinking about the long-term – not just in terms of grand trends (‘progress’; ‘economic stages’) but by way of cultural borrowings and transmutations between generations. When and why? There are trends but no infallible rules.

1 ‘Teaching History’s Big Pictures: Including Continuity as well as Change’, Teaching History: Journal of the Historical Association, 136 (Sept. 2009), pp. 53-9; and PJC website Pdf/3.

2 My own answers in P.J. Corfield, Time and the Shape of History (2007).

3 RH History Course HS2246: From Rakes to Respectability? Conflict and Consensus in Britain 1815-51 (content now modified).

4 Well shown by J. Sharpe, Dick Turpin: The Myth of the Highwayman (London, 2004).

5 C. Harman, Jane’s Fame: How Jane Austen Conquered the World (Edinburgh, 2009).

6 Two PJC essays on John Thelwall (1764-1834) are available in PJC website, Pdf/14 and Pdf/22.

7 See http://en.wikipedia.org/wiki/Beau_Brummell.

8 See V. Woolf, Beau Brummell (1925; reissued by Folcroft Library, 1972); and http://www.dandyism.net/woolfs-beau-brummell/.

To download Monthly Blog 39 please click here

MONTHLY BLOG 21, HISTORICAL PERIODISATION – PART 1

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2012)

It was fascinating to meet with twenty-three others on a humid June afternoon to debate what might appear to be abstruse questions of Law & Historical Periodisation. We were attending a special conference at Birkbeck College, London University – an institution (founded in 1823 as the London Mechanics Institute) committed as always to extending the boundaries of knowledge. The participants came from the disciplines of law, history, philosophy, and literary studies. And many were students, including, laudably, some interested undergraduates who were attending in the vacation.

At stake was not the question of whether we can generalise about different and separate periods of the past. Obviously we can and must to some extent. Even the most determined advocate of history as ‘one and indivisible’ has to accept some sub-divisions for operative purposes, whether in terms of days, years, centuries or millennia.

But the questions really coalesce around temporal ‘stages’, such as the ‘mediaeval’ era. Are such concepts relevant and helpful? Is history rightly divided into successive stages? And do they follow in regular sequence in different countries, even if at different times? Or is there a danger of reifying these epochs – turning them into something more substantive and distinctive than was actually the case?

Studies like H.O. Taylor’s The Medieval Mind (1919 and many later edns), Benedicta Ward’s Miracles and the Medieval Mind (1982), William Manchester’s The Medieval Mind and the Renaissance (Boston, 1992), and Stephen Currie’s Miracles, Saints and Superstition: The Medieval Mind (2006), all imply that there were common properties to the mind-sets of millions of Europeans who lived between (roughly) the fifth-century fall of Rome and the fifteenth-century discovery of the New World – and that these mindsets differed sharply from the ‘modern mind’. Yet are these historians justified in choosing this formula within their titles? Or partly justified? Or absolutely misleading? Are there common features within human consciousness and experiences that refute these periodic cut-off points? Do we want to go to the other end of the spectrum, to endorse the view of those Evolutionary Psychologists who aver that human mentalities have not changed since the Stone Age? Forever he, whether Tarzan, Baldric or Kevin? Forever she, whether Jane, Elwisia or Tracey?

Two papers by Kathleen Davis (University of Rhode Island) and Peter Fitzpatrick (Birkbeck College) formed the core of the conference, both focusing upon the culture of jurisprudence and its standard definition of the medieval. Both gave stimulating critiques of conventional legal assumptions, based upon stark dichotomies. In bare summary, the ‘medieval’ is supposed to be Christianised, feudal, and customary, while the ‘modern’ is supposedly secular, rights-based, and centred around the sovereign state. For good measure, the former is by implication backward and oppressive, while the latter is progressive and enlightened. Yet the long history of legal pluralism goes against any such dichotomy in practice. Historians like Helen Cam, who in 1941 wrote What of Medieval England is Alive in England Today?, would have rejoiced at these papers, and at the sharp questions from the conference participants.

For my part, I was asked to give a final summary, based upon my position as a critic of all simple stage theories of history.1 My first point was to stress again how difficult it is to rethink periodisation, because so many cardinal assumptions are built not only into academic language but also into academic structures. Many specialists name themselves after their periods – as ‘medievalists’, ‘modernists’ or whatever. Those who call themselves just ‘historians’ are seen as too vague – or suffering from folie de grandeur. There are mutterings about the fate of Arnold Toynbee, once hailed as the twentieth-century’s greatest historian-philosopher – now virtually forgotten. Academic posts within departments of History and Literary Studies are generally defined by timespans. So are examination papers; many academic journals; many conferences; and so forth. Publishers in particular, who pay great attention to book titles, often endorse traditional nomenclature and stage divisions.

True, there are now increasing calls for change. My second point therefore highlights the new diversity. Conferences and seminars are held not only across disciplinary boundaries but also across epochal divisions. An increasing number of books are published with unusual start and end dates; and the variety of dates attached to the traditional periods continues to multiply, often confusingly. In addition, some scholars now study ‘big’ (long-term) history from the start of the world, or at least from the start of human history. Their approaches do not always manage to avoid traditional schema but the aim is to encourage a new diachronic sweep. And other pressures for change are coming from scholars in new fields of history, such as women’s history or (not the same thing) the history of sexuality.

Shedding the old period terminology is mentally liberating. So the Italian historian Massimo Montanari, previously a ‘medievalist’, wrote in 1994 of the happiness that followed his discarding of all the labels of ‘ancient’, ‘medieval’ and ‘modern’: ‘In the end, I felt freed as from a restrictive and artificial scaffolding …’2

Lastly, then, what of the future? The aim is not to replace one set of period terms and dates with another. Any rival set will run into the same difficulties of detecting precise cut-off points and the risk of stereotyping the different cultures and societies on either side of a period boundary. It is another example of dichotomous thinking, which glosses over the complexities of the past. Above all, all stage theories fail to incorporate the elements of deep continuity within history (see my November 2010 discussion-point).

We need a new way of thinking about the intertwining of persistence and change within history. It is chiefly a matter of understanding. But it will also entail a change of language. I don’t personally endorse the Foucauldian view that language actually determines consciousness. For me, primacy in the relationship is the other way round. A changing consciousness can ultimately change language. Yet I do recognise the confining effects of existing concepts and terminology upon patterns of thought. Such an impact is another example of the power of continuity. With several bounds, however, historians can become free. With a new language, we can talk about epochs and continuities, intertwined and interacting in often changing ways. It’s fun to try and also fun to try to convince others. Medievalists, arise. You have nothing to lose but an old name, which survives through inertia. There are more than three steps between ancient – middle – modern, even in European history – let alone around the world. Try a different name to shake the stereotypes. And tell the lawyers too.

1 P.J. Corfield, Time and the Shape of History (2007); and P.J. Corfield, ‘POST-Medievalism/ Modernity/ Postmodernity?’, Rethinking History, 14/3 (Sept. 2010), pp. 379-404; also available on the publisher’s website (Taylor & Francis: www.tandfonline.com) and on the personal website www.penelopejcorfield.co.uk.

2 M. Montanari, The Culture of Food, transl. C. Ipsen (Oxford, 1994), p. xii.

To download Monthly Blog 21 please click here

MONTHLY BLOG 18, IN PRAISE OF MEMORY

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2012)

Try living without it. In healthy humans, memory works non-stop from birth to death. That means that it can work, unprompted, for over a century. Memory automatically tells us who we are (short of mental illness or accident). It simultaneously supplies us with our personal back-story and locates us within a broad framework picture of the world in which we have lived to date. Our capacity to think through time, and to remember things that happened long ago, constitutes a major characteristic of what it means to be human.

As such, the power of memory is an ancient, not to say primaeval, capacity. It’s entwined with consciousness. But it also operates at instinctual levels, as in muscle memory. With its multiple resources, memory is notably multi-layered. It can be cultivated consciously. A host of mnemonic systems, some very ancient, offer techniques to help the mind in storing and retrieving a huge ragbag of ideas and information.1  Giulio Camillo’s beautiful Theatre of Memory (shown in Fig. 1) is but one example.2  It’s a nice imaginary prospect of the inside of the human cranium.
Alongside conscious efforts of memory cultivation, many framework recollections – such as knowledge of one’s native language – are usually accumulated unwittingly and almost effortlessly. Deep memory systems constitute a form of long-term storage. With their aid, people who are suffering from progressive memory loss often continue to speak grammatically for a long way into their illness. Or, strikingly, songs learned in childhood, aided by the wordless mnemonic power of rhythm and music, may remain in the repertoire of the seriously memory-impaired even after regular speech has long gone.

Given its primaeval origins, the human capacity to remember notably predates the invention of calendars. Such time-measuring and time-referencing devices are the products, not the first framers, of memory. As a result, we don’t habitually remember by reference to precise dates and times, with the exception of special events or consciously learned information. Nor do we retain everything. Forgetting selectively is as much a human capacity as remembering. Too much and we’d suffer from information overload.

The combination of remembering and forgetting, both individually and collectively, has some significant implications. Not only does memory fade but, unkindly, it also plays tricks. Details that we think we remember with great confidence can turn out to be false. My own deceitful memory has just given me a shock, which I’ve taken to heart since I pride myself on my powers of recollection. One of my clear recollections of the student protests in 1969 (which I wrote about in my January discussion-piece) has turned out to be erroneous, at least in one significant detail. At a lunch-time protest meeting at Bedford College in 1969 or early 1970, an ardent young postgraduate urged those present to capture the Principal’s office today, in order to overthrow capitalism tomorrow. I am certain that the event took place and that the speech was greeted with cheers (and some silent scepticism – mine included).

However, my memory has over time fabricated an erroneous identity for the speaker. I met the person in question last week – now a Labour peer in the House of Lords – and reminded her of the episode, expecting some shared laughter at the ambitious scope of youthful ideals. But she did not attend Bedford College nor had she ever visited it. Moreover, she had always shared my critique of the student utopianism of the later 1960s. I was wrong on a central point, which I’d convinced myself was correct. Could I even be sure that the protest meeting took place at all? Collapse of stout party – myself.
And I am not alone. Discovering faults in memory is a common experience. It’s a salutary warning not to be too cocky. Had I been relying upon my unchecked memory when speaking in the witness box, this central error would have discredited my entire evidence. Falsus in uno, falsus in omnibus, as the Roman legal tag has it: wrong in one thing, wrong in all. In fact, the dictum is exaggerated. Errors in some areas may be counter-balanced by truths elsewhere. Nonetheless, I have drawn one personal conclusion from my mortifying discovery. If I’m ever again invited to give testimony on oath or in an on-the-record interview, I will do my homework thoroughly beforehand.

A second lesson is that human gossip and chatter is an essential part of the process of checking and cross-checking memories. Such retrospective discussions (‘She said … ; and then I said … ; and then she replied …’) often seem rambling and inconsequential. They are, however, consolidating the stuff of memory. It works for communities as well as for individuals. Indeed, talking, taking stock, and remembering together is helpful, particularly after experiences of disasters which should not be forgotten in silence. Vera Schwarcz’s powerful study Bridge across Broken Time makes that point in its title.3 Memory, with all its faults, allows for the possibility of understanding the past and overcoming traumas. Conversely, the negative effects of buried memories for starkly dislocated communities reverberate through successive generations.

So my final point: here come the historians. The fallibility of unvarnished memory encouraged the first production of memory aids, such as written and numerical records, and calendrical calculations. And over time humans have generated an immeasurable cornucopia of data and documentation, which is far beyond the capacity of any individual mind to store. It is now a collective resource. Historians don’t replicate human memory. Indeed, they share its fallibilities. But, collectively, they join the task of storing, cross-checking, correcting, ordering, and evaluating a past that goes beyond individual memory.

1  For a stirring analysis, ranging from classical Greece to the European Renaissance, consult the classic by Frances A. Yates, The Art of Memory (1966). A recent contribution to the memory bug is also provided by Joshua Foer, Moonwalking with Einstein: The Art and Science of Remembering Everything (2011).

2  For the philosopher Giulio Camillo (c.1480-1544), see K. Robinson, A Search for the Source of the Whirlpool of Artifice: The Cosmology of Giulio Camillo (Edinburgh, 2006).

3  Vera Schwarcz, Bridge across Broken Time: Chinese and Jewish Cultural Memory (New Haven, 1998).

To download Monthly Blog 18 please click here

MONTHLY BLOG 13, CROSS-CLASS MARRIAGE IN HISTORY

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2011)

People often imagine that class barriers were more rigid in the past, notwithstanding historical fluctuations in social attitudes. As a result, it is commonly assumed that cross-class marriages were especially rare. Yet matters were never so simple. Among the many individuals in the past who had sexual relationships across class boundaries (a comparatively frequent occurrence), there were always some who were bold enough to marry across them.

One case, among several aristocratic examples from the eighteenth century, was the marriage of the 5th Earl of Berkeley to Mary Cole, the daughter of a Gloucester butcher. She made a dignified wife, living down the social sneers. The Berkeleys began to live together in 1785 and did not marry publicly until 1796, although the Earl claimed that there had been an earlier ceremony.
This confusion led to a succession dispute. Eventually, the sons born before the public wedding were disbarred from inheriting the title, which went to their legitimate younger brother. Here the difficulty was not the mother’s comparatively ‘lowly’ status but the status of the parental marriage. It affected the succession to a noble title, which entitled its holder to attend the House of Lords. But the disbarred older siblings did not become social outcasts. Two of the technically illegitimate sons, born before the public marriage, went on to become MPs in the House of Commons, while the legitimate 6th Earl modestly declined to take his seat as a legislator.

Another example, this time from the nineteenth century, was that of Sir Harry Fetherstonhaugh. He was the wealthy owner of Uppark House (Sussex), who in 1825 married for the first time, aged 70. His bride was the 21-year-old Mary Ann Bullock, his dairymaid’s assistant. She inherited his estate, surviving him for many years. Everything at Uppark was kept as it was in Sir Harry’s day. The estate then went to her unmarried sister who, as ‘her leddyship’ in her very old age, appeared to epitomise the old landed society – so much did outcomes triumph over origins. The young H.G. Wells, whose mother was housekeeper at Uppark, mused accordingly:1

In that English countryside of my boyhood every human being had a ‘place’. It belonged to you from your birth like the colour of your eyes, it was inextricably your destiny. Above you were your betters, below you were your inferiors…

The social conventions, within such a hierarchy, did allow for some mobility. High-ranking men raised their wives to a matching status, giving aristocratic men some room for manoeuvre. Against that, noble families generally did their best to ensure that heirs to grand titles did not run away with someone entirely ‘unsuitable’.

A tabulation of the first-marriage choices of 826 English peers, made between 1600 and 1800, showed that, in sober reality, most (73 percent) chose a bride from an equally or nearly equally titled background.2 The homogeneity of the elite was generally preserved.

Interestingly, however, just over one quarter (27 percent) of these English peers – a far from negligible proportion – were more socially venturesome. Their wives from ‘lower’ social backgrounds tended to be daughters of professional men or of merchants. In particular, a splendid commercial fortune was an ideal contribution in terms of bridal dowry; and, in such circumstances, aristocratic families found themselves willing to accept theoretically humbler connections with businessmen ‘in trade’.

Marriages like that of Sir Harry were ‘outliers’ in terms of the social distance between bride and groom. But his matrimonial decision to leap over conventions of social distance was not unique.

For women of high rank, meanwhile, things were more complicated. By marrying ‘down’, they lost social status; and their offspring, however well connected on the mother’s side, took their ‘lower’ social rank from the father.

Nonetheless, it was far from unknown for high-born women to flout convention. In particular, wealthy widows might follow their own choice in a second marriage, having followed convention in the first. One notable example was Hester Lynch Salusbury, from a Welsh landowning family. She married, firstly, Henry Thrale, a wealthy brewer, with whom she had 12 children, and then in 1784 – three years after Thrale’s death – Gabriele Piozzi, an Italian music teacher and a Catholic to boot.3

Scandal ensued. Her children were affronted. And Dr Johnson, a frequent house-guest at the Thrales’ Streatham mansion, was decidedly not amused. Undaunted, Hester Lynch Piozzi and her husband retired to her estates in north Wales, where they lived in a specially built Palladian villa, Brynbella.
So little damage was done to the family’s long-term status that her (estranged) oldest daughter married a Viscount. Furthermore, the Piozzis’ adopted son, an Italian nephew of Gabriele Piozzi, inherited the Salusbury estates, taking the compound name Sir John Salusbury Piozzi Salusbury.

If, after the initial fuss, the partners in a cross-class union lived respectably enough, the wider society tended sooner or later to condone the ‘mésalliance’. Feelings were soothed by respect for marriage as an institution. And the wider social stability was ultimately served by absorbing such dynastic shocks rather than by highlighting them.

Little wonder that many a novel dilated on the excitements and tensions of matrimonial choice. Not only was there the challenge of finding a satisfactory partner among social peer-groups but there was always some lurking potential for an unconventional match instead of a conventional union.

Such possibilities – complete with hazards – applied at all levels of society. In the early twentieth century, the family of D.H. Lawrence epitomised a different set of cross-class tensions. His father was a scarcely literate miner from Eastwood, near Nottingham, while his mother was a former assistant teacher with strong literary interests, who disdained the local dialect, and prided herself on her ‘good old burgher family’. From the start, they were ill-assorted.
In his youth, D.H. Lawrence was his mother’s partisan and despised his father as feckless and ‘common’. Later, however, he switched his theoretical allegiance. Lawrence felt that his mother’s puritan gentility had warped him. Instead, he yearned for his father’s male sensuousness and frank hedonism, though the father and son never became close.4

Out of such tensions came Lawrence’s preoccupation with man/woman conflict and with unorthodox sex and love. His parents’ strife was also more than mirrored in his own turbulent relationship with Frieda von Richthofen, the daughter of a Silesian aristocrat, who was, when they met, married to a respected Nottingham University professor.

Initial social distance between a married couple could lend enchantment – or the reverse. Cross-class relationships have been frequent enough for there to have been many cases, both successful and unsuccessful. Later generations always underestimate their number. But we should not ignore the potential for cultural punch (positive or negative) when couples from different backgrounds marry, even in times when class barriers are less than rigid. Nor should we underestimate society’s long-term ability to absorb such shocks, which would have to happen in great numbers before a classless society might be achieved.

1 H.G. Wells, Tono-Bungay (1909; in 1994 edn), pp. 10-11. For more about the Fe(a)therstonhaugh marriage and the context of Sussex landowning society, see A. Warner, ‘Finding the Aristocracy, 1780-1880: A Case Study of Rural Sussex’ (unpub. typescript, 2011; copyright A. Warner, who can be contacted via PJC).

2 Figures calculated from data in J. Cannon, Aristocratic Century: The Peerage of Eighteenth-Century England (Cambridge, 1984), p. 85: Table 20. Note that the social status of each bride is derived from the rank of her father, so possibly obscuring a more variegated background in terms of her maternal inheritance.

3 Details of their courtship and Hester Thrale’s meditations on their disparities in rank are available on the website: www.thrale.com.

4 R. Aldington, Portrait of a Genius but …: The Life of D.H. Lawrence (1950), pp. 3-5, 8-9, 13, 15, 334.

To download Monthly Blog 13 please click here

MONTHLY BLOG 6, RECONSIDERING REVOLUTIONS

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2011)

Revolution – metamorphosis – transformation – disjunction – diagenesis – dialectical leap forward – paradigm shift. Marvellous long words and phrases, such as these (and many more), collectively express the sense of drastic upheaval that is contained within the concept of macro-change.

And yes, great turbulent upheavals occur not only in the natural world (earthquakes, volcanoes, tempests, floods, fires) but in human societies too. Not surprising, really. We are part of the whole and so subject to the same intricate mix of continuities, gradual changes, and macro-changes that interact seamlessly throughout the cosmos.

But, talking of great transformations, three distinctions can be made.

Firstly, language. The term ‘revolution’ is far too often over-used. It has become tired, lacking the punch and clarity that such a concept should retain. So we need a smarter vocabulary to differentiate between the different categories of radical upheaval.

My own advice is to reserve ‘revolution’ for violent and/or transformational upheavals of systems of government. (Here the reference is to something more drastic than a coup, which changes the leadership without changing the regime). Political revolutions are distinctive. They are characterised by mass action, which aims at rejecting, with violence if need be, an established system of rule with its associated power structures, and at installing something qualitatively different. Political revolutions accordingly differ from other forms of macro-change.

After all, is it analytically helpful to name the process of industrialisation as the Industrial Revolution, when it unfolded over decades, even centuries? The shift from a human- and animal-powered economy to one dependent upon mechanical power was truly epic. But its advent incorporated both dramatic innovations and slower evolutionary adaptations. So why not call it a Technological Transformation? Such a name acknowledges the magnitude of change but does not confine change to one revolutionary moment or movement.

For example, the first steam-powered cotton-looms were truly remarkable. They dramatically increased productivity as well as changed patterns of working, as the male handloom weavers in their homes were ousted by machine-minders in large factories [shown below in an early nineteenth-century illustration]. Yet the clerical inventor Edmund Cartwright (1743-1823), who patented his steam-powered loom in 1785, failed financially. It took decades for his pioneering invention to be adopted, adapted and further upgraded; and centuries for mechanical power to become so essential in so many human activities world-wide, as it is today. Technological transformations need therefore to be analysed with a different set of terms and concepts.
Secondly: political revolutions also need to be located within a spectrum of different sorts and degrees of change. It is very rarely, if ever, that everything is transformed all at once. The rhetoric of dramatic metamorphosis is both fearful and hopeful: ‘All changed, changed utterly;/ A terrible beauty is born’, as Yeats saluted the Irish Easter Rising in 1916. Yet, when the dust dies down, continuity turns out to have dragged at the heels of revolution after all. What is known as admirable heritage to its fans is deplorable inertia to its critics. Thus Karl Marx once denounced with righteous passion: ‘the tradition of all the dead generations [that] weighs like a nightmare on the brains of the living’.

There are other forces within history as well as the desire for radical change. Accordingly, theories of history which assume revolution to be the sole mechanism of change are one-sided and need correction. That criticism applies both to Hegel’s dialectical combustion of conflicting ideals, which each time led to the emergence of a new historical stage; and to the Marxist version of revolutionary conflict in the form of dialectical materialism. For Karl Marx and his loyal co-thinker Friedrich Engels the growing tensions from class conflict would eventually ignite great political revolutions, each one propelling a new social class into power.
Yet no. Not only does fundamental change frequently develop via evolutionary rather than revolutionary means; but revolutions do not always introduce macro-change. They can fail, abort, halt, recede, fudge, muddle, diverge, transmute and/or provoke counter-revolutions. The complex failures and mutations of the communist revolutions, which were directly inspired in the twentieth century by the historical philosophy of Marx and Engels, make that point historically, as well as theoretically.

Thirdly, therefore: revolutions are not all the same and are not all automatically successful. Nonetheless, drastic upheaval through direct action is sometimes the only way to effect change.

A youthful enthusiast at the Berlin Wall before its fall –
trying some revolutionary spelling for good measure.
Copyright© NasanTur 2008

The concept can exert a radical charm all its own, especially in prospect – before any bloodshed. ‘Let a hundred flowers bloom’. ‘O brave new world’. Yet rosy dreams may turn to horror. Brightness can turn to night. ‘Musing on roses and revolutions,/ I saw night close down on the earth like a great dark wing …/ And I heard the lamentations of a million hearts’, as the African American poet Dudley Randall wrote sombrely in 1968, aware that radical hopes would not easily transform the long after-history of African slavery.

So within the revolution, remember that it is easier to unite against what is not wanted than to agree on what is wanted instead. When the old regime has gone, it is important to keep talking rather than to switch to fighting one’s own side. Don’t let the revolution consume its own children. Don’t let the new regime mimic the faults of its predecessor. Use the great heroic power of revolutionary transformation to break from violence into new dialogue and new construction, taking time to engage with evolution and to tame old continuities.

Celebrations in Cairo’s Tahrir Square on 12 February 2011 after the resignation of Hosni Mubarak as Egypt’s President. Copyright ©nebedaay’s photostream 2011

Lastly, is there a periodicity to political revolutions? Do they come in any predictable pattern? In fact, again no. History would be tidier and easier to understand if it were so. Nonetheless, there is often a chance (not an inevitability) of a political uprising, even under the most repressive regimes, with each bold new generation of young people – every twenty years or so. We are currently witnessing the opportunity for real political transformations in the Arab world. Let it be beauty, not terror, that comes forth.


MONTHLY BLOG 4, ON THE SUBTLE POWER OF GRADUALISM

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2011)

Currently, fashionable chatter on the new political Right refers approvingly to the case for ‘chaos’. That view is voiced by Nick Boles, Conservative MP for Grantham and Stamford, author of Which Way’s Up? The Future for Coalition Britain (2010). Out of institutional turmoil, financial cuts, and the end of central planning there will – supposedly – emerge a benign new localised order, freed from the shackles of the contemporary state. The premise is that things should not be as they are. So the uncertainty of chaos is needed to encourage change. But history provides no guarantees that the outcome will be the one desired.

Rather more traditionally, the hard political Left also hopes for ‘revolution’. It’s not the same as ‘chaos’, though it contains the same hope. Out of upheaval will emerge the desired socio-political transformation. In fact, it has proved difficult to achieve such sweeping changes. It takes a total crisis to offer a revolutionary opportunity (as in Russia in 1917). But, even after that, it remains hard to keep a revolutionary regime in power against internal and external enemies, without compromising the original ideals that animated the revolution in the first place. Soviet Russia offers a sad example.

So why the apparent enthusiasm for chaos or revolution? Such attitudes mark an impatience with the strong forces of tradition and vested interests (see my October Discussion-point). And, in certain circumstances, there may be no alternative to drastic action.

But what about the case for the subtle power of gradual change? Perhaps slow transformation seems simply too tardy for today’s politicians. After all, they are constantly beset by demands for instant headlines and instant results. Belief in gradual change has also been tainted by its past association with an infallible and unstoppable ‘Progress’. The horrendous experiences of the twentieth century – in terms of world wars, genocides, mass famines, and killer epidemics – have discouraged any easy belief that things are slowly getting better across the board.

On the other hand, there are still some things to be said in favour of gradualism. It marks gentle ‘progression’ rather than inevitable ‘Progress’.

As a political method, it works by trying to convince people. They are to be wooed, not bludgeoned. ‘Slow but sure’ runs the adage. Festina lente – ‘make haste slowly’. ‘More haste, less speed’. Follow the example of the Roman consul Fabius Maximus. Avoid battle or direct confrontation, especially when likely to lose. Play for the long term. But don’t give up either. Fabianism is no excuse for inertia but an invitation to join the ‘long march’.

Historically, there are many examples of how patient advocacy over time can change social attitudes. Once majority opinion in many cultures held that human slavery was permissible and acceptable – even necessary and justifiable – in certain circumstances. The first few campaigners against the practice were condemned for their utter unrealism. Now, however, world-wide opinion holds that slavery is a social evil, even though various forms of personal unfreedom still – shamingly – exist in practice. Official beliefs have changed, collectively and gradually. Even those who covertly disagree find that they have to endorse the new line publicly. And there are reasons to hope that, eventually, the practices of covert slavery will also be stopped, in line with the reversal of world opinion from pro- to anti-slavery.

In fact, cultural attitudes, languages, and ideas are characteristically aspects of human life where transformations occur slowly and gradually. Individuals may often find that they have changed their views imperceptibly over some particular issue – without remembering particularly when and how the change happened. One common, though not invariable, pattern is a shift from youthful radicalism to an older hostility to innovation. Or it could be a move from earlier pacifism to later bellicosity. Of course, sudden and explicit conversions are also known. But gradual adaptations are very characteristic.

Slow evolution, after all, is a regular part of the physical world, of which humans form part. In biology, micro-change is the characteristic form of species adaptation through natural selection over time. That pattern was convincingly demonstrated in the mid-nineteenth century by Alfred Russel Wallace and, most famously, by Charles Darwin. His field observations substantiated the classic dictum: Natura non facit saltum – Nature does not proceed by sudden leaps and bounds.

The precise mechanisms of change remain debated; and the possibility of natural catastrophes is also canvassed. Nonetheless, the biological centrality of gradual change remains undoubted. And individuals, who find themselves imperceptibly ageing, know the process at first hand. It happened to Charles Darwin (1809-82), as portraits of him aged 31 in 1840, aged 45 in 1854, and aged 60 in 1869 testify.
Lastly, for historians, it is also not surprising to find that gradual change is a powerful force in human history. There are many long-term trends that are slow and relatively imperceptible at the time. One example is the world-wide spread of literacy since circa 1700. Certainly there have been oscillations in the trend; but it is unlikely to be reversed, short of global catastrophe.

Another long-term development post-1700 is the process of global urbanisation, with a continuing growth in the proportion of the world’s population living and working in towns. In addition, the numbers living in great cities of 1,000,000+ have also expanded dramatically. Again, this trend has not been linear. But it is highly unlikely to be reversed – again short of catastrophe.

And finally, what about the contemporary state? It has not arrived out of the blue, as an imposition upon its citizens. Instead, it has emerged slowly, along complex routes – from its origins in monarchical society to its officially democratised version today. Sure, there is much more to do by way of making popular participation in politics more systematic and more effective. Sure, too, there are continuing areas for debate as to how much the state can do and should do. But, again, the emergence of orderly government and a collective sustaining of the rule of law is a trend that has long emerged – is still emerging in some lawless parts of the world – and ought to be encouraged.

With collective urbanisation has come the need for effective governance. With the spread of literacy has come the pressures to democratise – with further steps yet required. And with global population growth has come the collective need to manage the planet for the survival of humans and our fellow species.

‘Chaos’ in the full sense means destruction, not salvation. It means running against the grain of historical trends. So let politicians have a sense of modesty about their own roles and aims. Gradual change is more natural, more sustainable, and socially more pleasant. Progress may have been an ideal too far. But steady progression marks how things actually work – and ought to work.
