Tag Archive for: students

MONTHLY BLOG 83, SEX AND THE ACADEMICS

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2017)

Appreciating sex means appreciating the spark of life. Educating numbers of bright, interesting, lively young adults is a sexy occupation. The challenge for academics therefore is to keep the appreciation suitably abstract, so that it doesn’t overwhelm normal University business – and absolutely without permitting it to escalate into sexual harassment of students who are the relatively powerless ones in the educational/power relationship.

It’s long been known that putting admiring young people with admirable academics, as many are, can generate erotic undertones. Having a crush on one’s best teacher is a common youthful experience; and at least a few academics have had secret yearnings to receive a wide-eyed look of rapt attention from some comely youngster.1 There is a spectrum of behaviour at University classes and social events, from banter, stimulating repartee and mild flirtation (ok as long as not misunderstood), all the way across to heavy power-plays and cases of outright harassment (indefensible).

Fig.1 Hogarth’s Scholars at a Lecture (1736) satirises both don and students, demonstrating that bad teaching can have a positively anti-aphrodisiac effect.

If academics don’t have the glamour, wealth and power of successful film producers, an eminent ‘don’ can still have a potent intellectual authority. I have known cases of charismatic senior authority figures imposing themselves sexually upon the gullible young, although I believe (perhaps mistakenly – am I being too optimistic here?) that such scenarios are less common today. That change has taken place partly because University expansion and grade escalation have created so many professors that they no longer have the rarity value that they once did. It’s also worth noting that no single academic holds supreme power over an individual student’s career. Examination grades, prizes, appointments, and so forth are all dealt with by boards or panels, and vetted by committees.

Moreover, there’s been a social change in the composition of the professoriat itself. It’s no longer exclusively a domain of older heterosexual men (or gay men pretending publicly to be heterosexual, before the law was liberalised). No doubt, the new breed of academics have their own faults. But the transformation of the profession during the past forty years has diluted the old sense of hierarchy and changed the everyday atmosphere.

For example, when I began teaching in the early 1970s, it was not uncommon to hear some older male profs (not the junior lecturers) commenting regularly on the physical attributes of the female students, even in business meetings. It was faintly embarrassing, rather than predatory. Perhaps it was an old-fashioned style of senior male bonding. But it was completely inappropriate. Eventually the advent of numerous female and gay academics stopped the practice.

Once in an examination meeting, when I was particularly annoyed by hearing lascivious comments about the ample breasts of a specific female student, I tried a bit of direct action by reversing the process. In a meaningful tone, I offered a frank appreciation of the physique of a handsome young male student, with reference specifically to his taut buttocks. (This comment was made in the era of tight trousers, not as a result of any personal exploration). My words produced a deep, appalled silence. It suggested that the senior male profs had not really thought about what they were saying. They were horrified at hearing such words from a ‘lady’ – words which struck them not as ‘harmless’ good fun (as they viewed their own comments) but as unpleasantly crude.

Needless to say, I don’t claim that my intervention on its own changed the course of history. Nonetheless, today academic meetings are much more businesslike, even more perfunctory. Less time is spent discussing individual students, who are anyway much more numerous – with the result that the passing commentary on students’ physiques seems also to have stopped. (That’s a social gain on the gender frontier; but there have been losses as well, as today’s bureaucratised meetings are – probably unavoidably – rather tedious).

One important reason for the changed atmosphere is that more specific thought has been given these days to the ethical questions raised by physical encounters between staff and students. It’s true that some relationships turn out to be sincere and meaningful. It’s not hard to find cases of colleagues who have embarked upon long, happy marriages with former students. (I know a few). And there is one high-profile example on the international scene today: Brigitte Trogneux, the wife of France’s President Emmanuel Macron, first met her husband, 25 years her junior, when she was a drama teacher and he was her 15-year-old student. They later married, despite initial opposition from his parents, and seem happy together.

But ethical issues have to take account of all possible scenarios; and can’t be sidelined by one or two happy outcomes. There’s an obvious risk that academic/student sexual relationships (or solicitation for sexual relationships) can lead to harassment, abuse, exploitation and/or favouritism. Such outcomes are usually experienced very negatively by students, and can be positively traumatic. There’s also the possibility of anger and annoyance on the part of other students, who resent the existence of a ‘teacher’s pet’. In particular, if the senior lover is also marking examination papers written by the junior lover, there’s a risk that the impartial integrity of the academic process may be jeopardised and that student confidence in the system may be undermined. (Secret lovers generally believe that their trysts remain unknown to those around them; but are often wrong in that belief).

As far as I know, many Universities don’t have official policies on these matters, though I have long thought they should. Now that current events, especially the shaming of Harvey Weinstein, have reopened the public debates, it’s time to institute proper professional protocols. The broad principles should include an absolute ban on all forms of sexual abuse, harassment or pressurising behaviour; plus, equally importantly, fair and robust procedures for dealing with accusations about such abusive behaviour, bearing in mind the possibility of false claims.

There should also be a very strong presumption that academic staff should avoid having consensual affairs with students (both undergraduate and postgraduate) while the students are registered within the same academic institution and particularly within the specific Department, Faculty or teaching unit, where the academic teaches.

Given human frailty, it must be expected that the ban on consensual affairs will sometimes be breached. It’s not feasible to expect all such encounters to be reported within each Department or Faculty (too hard to enforce). But it should become an absolute policy that academics should excuse themselves from examining students with whom they are having affairs, or from undertaking any role where a secret partisan preference could cause injustice (such as making nominations for prizes). No doubt, Departments/Faculties will have to devise discreet mechanisms to operate such a policy; but so be it.

Since all institutions make great efforts to ensure that their examination processes are fairly and impartially operated, it’s wrong to risk secret sex warping the system. Ok, we are all flawed humans. But over the millennia humanity has learned – and is still learning – how to cope with our flaws. In these post-Weinstein days, all Universities now need a set of clear professional protocols with reference to sex and the academics.

Fig.2 Advertising still for Educating Rita (play 1980; film 1983), which explores how a male don and his female student learn, non-amorously, from one another.

1 Campus novels almost invariably include illicit affairs: two witty exemplars are Alison Lurie’s The War between the Tates (1974) and Malcolm Bradbury’s The History Man (1975). Two plays which also explore educational/personal tensions between a male academic and female student are Willy Russell’s wry but gentle Educating Rita (1980) and David Mamet’s darker Oleanna (1992).


MONTHLY BLOG 82, WRITING PERSONAL REFERENCES

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2017)


What do today’s academics spend their time doing? Next to marking essays and planning research applications, one of the most common tasks is writing personal references for past and present students (and sometimes for colleagues too). Happily, such evaluations are not presented anonymously.1 Yet that makes writing them all the more testing.

The aim is to do full justice to the person under consideration, whilst playing fair with the organisation which is receiving the recommendation. Sometimes those aims can be in conflict. Should you recommend someone for a job for which they are not suitable, even if the candidate pleads with you to do so? The answer must be: No.

Actually I can remember one example, some years ago, when an excellent postgraduate wanted to apply for a new post which demanded skills in quantitative economic history. Since she did not have those special skills, I hesitated. She implored me to write on her behalf – it was in an era when new academic posts were rare – and, reluctantly, I did so. However, I told her that my reference would explain that she did not have the required skills, although she would be a great appointment if the University in question decided to waive those preconditions. (It was theoretically possible). In the event, she did not get the job. For the future, I resolved not to waste everyone’s time by writing references in unsuitable cases. A polite refusal does sometimes upset applicants. But it’s best to be frank from the start – and certainly better than writing a thumbs-down reference. (I decline to act if I can’t find anything positive to say).

Truth with tact is the motto. When writing, it’s good to dwell on the candidate’s best qualities, in terms of past attainments and future potential. But it’s seriously unwise to go over the top. Referees who praise everyone unreservedly to the skies quickly lose credibility. What is written should strive to match the best qualities of the person under discussion. Candidates often get called for interview; and it undoubtedly helps interview panels if the candidates broadly resemble their references. (It is ok, by the way, to warn panels in advance in cases of exceptionally nervous interviewees, who may need help to ‘unfreeze’).

Equally, when writing in support of candidates, it’s seriously wrong to go not over but under the top. There used to be an old-fashioned style of wry deprecation. It had a certain period charm. Yet in recent decades there’s been a definite inflation of rhetoric. Wry self-deprecation is still ok, when used in front of those who understand the English art of meiosis or ironic understatement. But deprecatory assessments, or even deprecatory asides, about other people are distinctly unhelpful in today’s competitive climate. Even one passing put-down can harm a candidate, when competing against rivals who are described in completely flattering terms.

Again, I remember a case at my University, where the venerable referee – a punctilious scholar of the old school – was warm but could not resist adding a critical aside. The candidate in question was much the best. Yet she lost out in the final choice, on the grounds that even her friendly referee had doubts about her. Really annoying. She went on to have a distinguished career – but elsewhere. We lost a great colleague.

Some months later I had a chance to talk with the venerable referee, who expressed bafflement that his candidate did not get the job. He was blithely unaware that he had, unintentionally, stabbed her in the back. It was a complete conflict between different generational styles of writing references. Later, I advised the candidate not to press me for further details (since these things are all confidential) but simply to change her referees, which she did. Such stylistic inter-generational contrasts still continue to an extent, although they take a somewhat different form these days. Either way, the moral is that balanced assessments of candidates are fine; shafts of sardonic humour or any form of deprecatory remarks aimed at an absent candidate are not.

Then there’s the question of different international cultures of writing references. Academics in some countries prefer a lyrical rhetoric of flowery but imprecise praise which can be very hard to interpret. (Is it secret humour?) By contrast, other references from a different stylistic culture can be very terse and factual, saying little beyond the public record. (Do they reflect secret boredom or indifference?) My advice in all cases is for candidates to choose referees from their own linguistic/academic/cultural traditions, so that recipients will know how to decode the references. Or, in the case of international applications, then to choose a good range of referees from different countries, hoping to balance the contrasting styles.

So there we are. Refereeing is an art, not a precise science. Truth with tact. Every reference takes thought and time, trying to capture the special qualities of each individual candidate. But, a final thought: there’s always one exception to the rule. The hapless Philip Swallow in David Lodge’s brilliant campus novel Changing Places (1975) encounters this problem, in the form of the former student demanding references – who never goes away. The requests pile up relentlessly. ‘Sometimes he [the former student] aimed absurdly high, sometimes grotesquely low. … If [he] was appointed to any of these posts, he evidently failed to hold them for very long, for the stream of enquiries never ran dry’. Eventually, Swallow realises that he is facing a lifetime commitment. He therefore generates an ‘unblushing all-purpose panegyric’, which is kept on permanent file in the Departmental Office.2 It’s just what every referee secretly craves, for use in emergencies. Just make sure that there are no flowery passages, no hyperbole, no ambiguities, no accidental put-downs, no coded messages, no brusque indifference, no sardonic asides, no joking. Writing personal references, on the record, is utterly serious and time-consuming business. Thank goodness for deadlines.

1 For my comments on writing anonymous assessments, see BLOG/80 (Aug. 2017) and on receiving anonymous assessments of my own work, see BLOG/81 (Sept. 2017).

2 David Lodge, Changing Places: A Tale of Two Campuses (1975), pp. 28-9.


MONTHLY BLOG 66, WHAT’S SO GREAT ABOUT HISTORICAL EVIDENCE?

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2016)

‘Evidence, evidence: I hate that word’, a vehement colleague in the English Department once hissed at me, when I had, all unawares, invoked the word in the course of an argument. I was surprised at his vehemence but put it down to a touch of dyspepsia, aggravated by an overdose of (then) ultra-fashionable postmodernist doubt. What on earth was he teaching his students? To disregard evidence and invent things as the passing mood dictated? To apply theory arbitrarily? No need to bother about dates, precision or details. No need to check one’s hunches against any external data or criterion of judgment. And certainly no need to analyse anything unpleasant or inconvenient or complexly difficult about the past.1

But I thought my colleague’s distaste for evidence was no more than a passing fad. (The date was sometime in the later 1990s). And indeed intellectual postmodernism, which was an assertive philosophy of doubt (a bit of a contradiction in terms, since a philosophy of doubt should be suitably doubtful), has faded even faster than the postmodernist style of architectural whimsy has been absorbed into the architectural lexicon.2

Fig. 1 The Rašin Building, Prague, known as the Dancing House, designed by V. Milunić and F. Gehry (completed 1996) – challenging classical symmetry and modernist order yet demanding absolute confidence in the conventional solidity of its building materials. Image by © Paul Seheult/Eye Ubiquitous/Corbis

Then, just a week ago, I was talking to a History postgraduate on the same theme. Again to my surprise, he was, if not quite as hostile, at least as hesitant about the value of evidence. Oh really? Of course, the myriad forms of evidence do not ‘speak for themselves’. They are analysed and interpreted by historians, who often disagree. But that’s the point. The debates are then reviewed and redebated, with reference again to the evidence – including, it may be, new evidence.

These arguments continue not only between historians and students, but across the generations. The stock of human knowledge is constantly being created and endlessly adjusted as it is transmitted through time. And debates are ultimately decided, not by reference to one expert authority (X says this; Y says that) but to the evidence, as collectively shared, debated, pummelled, assessed and reassessed.

So let’s argue the proposition the other way round. Let’s laud to the skies the infinite value of evidence, without which historians would just be sharing our prejudices and comparing our passing moods. But ok, let’s also clarify. What we are seeking is not just ‘evidence’ A, B or C in the cold abstract. That no more resolves anything than does the unsupported testimony of historian X, Y or Z. What we need is critically assessed evidence – and lots of it, so that different forms of evidence can be tested against each other and debated together.

For historians, anything and everything is grist to the mill. If there was a time when we studied nothing but written documents, that era has long gone. Any and every legacy from the past is potential evidence: fragments of pottery, swatches of textiles, collections of bones, DNA records, rubbish tips, ruined or surviving buildings, ground plans, all manufactured objects (whether whole or in parts), paintings from cave to canvas, photos, poems, songs, sayings, myths, fairy tales, jokes … let alone all evidence constructed or reconstructed by historians, including statistics, graphs, databases, interpretative websites … and so forth. Great. That list sounds exhausting but it’s actually exhilarating.

However, the diversity of these potential sources, and the nebulousness of some forms of evidence (jokes, fairy tales), indicate one vital accompaniment. Historians should swear not only by the sources but by a rigorous source critique. After asking: what are your sources? the next question should be: how good are your sources, for whatever purpose you intend to deploy them? (These stock questions, or variants upon them, keep many an academic seminar going).

Source auditing: here are three opening questions to pose, with reference to any potential source or set of sources. Firstly: Provenance. Where does the source come from? How has it survived from its original state through to the present day? How well authenticated is it? Has it been amended or changed over time? (There are numerous technical tests that can be used to check datings and internal consistency). No wonder that historians appreciate using sources that have been collected in museums, archives or other repositories, because usually these institutions have already done the work of authenticating. But it’s always well to double-check.

Secondly: Reliability of Sources and/or Methodology. A source or group of sources may be authentic but not necessarily reliable, in the sense of being precise or accurate. Evidence from the past has no duty to be anything other than what it is. A song about ‘happy times’ is no proof that there were past happy times. Only that there was a song to that effect. But that’s fine. That tells historians something about the history of songs – a fruitful field, provided that the lyrics are not taken as written affidavits.3 All sources have their own intrinsic characteristics and special nature, including flaws, biases, and omissions. These need to be understood before the source is deployed in argument. The general rule is that problems don’t matter too much, as long as they are fully taken into account. (Though it does depend upon the nature of the problem. Fake and forged documents are evidence for the history of fakery and forgery, not for whatever instance or event they purport to illuminate).

One example of valid material that needs to be used with due caution is the case of edited texts whose originals have disappeared, or are no longer available for consultation. That difficulty applies to quite a number of old editions of letters and diaries, which cannot now be checked. For the most part, historians have to take on trust the accuracy of the editorial work. Yet we often don’t know what, if anything, has been omitted. So it is rash to draw conclusions based upon silences in the text – since the original authors may have been quietly censored by later editors.4

When auditing sources, it also follows that a related test should also be addressed to any methodology used in processing sources: is the methodology valid and reliable? Does it augment or diminish the value of the original(s)? Indeed, is the basic evidence solid enough to bear the weight of the analytical superstructure?

Thirdly: Typicality. With every source or group of sources, it’s also helpful to pose the question as to whether it is likely to be commonplace or highly unusual. Again, it doesn’t matter which it is, as long as the historian is fully aware of the implications. Otherwise, there is a danger of generalising from something that is in fact a rarity. Assessing typicality is not always easy, especially in the case of obscure, fragmentary or fugitive sources. Yet it’s always helpful to bear this question in mind.


Overall, the greater the range and variety of sources that can be identified and assessed the better. Everything (to repeat) is grist to the mill. Sources can be compared and contrasted. Different kinds of evidence can be used in a myriad of ways. The potential within every source is thrilling. Evidence is invaluable – not to be dismissed, on the grounds that some evidence is fallible, but to be savoured with full critical engagement, as vital for knowledge. That state of affairs does include knowing what we don’t (currently) know as well as what we do. Scepticism fine. Corrosive, dismissive, and ultimately boring know-nothingism, no way!

*NB: Having found and audited sources, the following stages of source analysis will be considered in next month’s BLOG.

1 BLOG dedicated to all past students on the Core Course of Royal Holloway (London University)’s MA in Modern History: Power, Culture, Society, for fertile discussions, week in, week out.

2 For the fading of philosophical postmodernism, see various studies on After- or Post-Postmodernism, including C.K. Brooks (ed.), Beyond Postmodernism: On to the Post-Contemporary (Newcastle upon Tyne, 2013); and G. Myerson, Ecology and the End of Postmodernism (Cambridge, 2001), p. 74: with prescient comment ‘it [Postmodernism] is slipping into the strange history of those futures that did not materialise’.

3 See e.g. R. Palmer, The Sound of History: Songs and Social Comment (Oxford, 1988).

4 A classic case was the excision of religious fervour from the seventeenth-century Memoirs of Edmund Ludlow by eighteenth-century editors, giving the Memoirs a secular tone which was long, but wrongly, accepted as authentic: see B. Worden, Roundhead Reputations: The English Civil Wars and the Passions of Posterity (2002).


MONTHLY BLOG 40, HISTORICAL REPUTATIONS THROUGH TIME

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

What does it take to get a long-surviving reputation? The answer, rather obviously, is somehow to get access to a means of endurance through time. To hitch a lift with history.

People in sports and the performing arts, before the advent of electronic storage/ replay media, have an intrinsic problem. Their prowess is known at the time but is notoriously difficult to recapture later. The French actor Sarah Bernhardt (1844-1923), playing Hamlet on stage when she was well into her 70s and sporting an artificial limb after a leg amputation, remains an inspiration for all public performers, whatever their field.1  Yet performance glamour, even in legend, still fades fast.

Bernhardt in the 1880s as a romantic Hamlet

What helps to keep a reputation well burnished is an organisation that outlasts an individual. A memorable preacher like John Wesley, the founder of Methodism, impressed many different audiences, as he spoke at open-air and private meetings across eighteenth-century Britain. Admirers said that his gaze seemed to pick out each person individually. Having heard Wesley in 1739, one John Nelson, who later became a fellow Methodist preacher, recorded that effect: ‘I thought his whole discourse was aimed at me’.2

Yet there were plenty of celebrated preachers in Georgian Britain. What made Wesley’s reputation survive was not only his assiduous self-chronicling, via his journals and letters, but also the new religious organisation that he founded. Of course, the Methodist church was dedicated to spreading his ideas and methods for saving Christian souls, not to the enshrining of the founder’s own reputation. It did, however, forward Wesley’s legacy into successive generations, albeit with various changes over time. Indeed, for true longevity, a religious movement (or a political cause, come to that) has to have permanent values that outlast its own era but equally a capacity for adaptation.

There are some interesting examples of small, often millenarian, cults which survive clandestinely for centuries. England’s Muggletonians, named after the London tailor Lodovicke Muggleton, were a case in point. Originating during the mid-seventeenth-century civil wars, the small Protestant sect never recruited publicly and never grew to any size.  But the sect lasted in secrecy from 1652 to 1979 – a staggering trajectory. It seems that the clue was a shared excitement of cultish secrecy and a sense of special salvation, in the expectation of the imminent end of the world. Muggleton himself was unimportant. And finally the movement’s secret magic failed to remain transmissible.3

In fact, the longer that causes survive, the greater the scope for the imprint of very many different personalities, different social demands, different institutional roles, and diverse, often conflicting, interpretations of the core theology. Throughout these processes, the original founders tend quickly to become ideal-types of mythic status, rather than actual individuals. It is their beliefs and symbolism, rather than their personalities, that live.

As well as beliefs and organisation, another reputation-preserver is the achievement of impressive deeds, whether for good or ill. Notorious and famous people alike often become national or communal myths, adapted by later generations to fit later circumstances. Picking through controversies about the roles of such outstanding figures is part of the work of historians, seeking to offer not anodyne but judicious verdicts on those ‘world-historical individuals’ (to use Hegel’s phrase) whose actions crystallise great historical moments or forces. They embody elements of history larger than themselves.

Hegel himself had witnessed one such giant personality, in the form of the Emperor Napoleon. It was just after the battle of Jena (1806), when the previously feared Prussian army had been routed by the French. The small figure of Napoleon rode past Hegel, who wrote: ‘It is indeed a wonderful sensation to see such an individual, who, concentrated here at a single point, astride a horse, reaches out over the world and masters it’.4

(L) The academic philosopher G.W.F. Hegel (1770-1831) and (R) the man of action, Emperor Napoleon (1769-1821), both present at Jena in October 1806.

The means by which Napoleon’s posthumous reputation has survived are interesting in themselves. He did not found a long-lasting dynasty, so neither family piety nor institutionalised authority could help. He was, of course, deposed and exiled, dividing French opinion both then and later. Nonetheless, Napoleon left numerous enduring things, such as codes of law; systems of measurement; structures of government; and many physical monuments. One such was Paris’s Jena Bridge, built to celebrate the victorious battle.5

Monuments, if sufficiently durable, can certainly long outlast individuals. They effortlessly bear diachronic witness to fame. Yet, at the same time, monuments can crumble or be destroyed. Or, even if surviving, they can outlast the entire culture that built them. Today a visitor to Egypt may admire the pyramids, without knowing the names of the pharaohs they commemorated, let alone anything specific about them. Shelley caught that aspect of vanished grandeur well, in his poem to the ruined statue of Ozymandias: the quondam ‘king of kings’, lost and unknown in the desert sands.6

So lastly what about words? They can outlast individuals and even cultures, provided that they are kept in a transmissible format. Even lost languages can be later deciphered, although experts have not yet cracked the ancient codes from Harappa in the Punjab.7 Words, especially in printed or nowadays digital format, have immense potential for endurance. Not only are they open to reinterpretation over time but, via their messages, later generations can commune mentally with earlier ones.

In Jena, the passing Napoleon (then aged 37) was unaware of the watching academic (then aged 36), who was formulating his ideas about revolutionary historical changes through conflict. Yet, through the endurance of his later publications, Hegel, who was unknown in 1806, has now become the second notable personage who was present at the scene. Indeed, via his influence upon Karl Marx, it could even be argued that the German philosopher has become the historically more important figure of those two individuals in Jena on 13 October 1806. On the other hand, Marx’s impact, having been immensely significant in the twentieth century, is also fast fading.

Who from the nineteenth century will be the most famous in another century’s time? Napoleon? Hegel? Marx? (Shelley’s Ozymandias?) Time not only ravages but provides the supreme test.

1  R. Gottlieb, Sarah: The Life of Sarah Bernhardt (New Haven, 2010).

2 R.P. Heitzenrater, ‘John Wesley’s Principles and Practice of Preaching’, Methodist History, 37 (1999), p. 106. See also R. Hattersley, A Brand from the Burning: The Life of John Wesley (London, 2002).

3  W. Lamont, Last Witnesses: The Muggletonian History, 1652-1979 (Aldershot, 2006); C. Hill, B. Reay and W. Lamont, The World of the Muggletonians (London, 1983); E.P. Thompson, Witness against the Beast: William Blake and the Moral Law (Cambridge, 1993).

4 G.W.F. Hegel to F.I. Niethammer, 13 Oct. 1806, in C. Butler (ed.), The Letters: Georg Wilhelm Friedrich Hegel, 1770-1831 (Bloomington, 1984); also transcribed in www.Marxists.org, 2005.

5 See http://napoleon-monuments.eu/Napoleon1er.

6  P.B. Shelley (1792-1822), Ozymandias (1818).

7 For debates over the language or communication system in the ancient Indus Valley culture, see: http://en.wikipedia.org/


MONTHLY BLOG 39, STUDYING THE LONG AND THE SHORT OF HISTORY

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

A growing number of historians, myself included, want students to study long-term narratives as well as in-depth courses.1 More on (say) the peopling of Britain since Celtic times alongside (say) life in Roman Britain or (say) medicine in Victorian times or (say) the ordinary soldier’s experiences in the trenches of World War I. We do in-depth courses very well. But long-term studies are also vital to provide frameworks.2

Put into more abstract terms, we need more diachronic (long-term) analysis, alongside synchronic (short-term) immersion. These approaches, furthermore, do not have to be diametrically opposed. Courses, like books, can do both.

That was my aim in an undergraduate programme, devised at Royal Holloway, London University.3  It studied the long and the short of one specific period. The choice fell upon early nineteenth-century British history, because it’s well documented and relatively near in time. In that way, the diachronic aftermath is not too lengthy for students to assess within a finite course of study.

Integral to the course requirements were two long essays, both on the same topic X. There were no restrictions, other than analytical feasibility. X could be a real person; a fictional or semi-fictionalised person (like Dick Turpin);4 an event; a place; or anything that lends itself to both synchronic and diachronic analysis. Students chose their own, with advice as required. One essay of the pair then focused upon X’s reputation in his/her/its own day; the other upon X’s long-term impact/reputation in subsequent years.

There was also an examination attached to the course. One section of the paper contained traditional exam questions; the second just one compulsory question on the chosen topic X. Setting that proved a good challenge for the tutor, thinking of ways to compare and contrast short- and long-term reputations. And of course, the compulsory question could not allow a simple regurgitation of the coursework essays; and it had to be equally answerable by all candidates.

Most students decided to examine famous individuals, both worthies and unworthies: Beau Brummell; Mad Jack Mytton; Queen Caroline; Charles Dickens; Sir Robert Peel; Earl Grey; the Duke of Wellington; Harriette Wilson; Lord Byron; Mary Shelley; Ada Lovelace; Charles Darwin; Harriet Martineau; Robert Stephenson; Michael Faraday; Augustus Pugin; Elizabeth Gaskell; Thomas Arnold; Mary Seacole; to name only a few. Leading politicians and literary figures tended to be the first choices. A recent book shows what can be done in the case of the risen (and rising still further) star of Jane Austen.5 In addition, a minority preferred big events, such as the Battle of Waterloo; or the Great Exhibition. None in fact chose a place or building; but it could be done, provided the focus is kept sharp (the Palace of Westminster, not ‘London’).

Studying contemporary reputations encouraged a focus upon newspaper reports, pamphlets, letters, public commemorations, and so forth. In general, students assumed that synchronic reputation would be comparatively easy to research. Yet they were often surprised to find that initial responses to X were confused. It takes time for reputations to become fixed. In particular, where the personage X had a long life, there might well be significant fluctuations during his or her lifetime. The radical John Thelwall, for example, was notorious in 1794, when on trial for high treason, yet largely forgotten at his death in 1834. 6

By contrast, students often began by feeling fussed and unhappy about studying X’s diachronic reputation. There were no immediate textbooks to offer guidance. Nonetheless, they often found that studying long-term changes was good fun, because more off-the-wall. The web is particularly helpful, as wikipedia often lists references to X in film(s), TV, literature, song(s) and popular culture. Of course, all wiki-leads need to be double-checked. There are plenty of errors and omissions out there.

Nonetheless, for someone wishing to study the long-term reputation of (say) Beau Brummell (1778-1840), wikipedia offers extensive leads, providing many references to Brummell in art, literature, song, film, and sundry stylistic products making use of his name, as well as a bibliography. 7

Beau Brummell (1778-1840), from L to R: as seen in his own day; as subject of enquiry for Virginia Woolf (1882-1941); and as portrayed by Stewart Granger in Curtis Bernhardt’s film (1954).

Plus it is crucial to go beyond wikipedia. For example, a search for relevant publications would reveal an unlisted offering. In 1925, Virginia Woolf, no less, published a short tract on Beau Brummell.8 The student is thus challenged to explore what the Bloomsbury intellectual found of interest in the Regency Dandy. Of course, the tutor/examiner also has to do some basic checks, to ensure that candidates don’t miss the obvious. On the other hand, surprise finds, unanticipated by all parties, proved part of the intellectual fun.

Lastly, the exercise encourages reflections upon posthumous reputations. People in the performing arts and sports, politicians, journalists, celebrities, military men, and notorious criminals are strong candidates for contemporary fame followed by subsequent oblivion, unless rescued by some special factor. In the case of the minor horse-thief Dick Turpin, he was catapulted from conflicted memory in the eighteenth century into a dashing highwayman by the novel Rookwood (1834). That fictional boost gave his romantic myth another 100 years before it started to fade again.

Conversely, a tiny minority can go from obscurity in their lifetime to later global renown. But it depends crucially upon their achievements being transmissible to successive generations. The artist and poet William Blake (1757-1827) is a rare and cherished example. Students working on the long-and-the-short of the early nineteenth century were challenged to find another contemporary with such a dramatic posthumous trajectory. They couldn’t.

But they and I enjoyed the quest and discovery of unlikely reactions, like Virginia Woolf dallying with Beau Brummell. It provided a new way of thinking about the long-term – not just in terms of grand trends (‘progress’; ‘economic stages’) but by way of cultural borrowings and transmutations between generations. When and why? There are trends but no infallible rules.

1 ‘Teaching History’s Big Pictures: Including Continuity as well as Change’, Teaching History: Journal of the Historical Association, 136 (Sept. 2009), pp. 53-9; and PJC website Pdf/3.

2 My own answers in P.J. Corfield, Time and the Shape of History (2007).

3 RH History Course HS2246: From Rakes to Respectability? Conflict and Consensus in Britain 1815-51 (content now modified).

4 Well shown by J. Sharpe, Dick Turpin: The Myth of the Highwayman (London, 2004).

5 C. Harman, Jane’s Fame: How Jane Austen Conquered the World (Edinburgh, 2009).

6 Two PJC essays on John Thelwall (1764-1834) are available in PJC website, Pdf/14 and Pdf/22.

7 See http://en.wikipedia.org/wiki/Beau_Brummell.

8 See V. Woolf, Beau Brummell (1925; reissued by Folcroft Library, 1972); and http://www.dandyism.net/woolfs-beau-brummell/.


MONTHLY BLOG 38, WHY IS THE LANGUAGE OF ‘RACE’ HOLDING ON SO LONG WHEN IT’S BASED ON A PSEUDO-SCIENCE?

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

Of course, most people who continue to use the language of ‘race’ believe that it has a genuine meaning – and a meaning, moreover, that resonates for them. It’s not just an abstract thing but a personal way of viewing the world. I’ve talked to lots of people about giving up ‘race’ and many respond with puzzlement. The terminology seems to reflect nothing more than the way things are.

But actually, it doesn’t. It’s based upon a pseudo-science that was once genuinely believed but has long since been shown as erroneous by geneticists. So why is this language still used by people who would not dream of insisting that the earth is flat, or the moon made of blue cheese?

Part of the reason is no doubt the power of tradition and continuity – a force of history that is often under-appreciated.1 It’s still possible to hear references to people having the ‘bump of locality’, meaning that they have a strong topographical/spatial awareness and can find their way around easily. The phrase sounds somehow plausible. Yet it’s derived from the now-abandoned study of phrenology. This approach, first advanced in 1796 by the German physician F.J. Gall, sought to analyse people’s characteristics via the contours of the cranium.2  It fitted with the ‘lookism’ of our species. We habitually scrutinise one another to detect moods, intentions, characters. So it may have seemed reasonable to measure skulls for the study of character.

Phrenologist’s view of the human skull: point no. 31 marks the bump of locality, just over the right eyebrow.

Yet, despite confident Victorian publications explaining The Science of Phrenology3 and advice manuals on How to Read Heads,4 these theories turned out to be no more than a pseudo-science. The critics were right after all. Robust tracts like Anti-Phrenology: Or a Chapter on Humbug won the day.5 Nevertheless, some key phrenological phrases linger on. My own partner in life has an exceptionally strong sense of topographical orientation. So sometimes I joke about his ‘bump of locality’, even though there’s no protrusion on his right forehead. It’s just a linguistic remnant of vanished views.

That pattern may apply similarly in the language of race, which is partly based upon a simple ‘lookism’. People who look like us are assumed to be part of ‘our tribe’. Those who do not look like us seem to be ‘a race apart’ (except that they are not). The survival of the phrasing is thus partly a matter of inertia.

Another element may also spring, paradoxically, from opponents of ‘racial’ divisions. They are properly dedicated to ‘anti-racism’. Yet they don’t oppose the core language itself. That’s no doubt because they want to confront prejudices directly. They accept that humans are divided into separate races but insist that all races should be treated equally. It seems logical therefore that the opponent of a ‘racist’ should be an ‘anti-racist’. Statistics of separate racial groups are collected in order to ensure that there is no discrimination.

Yet one sign of the difficulty in all official surveys remains the utter lack of consistency as to how many ‘races’ there are. Early estimates by would-be experts on racial classification historically ranged from a simplistic two (‘black’ and ‘white’) to a complex sixty-three.6 Census and other listings these days usually invent a hybrid range of categories. Some are based upon ideas of race or skin colour; others of nationality; or a combination. And there are often lurking elements of ‘lookism’ within such categories (‘black British’), dividing people by skin colour, even within the separate ‘races’.7

So people like me who say simply that ‘race’ doesn’t exist (i.e. that we are all one human race) can seem evasive, or outright annoying. We are charged with missing the realities of discrimination and failing to provide answers.

Nevertheless, I think that trying to combat a serious error by perpetrating the same error (even if in reverse) is not the right way forward. The answer to pseudo-racism is not ‘anti-racism’ but ‘one-racism’. It’s ok to collect statistics about nationality or world-regional origins or any combination of such descriptors, but without the heading of ‘racial’ classification and the use of phrases that invoke or imply separate races.

Public venues in societies that historically operated a ‘colour bar’ used the brown paper bag test for quick decisions, admitting people with skins lighter than the bag and rejecting the rest. As a means of classifying people, it’s as ‘lookist’ as phrenology but with even fewer claims to being ‘scientific’. Copyright © Jessica C (Nov. 2013)

What’s in a word? And the answer is always: plenty. ‘Race’ is a short, flexible and easy term to use. It also lends itself to quickly comprehensible compounds like ‘racist’ or ‘anti-racist’. Phrases derived from ethnicity (national identity) sound much more foreign in English. And an invented term like ‘anti-ethnicism’ seems abstruse and lacking instant punch.

All the same, it’s time to find or to create some up-to-date phrases to allow for the fact that racism is a pseudo-science that lost its scientific rationale a long time ago. ‘One-racism’? ‘Humanism’? It’s more powerful to oppose discrimination in the name of reality, instead of perpetrating the wrong belief that we are fundamentally divided. The spectrum of human skin colours under the sun is beautiful, nothing more.

1 On this, see esp. PJC website BLOG/1 ‘Why is the Formidable Power of Continuity so often Overlooked?’ (Nov. 2010).

2 See T.M. Parssinen, ‘Popular Science and Society: The Phrenology Movement in Early Victorian Britain’, Journal of Social History, 8 (1974), pp. 1-20.

3 J.C. Lyons, The Science of Phrenology (London, 1846).

4 J. Coates, How to Read Heads: Or Practical Lessons on the Application of Phrenology to the Reading of Character (London, 1891).

5 J. Byrne, Anti-Phrenology: Or a Chapter on Humbug (Washington, 1841).

6 P.J. Corfield, Time and the Shape of History (London, 2007), pp. 40-1.

7 The image comes from Jessica C’s thoughtful website, ‘Colorism: A Battle that Needs to End’ (12 Nov. 2013): www.allculturesque.com/colorism-a-battle-that-needs-to-end.


MONTHLY BLOG 37, HOW DO PEOPLE RESPOND TO ELIMINATING THE LANGUAGE OF ‘RACE’?

 If citing, please kindly acknowledge copyright © Penelope J. Corfield (2014)

 Having proposed eliminating from our thoughts and vocabulary the concept of ‘race’ (and I’m not alone in making that suggestion), how do people respond?

Indifference: we are all stardust. Many people these days shrug. They say that the word ‘race’ is disappearing anyway, and what does it matter?

Indeed, a friend with children who are conventionally described as ‘mixed race’ tells me that these young people are not worried by their origins and call themselves, semi-jokingly, ‘mixed ray’. It makes them sound like elfin creatures from the sun and stars – rather endearing really. Moreover, such a claim resonates with the fact that many astro-biologists today confirm that all humans (among all organic and inorganic matter on earth) are ultimately made from trace elements from space – or, put more romantically, from stardust. 1

So from a cosmic point of view, there’s no point in worrying over minor surface differences within one species on a minor planet, circulating within the constellation of a minor sun, which itself lies in but one quite ordinary galaxy within a myriad of galaxies.

Ethnic pride: On the other hand, we do live specifically here, on earth. And we are a ‘lookist’ species. So others give more complex responses, dropping ‘race’ for some purposes but keeping it for others. Given changing social attitudes, the general terminology seems to be disappearing imperceptibly from daily vocabulary. As I mentioned before, describing people as ‘yellow’ and ‘brown’ has gone. Probably ‘white’ will follow next, especially as lots of so-called ‘whites’ have fairly dusky skins.

‘Black’, however, will probably be the slowest to go. Here there are good as well as negative reasons. Numerous people from Africa and from the world-wide African diaspora have proudly reclaimed the terminology, not in shame but in positive affirmation.

Battersea’s first ‘black’ Mayor, John Archer (Mayor 1913/14), was a pioneer in that regard. I mentioned him in my previous BLOG (no 35). Archer was a Briton, with Irish and West Indian ancestry. He is always described as ‘black’ and he himself embraced black consciousness-raising. Yet he always stressed his debt to his Irish mother as well as to his Barbadian father.2

In 1918 Archer became the first President of the African Progress Union. In that capacity, he attended meetings of the Pan-African Congress, which promoted African decolonisation and development. The political agenda of activists who set up these bodies was purposive. And they went well beyond the imagery of negritude by using a world-regional nomenclature.

Interestingly, therefore, the Pan-African Congress was attended by men and women of many skin colours. Look at the old photograph (1921) of the delegates from Britain, continental Europe, Africa and the USA (see Illus 1). Possibly the dapper man, slightly to the L of centre in the front row, holding a portfolio, is John Archer himself.

Illus 1: Pan-African Congress delegates in Brussels (1921)

Today, ‘black pride’, which has had a good cultural run in both Britain and the USA, seems to be following, interestingly, in Archer’s footsteps. Not by ignoring differences but by celebrating them – in world-regional rather than skin-colourist terms. Such labels also have the merit of flexibility, since they can be combined to allow for multiple ancestries.

Just to repeat the obvious: skin colour is often deceptive. Genetic surveys reveal high levels of ancestral mixing. As American academic Henry Louis Gates has recently reminded us in The Observer, many Americans with dark skins (35% of all African American men) have European as well as African ancestry.3 And the same is true, on a lesser scale, in reverse. At least 5% of ‘white’ male Americans have African ancestry, according to their DNA.

Significantly, people with mixed ethnicities often complain at being forced to choose one or the other (or having choice foisted upon them), when they would prefer, like the ‘Cablinasian’ Tiger Woods, to celebrate plurality. Pride in ancestry will thus outlast and out-invent erroneous theories of separate ‘races’.

Just cognisance of genetic and historic legacies: There is a further point, however, which should not be ignored by those (like me) who generally advocate ‘children of stardust’ universalism. For some social/political reasons, as well as for other medical purposes, it is important to understand people’s backgrounds.

Thus ethnic classifications can help to check against institutionalised prejudice. And they also provide important information in terms of genetic inheritance. To take one well known example, sickle-cell anaemia (drepanocytosis) is a condition that can be inherited by humans whose ancestors lived in tropical and sub-tropical regions where malaria is or was common.4 It is obviously helpful, therefore, to establish people’s genetic backgrounds as accurately as possible.

All medical and social/political requirements for classification, however, call for just classification systems. One reader of my previous BLOG responded that it didn’t really matter, since if ‘race’ was dropped another system would be found instead. But that would constitute progress. The theory of different human races turned out to be erroneous. Instead, we should enquire about ethnic (national) identity and/or world-regional origins within one common species. Plus we should not use a hybrid mix of definitions, partly by ethnicities and partly by skin colour (as in ‘black Britons’).

Lastly, all serious systems of enquiry should ask about plurality: we have two parents, who may or may not share common backgrounds. That’s the point: men and women from any world-region can breed together successfully, since we are all one species.

1 S. Kwok, Stardust: The Cosmic Seeds of Life (Heidelberg, 2013).

2 For John Richard Archer (1869-1932), see biog. by P. Fryer in Oxford Dictionary of National Biography: on-line; and entry on Archer in D. Dabydeen, J. Gilmore and C. Jones (eds), The Oxford Companion to Black British History (Oxford, 2007), p. 33.

3 The Observer (5 Jan. 2014): New Review, p. 20.

4 M. Tapper, In the Blood: Sickle Cell Anaemia and the Politics of Race (Philadelphia, 1999).


MONTHLY BLOG 35, DONS AND STUDENT-CUSTOMERS? OR THE COMMUNITY OF LEARNERS?

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2013)

Names matter. Identifying  things – people – events – in realistic terminology means that they are being fully understood and taken seriously. Conversely, it’s warping to the mind and eventually corrosive of good thought to be constantly urged to give lip-service to the ‘wrong’ terms. People who live under dictatorial systems of would-be thought-control often testify to the ‘dead’ feeling that results from public censorship, especially when it is internalised as self-censorship.

By the way, I wrote that paragraph before remembering that this sentiment dovetailed with something I’d read about Confucius. A quick Google-check confirmed my half-memory.  Confucius long ago specified that: ‘The beginning of wisdom is to call things by their proper names’. It’s a great dictum. It doesn’t claim too much. Naming is only the ‘beginning of wisdom’, not the entirety. And there is often scope for debating what is or should be the ‘proper’ name. Nonetheless, Confucius not only highlights the good effect of clear vision, accurately acknowledged to others, but equally implies the malign effects of the reverse. The beginning of madness is to delude oneself and others about the true state of affairs.

Which brings me to my current question: are University students ‘customers’? If so, an interesting implication follows. If ‘the customer is always right’, as the business world asserts but does not always uphold, should not all students get top marks for having completed an assignment or an exam paper? Or, at very least not get bad marks?

Interestingly, now that student payments for tuition are very much up-front and personal in the form of fees (which are funded as repayable loans), the standard of degrees is gradually rising. Indeed, grade inflation has become noticeable ever since Britain’s Universities began to be expanded into a mass system. A survey undertaken in 2003 found that the third-class degree has been in steady decline since 1960 and was nearing extinction by 2000.1 And a decade on, the lower second (2.2) in some subjects is following the same trajectory. Better teaching, better study skills, and/or improved exam preparation may account for some of this development. But rising expectations on the part of students – and increasing reputational ambitions on the part of the Universities – also exert subtle pressures upon examiners to be generous.

Nonetheless, even allowing for a changing framework of inputs and outputs, a degree cannot properly be ‘bought’. Students within any given University course are learners, not customers. Their own input is an essential part of the process. They can gain a better degree not by more money but by better effort, well directed, and by better ability, suitably honed.

People learn massively from teachers, but also much from private study, and much too from their fellow-learners (who offer both positive and negative exemplars). Hence the tutors, the individual student, and other students all contribute to each individual’s result.2

A classic phrase for this integrated learning process was ‘the community of scholars’. That phrase now sounds quaint and possibly rather boring. Popularly, scholarship is assumed to be quintessentially dull and pedantic, with the added detriment of causing its devotees to ‘scorn delights and live laborious days,’ in Milton’s killing phrase.3  In fact, of course, learning isn’t dull. Milton, himself a very learned man, knew so too. Nonetheless, ‘the community of scholars’ doesn’t cut the twenty-first century terminological mustard.

But ‘learning’ has a better vibe. It commands ‘light’. People may lust for it, without losing their dignity. And it implies a continually interactive process. So it’s good for students to think of themselves as part of a community of learners. Compared with their pupils, the dons are generally older, sometimes wiser, always much better informed about the curriculum, much more experienced in teaching, and ideally seasoned by their own research efforts. But the academics too are learners, if more advanced along the pathway. They are sharing the experience and their expertise with the students. Advances in knowledge can come from any individual at any level, often emerging from debates and questions, no matter how naive. So it’s not mere pretension that causes many academics to thank in their scholarly prefaces not only their fellow researchers but also their students.

Equally, it’s good for the hard-pressed dons to think of themselves as part of an intellectual community that extends to the students. That concept reasserts an essential solidarity. It also serves to reaffirm the core commitment of the University to the inter-linked aims of teaching and research. Otherwise, the students, who are integral to the process, are seemingly in danger of getting overlooked while the dons are increasingly tugged between the rival pressures of specialist research in the age of Research Assessment, and of managerial business-speak in the age of the University-plc.4

Lastly, reference to ‘community’ need not be too starry-eyed. Ideals may not always work perfectly in practice. ‘Community’ is a warm, comforting word. It’s always assumed to be a ‘good thing’. Politicians, when seeking to commend a policy such as mental health care, refer to locating it in ‘the community’ as though that concept can resolve all the problems. (As is now well proven, it can’t). And history readily demonstrates that not all congregations of people form a genuine community. Social cohesion needs more than just a good name.

That’s why it’s good to think of Universities as containing communities of learners, encouraging everyone to provide the best conditions for that basic truth to flourish. That’s far from an easy task in a mass higher-education system. It runs counter to attempts at viewing students as individual consumers. But it’s more realistic about how teaching and learning actually work. And calling things by their proper names makes a proper start.

William Hogarth’s satirical Scholars at a Lecture (1736) offers a wry reminder to tutors not to be boring and to students to pay attention.

1 ‘Third Class Degree Dying Out’, Times Higher Education, 5 Sept. 2003: www.timeshighereducation.co.uk/178955/article: consulted 4 Nov. 2013.

2 That’s one reason why performance-related pay (PRP) for teachers, based upon examination results for their taught courses, remains a very blunt tool for rewarding teaching achievements. Furthermore, for PRP to work even approximately justly, all calculations need to take account of the base-line from which the students began, in order to measure the educational ‘value-added’. Without that proviso, teachers (if incentivised purely by monetary reward) would logically clamour to teach only the best, brightest, and most committed students, who will deliver the ‘best’ results.

3 John Milton, Lycidas: A Lament for a Friend, Drowned in his Passage from Chester on the Irish Seas (1637), lines 70-72: ‘Fame is the spur that the clear spirit doth raise/ (That last Infirmity of Noble mind)/ To scorn delights and live laborious days.’

4 On the marketisation of higher education, see film entitled Universities Plc? Enterprise in Higher Education, made by film students at the University of Warwick (2013): www2.warwick.ac.uk/fac


MONTHLY BLOG 5, STUDYING HISTORY FOR LOVE AND USEFULNESS COMBINED

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2011)

History as a University subject will have to fight harder for its custom – and why not? It has strong arguments for its cause. But they do need to be made loudly and clearly.

From 2012 onwards, the success or failure of subjects will depend upon student choice under the new tuition-fees regime (outside the protected ring-fence of state funding for Science, Technology, Engineering and Mathematics). For good or ill, that sudden policy change creates a competitive market, based on the choices of eighteen-year-olds, for all teaching (and hence research) in the Arts, Humanities and Social Sciences.

For History (meaning History as a subject for study), there is a risk. Not that interest in the endless ramifications of the subject will die. An interest in the human past is very pervasive among humans who live in and through time. It may take the form of ancestor worship. Or maybe swapping anecdotes about past sporting heroes. Or watching history programmes on TV. Or a myriad of other ways. In sum, a generalised interest in the human past is completely safe from the vagaries of fashion.

The risk, however, applies to the academic study of History. It may be marginalised by a stampede to take courses which seem more immediately ‘useful’ and/or more likely to lead to lucrative employment. Law, business management, and – for the numerate – economics might seem like the hot choices.

In fact, however, studying History is a good career choice. It focuses upon a great subject – the living and collective human past. Nothing could be more wide-ranging and fascinating. It is open and endless in its scope. And simultaneously it inculcates an impressive range of skills, which are individually and socially useful.

For that reason, History graduates go on to have careers in an impressive variety of fields. They experience relatively low levels of graduate unemployment. And they find mid-career changes much less difficult than do many others.

Forget old moans about ‘History is bunk’. Henry Ford, who is credited with this pithy dictum (in fact, it may have been polished by a journalist), came to regret it deeply. It took a lot of accumulated human history to be able to manufacture a motor car. [For more on Henry Ford and the motor car, see P.J. Corfield’s Discussion-Piece pdf/1 All People are Living Histories: Which is Why History Matters – within this website section What is History?]

Forget too easy comments such as ‘History is dead’. In fact, the human past is a complicated mixture of things that have departed and things that survive. Like human DNA for a start: individuals come and go but, as long as the species survives, so does human DNA as a collective inheritance. The same applies to human languages. Some do disappear, with the communities who spoke them. Some mutate into different but related forms, like Latin into Italian. And most languages evolve slowly over many centuries, with all sorts of transfusions and minglings on the way: like English. The incredibly complex human past is far from over. It lives as long as humans as a species live.

The point is that History should be studied both for love of the subject AND for its individual and collective usefulness. It is not an either/or choice, but a rational choice to get BOTH.

People have many times listed the benefits to be gained from studying History, in terms of its high-level synthesis of both Knowledge and Skills. So the following list is not unique. These are the points that occur to me (Feb. 2011) and I look forward to learning of others.

THE STUDY OF HISTORY AT UNIVERSITY:

  • teaches students about their own society and its past
  • teaches also about other countries in the same part of the world
  • also takes a world-wide perspective and teaches about far distant places
  • enables students to switch their analytical focus as appropriate between close-focus studies AND broad surveys
  • teaches about periods of history that are close in time and also far distant in time
  • therefore encourages students to think through time and about time
  • allows extensive choice of specific periods, countries and/or themes for study, drawing upon the huge documented range of human experience
  • trains students simultaneously to analyse a magnificent array of sources, from words to numbers to pictures to sounds to physical objects – and even, in some cases, the smells of the past
  • teaches students to detect fraudulent use of sources
  • trains students to search for and use appropriate sources for their independent studies
  • requires the continuous weighing and assessing of disparate, imperfect and often contradictory evidence to formulate reasoned conclusions
  • inculcates the expression of cogent argument both in writing and in communal debate
  • also trains students to read and to assess critically a huge quantity of writings by expert authorities, who often disagree
  • trains students to use historical websites and databases both adeptly and critically
  • encourages students to think cogently about the links (and disjunctures) between the past and present
  • studies the meanings and often conflicting interpretations attached to the past
  • trains people to help with dispute resolution through historical understanding (‘where people are coming from’) and through empathy even for causes which are not endorsed personally
  • teaches the distinction between sympathy (personal support) and empathy (contextual understanding without necessarily endorsing)
  • allows students to distinguish between history as propaganda and history as reasoned (though still often disputed) analysis
  • allows students to analyse and debate the nature of studying the past; and
  • above all, inculcates an understanding of the human past within a historical perspective.

In sum, studying History at University can be undertaken for love and usefulness combined. It offers access to a huge, fascinating and endless subject, drawing upon the entire range of human experience and requiring a high synthesis of skills and knowledge.

No wonder that Francis Bacon (1561-1626) long ago praised the understanding of the human past in one simple dictum: ‘Histories make men [humans] wise’.
