MONTHLY BLOG 105, Researchers, Do Your Ideas Have Impact? A Critique of Short-Term Impact Assessments

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2019)

Clenched Fist
© Victor-Portal-Fist (2019)

Researchers, do your ideas have impact? Does your work produce ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’? Since 2014, that question has been addressed to all research-active UK academics during the assessments for the Research Excellence Framework (REF), which is the new ‘improved’ name for the older Research Assessment Exercise (RAE).1

From its first proposal, however, and long before implementation, the Impact Agenda has proved controversial.2 Each academic is asked to produce for assessment, within a specified timespan (usually seven years), four items of published research. These contributions may be long or short, major or minor. But, in the unlovely terminology of the assessment world, each one is termed a ‘Unit of Output’ and is marked separately. Then the results can be tallied for each researcher, for each Department or Faculty, and for each University. The process is mechanistic, putting the delivery of quantity ahead of quality. And now the REF’s whistle demands demonstrable civic ‘impact’ as well.

These changes add to the complexities of an already intricate and unduly time-consuming assessment process. But ‘Impact’ certainly sounds great. It’s punchy, powerful: Pow! When hearing criticisms of this requirement, people are prone to protest: ‘But surely you want your research to have impact?’ To which the answer is clearly ‘Yes’. No-one wants to be irrelevant and ignored.

However, much depends upon the definition of impact – and whether it is appropriate to expect measurable impact from each individual Unit of Output. Counting and assessing each individual tree is a methodology that will serve only to obscure sight of the entire forest, and will hamper its future growth.

In some cases, to be sure, immediate impact can be readily demonstrated. A historian working on a popular topic can display new results in a special exhibition, assuming that provision is made for the time and organisational effort required. Attendance figures can then be tallied and appreciative visitors’ comments logged. (Fortunately, people who make an effort to attend an exhibition usually reply ‘Yes’ when asked ‘Did you learn something new?’). Bingo. The virtuous circle is closed: new research → an innovative exhibition → gratified and informed members of the public → relieved University administrators → happy politicians and voters.

Yet not all research topics are suitable to generate, within the timespan of the research assessment cycle, the exhibitions, TV programmes, radio interviews, Twitterstorms, applied welfare programmes, environmental improvements, or any of the other multifarious means of bringing the subject to public attention and benefit.

The current approach focuses upon the short-term and upon the first applications of knowledge rather than upon the long-term and the often indirect slow-fuse combustion effects of innovative research. It fails to register that new ideas do not automatically have instant success. Some of the greatest innovations take time – sometimes a very long time – to become appreciated even by fellow researchers, let alone by the general public. Moreover, in many research fields, there has to be scope for ‘trial and error’. Short-term failures are part of the price of innovation for ultimate long-term gain. Unsurprisingly, therefore, the history of science and technology contains many examples of wrong turnings and mistakes, along the pathways to improvement.3

An Einstein, challenging the research fundamentals of his subject, would get short shrift in today’s assessment world. It took 15 years between the first publication of his paper on Special Relativity in 1905 and the wider scientific acceptance of his theory, once his predictions were confirmed experimentally. And it has taken another hundred years for the full scientific and cultural applications of the core concept to become both applied and absorbed.4 But even then, some of Einstein’s later ideas, in search of a Unified Field Theory to embrace analytically all the fundamental forces of nature, have not (yet) been accepted by his fellow scientists.5 Even a towering genius can err.

Knowledge is a fluid and ever-debated resource which has many different applications over time. Applied subjects (such as engineering; medicine; architecture; public health) are much more likely to have detectable and direct ‘impact’, although those fields also require time for development. ‘Pure’ or theoretical subjects (like mathematics), meanwhile, are more likely to achieve their effects indirectly. Yet technology and the sciences – let alone many other aspects of life – could not thrive without the calculative powers of mathematics, as the unspoken language of science. Moreover, it is not unknown for advances in ‘pure’ mathematics, which have no apparent immediate use, to become crucial many years subsequently. (An example is the role of abstract Number Theory for the later development of both cryptography and digital computing).6

Hence the Impact Agenda is alarmingly short-termist in its formulation. It is liable to discourage blue-skies innovation and originality, in the haste to produce the required volume of output with proven impact.

It is also fundamentally wrong that the assessment formula excludes the contribution of research to teaching and vice versa. Historically, the proud boast of the Universities has been the integral link between both those activities. Academics are not just transmitting current knowhow to the next generation of students but they (with the stimulus and often the direct cooperation of their students) are simultaneously working to expand, refine, debate, develop and apply the entire corpus of knowledge itself. Moreover, they are undertaking these processes within an international framework of shared endeavour. This comment does not imply, by the way, that all knowledge is originally derived from academics. It comes indeed from multiple human resources, the unlearned as well as the learned. Yet increasingly it is the research Universities which play a leading role in collecting, systematising, testing, critiquing, applying, developing and advancing the entire corpus of human knowledge, which provides the essential firepower for today’s economies and societies.7

These considerations make the current Impact Agenda all the more disappointing. It ignores the combined impact of research upon teaching, and vice versa. It privileges ‘applied’ over ‘pure’ knowledge. It prefers instant contributions over long-term development. It discourages innovation, sharing and cooperation. And it entirely ignores the international context of knowledge development and its transmission. Instead, it encourages researchers to break down their output into bite-sized chunks; to be risk-averse; to try for crowd-pleasers; and to feel harried and unloved, as all sectors of the educational world are supposed to compete endlessly against one another.

No one gains from warped assessment systems. Instead, everyone loses, as civic trust is eroded. Accountability is an entirely ‘good thing’. But only when done intelligently and without discouraging innovation. ‘Trial and error’ contains the possibility of error, for the greater good. So the quest for instant and local impact should not be overdone. True impact entails a degree of adventure, which should be figured into the system. To repeat a dictum which is commonly attributed to Einstein (because it summarises his known viewpoint), original research requires an element of uncertainty: ‘If we knew what it was we were doing, it would not be called “research”, would it?’8

ENDNOTES:

1 See The Research Excellence Framework: Diversity, Collaboration, Impact Criteria, and Preparing for Open Access (Westminster, 2019); and historical context in https://en.wikipedia.org/wiki/Research_Assessment_Exercise.

2 See e.g. B.R. Martin, ‘The Research Excellence Framework and the “Impact Agenda”: Are We Creating a Frankenstein Monster?’ Research Evaluation, 20 (Sept. 2011), pp. 247-54; and other contributions in same issue.

3 S. Firestein, Failure: Why Science is So Successful (Oxford, 2015); [History of Science Congress Papers], Failed Innovations: Symposium (1992).

4 See P.C.W. Davies, About Time: Einstein’s Unfinished Revolution (New York, 1995); L.P. Williams (ed.), Relativity Theory: Its Origins and Impact on Modern Thought (Chichester, 1968); C. Christodoulides, The Special Theory of Relativity: Foundations, Theory, Verification, Applications (2016).

5 F. Finster and others (eds), Quantum Field Theory and Gravity: Conceptual and Mathematical Advances in the Search for a Unified Framework (Basel, 2012).

6 M.R. Schroeder, Number Theory in Science and Communications: With Applications in Cryptography, Physics, Biology, Digital Information and Computing (Berlin, 2008).

7 J. Mokyr, The Gifts of Athena: Historical Origins of the Knowledge Economy (Princeton, 2002); A. Valero and J. van Reenen, ‘The Economic Impact of Universities: Evidence from Across the Globe’ (CEP Discussion Paper No. 1444, 2016), in Vox: https://voxeu.org/article/how-universities-boost-economic-growth

8 For the common attribution and its uncertainty, see [D. Hirshman], ‘Adventures in Fact-Checking: Einstein Quote Edition’, https://asociologist.com/2010/09/04/adventures-in-fact-checking-einstein-quote-edition/

MONTHLY BLOG 59, SUPERVISING A BIG RESEARCH PROJECT TO FINISH WELL AND ON TIME: THREE FRAMEWORK RULES

If citing, please kindly acknowledge copyright © Penelope J. Corfield (2015)

The ideal is helping people to finish a big project (a book, a thesis) not only well – that goes without saying – but also within a specified time.1 Why bother about that latter point? Mainly because people don’t have unlimited years and funds to produce their great work. Plus: the discipline of mental time-management is valuable in itself. When all’s said and done, there’s nothing like a real deadline.

So first framework rule: check that the researcher/writer really, really, really wants to complete the project. (Not just wants the qualification at the end of it). What’s needed is a burning desire to sustain the researcher throughout the four years it takes to research, write and present to publishable standard an original study of c.100,000 words. Ability, aptitude for the specific subject, and a good supervisor, are certainly needed. But more still is required. Motivation is crucial.
How burning should the burning desire be? Maybe not a total conflagration from the very start. But a genuine self-tended spark that can gain strength as things proceed. Finishing a big project is a long slog. There are moments of euphoria but also risks of boredom, isolation, exasperation, wrong turns, discouragement and even burn-out. The finicky finishing processes, which involve checking and checking again, down to every last dot and comma, can also drive people mad. In fact, the very last stages are highly educational. Each iteration produces a visible improvement, sometimes a major leap forward. Completing a big project is a wonderful experience. But it takes a burning desire to get there.

A second framework rule follows logically. Check continually that the scale of the project matches the allotted time for completion. That’s a necessity which I’ve learned from hard experience. Keeping a firm check on research/time commitments is vital for all parties. There are a few people with time to spare who do truly want a life-time project. That’s fine; but they can’t expect a life-time supervisor.

Checking the project’s scale/timetable entails regular consultation between supervisor and researcher, on at least a quarterly basis. Above all, it’s vital that all parties stay realistic. It’s too easy to kid oneself – and others. The worst thing (I’m prone to doing this myself) is to say airily: ‘Oh, it’s nearly finished’. Take stock realistically and, as needed, reconfigure either the timetable or the overall plan or both. If the project is being undertaken for a University research degree, there will also be a Departmental or Faculty review process. Make that a serious hurdle. If things are going well, then surmounting it will fuel the fires positively. But, if there are serious problems, then it’s best for all concerned to realise that and to redirect the researcher’s energies elsewhere. It’s hard at the time; but much better than protracting the agony and taking further years to fail.

Thirdly, organise a system of negotiated deadlines. These are all-important. The researcher should never be left drifting without a clear time framework in which to operate. Each project is sub-divided into stages, each undertaken to a specific deadline. At that point, the researcher submits a written report, completed to a high standard of technical presentation, complete with finished footnotes. These are in effect proto-chapters, which are then ‘banked’ as components of the finished project, for further polishing/amending at the very end. Generally, these detailed reports will include: Survey of Contextual Issues/Arguments; Overview of Secondary Works; Review of Original Sources and Source Critique; Methodology; Research Chapters; and Conclusion. Whatever the sequence, the researcher should always be ‘writing through’, not just ‘writing up’ at the end.2

Setting the interim deadlines is a matter for negotiation between supervisor and researcher. It’s the researcher’s responsibility to ‘own’ the timetable. If it proves unrealistic in practice, then he/she should always take the initiative to contact the supervisor and renegotiate. Things should never be allowed to drift into the limbo of the ‘great work’, constantly discussed and constantly postponed.3

For my part, I imagine setting a force-field around everyone I supervise, willing them on and letting them know that they are not alone. It also helps to keep researchers in contact with their peers, via seminars and special meetings, so that they get and give mutual support. Nonetheless, the researcher is the individual toiler in the archives or library or museum or (these days) at the screen-face. Part of the process is learning to estimate realistically the time required for the various stages – and the art of reconfiguring the plan flexibly as things progress.

Undertaking a large-scale project has been defined as moving a mountain of shifting sand with a tea-spoon. Each particular move seems futile in face of the whole. But the pathway unfolds by working through the stages systematically, by researching/writing to flexibly negotiated deadlines throughout – and by thinking hard about both the mountain and the pathway. So original knowledge is germinated and translated into high-quality publishable material. Completion then achieves the mind-blowing intellectual combustion that was from the start desired.
ENDNOTES:

1 What follows is based upon my experience as a supervisor, formally in the University of London, and informally among friends and acquaintances seeking advice on finishing.

2 See ‘Writing Through’, companion BLOG no. 60 (forthcoming Dec. 2015).

3 A literary warning comes from Dr Casaubon in George Eliot’s Middlemarch (1871/2).
