Uncertain Measures in Uncertain Times

27 October 2020

Can measures ‘emerge’? Our Director of Learning and Influence Gen Maitland Hudson explores the idea of emergent measures, what these might be and what they mean in the context of the social sector.

A tweet from an NPC measurement workshop a couple of weeks ago asserted that “people who commission evaluations look for controlled certainty, rather than being comfortable with the uncertainty of our world”. That is a value-laden statement that sets up a false binary. We all live in a world of controlled certainty and ubiquitous uncertainty at one and the same time; that is the nature of the human condition.

More generally, I wondered as I read the tweets from the session whether there wasn’t another false dichotomy at work: setting two processes against one another as if they were natural opposites, when they are simply different. Worse still, I suspect we are rarely, arguably never, in a position to pick one over the other. Without reading too much into the session itself – which, to be clear, I didn’t attend – I think it may be useful to pick up on the idea of ‘emergent measures’: what these might be, the kinds of measures that arise by contrast through the use of scientific method, and the differences between the two.

Can measures ‘emerge’?

I don’t have any doubt at all that they can, and do. History is full of fascinating examples of ingenious measures that helped people to take stock of the world around them, and these measures did not arise from an intentional application of minds to the solving of a clearly identified problem using anything like a conventional expert methodology.

When I describe these measures as ‘emergent’ I mean that all the way down. This isn’t a contrast between ‘bottom up’ measures developed collaboratively by a group with lived experience in a workshop, versus measures developed ‘top down’ by experts in a laboratory or an academic department. I don’t suppose even for a second that there was anything fundamentally reflective in the evolution of the labour measures described below, or that history ought to have recorded someone as their inventor. I think they really emerged hand in glove, without, as Ian Hacking puts it, being able to say which came first, the thought or the mitten.

I’ve written about these kinds of measures before, but ICYMI, this is an example of measurement ingenuity I like very much: the French medieval labour measure of land.

The ‘journal’ – meaning a day’s labour – measured land not by size but by effort. It started not from an auditing of the natural world, but from the perspective of the labourer who has to sweat to make the land productive: how long will it take him? What tools will he use? How productive will his labour be? It is a measure that takes into account the human experience of cultivation, the quality of the land itself, its productivity, and its location. The information contained in measuring a single ‘journal’ is far richer than the information contained in a measure of space. It speaks to the labourer as well as the landowner. It describes the labourer’s life, and locates measurement expertise in the hands of those most concerned by it on a day-to-day basis. A landowner was dependent on a labourer to understand his own property. He could not himself assess the number of ‘journals’ it held. Most villages had an experienced labourer who would measure a property using labour and seed measures; there was no place for an educated ‘expert’ external to the work itself.

It’s worth noting too that while the measures described here are French peasant ones, the same kinds of approaches to labour and seed measurement were developed across the world, including in Africa where they were ultimately displaced by colonisation.

This is not to suggest that the world in which this kind of measurement existed – a human-centred measure if ever there was one – was a world that was fairer to labourers than landowners. Good measures don’t defeat bad politics, and by extension neither are bad measures the kryptonite of good politics. There is no necessary causal relationship between measurement and power relations, though it doesn’t always feel that way in the sector’s discussions on social impact.

The medieval measurement economy changed dramatically with the introduction of the metric system and a new priesthood of land surveyors, who took measurement and knowledge out of the hands of labourers and stripped them not only of power but also of their capacity to make sense of their day-to-day work in a way that was simple and effective. The metre also came with suffrage, court reform, primary education and the civil code, showing that measures and politics are not indivisible.

Can measures be defined using scientific method?

Again, I have no doubt at all that good and useful ways of helping people to take stock of the world around them have come from the application of scientific method. I’m using ‘scientific method’ loosely here: I mean by it the focused effort by an individual or a group to respond to a defined problem in a defined way, usually through the development of a new kind of knowledge that shifts our understanding of the world.

There is a nice example in the history of our understanding of colour blindness. Advances in describing colour blindness were made by John Dalton in the late eighteenth century. Dalton is remembered in his native country for his work on atomic theory, but outside England he gave his name to the condition he had himself and that he described in an essay of 1794. In France if you are colour blind, you are ‘daltonien’. In Dalton’s essay on colour blindness you have the coming together of scientific method applied to an unusual way of experiencing the world, inspired by personal experience. You also have a named scientist going about the reordering of our knowledge.

Colour blindness is now generally tested and measured, at least by opticians prior to more precise diagnosis by specialists, using Ishihara plates. The plates used in the tests were developed by Shinobu Ishihara on the orders of the Japanese military in 1916, hand-painted in watercolour, and printed for the first time in 1917. They show circles of dots in various hues, with numbers and paths at their centres. Some of these are visible only to those with normal colour vision, and some to those with red-green colour deficiencies. They are a test, but also a measure, since they can roughly determine degrees of colour blindness.

The longevity of the tests may owe something to their simplicity and beauty (and to their inventor’s vigorous commitment to promoting them internationally), but perhaps more to the fact that, in the words of Eric Kindel, “the ‘Ishihara’ is not, finally, a test of mere dysfunction – you either see something or you can’t – but instead one of difference: that (most) everyone sees something, it is just not always the same. This is surely a more productive understanding of colour-deficient vision, and one that guided Shinobu Ishihara to enduring effect.”


In Ishihara’s plates we have the application of scientific method, commissioned by a powerful state institution, the Japanese army, which produced something enduring that still enables a diagnosis of difference without stigma.

That doesn’t in the least mean that we can always, or ever, precisely control the use of scientific method to achieve good, predictable or long-standing outcomes. Andrew Pickering talks in The Mangle of Practice about the ways in which scientific goals have “to be understood in terms of contingently formulated accommodations to temporally emergent resistance”. He illustrates that idea with the example of Donald Glaser building the bubble chamber, a solid advance in conventional science that won Glaser the Nobel Prize. “It is worth noting that his tentative assemblages all did something”, says Pickering, “though typically not what Glaser hoped”. The conventional view of the certainty of scientists falls down a bit when looked at this way.

But even with their continual accommodations, scientific approaches – identifying problems, framing them in a way that allows for intentional investigation, developing models, and adapting those models and explanatory theories to emergent resistance – can help us to develop measures that are useful, not only to states and institutions, but to people. They give us the means to pick out things that matter, and methods to help us think about them.

Which kind of measure is better?

Any one of us might have a methodological, ethical or even an aesthetic preference for one or other of these measurement processes, the emergent or the scientific. But even if you do have such a preference, it won’t make them interchangeable.

It is one of the handy things about scientific approaches that we can apply them readily, build models and adapt them, like Glaser with his bubble chambers, or Ishihara reinventing Jakob Stilling’s pseudo-isochromatic test of 1878. We can also flex them, because we have a defined set of methods to flex, and a way of taking stock of the observations that come from our experiments.

Emergence is not so handy.

There is a tendency amongst people who embrace complexity in the social sector to talk about creating the conditions of emergence, and then to say that these conditions largely consist in nice things like trust and supportive relationships and unrestricted grant funding and ‘learning’. The evidence, certainly when it comes to emergent measures, doesn’t really support that attractive picture.

The French medieval land measure based on labour emerged in the crucible of serfdom, endemic poverty and more than occasional starvation. Other emergent measures arose in similarly pressured circumstances. The AA’s measure of consecutive days of sobriety developed over several fraught years of self-help amongst alcoholics (with no unrestricted grant funding: as it happens, Rockefeller was approached by the AA’s founders and, although wholly supportive of their work, refused to fund them on the grounds that mutual aid was essential to the model and would fail if he bankrolled it; perhaps he was right).

Quality grades of wheat emerged in the frenetic atmosphere of late nineteenth-century globalisation, in which the wheat varieties available to the British market rose from sixteen to sixty-five within twenty years.

Trying to mastermind the conditions of emergence for these kinds of measures looks, to my mind, much more like a doubtful claim on control than commissioning an evaluation. Evaluations are narrow efforts at knowledge production in limited contexts laden with caveats; ‘the conditions of emergence’ is a much more expansive idea that might not be limited at all and which allows for much less careful delineation. If you are seeking to understand a social programme, here and now, some kind of formal method seems a much more modest undertaking than waiting for a measure to emerge, let alone trying to set the conditions for that emergence. That doesn’t mean it isn’t open to critique, or can’t be made more accessible, or involve a more diverse group of researchers, or be more reflective. All methods can be improved.

In our own measurement work at SIB, we try to develop models, collect data, analyse that data and share our findings in ways that are modest, useful and applicable. We hope our analyses tell us something about the world, and that we can use that something to effect change. Most of the time this is very small-scale change, but sometimes it might be ambitious and systemic. The information we collect helps to inform our work, and to communicate it to others because, at its best, a measure is an illustration. It pictures the world and gives us the means to test whether what we are seeing is what others are seeing too, in the manner of an Ishihara plate. Does the picture of local spending that we have gathered through transactional data match the experience of local people buying groceries in their local shops? Sharing that data triggers a wider conversation and supports the tracking from national to local that is essential to being an effective finance intermediary. Without these measures, we have less to talk about.

Two weeks ago my mother had cancer surgery. Cancer treatment is a world of the fairly predictable, the wholly uncertain and of many measures from blood pressure to cell count to tumour growth. The medical prognosis and the statistical risk give important fixed points in an experience that contains continual anxiety. The facts and measures of scientific method are reassuring. They offer a means of treating a problem and managing the experience of that problem even whilst the uncertainty of human vulnerability remains always present.

There is nothing more certain than death, but few of us know when it is going to happen. Not even those of us who commission evaluations.   

Photo by Sarah Sigler on Unsplash

Genevieve Maitland Hudson

Deputy Chief Executive Officer

Gen has spent the last ten years working with social programmes that are committed to the informed use of information and data to improve their work. She began her career in academia with a doctorate in politics and philosophy. She has lectured at Oxford University, Roehampton University, the Ecole Normale Supérieure in Paris, Birkbeck College London and Cambridge University.
