Politics eats the evidence-base for breakfast

The idea, and the fetishising, of evidence-based policy is something we’re interested in at Arbitrary Constant. Here we wonder how many countries there are and what this might mean for Personal Health Budgets; here we explore what some of the biases and heuristics of evidence-based policy might be.

A news article about a lorry speed limit change (from 40mph to 50mph on single carriageway roads), an evidence-based impact assessment and competing interests was therefore bound to pique our interest.

The Daily Telegraph reported at the end of October:

The Government [has] pressed ahead with plans to raise the speed limit for lorries despite being warned of a likely increase in road deaths because it benefits the haulage industry

The information is generated by the government’s own impact assessment, the topline details of which are as follows:

  • There are between 60 and 80 fatal accidents involving HGVs on the relevant roads each year, of which an estimated 18 take place at between 36mph and 44mph
  • Vehicles that currently travel at between 36mph and 44mph will be influenced by the increased speed limit, driving on average between 2.5mph and 3.9mph faster
  • This increase would result in 2.6-3.5 more deaths per year
  • The potential benefit of reduced accidents from less overtaking is not included because there isn’t “sufficient confidence” it would happen
  • On the benefits side, hauliers will save time (worth £13.8m), reduce costs (£2.5m) and government will gain more fuel duty revenue (£2.1m).
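The trade-off in the impact assessment can be sketched in a few lines of arithmetic. A minimal sketch, assuming a “value of a prevented fatality” of £1.8m (roughly the figure used in transport appraisals of the time; it is my assumption here, not a number quoted above):

```python
# Rough sketch of the impact assessment's cost-benefit arithmetic.
# value_per_fatality_m is an assumption (roughly the transport-appraisal
# figure of the time), not a number from the impact assessment itself.

time_savings_m = 13.8   # hauliers' time savings, in £m per year
cost_savings_m = 2.5    # hauliers' reduced costs, in £m per year
fuel_duty_m = 2.1       # extra fuel duty revenue, in £m per year
benefits_m = time_savings_m + cost_savings_m + fuel_duty_m  # ~£18.4m

value_per_fatality_m = 1.8  # assumed £m per fatality prevented (or caused)

for extra_deaths in (2.6, 3.5):  # additional deaths per year
    net_m = benefits_m - extra_deaths * value_per_fatality_m
    print(f"{extra_deaths} extra deaths -> net ~£{net_m:.1f}m per year")
```

On these assumptions the monetised benefits outweigh the monetised cost of the additional deaths, which is presumably how the decision could be presented as net-positive. The point stands, though: whether that trade is acceptable is a political question, not an arithmetical one.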

Far from bringing certainty to the situation, the evidence base has put us in a precarious position, hasn’t it? We can see this in two main ways.

  1. An evidence base has been put together and a policy position derived from it. Whether the evidence base is robust I don’t know, but it clearly involves some assumptions, parameters and interpretations that could be used, if someone were so inclined, to question the conclusions drawn.
  2. The evidence base says the change in policy will be good for one group (hauliers and government) to the tune of around £18m. At the same time it also says the change in policy won’t be good for another group, i.e. the approx. 3 additional people who would die because of the speed limit increase.

Nevertheless, the relevant Minister has pressed ahead with the increase in the speed limit.

To me, this is a clear demonstration that it matters to what ends (i.e. policy) the means (i.e. the evidence base) are put. It reaffirms not only that evidence-based policy isn’t rational, predictable or benefit-maximising, but also that it doesn’t happen in a vacuum.

Policy nearly always means politics, and – to adapt a phrase – politics eats the evidence-base for breakfast.


Mark Twain’s ‘returns of conjecture’ from ‘investment of fact’

One gets such wholesale returns of conjecture out of such a trifling investment of fact.

This was Mark Twain on science in “Life on the Mississippi”, but he could well have been talking about any aspect of the modern world.

It’s a human folly to interpret new information in a way that confirms everything you already think. At the risk of repeating this folly, though, the Twain quotation hits home with me for two current reasons.

First, I recently reflected on my relationship with news and decided to make a conscious and proactive choice to opt out of the news, social and other media. This was because:

I’d grown tired of most sources of media. Their focus seemed only to be on trivial, untrue or highly creative interpretations of things to do with politics and policy, or fanning the flames of these things with news stories and opinion pieces. I’d also grown increasingly tired with social media, the majority of which was people sharing either trivial, untrue or highly creative interpretations of things relating to politics and policy, or sharing a news story or opinion column that had fanned the flames of their outrage. On top of this, I found myself frustrated with the never-ending wealth of blogs, reports, videos and so on which offered organisation x’s perspective on the latest thing y or z.

This clearly reflects Twain’s observation. In this case, ‘trifling fact’ was the latest policy idea or something political that had happened, and the torrent of interpretations, opinions and perspectives were the ‘wholesale returns of conjecture’.

The second is on the nature of evidence.

Instead of considering, questioning and reflecting on new evidence – what it might actually tell us, what limitations it has, what our interpretations of it might reveal of our own assumptions – our current policy and political approaches tend to use any new evidence as the jumping-off point for any number of other directions. As I intimated before, this is because we don’t live in a rational, evidence-led vacuum that is protected from the whims of politicians and public opinion. This in itself is fine (well, it’s reality); Twain’s observation reinforces to me that we should invest more in what is currently ‘trifling fact’ than where most effort is made – on ‘wholesale returns of conjecture’.

How many countries are there? On evidence and PHBs

Image via http://damirbecirovic.com/

How many countries are there?

This seems like a straightforward question to answer, doesn’t it? Most primary school children could give you an answer, and even if they couldn’t they could quickly look it up in an atlas.

But perhaps it’s not as simple a question to answer as we think. Scotland and Wales are countries, aren’t they? And yet they don’t appear on the list of countries recognised by the United Nations: the UN reckons there are 193 countries, including “the United Kingdom”. My Times World Atlas from 1986 says there were 173 countries. And football’s governing body, FIFA, currently has a list of 209 countries with football rankings.

So, in order to know how many countries there are we need to ask ourselves at least two prior questions: (1) What do we mean by a “country”?; and (2) Who are we asking?

Maybe that question is a bit complicated, so let’s ask ourselves an easier one by going up a level: how many continents are there in the world?

Erm, well. National Geographic reports: “By convention there are seven continents… [but] some geographers list only six [and] in some parts of the world students learn there are just five continents.” Which means the answer again depends on asking other questions, including: (1) What do we mean by a “continent”?; and (2) Who are we asking?

This “facts” business is tricky, isn’t it?

I share this by way of thinking about what we mean by “evidence” in the context of “evidence-based policy” and the recent example of Personal Health Budgets.

A significant announcement by Simon Stevens, the Chief Executive of NHS England, about Personal Health Budgets gave rise to some teeth-gnashing earlier this month.

The gnashing focused on the evidence base that underpins the effectiveness of Personal Health Budgets. Some folks, especially the well-known Ben Goldacre of Bad Science fame, are not convinced by the current status of the PHBs evidence. They think there should be at least a Randomised Controlled Trial (RCT) to test whether Personal Health Budgets work. Others, including advocates of personalisation in public services more generally, noted both the results of the existing evaluation of the Personal Health Budgets pilot and the value of all types of evidence, especially including the views of patients/users themselves.

Both groups therefore lay claim to “evidence-based policy”, which leads me to two reflections:

  1. It’s hardly an original thought (indeed, there are entire disciplines dedicated to such questions), but we must remember there is value in all the different types of evidence and research methods. The value derived, and the evidence arrived at, depend on what types of answers you’re hoping to uncover, how questions are framed, and what pre-questions and/or assumptions underpin the framing of those questions. Different people have different thresholds for evidence and research methods, quite aside from the fact that a type of evidence or research method that’s a gold standard in one discipline could be next to useless in another.

For me, this is the equivalent of the first pre-question we came to in considering countries and continents: What do we mean by “evidence”?

  2. Let’s not even get into the “policy” bit of “evidence-based policy”. For example, when has policy ever been based on evidence anyway? Does policy making happen in a rational, evidence-led vacuum that is protected from the whims of politicians and public opinion which, heaven forfend, may not be evidence based? Notwithstanding questions of what we mean by evidence, it’s safe to say that not all policy is based on what evidence there is. This is therefore the equivalent of the second pre-question we came to in considering countries and continents: Who are we asking what we mean by “evidence”?

The upshot of this in the context of the evidence base for Personal Health Budgets is that Ben Goldacre and advocates of personalisation are both right, and they’re both wrong. There cannot be a definitive answer to the question of whether Personal Health Budgets are effective until some other, perhaps unanswerable, questions are considered.


Peer support workers in mental health: win-win, not win-lose (updated)

There’s an excellent editorial in the latest Journal of Psychosocial Nursing (JPN), which reports on recent evidence concerning peer support workers and what this means for mental health professionals, especially nurses.

The editorial picks up a recent Cochrane systematic review, which has been brilliantly summarised by Mental Elf. The systematic review found:

  • Outcomes for people with mental health problems are no different when interventions have been delivered by other people with mental health problems (i.e. peers) than when they’ve been delivered by professionals
  • Peer support interventions for people with depression were better than typical interventions.

Put simply, peer support interventions are at least as good as professional interventions from a clinical point of view, quite aside from the additional, “softer” benefits that might accrue to both the user and the peer supporter.

Building on this, organisations like the Centre for Mental Health have published really useful documents, such as “Peer Support Workers: Theory and Practice” and “Peer support in mental health: is it good value for money?” (answer: yes).

But the JPN article also goes on to explore the implications of the positives of peer support for mental health professionals, especially nurses.

It makes an important point that often gets missed:

Studies evaluating the views of service users and carers show that mental health nursing has a lot to offer with skilled, knowledgeable, caring clinicians providing a range of therapeutic interventions and organising and coordinating multi-disciplinary care… [Nurses] should recognise that our role can profit from collaborating with and listening to colleagues who have first-hand experience as services users.

The drive to include peer support workers, as well as more personalised approaches, in mental health is often seen as a zero-sum, win/lose power game: power is taken away from professionals and given to users.

I think this is wrong: it’s a positive-sum, win/win game, in which professionals and users can both benefit.

It’s great to see this point being made, and I hope we can keep finding examples of win-win being the case in practice, rather than win-lose being the worry in theory.

(Thanks to @teaandtalking and @coyle_mj for highlighting the Journal of Psychosocial Nursing that prompted this post.)

Update: Completely forgot to include the Centre for Mental Health’s “Peer Support Workers: Practical guide to implementation” in the resources above.


The impact of advocacy – call for evidence

In my new work role at the National Development Team for Inclusion (NDTi) (on which more blogging goodness to come soon), I’m getting right into it with a really interesting piece of work about advocacy and evidence of its impact.

We’re not looking directly at creating new evidence about advocacy: we’re looking to gather and review the evidence that’s already available about the impact of different types of advocacy for people who need support.

What we want to do is:

  • Help to understand the impact of advocacy, and the benefits of investing in it against a range of different factors and outcomes
  • Describe this in relation to different forms and types of advocacy to help inform decisions about what type of advocacy to invest in for which purpose
  • Focus on gathering evidence of economic and financial impact (if such evidence exists), in order to help inform investment decisions in the current financial context.

The purpose of the work is to present the evidence that exists about advocacy in a more comprehensive and robust way than currently exists. It will also help provide evidence for organisations who deliver advocacy services about their existing and potential impact.

Full details of the work we’re doing are available here: the impact of advocacy for people who need support. If you know of evidence that could be useful as part of this review, please do get in touch using the comments below or via Twitter – @rich_w


Community development, improved health outcomes and clear ROI

The Health Empowerment Leverage Project (HELP) has been working to promote better collaboration between health agencies and local communities, with a particular interest in the potential for community development to play a wider role in relation to innovation, prevention and participation.

Community development offers support for independent voluntary local community groups, organisations and networks, producing wider and more effective community activity. As a “bottom-up” approach, it ensures work is driven and owned by residents, and complements “top-down” engagement by public agencies such as local authorities and (what were) PCTs. Qualitative impacts can be felt both directly by individuals involved in community development and indirectly through service changes and the resulting improvements.

As such, community development is of particular interest to me for its similarities with the role that disabled people’s user-led organisations (DPULOs) play in involving and representing disabled people in coproducing services in social care and health. Where “residents” drive community development, so “service users and/or disabled people” drive coproduction.

Community development is a proxy for the work of DPULOs.

In the case of HELP, the role of community development was tested in a health setting in three particular geographical areas. The qualitative results were impressive:

  • New developments – such as increased volunteering, wider social networks, better cooperation between community groups, and greater trust between residents and public agencies – were observed.
  • Residents who were active in community development benefitted the most, but all residents benefitted from the improvements they secured, including in services and amenities.
  • The increased dialogue and collaboration with communities gave public agencies better intelligence for commissioning and engendered more trust and cooperation from service users.

From a quantitative point of view, there were further substantial results:

  • Cardiovascular disease, depression and obesity were three widespread conditions that the research showed to be alleviated by general community activity: it was cautiously estimated that the range of activity generated by a two-year community development pilot project prevents 5% a year of the known events for this limited selection of the relevant health conditions.
  • (Similar projects suggest community development can also contribute to improvements in areas such as emergency ambulance calls, A&E attendance, emergency hospital admissions / readmissions and the prevention of falls.)
  • In an illustrative neighbourhood of 5,000 people, there would be a saving for the health service of £558,714 over three years on depression, obesity, CVD and a small number of the other health factors. This is a saving of £3.80 for every £1 invested in a £145,000 community development programme over the same period.
  • If the community development method were applied simultaneously in three neighbourhoods, there would be a likely saving for the local health service of £1,676,142 from an investment of £261,900, a return of £6.40 for every £1 invested.
  • Associated reductions in crime and anti-social behaviour from the same activities produce further savings, which aren’t included in the above.
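The headline returns are easy to sanity-check using only the figures quoted above. A quick sketch (the one-neighbourhood ratio comes out nearer £3.85 per £1 than the quoted £3.80, presumably down to rounding in the original report):

```python
# Sanity-check of the HELP return-on-investment figures, using
# only the numbers quoted in the bullets above.

saving_one = 558_714      # £ saved over three years, one neighbourhood
invest_one = 145_000      # £ invested over the same period

saving_three = 1_676_142  # £ saved across three neighbourhoods
invest_three = 261_900    # £ invested across three neighbourhoods

print(f"One neighbourhood:    £{saving_one / invest_one:.2f} per £1")
print(f"Three neighbourhoods: £{saving_three / invest_three:.2f} per £1")
```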

The final key point to draw out is that community development in this form is not an additional layer or thing that public agencies must do: it is instead described as a “stimulant” that brings alive the interface between residents (users) and public agencies.

These are important qualitative and quantitative results that demonstrate the value added by community development approaches and, by extension, coproduction with organisations such as DPULOs.

You can find out more about the Health Empowerment Leverage Project here.

(Inevitably, my summary above misses out several key discussions, issues and thoughts. Below, therefore, is the full document from which I’ve drawn the above.)