Research communication – are we getting it right when targeting policy and practice?

We’re interested in research communication across disciplines, and also in how researchers communicate with audiences such as policy makers and practitioners. Our correspondent Andrew Clappison introduces his mini series exploring what research communication means.

I believe there are several common issues that undermine the efficacy and perceived role of ‘research communication’:

1. Communication in practice:

People have come to distinguish between research dissemination and communication – the pushing out of research versus active two-way conversations around it. In more recent times, this has been extended by placing emphasis on creating an enabling environment for research uptake, generating demand and striving for impact.

This may not sound significant on the face of it, but there is a danger that in the drive for success, the research itself (and its quality, timeliness and appropriateness) gets overlooked. Research impact should not depend on who shouts the loudest or has the biggest communications budget. And while I’m not suggesting there is always a link between ‘advocacy’ and policy impact, we do need to look more closely at what this all means and how we can guard against the unwise promotion of poor-quality research: a process I will call the ‘governance of evidence’.

2. Quality of Evidence:

‘Evidence informed policy making’ is fashionable at the moment – but what does this actually mean and how do we know when evidence is good enough to be communicated or advocated widely? In my opinion, research needs to be stress tested before we see researchers and intermediaries advocating for its incorporation into policy, for example.

At what point should we communicate research, and should it come with a ‘health’ warning when it has not been fully ‘stress’ tested? There are usually no guidelines on this. I’m not saying we should never communicate research before we know its true value, since other researchers’ work can add to the picture, but we need to do this in a responsible and neutral way. There are no easy answers here, and the issue needs further discussion and consideration.

3. The end user’s capacity to understand evidence:

Whether you are communicating your research to an intermediary or a decision maker, it’s important to consider whether they have the capacity to use evidence effectively. If not, research communication becomes far more difficult, and it’s far more likely that help will be required. This takes us into the realm of competency building and looking for effective research intermediaries. Perhaps it could involve seeking out new partnerships with useful allies such as think tanks. A big question is, to what extent can we expect the researcher to play a leading role in such a potentially complex process of communication?

4. Researchers’ capacity to communicate:

Linked to this is whether researchers typically have the appropriate research communication skills. How useful can researchers be when confronted with the most complex communication challenges? Even if researchers’ “responsibility” for communicating their research stops at appointing expert help and being accountable for their outputs, do they understand enough about what is required to make such calls and to supervise progress?

5. Communication funding:

All this brings costs into focus, highlighting the need for appropriate research communications funding. But should all research be communicated? What purpose does this serve? Effective research communication can be resource heavy in a number of ways, but if you are not going to do it properly, why do it at all? Many research programmes still lack a set budget for research communication, and money set aside for communication often gets siphoned back into doing the research itself. Funding is an important issue, but in my opinion it remains a grey area that does not get enough attention.

6. Measuring communication impact:

When the effectiveness of research communication is scrutinised, it often comes out poorly. This is largely because measuring impact is difficult. And while funders want to see clear definable outcomes in order to justify communications spend, this is rarely possible. So, does research communication actually work? And if so, what is the evidence? And what is the best way to collect this evidence? These are all big questions.

“Unwieldy, but important questions”: Join the discussion

These are some of the main issues that I think underpin the current state of play in research communication, and they present us with an unwieldy array of questions. While I can’t promise all the answers, I hope that you follow the series, pose questions, and contribute to what will be an important and unashamedly public airing of the issues. You can begin by letting me know if you think I’ve missed something important or want me to explore something specific: tweet @andrewclappison and @jobsacuk to join the conversation on this topic so far, or leave a comment below.

Image credit: Maurice (CC BY 2.0)

