The seemingly dry topic of “indicator dashboards” (see the full report) is connected to fundamental questions about the role and authority of science in democratic societies. Below, I state what I take to be three fundamental points to bear in mind when discussing how and why to govern evidence-informed policymaking:
1. Science Doesn’t Speak Truth to Power
The good use of scientific knowledge in public decision-making is a keystone of modern societies. The scientific mindset is critical, self-critical and reflexive. Any good scientist knows that scientific knowledge is fallible, and that precision comes at the cost of scope. Yet, EU regulation occasionally describes science as “vital to establishing an accurate description of the problem, a real understanding of causality and therefore intervention logic”.
This promise is too bold, and it creates a risk that was already identified in President Eisenhower’s 1961 Farewell Address:
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
To mitigate this risk, the use of scientific evidence in policymaking should be governed with an eye to the norms of:
- transparency: Where does the knowledge come from?
- reflexivity: What can we say about the known and unknown unknowns?
- contextuality: Is this knowledge input fit for purpose?
2. Control is a False Ideal
The language of “indicators”, “dashboards”, “benchmarking” and so on is often associated with ideas of conventional intervention logic and the hope to command and control.
Such ideas and hopes may be well suited to prisons and the army, but they are not useful for cultivating transparency, reflexivity and contextuality, all of which ultimately depend on the presence of trust, truthfulness and the willingness to be convinced by the force of the better argument. Network approaches to governance are more relevant for evidence advisory ecosystems. Indicators and dashboards may still be useful, but not to command and control.
3. We should be smarter than SMART
A challenge that I really believe we can solve is the presence of business jargon within European institutional discourse. “SMART” is an instance of such jargon. Its origin is a one-page paper written in 1981 by the management consultant George T. Doran.
Since then, it has taken on a life of its own and penetrated public discourse in various incarnations. The “A”, for instance, has variously stood in for Achievable, Attainable, Assignable, Agreed, Action-oriented and Ambitious.
In many instances, there is nothing wrong with SMART. When developing good governance of something as dynamic and contingent as an evidence advisory ecosystem, however, the problem with SMART is that it encourages us to decide beforehand, ex ante, what the “specific and measurable” desired state of the monitored system should be. This is not how network governance works, nor how it should work. We should stop using such jargon.
Comment from lowryma:
"A challenge that I really believe we can solve, is the presence of business jargon within European institutional discourse" - you make an excellent point re: SMART and its ilk, but I admire your optimism! ;)
Regarding transparency, reflexivity and contextuality, here in the Knowledge4Policy (K4P) platform we practice transparency pretty well, I think: every piece of knowledge can be linked to the profile of the organisation that published it and/or the profile of the project that created it, and (in the case of member-submitted knowledge like your blog post) the person who submitted it. Moreover, member profiles can also be linked to organisation and project profiles. Finally, when we create working groups later this year (hopefully), each will have a public-facing page setting out what the group is doing, who's in it, and what its inputs and outputs are.
We have not explicitly tackled reflexivity and contextuality, however, so you got me thinking.
Obviously, with each piece of knowledge here either created, curated or moderated by JRC staff, it is implied that the knowledge is of high quality, but that doesn't necessarily mean it is fit for every purpose, or that it should be used in every context. And there's no specific way K4P editors can signal anything about the knowns and unknowns of a particular piece of knowledge - or an entire field - beyond simply writing something in the text. I can't help thinking that we could use a linked data approach to characterise the knowledge better.
So how could we incorporate these considerations into the structure of the knowledge - and interfaces to knowledge - here on K4P?
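To make the linked data idea concrete, here is a minimal sketch in Python using rdflib. Everything specific in it is hypothetical: the k4p: namespace, the identifiers, and the properties knownUnknown, fitForPurpose and notFitForPurpose are illustrations, not part of any actual K4P schema. The point is only that the three norms from the post could each become machine-readable statements attached to a knowledge item.

```python
# A minimal sketch of the linked data idea, using rdflib.
# The k4p: namespace and the properties below are hypothetical
# illustrations, not an actual Knowledge4Policy vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

K4P = Namespace("https://example.org/k4p/")  # hypothetical namespace

g = Graph()
g.bind("k4p", K4P)
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

item = URIRef("https://example.org/k4p/knowledge/42")  # example knowledge item
org = URIRef("https://example.org/k4p/org/jrc")

# Transparency: where does the knowledge come from?
g.add((item, RDF.type, K4P.KnowledgeItem))
g.add((item, DCTERMS.publisher, org))
g.add((org, FOAF.name, Literal("Joint Research Centre")))

# Reflexivity: what can we say about the known and unknown unknowns?
g.add((item, K4P.knownUnknown,
       Literal("No underlying data for the years after 2020")))

# Contextuality: which purposes is this knowledge input fit for?
g.add((item, K4P.fitForPurpose, Literal("trend monitoring")))
g.add((item, K4P.notFitForPurpose, Literal("country-level comparison")))

print(g.serialize(format="turtle"))
```

Once knowledge items carry such statements, an interface (or a SPARQL query) could filter or flag knowledge by its declared provenance and purposes, rather than leaving those judgements implicit in the prose.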
In reply to "A challenge that I really… by lowryma
Thank you for your…
Thank you for your thoughtful reflection, which I think itself points in the direction of the answer to your question about how to incorporate considerations of reflexivity and contextuality.
Often, a sentence that begins with "Obviously" has content that can offer a rewarding reflection. I make a similar point in the report, with reference to Bernard Williams' reflection on "necessity". In your piece, "obviously" is followed by a claim that knowledge created/curated/moderated by JRC is implied to be of high quality. My guess is that the obviousness consists in institutional mandate, that is, the function that the JRC is supposed to fulfil and for which it exists, according to the official narrative.
This institutional reality - that knowledge that is placed or mediated in a certain way is supposed to be of high quality for that reason - does not contradict or violate the norm of reflexivity or the insight of contextuality per se, but it takes work to reconcile them. If one commits to the view that quality is fitness for purpose, quality cannot be assessed without taking purpose into account, which implies a need to specify it (and to decide on the relevance and importance of that purpose). Quality as fitness for purpose cannot be assessed by a process that is kept entirely in-house.
In reality, however, quality is itself such a complex entity that it eludes a simple definition. "Fitness for purpose" is one important dimension. Another dimension of quality is indeed the internal one of whether something - a method or a knowledge claim, for example - is evaluated and approved by some kind of closed peer community: a discipline, an expert body, an epistemic community, a thought collective in Ludwik Fleck's words. In practice, this is what "quality" is often taken to mean - for instance, that a paper was published in Nature or Science. Both of these dimensions are important and - even more importantly - they are dynamic and complex in themselves, in ways that make attempts at formalizing and exhausting them by description highly prone to error. This was well explained in Robert Pirsig's popular novel "Zen and the Art of Motorcycle Maintenance", in which he presents an argument for why we should think of quality in dynamic terms (as something that happens rather than something that is).
Little of what I write here is easily implemented within the prevailing institutional logic. We are, however, living in turbulent times. This is evident to the young generations who come to our universities to study; it is now so blatantly visible that even middle-aged people like me notice it. Our institutions have to change or become obsolete. Looking beyond the EU, it does not seem desirable that our modern institutions of liberal democracy and accountable public administration collapse or disintegrate. So we must change; we must experiment to adapt to new challenges. The K4P initiative seems to be one such site for thinking and experimentation.
Reply from lowryma:
Thanks for your reply.
"Another dimension of quality is indeed the internal one of whether something ... is evaluated and approved by some kind of closed peer community... for instance that a paper was published in Nature or Science."
Well, yes (obviously ;) ) this is the quality I was referring to, as it's almost impossible to know what purpose a (usually anonymous) web visitor has when visiting K4P, so we'll not be able to qualify K4P knowledge in terms of fitness for purpose. So it's not only difficult to translate what you write into something implementable within the prevailing institutional logic, it's also difficult to implement it in terms of this website's content strategy.
Totally with you re: liberal democracy and accountable public administration, though. Glad you think K4P could be a small part of the solution.
If interested, you may find Roger's full report, "Indicator dashboards in governance of evidence-informed policymaking: Thoughts on rationale and design criteria", by following this link: https://publications.jrc.ec.europa.eu/repository/handle/JRC129902.
More information on our evaluation framework for the institutional capacity of science-for-policy ecosystems is available here: https://knowledge4policy.ec.europa.eu/projects-activities/developing-ev….