The seemingly dry topic of “indicator dashboards” (see the full report) is connected to fundamental questions about the role and authority of science in democratic societies. Below, I state what I take to be three important fundamentals when discussing how and why to govern evidence-informed policymaking:
1. Science Doesn’t Speak Truth to Power
The good use of scientific knowledge in public decision-making is a keystone of modern societies. The scientific mindset is critical, self-critical and reflexive. Any good scientist knows that scientific knowledge is fallible, and that precision comes at the cost of scope. Yet, EU regulation occasionally describes science as “vital to establishing an accurate description of the problem, a real understanding of causality and therefore intervention logic”.
This promise is too bold, and it creates a risk that President Eisenhower already flagged in his 1961 Farewell Address:
Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.
To mitigate this risk, the use of scientific evidence in policymaking should be governed with an eye to the norms of:
- transparency: Where does the knowledge come from?
- reflexivity: What can we say about the known and unknown unknowns?
- contextuality: Is this knowledge input fit for purpose?
2. Control is a False Ideal
The language of “indicators”, “dashboards”, “benchmarking” and so on is often associated with conventional intervention logic and the hope of command and control.
Such ideas and hopes may be well suited to prisons and the army, but they do little to cultivate transparency, reflexivity and contextuality, all of which ultimately depend on trust, truthfulness and the willingness to be convinced by the force of the better argument. Network approaches to governance are more relevant for evidence advisory ecosystems. Indicators and dashboards may still be useful, but not for command and control.
3. We should be smarter than SMART
A challenge that I really believe we can solve is the presence of business jargon within European institutional discourse. “SMART” is an instance of such jargon. Its origin is a one-page paper written in 1981 by the management consultant George T. Doran.
Since then, it has lived its own life and penetrated public discourse in its various incarnations. The “A”, for instance, has variously stood in for Achievable, Attainable, Assignable, Agreed, Action-oriented and Ambitious.
In many instances, there is nothing wrong with SMART. When developing good governance of something as dynamic and contingent as an evidence advisory ecosystem, however, the problem with SMART is that it encourages us to decide beforehand, ex ante, what the “specific and measurable” desired state of the monitored system should be. This is not how network governance works, or should work. We should stop using such jargon.
"A challenge that I really…
"A challenge that I really believe we can solve, is the presence of business jargon within European institutional discourse" - you make an excellent point re: SMART and its ilk, but I admire your optimism! ;)
Regarding transparency, reflexivity and contextuality, here in the Knowledge4Policy (K4P) platform we practice transparency pretty well, I think: every piece of knowledge can be linked to the profile of the organisation that published it and/or the profile of the project that created it, and (in the case of member-submitted knowledge like your blog post) the person who submitted it. Moreover, member profiles can also be linked to organisation and project profiles. Finally, when we create working groups later this year (hopefully), each will have a public-facing page setting out what the group is doing, who's in it, and what its inputs and outputs are.
We have not explicitly tackled reflexivity and contextuality, however, so you got me thinking.
Obviously, since each piece of knowledge here is either created, curated or moderated by JRC staff, the implication is that the knowledge is of high quality. But that doesn't necessarily mean it is fit for every purpose, or that it should be used in every context. And there's no specific way K4P editors can signal anything about the knowns and unknowns of a particular piece of knowledge - or an entire field - beyond simply writing something in the text. I can't help thinking that we could use a linked data approach to characterise the knowledge better.
So how could we incorporate these considerations into the structure of the knowledge - and interfaces to knowledge - here on K4P?
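One way to make that concrete: the three norms could be modelled as structured metadata on each knowledge item rather than free text. The sketch below is purely illustrative - none of these class or field names exist in K4P; they are assumptions about what such a schema might contain (provenance fields for transparency, stated limitations and open questions for reflexivity, and fitness-for-purpose tags for contextuality).

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical schema - not part of K4P. Each group of fields maps
# to one of the three norms discussed in the post.
@dataclass
class KnowledgeItem:
    title: str
    # Transparency: where does the knowledge come from?
    publisher_org: str
    project: Optional[str] = None
    submitted_by: Optional[str] = None
    # Reflexivity: what can we say about known and unknown unknowns?
    known_limitations: list = field(default_factory=list)
    open_questions: list = field(default_factory=list)
    # Contextuality: is this knowledge input fit for purpose?
    intended_contexts: list = field(default_factory=list)
    not_suited_for: list = field(default_factory=list)

    def provenance(self) -> str:
        """Join whichever provenance fields are present into one chain."""
        parts = [self.publisher_org]
        if self.project:
            parts.append(self.project)
        if self.submitted_by:
            parts.append(self.submitted_by)
        return " / ".join(parts)

# Example item with illustrative (invented) values:
item = KnowledgeItem(
    title="Indicator dashboards report",
    publisher_org="JRC",
    project="Science-for-policy ecosystems",
    known_limitations=["Conceptual report; no empirical validation"],
    intended_contexts=["EU-level policy design"],
    not_suited_for=["Ranking individual member states"],
)
print(item.provenance())  # → JRC / Science-for-policy ecosystems
```

Fields like `not_suited_for` would let editors signal contextuality explicitly in interfaces, and the same structure could be serialised as linked data (e.g. mapped to a provenance vocabulary) rather than kept as a bespoke schema.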
If interested, you may find Roger's full report, "Indicator dashboards in governance of evidence-informed policymaking: Thoughts on rationale and design criteria", at this link: https://publications.jrc.ec.europa.eu/repository/handle/JRC129902.
More information on our evaluation framework for institutional capacity of science-for-policy ecosystems is here: https://knowledge4policy.ec.europa.eu/projects-activities/developing-ev….
06 Jul 2022 | 08 Jul 2022