KEMs for mission-driven innovation
9. Monitoring and effect measurement

9.3 Challenges and research questions

Adjustment and accountability

Targeted M&E methods provide scientifically grounded insight into the relationship between a project's activities and its visible results, but they leave little room for adjustment along the way. Learning M&E methods do allow for adjustment and uncertainty, but how do we know whether an adjustment is an improvement? How ‘statistically’ reliable are the first insights that feed into an iteration and can change the approach? A balance must be found between M&E methods with sufficient scientific rigour and methods that are useful and applicable for monitoring changes in complex systems. Should new methods be developed for this, or is adapting existing methods sufficient? And how important is it to substantiate every decision ‘statistically’?
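The tension between rigour and speed can be made concrete with a small calculation: at the sample sizes typical of early iterations, even a clear signal comes with a wide uncertainty band. The Python sketch below uses hypothetical pilot numbers (14 positive responses out of 20 participants) and a simple Wald interval; all figures are illustrative and not drawn from this agenda.

```python
# Minimal sketch: how reliable is an early signal from a small pilot?
# The pilot numbers below are hypothetical, chosen only to illustrate
# the width of the uncertainty band at early-iteration sample sizes.
from math import sqrt

successes, n = 14, 20
p_hat = successes / n                       # observed effect in the pilot

# 95% Wald confidence interval for a proportion (z = 1.96); adequate
# for a sketch, though exact methods behave better at small n.
z = 1.96
margin = z * sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin

print(f"observed: {p_hat:.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
# With n = 20 the interval runs from roughly 0.50 to 0.90: wide enough
# that "adjust the approach" versus "stay the course" remains a
# judgement call, which is exactly the tension described above.
```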

Research questions that can be posed:

  • How can we test assumptions during the design process in an insightful yet unobtrusive way, so that the results provide an evidence-based foundation for the further development of the intervention?

  • How can we test the designed intervention so that it yields useful information about changes at the system and context level, and about the generalisability of the intervention (the effectiveness of its underlying working mechanisms), without hindering the design process or freezing the further development of the intervention for a long time?

Quantification of impact and the role of the selected indicators

Change within systems usually involves more than the direct and expected effects. How do we map the indirect and external effects? Indirect effects often come into view late and are difficult to quantify or monetise: what, for example, is the value of happiness, or of the knowledge generated during a transition? We know that these aspects have an important effect on economic growth and our prosperity, but how do you map them? In addition, the choice of indicators or M&E tools can itself determine the form and direction of interventions. At companies such as Netflix and Google, we see development strategy being driven by measurable indicators or KPIs, such as ‘attention span’. But is that the right strategy, and how important are data and indicators that cannot (yet) be measured? New developments in this area will also shape the nature of interventions.
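How the choice of indicators can steer strategy is easy to demonstrate: score the same interventions under two indicator sets and watch the ranking change. The Python sketch below uses entirely hypothetical interventions, scores and weights; it shows only the mechanism, not any real portfolio.

```python
# Minimal sketch: indicator choice steering an intervention ranking.
# Interventions, scores and weights are invented for illustration.
interventions = {
    # (easily measured KPI, hard-to-measure indirect effect such as well-being)
    "intervention_A": {"kpi": 0.9, "wellbeing": 0.2},
    "intervention_B": {"kpi": 0.6, "wellbeing": 0.8},
}

def score(scores, weights):
    """Weighted sum over whichever indicators we choose to count."""
    return sum(weights[k] * scores[k] for k in weights)

kpi_only = {"kpi": 1.0}                    # only the measurable KPI counts
broad = {"kpi": 0.5, "wellbeing": 0.5}     # the indirect effect counts too

for name, s in interventions.items():
    print(name, "kpi-only:", score(s, kpi_only), "broad:", round(score(s, broad), 2))
# kpi-only ranks A first (0.9 vs 0.6); the broader index ranks B first
# (0.70 vs 0.55): the same portfolio, steered by what we choose to measure.
```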

Research questions that can be posed:

  • How can we formulate output, outcome and impact indicators that are relevant to the transition goal and its intermediate goals, and that are tailored to the mission as closely as possible?

  • What is the effect of the measurable and available indicators on the form and direction of our interventions?

  • How can we test the effectiveness and efficiency of our design process?

Applying new datasets and new data-driven methods

Developments in AI and big-data analytics offer many opportunities for transition issues. With these methods, learning, real-time insight can be obtained into the (potential) contribution of interventions to the transition, as well as into relevant external developments. A first step has been taken with the development of a data-driven foresight analysis method (Goetheer et al., 2020), in which AI, big data and a variety of data sources support decision-making on transitions. These methods can also be used to gain more insight into expected effects in advance (data-driven predictive modelling). However, this requires new data sources (such as citizen-science data, open-source data, or data from non-protocolised studies), which are by definition diverse, unstructured and incomplete. In the next steps, we need to find out which data are available or can be created, how to use them, which methods fit these datasets, and how to deal with limitations in data quality and reliability.
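To make the data challenge concrete, the Python sketch below fabricates an incomplete dataset and runs the two steps the paragraph implies: first handling missing values, then fitting a simple predictive model. The data, the missingness rate and the model choice are assumptions made for illustration; they are not taken from Goetheer et al. (2020).

```python
# Minimal sketch: predictive modelling on incomplete, heterogeneous data.
# Assumes scikit-learn and NumPy; the "citizen science" feature matrix
# below is fabricated purely for illustration.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical observations: columns might be a sensor reading, a survey
# score and an open-data statistic; np.nan marks missing values.
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.3] = np.nan       # roughly 30% missing
y = np.nansum(X, axis=1) + rng.normal(scale=0.1, size=200)

# Step 1: make the incomplete data usable (here: mean imputation; more
# careful approaches would model *why* values are missing).
X_filled = SimpleImputer(strategy="mean").fit_transform(X)

# Step 2: fit a simple model as a stand-in for "data-driven predictive
# modelling" of the expected effects of interventions.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_filled, y)
print("in-sample R^2:", round(model.score(X_filled, y), 2))
# The caveat from the text applies in full: this score says nothing about
# how trustworthy the imputed values are, and that is the open question.
```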

Research questions that can be posed in this regard:

  • Which methods should we apply and/or develop to obtain correct estimates (prognosis) and classifications (diagnosis, screening, monitoring) from new types of data (diverse, unstructured, incomplete)?

  • How do we identify the relevant data sources and data types for monitoring and evaluating transitions, including the validation of that data and information?

  • How do you design a hybrid data-driven M&E method, linked to innovation intelligence?

  • How do you ensure that information generated by AI and big data is explainable, understandable and accepted? (See the sketch after this list.)

  • How do we deal with privacy-sensitive data and the decline in the willingness of the population to participate in registrations and studies?
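As flagged in the question above on explainability, model output must first be inspectable before it can be understood and accepted. The Python sketch below shows one common, generic route, permutation importance; the data and feature names are fabricated, and nothing here is a method prescribed by this agenda.

```python
# Minimal sketch: permutation importance as one route to explainability.
# Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much the model's score
# drops: a number a stakeholder can inspect and challenge.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["sensor", "survey", "open_data"], result.importances_mean):
    print(f"{name}: {imp:.2f}")
# "sensor" should dominate here; whether such numbers are also understood
# and accepted by stakeholders remains the open research question.
```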
