9.3 Challenges and research questions

Adjustment and accountability

Targeted M&E methods provide scientifically grounded insight into the relationship between the activities in a project and the observed results, but they leave little room for adjustment during the process. Learning M&E methods do allow for adjustment and uncertainty, but how do we know whether an adjustment is an improvement? How ‘statistically’ reliable are the first insights that feed into an iteration and may prompt a change of approach? The challenge is to strike a balance between M&E methods with sufficient scientific rigour and methods that remain useful and applicable for monitoring changes in complex systems. Should new methods be developed for this, or is adapting existing methods sufficient? And how important is it to substantiate every decision ‘statistically’?
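To make the reliability question concrete: a lightweight check such as a bootstrap confidence interval can indicate whether an early monitoring signal is stable enough to act on. The Python sketch below is purely illustrative; the measurements are hypothetical, and the bootstrap is only one of several possible reliability checks.

```python
# Illustrative sketch: how reliable is an early signal before we use it
# to adjust an intervention? We bootstrap a confidence interval around
# the mean of a small, hypothetical sample of early outcome measurements.
import random
import statistics

def bootstrap_ci(sample, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap confidence interval for the sample mean."""
    means = []
    for _ in range(n_resamples):
        resample = random.choices(sample, k=len(sample))  # sample with replacement
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical measurements from a first monitoring round.
early_signals = [0.8, 1.4, 0.3, 1.1, 0.9, 1.6, 0.2, 1.0]

low, high = bootstrap_ci(early_signals)
print(f"mean = {statistics.mean(early_signals):.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
# A wide interval straddling zero would suggest the signal is still too
# noisy to justify adjusting the approach.
```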

Research questions that can be posed:

  • How can we test assumptions in the design process in an insightful but non-burdensome manner, in such a way that this provides us with an evidence-based foundation for further development of the intervention?

  • How can we test the designed intervention in such a way that it provides us with useful information about changes at the system and context level, and about the generalisability of the intervention (the effectiveness of its underlying working mechanisms), without hindering the design process or freezing the further development of the intervention for a long time?

Quantification of impact and the role of the selected indicators

A change within a system often involves more than just direct and expected effects. How do we map the indirect and external effects? Indirect effects often come into view late and are difficult to quantify or monetise. What, for example, is the value of happiness, or of the knowledge generated during a transition? We know that these aspects have an important effect on economic growth and prosperity, but how do you map them? In addition, the choice of indicators or M&E tools can itself determine the form and direction of interventions. We already see development strategies being driven by measurable indicators or KPIs, such as ‘attention span’ at companies like Netflix and Google. But is this the right strategy, and how important are data and indicators that cannot (yet) be measured? New developments in this area will also shape the nature of interventions.
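A toy example makes this steering effect visible. In the Python sketch below, three hypothetical interventions are scored on two invented indicators; ranking by either indicator alone points to a different priority, so the choice of indicator effectively chooses the strategy.

```python
# Illustrative sketch: how the choice of indicator can steer a portfolio.
# Interventions, indicator names and scores are all hypothetical.
interventions = {
    "A": {"co2_reduction": 12.0, "knowledge_spillover": 2.0},
    "B": {"co2_reduction": 5.0,  "knowledge_spillover": 9.0},
    "C": {"co2_reduction": 8.0,  "knowledge_spillover": 6.0},
}

for indicator in ("co2_reduction", "knowledge_spillover"):
    ranking = sorted(interventions,
                     key=lambda name: interventions[name][indicator],
                     reverse=True)
    print(f"ranked by {indicator}: {ranking}")
# ranked by co2_reduction:       ['A', 'C', 'B']
# ranked by knowledge_spillover: ['B', 'C', 'A']
```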

Research questions that can be posed:

  • How can we formulate output, outcome and impact indicators that are relevant to the transition goal and its intermediate goals, tailored as closely as possible to the mission?

  • What is the effect of the measurable and available indicators on the form and direction of our interventions?

  • How can we test the effectiveness and efficiency of our design process?

Applying new datasets and new data-driven methods

Developments in AI and big data analytics offer many opportunities for transition issues. These methods can provide continuous, real-time insight into the (potential) contribution of interventions to the transition, as well as into relevant external developments. A first step has been taken with the development of a data-driven Foresight analysis method (Goetheer et al., 2020), in which AI, big data and different types of data sources support decision-making on transitions. These methods can also be used to gain insight into expected effects in advance (data-driven predictive modelling). However, this requires new data sources (such as citizen science data, open sources, or data from non-protocol studies), which are by definition diverse, unstructured and incomplete. In the next steps, we need to find out which data are available or can be created, how to use them, which methods fit these datasets, and how to deal with the resulting limitations in data quality and reliability.
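As a minimal illustration of how such incompleteness might be handled, the sketch below uses a common scikit-learn pattern (imputation followed by a predictive model) on hypothetical monitoring records with missing values. It shows one standard coping strategy, not a prescribed method; all feature names and values are invented.

```python
# Illustrative sketch: a predictive-modelling pipeline that tolerates the
# incomplete records typical of citizen-science or open data sources.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

# Hypothetical monitoring records: NaN marks a missing observation.
X = np.array([
    [1.0, 0.5, np.nan],
    [0.8, np.nan, 2.1],
    [np.nan, 0.9, 1.7],
    [1.2, 1.1, 2.5],
    [0.6, 0.4, np.nan],
    [1.1, np.nan, 2.0],
])
y = np.array([3.1, 2.7, 2.9, 4.0, 2.2, 3.5])  # hypothetical outcome indicator

model = make_pipeline(
    SimpleImputer(strategy="median"),            # fill gaps before fitting
    RandomForestRegressor(n_estimators=100, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.9, 0.7, np.nan]]))       # predict despite a missing field
```

Median imputation is deliberately the simplest choice here; in practice the imputation strategy itself would be one of the methodological questions listed below.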

Research questions that can be posed in this regard:

  • Which methods should we apply and/or develop to obtain correct estimates (prognosis) and classifications (diagnosis, screening, monitoring) from new types of data (diverse, unstructured, incomplete)?

  • How do we identify the relevant data sources and data types for monitoring and evaluating transitions, including the validation of data and information?

  • How do you design a hybrid data-driven M&E method, linked to innovation intelligence?

  • How do you ensure that information generated by AI and big data is explainable, understandable and accepted?

  • How do we deal with privacy-sensitive data and with the population’s declining willingness to participate in registrations and studies?
