Rapid Experiments With A Purpose


How Global Integrity Uses Theories of Change to Make Small Bets, Learn, and Adapt

Michael Moses, Director of Advocacy and Programs – January 11, 2017


This blog post was originally published by BEAM Exchange as part of their Adaptive Management Blog Series.


The way development interventions — from programs that aim to strengthen market systems to those that intend to improve government accountability — play out in practice depends on particular features of the political landscape in which they take place. How is power distributed? Who makes decisions? What are their incentives? And who wins or loses if there are changes to the status quo?

Political dynamics are anything but simple: not only are they difficult to fully grasp, especially for actors from outside a given system, they're also complex, constantly shifting, and different in different places. At Global Integrity, we've come to realise that there's no one-size-fits-all blueprint for opening governance, or for creating an efficient market system. Emerging evidence from a range of sectors (see here, here, and here, for just a few examples) makes clear the importance of learning about the features of the systems in which we work, and, over time, adapting how we navigate and shape those features, including political dynamics, in order to promote progress.

But how do we learn and adapt as quickly and efficiently as possible? Traditional approaches to development programming, built on logframes and/or results frameworks, don't quite fit the bill. In fact, they often restrict our flexibility, and limit the extent to which we can learn about and adapt to emerging features of a system. Other methods are needed. So we've come up with a deliberate approach, modelled on action research principles and on insights from a number of communities and practitioners, including the Asia Foundation and the Doing Development Differently community, that aims to help us and our partners quickly experiment in, learn about, and adapt to the systems in which we're working.

Our structured method of experimenting, and of learning by doing, is different from more static programming approaches. It's built on rapid, intentional, and progressive experimentation: making a small bet, framed around a politically savvy theory of change; learning about its effectiveness, and about the system at large; and then incorporating that learning into another, adapted bet, and experimenting again.

Making scheduled, quickly recurring cycles of experimentation, learning, and adaptation the foundation of our work helps us to do two things at once: first, to take our best shot at contributing to solving the problem we’re hoping to address; and second, to learn more about the system in which we’re working, and the effectiveness of our projects, which provides a platform for quick, data-driven adaptation. And over time, these cycles help us progressively figure out how to more effectively contribute to sustainable, grounded reforms that make a difference.

There are five stages to what we (inelegantly!) call the "learning by doing cycle." These cycles take place at quick, regular intervals throughout a given project or programme (a rough sketch of the full cycle, in code, follows the five stages below):

Stage 1: Problem identification: define the issue you intend to tackle

  • Working with your partners, define the problem you're aiming to solve, and map the political dynamics and power issues around it: who holds power, who makes decisions, what their incentives are, and who wins or loses if the status quo changes.

Stage 2: Design and implement: make a small bet, and put a theory of change into action

  • Come up with an experimental theory of change, or strategy for tackling the defined problem. Be explicit about the assumptions you’re making. Ensure you’re responding to the political dynamics and power issues you mapped in stage 1, and establish a process for checking on how things are going — what do you need to learn about as you go? How is betting on this theory of change going to help you improve your knowledge base, and lay the groundwork for more informed action in the future? Then get to work implementing!

Stage 3: Monitor: check how things are going

  • Gather data on how your theory of change is playing out in practice. Focus on collecting only the evidence that's relevant to helping you get better, and that helps you evaluate the assumptions you're making. Pay attention to, and document, changes in your context, including shifts in power and changing incentives. This should be as participatory as possible (see this great initiative for one example of how that might be done).

Stage 4: Reflect: build in time and space to pause, reflect, and learn

  • Now that you’ve done some implementation, and collected evidence on how things are going, take a moment to pause and reflect with your partners and colleagues. As with other stages, participation is key here too. What new things have you learned about the system? What’s working in your project or programme? What’s not? And what sorts of changes should you make, especially to your experimental theory of change?

Stage 5: Adapt: make course corrections, and try again

  • Incorporate the lessons and insights that emerged during your reflection — revisit stages 1 and 2, and make course corrections. Adjust your problem definition, your understanding of the system, your theory of change, and your project plan. In short, use what you're learning to design a new experiment, run it, and learn and adapt all over again.
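To make the shape of the cycle concrete, here's a rough sketch in Python. It's purely illustrative: the Experiment class, the reflect and adapt functions, and the example data are all hypothetical names invented for this post, not a tool or API that Global Integrity uses.

```python
# An illustrative sketch of the five-stage "learning by doing" cycle.
# All names here (Experiment, reflect, adapt, the example data) are
# hypothetical, invented for this example only.
from dataclasses import dataclass, field


@dataclass
class Experiment:
    problem: str                # Stage 1: the issue you intend to tackle
    theory_of_change: str       # Stage 2: the small bet you're making
    assumptions: list           # made explicit at design time
    evidence: list = field(default_factory=list)  # Stage 3: monitoring data


def reflect(exp: Experiment) -> list:
    """Stage 4: pause and pull out lessons.

    Here, 'lessons' are simply assumptions with no supporting evidence yet;
    in practice this is a participatory discussion, not a function call.
    """
    return [a for a in exp.assumptions
            if not any(a in e for e in exp.evidence)]


def adapt(exp: Experiment, lessons: list) -> Experiment:
    """Stage 5: fold lessons into a revised bet, and run the cycle again."""
    revised = f"{exp.theory_of_change} (revised in light of: {lessons})"
    return Experiment(exp.problem, revised, exp.assumptions)


# One pass through the cycle, repeated every couple of months:
exp = Experiment(
    problem="citizens can't access court records",
    theory_of_change="publishing a records guide will shift official practice",
    assumptions=["officials will engage", "partners can reach court users"],
)
exp.evidence.append("officials will engage: two registrars joined the pilot")
exp = adapt(exp, reflect(exp))  # course-correct, then experiment again
```

The point of the sketch is the last line: each pass through reflect and adapt produces a revised bet, which becomes the starting point for the next experiment.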

Here’s just one example of how we’re supporting deliberate experimentation in practice: we’re currently partnering with six civil society organisations in five countries across the world, each of which is tackling a different open government challenge in a different complex system — from access to justice in South Africa to the implementation of e-government programmes in Indonesia. Instead of taking a standard, logframe-driven approach to these projects, we’re helping our colleagues design and implement a sequence of carefully designed experiments, each built on a detailed theory of change. The aim is to help our partners, iteratively and incrementally, make small bets, learn, and adapt their theories of change and projects every couple of months, and so make progress more effectively towards achieving their goals.

These projects are still in their early stages, but initial signs are good — our partners’ use of learning by doing is helping them determine what works in their contexts, and figure out whether and how to adapt, in real time, to overcome emerging challenges. At the same time, their experiences are helping me and my colleagues at Global Integrity reflect on our overarching experimental design, and giving us the data we need to provide more effective support.

You can read more about our partners and their work here, or in this video. And feel free to reach out — we’d love to hear how you’re trying to learn about and adapt to the contexts in which you’re working, and to talk with you about how we might help one another.


Michael Moses is the Director of Advocacy & Programs at Global Integrity, an action-learning lab for open governance based in Washington, DC. You can reach him at michael.moses@globalintegrity.org, or @GlobalIntegrity.
