This post is part of a learning series exploring good practice, examples, or new applications of MSD.
This blog presents adapted extracts from ‘Market system diagrams: Or, how I learned to stop worrying and love the doughnut’, a chapter in Joanna Ledgerwood’s book written in tribute to the great Alan Gibson: Ledgerwood, J. (ed.) (2021). Making Market Systems Work for the Poor: Experience inspired by Alan Gibson. Rugby, UK: Practical Action Publishing.
The full book is available here: http://dx.doi.org/10.3362/9781788531443.
Introduction
If we want one visual image of what M4P[1] is, it’s the doughnut. It’s everywhere in M4P training. It’s everywhere in explanations and definitions of M4P. What’s different about M4P? Doughnut! How do you do M4P? Lots of doughnuts! This shouldn’t be surprising – we’re in the business of transforming systems, and doughnuts are the only diagrammatic representation of the system outlined in the M4P approach. They are the lens for understanding what a system is, and how it is underperforming. They are central to market systems analysis, and to the way of thinking that investigates and addresses root causes of market system underperformance.
But while the doughnut is everywhere in explanations of the M4P approach, it’s hard to find one in active use in the implementation of M4P. If you do find them, they are likely to be stale, unconsumed, or somebody else’s. Stale in the sense that one might turn up in old programme documents, but it’s been a few years since it came out of the oven. Unconsumed in the sense that the doughnut analysis ends up not being used in ongoing strategy-setting. And somebody else’s, because doughnuts often seem to be a tool of occasional external M4P consultants rather than a tool for regular systems analysis undertaken by M4P programme staff. What is going on? Why is the defining framework of M4P not used more, and why is it not used more consistently? And why is this a problem?
‘Incentives and capacities’ analysis is usually used by M4P programmes to understand the prospects of system actors changing behaviour to perform a function more effectively (or to pay for it). Here we’ll use incentives and capacities to analyse why M4P programmes are not performing doughnut-based diagnostics more effectively.
As always with incentives and capacities, it is important to be clear about the behaviour change we’re seeking – what exactly constitutes improved doughnut usage? We want to see programmes use doughnuts to analyse the reasons the system isn’t working; do this internally and regularly; and feed the findings into creating and adapting programme strategies and intervention designs.
Capacity of programmes to use the doughnut
Information about doughnuts: Doughnuts don’t exist outside M4P. They appear nowhere else in development programming or social science. As such, there is relatively little information about how to use them outside the M4P bubble – so we need to improve learning within our bubble. It would be useful to consolidate, on an ongoing basis, practical understanding that can support a growing community of practice in doughnut-led diagnostics. One way would be for more of the often-excellent learning documents produced by M4P programmes to focus on practical tips and experience of using doughnuts, and for the various forums on M4P practice to draw out more clearly the sum of information already available and add new guidance as it is produced. A better understanding of what it is to use doughnuts may reduce the incidence of misuse and encourage more people to give it a try.
Conceptual confusion: Part of the strength of the doughnut lies in its ability to encompass aggregated, simplified representations of complex processes. Lots of difficult-to-articulate actions, often around information exchanges, can be aggregated into ‘supporting functions’. This allows things like ‘market coordination’ to be bunged into a doughnut even if we can’t easily nail down exactly what it entails. This simplicity has advantages – it’s often closer to how we see things in real life, and it enables us to move quickly.
But there are costs to this. It makes it difficult for the uninitiated to readily understand what’s going on, creating a barrier to entry. It creates a language barrier between M4P and the social sciences, as well as other development approaches. This contributes to a lack of clarity as to what systems are and what systemic change is. This is not a criticism of the doughnut per se: doughnut-led analysis is integral to the M4P approach and to good development generally. But I believe it does indicate a possible gap within the M4P approach. There is space for tools to go alongside the ‘supporting function’ framing of systems – tools that set out visually exactly who is doing what within the system, breaking ‘supporting functions’ down into their constituent actions. Disclaimer: I’ve had a go at this myself.
Incentives for programmes to use the doughnut
As always, addressing capacity constraints will only get us so far in shifting behaviour – incentives are key. What underpins the lack of incentives to use the doughnut?
Disconnection of doughnuts: Building on the conceptual confusion set out above, in my experience two things tend to focus the thoughts and actions of M4P programmes. The first is partnership agreements; the second is intervention monitoring and measurement. Both are about relationships – relationships with the partners who often become responsible for delivering numbers, and relationships with the donors to whom the programme is accountable. These human relationships often take precedence over the more abstract understanding of systems.
Programmes often become very partner-centric, rolling out agreement after agreement with one partner as they capitalise on the relationship that has been built; it is easier to continue cost-sharing new behaviour changes with one partner than to look beyond that partner to the supporting function or rule of which it is part. The incentive for programmes is often to get numbers as easily as possible. Working with new partners involves hard work, uncertainty, and cost; working with new partners in new supporting functions or rules, still more so. ‘Better the devil you know’ becomes a way of thinking (not least because the system-wide doughnut analysis was done three years ago by someone else), so if the numbers are doing OK, why bother analysing the system?
Which brings us to monitoring and measurement. ‘Diagnose down, measure up’ is an oft-repeated mantra, but I think it is problematic. With doughnuts and incentives-and-capacities analysis, we diagnose down to where the programme forms a partnership, then set up logic models and measurement plans based on that partnership through to impact on the target group. What’s missing? Some kind of measurement of the performance of supporting functions, and hence of the system. Why is this a problem? Because in the case of programmes accountable to donors, it is often the case that ‘what gets measured gets done’.
This leads to a situation whereby programmes are not accountable for changing systems, because systems are not measured – not at the diagnostic stage, and not later, even if there are sporadic efforts to capture ‘expansion’ of benefits. I think ‘diagnose and measure down, measure up’ would produce a healthier, system-centric accountability framework for programmes. If system diagnostics are incorporated into ongoing measurement systems, there will be greater incentives to keep the focus on the system throughout the programme lifecycle – and, hopefully, to use doughnut-led diagnostics to continually understand both progress within the system and the reasons for its underperformance.
A dearth of donor desire for doughnuts: Programmes are responsive to donor priorities. What might it take for donors to start incentivising doughnut use? Donors’ technical skills in diagnostics are – at least in my experience – inadequate to hold programmes to account in their identification and prioritisation of the important reasons systems are not working. We have a couple of options for addressing this, though they may be fanciful. First, if donors used doughnuts themselves they might understand them better, and so be better able to interrogate both the existence and the quality of programmes’ doughnut-led diagnostic processes. In principle, you might imagine that donors who understand and endorse the M4P approach would use doughnuts to understand national-level priorities for the poor in a given country, and commission programmes within that framework. You might also imagine this would best be a coordinated process between multiple donors and national government in a given country, producing a unified overall understanding of why the economy is not functioning for the poor. This in turn could be extended and deepened to include coordination with and between implementers working on specific programmes.
But we must return to reality. It might be slightly more feasible to return to the ‘what gets measured gets done’ principle and push for consistency and transparency in the methodology of doughnut-led diagnostics. We have said diagnostics represent an investigation, not a lengthy dissertation. But the principles of research, whereby we state our methodology explicitly so that others can interrogate the process we followed to collect information and reach our conclusions, are not incompatible with market system diagnostics. A clearer set of methodological principles around the diagnostic process – one that establishes best practice for measuring the underperformance of supporting functions – would not only help the sector learn how best to conduct diagnostics; it would also render the process more transparent and lay the basis for accountability, coordination, and incentives.
[1] M4P refers to ‘making markets work for the poor’, an earlier term used to describe MSD.