How often do you hear the word “sustainability”?
In development, we almost all recognise sustainability as a critical component of success. Yet privately, whether we work as a donor, implementer or consultant, most of us pay less attention to sustainability than we would like.
Why is that, and how can we fix it?
The first reason why sustainability is valued more in principle than in practice is incentives. Development programmes’ performance is judged more by how many people they impact, and by how much, than by how long that benefit will last. For example, we recently analysed a random sample of 30 development programme logframes from 2017-2019. Only 5 explicitly mentioned post-programme impacts, and only 40% had any form of sustainability indicator as a top-level objective. Just under half (47%) had no sustainability indicator at all. As logframe indicators and targets are the main way we assess programme performance, this is alarming: what gets measured in the logframe is what most implementers feel most pressure to deliver.
This is more a symptom of the problem than its root cause, though.
Why isn’t there more pressure to deliver on sustainability?
Crucially, sustainability is both hard to measure and hard to predict. Sustainability of a programme’s impact cannot – by definition – be measured within the life of a programme, and by the time it can be measured, reports have been written, staff have disbanded, budgets are closed, and nothing can be done to change a poor result. Consequently, neither donors nor implementers have a sufficiently strong incentive to overcome the logistical, technical and financial challenges of returning several years after programme completion to measure whether impact has lasted.
Can anything be done about this?
It would be easier to increase attention to sustainability if we could accurately predict sustainability. Fortunately, development agencies already have a reasonable idea of which factors increase the probability of development impact lasting. Springfield and others have even outlined suggested indicators of sustainability which programmes can use while interventions are live to predict how likely benefits are to last.
Such proxies for sustainability are a great start, but they have their limitations. So far, there is not enough evidence to convince sceptics of their utility. Even Market Systems Development programmes, which are explicitly geared towards sustainability, struggle here.
What we need is strong evidence for which of the characteristics that we can observe and measure “now” consistently correlate with impact lasting long-term. If we could base our findings on rigorous empirical research, we would be a lot closer to developing more widely accepted sustainability indicators.
We already have an idea what this research could look like. What if we analysed a sample of development programmes, all of which ended at a similar time, and measured how much of their impact had lasted? If our sample programmes were sufficiently well documented – and most are – we could then compare characteristics of their intervention strategies. This would allow us to begin to identify differences between the sustainable and unsustainable interventions that are measurable during the life of the programme.
Armed with this information, the possibilities look exciting. If you know which factors correlate with lasting impact, you can look for them when monitoring and measuring a programme’s results. This could be done “live,” using data to identify weaknesses and improve interventions during the life of the programme. Ultimately, the results of this research could reduce the pressure to focus on short-term results by providing a measure that correlates with lasting impact.
With such ambitious objectives, we appreciate the importance of getting the research right. If your organisation funds or implements high quality, applied research and you are interested in partnering with us, or if you simply have ideas on how this research could be done well, Springfield would like to hear from you.
 We found two bilateral donors and one multilateral donor that make logframes available online and drew a random sample of 10 logframes from each. Programmes or projects were limited to those that were ‘active’ at some point between 1 January 2017 and 1 June 2019. Although the sample size is too small to be representative, randomisation was used to reduce bias.
 To check this, we asked a range of programme managers and advisers what they feel most pressure to achieve. With few exceptions, they cited greater pressure to create impact during the programme than to create impact likely to continue after it ends.
 For example, sustainability indicators associated with ‘Adopt-Adapt-Expand-Respond.’ We have also been influenced by a recent paper from AIP-Rural (PRISMA): Shahi, Nasution and Tomecko (2018) Understanding, Promoting and Measuring Behaviour Change in Partners.
 Those logframes that did use a sustainability indicator mostly focused on measures of increased institutional capacity. Some used changed perceptions or independent co-investment from local players as proxies for sustainability. These logically indicate increased likelihood of sustainability but, to date, empirical evidence on the correlation between such proxies and actual sustainability is lacking.
 We recognise that correlation does not equal causation. However, such research provides a stronger evidence base than we currently have, as well as a foundation for future research into causality.
6 Comments
This raises a really important topic; sustainability is something (like systemic change) that we talk about a lot but seldom see. A few queries about the proposed approach, though. 1) Will the differences between sustainable and unsustainable interventions always be contextual and prove impossible to generalise? 2) Will looking for factors that correlate with sustainability further disincentivise donors from actually funding ex-post studies? 3) Measuring sustainability isn’t rocket science; it just requires commitment. But the current ‘project-ised’ aid world largely isn’t set up for this (though there’s no reason it can’t be). Is the danger that we more efficiently address the symptom but don’t tackle the underlying cause?
Great questions, and I’d be interested to hear your answers to each of them as well.
Initial thoughts from me:
1) I think this is a question about research design, assuming we can agree that intervention design makes a difference to the sustainability of impact. I think that is pretty uncontroversial (maybe design doesn’t always have the BIGGEST impact, but if it has no impact whatsoever then what on earth are we all doing?!). As for research design – no design can completely eliminate the influence of context on results, but good research design can reduce it. Will the results be generalisable? Probably not to all sectors, all contexts, all programmes etc. – but that’s not the same thing as saying that they won’t be useful. Development experts, myself included, make a lot of claims about what we can do, given the variability of context, to increase the likelihood of sustainability, and I think we are long overdue both a wider and a deeper evidence base underpinning such claims. What’s the alternative to starting to build that evidence base? Throwing up our hands and saying “it’s all too complex and/or contextual” won’t get us anywhere, and arguing on the basis of logic is valid but has its limits. What do you think?
2) This is a valid concern, and one that I share, because I think it’s important to fund ex-post studies. However, even when such studies are funded, adaptive management needs live data to feed into decisions. If ex-post studies ONLY tell us whether something was sustainable or not, but not why, we can’t learn much from them. If we know why, we can at least increase the likelihood of making future interventions sustainable, though we can by no means guarantee that they will be. That seems worthwhile to me. Rightly framed, it’s also possible that rigorous and applicable ex-post studies, which is what we hope this research will be, will inspire more funding for rigorous, applicable ex-post studies. Having said that, if you have thoughts about how to mitigate such an effect, let me know.
3) Do you mean measuring sustainability after programmes have closed isn’t rocket science? If so, agreed. Or do you mean measuring the probability of sustainability during the life of a programme isn’t rocket science? If the latter, I’m surprised and would be very interested to hear how you think programmes do, or should do, that. As for the way the aid world is set up…idealism vs pragmatism – which is better, or can we have both? What do you think would be a good way of tackling the underlying cause of too few ex-post studies?
Look forward to talking more (feel free to email if it’s easier – firstname.lastname@example.org).
Short-term projects are ineffective vehicles for achieving sustainable impact on the underlying causes of poverty and social injustice. Only a program approach can contribute to a broader movement for social change, by strengthening partnerships, networks and alliances and positively transforming power relations between rights holders and duty bearers. This may require a shift in the business model from a service-delivery approach to a rights-based one with a long-term commitment to the affected population. Too often projects are designed in isolation, leaving out key scalers like government and the private sector; building alliances and collaborations with both can create impact at scale, spurring social change and potentially influencing policy, whereas charities working alone are limited by traditional scale models confined to creating pilots. There is also a need to link all donor-funded programs to the UN’s SDGs, which are the result of good analysis and shared understanding by all development actors: charities, the private sector and government.
Thanks for your comment. It’s certainly true that long-term local players are the ones who deliver change that lasts, and I agree that the direct delivery model, whilst it has its attractions, in the end can’t achieve what it sets out to do. That’s why we advocate for the application of market systems development principles and are working towards building an even more rigorous empirical evidence base for measurement strategies that support those principles.
Sustainability, and its measurement, will always be elusive if tied only to donor-funded initiatives. In my view, all development programmes should have private sector partnerships factored into their design, be it from the beginning or as part of a well-structured exit strategy. Where programmes have been institutionalised post-intervention, or better yet, where private sector partners have led implementation from the beginning, there is greater commitment to long-term sustainability when interventions are tied to core business targets and economic objectives. The key is to ensure that a robust measurement structure is adopted during the programme intervention so that data is captured long after its conclusion. Fast forward five years and these local private sector allies will not only facilitate the ex-post impact evaluation but will also provide rich, contextualised data to inform the reasons behind the results.
Thanks for your thoughts. I agree with your view that when changes are led by existing players (often private sector, but I think the principle holds whether private sector or otherwise) they are more likely to be sustainable. If this research is well designed, it should provide empirical data on the extent to which that is actually evidenced in practice! What you said about a robust measurement structure that ensures data is captured long after programme close is really interesting – how do you recommend programmes do that? Have you seen it done well? I would certainly like to see more funding for ex-post impact evaluations, but I can see that the incentives to fund such evaluations are not strong enough, even when good data is potentially available.
Thanks again for your engagement!