Good practice in systems development encourages the use of early measurement in place of heroic assumptions. In practice, this means relying on evidence over instinct when analysing whether an intervention idea is likely to work for both supply- and demand-side actors.
During a recent Making Markets Work training course, a participant described an intervention in which the team had managed to persuade a financial services provider to pilot affordable loans targeted at low-income farmers. The idea was to increase farmers’ access to finance, but the project had little idea of whether or not the farmers would actually make use of these loans. The participant was rightly concerned about the untested assumptions underpinning the intervention but asked “How can I measure whether or not people will use something that does not yet exist?”
This is a valid question. Thankfully, although measuring the non-existent sounds impossible, there are some straightforward tactics that teams can apply to test their assumptions before piloting an intervention idea.
The best predictor of future behaviour is past behaviour
The most compelling basis for predicting people’s future behaviour is a solid understanding of their current behaviour. Consequently, the first place for programmes to start if they want to understand how people will respond to something that doesn’t yet exist is to look (and look hard!) for what does exist, and measure people’s behaviour in relation to that.
- Measure underground solutions: It’s very rare that an important function is completely non-existent. Look for the ‘underground’ solutions that already exist and measure people’s behaviour in relation to these. How do farmers access finance now? Are there really no business loans available, or are they simply being offered by informal, less visible, or predatory actors? If they exist in any form, who are they being used by, and why?
- Measure outliers: Look for ‘positive deviants’ and outliers on both the supply and demand side. Have any providers attempted to enter the ‘affordable business loans’ market? What responses have they encountered? Are there any farmers innovating in access to finance? If examples can be found, asking others to comment on them can yield important data about why innovation hasn’t spread.
- Measure proxies: Consider potential proxies and measure whether these are being used, and by whom. For example, perhaps business loans do not exist, but personal loans do. Do farmers use these personal financial products? If not, why not? Do financial service providers offer other products to low-income households? Are these being used?
- Understand absence: If no examples of a particular function can be found, it is critical to ask the all-important systems development question: “Why hasn’t this already emerged in the market?” There are rational reasons in every system for why products and services that would apparently solve critical constraints don’t yet exist. Unless the obstacles that have stopped their past development have been overcome, future uptake is likely to be low.
Not all asking is equal
The most obvious tactic for measuring future behaviour is simply to ask people what they would do in response to the proposed innovation. This is not a bad place to start, but it presents a problem: what people say they will do and what they actually do are often wildly different. A simple survey showing farmers expressing enthusiasm for the proposed loan products might boost the confidence of programme, donor and partner, but it won’t necessarily reflect future impact.
There are several interview techniques that can help address this problem.
- Leverage neighbourly gossip (‘asking about a friend’ technique): It’s often more effective to ask direct questions about others’ behaviour than about the interviewee’s own. Asking farmers whether they think their neighbouring farmers would be likely to use a finance product might garner more honest answers than asking whether they would use it themselves!
- Leverage casual egotism (life history interviewing): Many people will happily talk about their own experiences in great detail. Designing interview questions around concrete stories of people’s past experiences instead of around abstract questions about a hypothetical future will yield much richer data about behaviour.
- Leverage opinionated interpretations (scenario-based surveying): One way to understand perceptions is to write up future interventions as stories, and then ask for people’s opinions about the characters’ behaviour. For example, a story could be told in which a fictional farmer had refused to use an affordable loan product in a neighbouring village, and farmers could be asked about all the possible reasons why the product had been refused.
- Leverage instinctive reactions (free-listing): A quick and easy survey method is to ask a large number of people to list all the examples of a given category they can think of. For example, farmers could be asked to list all the factors that would affect whether they take out business loans. Free-list analysis treats items as more important if they are mentioned repeatedly across multiple people’s lists, and especially so if they are consistently among the first factors mentioned.
These interview techniques are creative ways of building surveys and interviews to better understand current behavioural patterns and perceptions in order to build better-evidenced assumptions about whether interventions will work.
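Of these techniques, free-listing is the most mechanical to analyse. A widely used salience measure is Smith’s S, which weights each item by how early it appears in each respondent’s list and then averages across all respondents. The sketch below shows one way to compute it in Python; the farmer responses and factor names are invented purely for illustration.

```python
from collections import defaultdict

def smiths_salience(free_lists):
    """Compute Smith's salience index for free-list survey data.

    Each element of free_lists is one respondent's ordered list of
    items. Within a list of length n, the item at 1-based position p
    gets a weight of (n - p + 1) / n, so the first item scores 1.0
    and the last scores 1/n. Smith's S for an item is the average of
    these weights across all respondents (counting 0 where the item
    was not mentioned at all).
    """
    totals = defaultdict(float)
    for items in free_lists:
        n = len(items)
        for pos, item in enumerate(items, start=1):
            totals[item] += (n - pos + 1) / n
    return {item: total / len(free_lists) for item, total in totals.items()}

# Hypothetical responses to "list the factors that would affect
# whether you take out a business loan":
responses = [
    ["interest rate", "collateral", "distance to branch"],
    ["collateral", "interest rate"],
    ["interest rate", "paperwork"],
]

salience = smiths_salience(responses)
# "interest rate" is mentioned by everyone and usually first,
# so it receives the highest salience score.
```

Items that are both frequently mentioned and consistently listed early rise to the top, which is exactly the intuition described above: repeated, early mentions signal importance.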
We don’t need to know everything before we try something
Perhaps the best method for measuring whether an intervention will work is to test it. After all, if what people say is often different to what they do, why not measure what they do? Many interventions can be scaled down to low-risk, small-scale actions that enable us to test our assumptions before proceeding further.
Thus, if a programme’s research has shown that access to finance would address a critical constraint for farmers, and that affordable business loans do not yet exist, why not run a small piece of action-research to see how farmers actually behave when such a product is introduced? There’s no reason not to scale down before scaling up.
No excuse not to measure
Far too often in development we see interventions in which an apparently “obvious” solution is pursued with no real evidence to back up the assumption that such a solution will work. Our colleague on the Making Markets Work training had the right idea – measurement should start at the very earliest stages of intervention design to provide teams with early and invaluable “go,” “no go” or “adjust” feedback to intervention ideas. With so many tactics in the toolbox for validating assumptions, there is no excuse not to measure early and often.