Evaluation can be defined as the systematic and objective examination of the humanitarian sanitation response to determine the worth or significance of an activity or programme. It is intended to draw lessons to improve the response and enhance Accountability.
- Share evaluations in an appropriate format with all key stakeholders so that the findings can be discussed and applied, e.g. through workshops, reports, presentations and community meetings.
- Include all partners and other actors when developing the monitoring and evaluation framework and, where possible, carry out joint evaluations.
- Budget for an evaluation in the sanitation programme, including costs such as evaluators, interpreters, logistics (e.g. transport and accommodation) and dissemination (e.g. printing, community meetings and workshops).
- Clarify the purpose of the evaluation and the type of information needed, and develop a Terms of Reference with a timeline and budget.
- Develop a Logframe with indicators to enable an evaluation of the inputs (resources used), activities (what was done), outputs (what was delivered), outcomes (what was achieved) and impact (long-term changes).
- Match the evaluation methods to the requirements of the evaluation and ensure they are accessible to and inclusive of marginalised groups. Evaluation methods may include: Key Informant Interviews, Observation and Transect Walks, Pocket Chart Voting, questionnaire-based Surveys and Community Mapping.
- Develop indicators which are disaggregated by age, gender and disability.
- Collect qualitative and quantitative data from different sources (triangulation), analyse it using appropriate methods and compile the findings into a report.
- Avoid the common pitfalls of evaluations, including:
- Focusing on easy-to-reach geographic areas
- Not collecting baseline (‘before intervening’) data
- Not respecting data protection and/or putting participants at risk, e.g. in insecure areas
- Neglecting consultation with less visible groups, e.g. women, older people and persons with disabilities
- Ignoring seasonal or geographical WASH differences
- Collecting too much or unnecessary information, which consumes time and resources and does not answer the evaluation questions
- Focusing the evaluation solely on outputs, without considering outcomes, behaviour change and impact
- Not widely sharing the results, so the information is lost and not used to adapt programming
- Not informing the target group about the results of the evaluation
There are numerous reasons for undertaking evaluations, including reviewing innovations, gathering evidence, demonstrating successes or challenges as part of a learning process, assessing value for money and being accountable to key stakeholders such as donors and, especially, the affected population. An evaluation looks at the overall changes which can be attributed to a sanitation programme and examines the outcomes achieved, the relevance, efficiency, sustainability and wider impact on people’s lives. It can produce recommendations to improve the programme (including capacity strengthening if needed) and capture learning to inform future policy and practice. It is an important aspect of Accountability. Sharing and using evaluation findings encourages transparency and Learning in the sector.
There are different types of evaluations depending on the objectives. Some evaluations are carried out at, or after, the end of the programme (or mid-way in longer-term programmes) and aim to provide accountability and influence future policy and practice. Real-time evaluations are carried out during the programme, are interactive and involve multiple stakeholders; the evaluator acts as a facilitator to generate an overview of the programme and provide immediate feedback so that issues can be addressed during the response. All types of evaluation can be external and independent or conducted by an agency with the support of an external evaluator or by staff members.
It may be appropriate to do joint evaluations in collaboration with other programme staff, partners and other organisations (e.g. within the WASH Cluster) to minimise duplication of resources. Some evaluations have a strong focus on accountability to the affected population, empowering them to play a key role in carrying out and contributing to the process in order to strengthen ownership of the programme and ensure that they are in a position to make use of the findings.
Key evaluation criteria include:
- Relevance: asks whether the programme is doing the right things, e.g. is the sanitation programme meeting the needs according to the context? Does the programme target the right people in terms of geographical areas as well as vulnerabilities to WASH-related health risks?
- Effectiveness: analyses whether the programme has achieved its objectives and intended results and examines the factors influencing the achievement of those objectives, e.g. has the sanitation programme achieved its objectives of providing accessible and inclusive toilets and safe faecal sludge management services? To what extent can these changes be attributed to the programme? If intended results did not occur – why not?
- Efficiency: measures both quantitative and qualitative outputs in relation to the inputs, e.g. how efficient is the installed treatment technology at reducing pathogens or the organic load in faecal sludge? Does this method make the best use of the resources available? Were there alternative options to improve sanitation conditions?
- Impact: examines whether there were significant or lasting changes resulting from the programme and whether they were intended or unintended, positive or negative, e.g. has the goal of the programme been achieved? Have there been any changes in public health? Has the programme made a real difference to the affected population?
- Sustainability: evaluates the extent to which the net benefits of the intervention will continue or are likely to continue, e.g. have people been supported to continue using, maintaining and repairing the sanitation facilities? What behaviours have changed as a result of the intervention and how likely are these changes to last? Has local capacity been strengthened?
- Coherence: considers how well the intervention fits with existing country plans and local priorities (e.g. does the programme align with Government policies?) as well as with the programmes and interventions of other response agencies.
- Participation: examines the level of active community engagement and inclusive involvement of all segments of the affected population in the planning, management and decision making of the programme in order to achieve appropriate ownership over the outcomes. It includes aspects such as people’s motivation, their capacities to engage and the opportunities they are given to participate.
Existing National Standards, Sphere Standards, the Core Humanitarian Standard and the Code of Conduct can be used as references to assess the quality of the programme in conjunction with the programme objectives and indicators.