By Henre Benson, Chief Operations Officer, CASME (Twitter @henreb)

The Monitoring and Evaluation experience of implementers of education interventions can be traumatic. NGOs, already stretched to capacity, face the prospect of their internal processes being brought under scrutiny, their deliverables and dosages counted and their outcomes tested. The reality is that in many cases the impact is unclear and attribution is nearly always uncertain. Nobody sets out to fail. Most interventions are based on a theory of change, model or idea (whether rigorously tested or simply drawn from years of experience) and a belief that what is intended will work. In many cases that belief is well founded, as these initiatives are changing lives. There are countless personal stories of learners, schools and teachers presented with new opportunities, brighter futures and hope as a result of training, support or essential resources.

By Melissa King

Not so long ago, evaluation was considered a specialist niche area of expertise: a mysterious and highly technical undertaking only to be carried out by the initiated. There is growing recognition, however, that M&E processes and frameworks are an essential part of all education project work, and that those involved in programmes aiming to improve learner outcomes need to strengthen their understanding of the goals, methodologies and practices of M&E. There is a need to build our collective capacity in M&E, not just as individuals, but as a community working for educational change. Monitoring and evaluation is everyone’s responsibility.

Picture this scene: it is 10 am on a Monday morning, and the room is filled with evaluators, programme managers, implementers, funders, policymakers and beneficiaries. As stakeholders jointly invested in the success of their programme, they have gathered to agree on a framework to guide the evaluation. However, whilst some of them understand the meeting’s purpose and know how to proceed with the task at hand, others have never even heard of a ‘theory of change’, let alone seen an evaluation framework. The meeting then progresses with a few voices dominating the discussion as the others scramble to keep up with proceedings.

By Nompumelelo Mohohlwane

Evidence-based research, monitoring and evaluation are foundational to rational decision-making about policies, programmes and interventions. This is imperative if public policy is to have an impact on service delivery, especially in the education sector. The information derived from this evidence base determines the value or merit of programmes or policy options by identifying standards, carrying out empirical investigations using various techniques and integrating the findings into conclusions and recommendations for the sector. In the short term, this helps programme managers improve performance and accountability. In the long term, the knowledge generated could inform broader programme and policy design, and practice beyond the programme being evaluated.

By Professor Brahm Fleisch

Over the past decade, there has been a resurgence of interest in evidence-informed education policies and programmes in South Africa. Specifically, there is growing recognition that rigorous research designs, particularly designs that include ‘counterfactuals’, can provide strong evidence about what works to improve learning outcomes.

Any organisation committed to improving the quality of evaluations understands that one of the most critical steps is getting the design right. Our own experience in M&E, particularly in designing evaluations, has evolved over the years. We see the evaluation design as the structure of the evaluation that will provide the information and data needed to answer the evaluation questions. The design is determined by the purpose of the evaluation, the programme’s theory of change, the evaluation questions and, of course, the budget.

Those who work in education are acutely aware of the complexity of the problems in the South African education system. No silver bullet solution exists. Despite this, donors like the Zenex Foundation continue to implement initiatives to improve education with the expectation that evaluation should help us learn.

I am a strong proponent of monitoring and evaluation (M&E) as a key lever in evidence-based grantmaking. It does add a cost for the donor, however, and I share my perspective here on four key issues relating to budgeting for, and the costs of, M&E:

  1. The importance of allocating grant funding to monitoring and evaluation.
  2. Budgeting for monitoring and evaluation.
  3. Funding for the dissemination of evaluation results.
  4. Strengthening M&E in the sector.

The increase in calls for proposals for evaluations in recent years likely indicates a growing understanding of the importance of evaluation. More specifically, there appears to be growing interest in understanding how programmes are actually implemented and what ‘impact’ particular programmes have. And whilst, in theory, this bodes well for ensuring accountability and improving programming to maximise benefits, the process of responding to calls for proposals brings persistent challenges to the surface.

Why are education evaluations becoming increasingly important?

There is no doubt that improving education is a high priority for all key stakeholders in South Africa. This is evident in the huge financial investment and the numerous initiatives underway to improve education delivery and outcomes. The South African Government allocated R320 billion to education in 2017, approximately 6.4% of Gross Domestic Product (GDP) and a higher share than in other developing countries. In addition, there has been a significant increase in private sector support for education development, estimated at R25 billion.
