Evaluation beats learning every time

One of the things that has puzzled me ever since I started in this aid business is that evaluations are rarely, if ever, used to learn from, or to modify or improve what is being done.

I have learnt this through bitter experience while working as an evaluator for DfID, the World Bank, UNICEF and a whole host of NGOs.

Every time I have been asked to do an evaluation, people have told me to 'do my worst' or to 'assess this objectively', and every time I have believed them. Well, perhaps not. Perhaps I should have understood 'doing my worst' as not doing my best.

I have even tried to do more than was expected, showing them how they could use evaluations to improve what was being done.

In almost every case (bar one or two wonderfully exceptional organisations) the projects and donors weren't interested.

Don't get me wrong. I am never worried if people disagree with my particular analysis and then enter into a dialogue to come to some conclusion. No. They never get that far. They take the evaluation or assessment politely (or, in some cases, not at all politely), and I find out later that I had missed the real purpose of the evaluation. (For a video commentary on this problem, see the page 'Problems facing a consultant').

The political

What kinds of real purpose are there? Well, there's the straightforwardly political. I remember one assessment I did for a bilateral donor on the feasibility of setting up an epidemiological research centre for the Caribbean. I went round much of the Caribbean at the time, seeking views from a variety of ministers, organisations and individuals. Pretty much all of them agreed that it wasn't a good idea. I reported this. Only later did I find that the donor had already agreed to fund the centre and merely wanted an evaluation they could point to.

The need for money

Then there's the need for money. Yes, that ugly, gritty stuff we all need. I was once asked to 'do my worst' in an evaluation of a girls' 'empowerment' project in Bangladesh. They said they really wanted to find out who was benefiting and how to improve. So I set up their team to do their own surveys in a number of villages, and they themselves reported to me that the only girls benefiting were those from families who were comfortable and well supported. The other girls were either too busy to attend their groups or felt that the groups 'weren't for them'.

I also had to point out that, although the project was supposed to be advocating against violence against women, it wasn't looking at the real problem behind many of the rapes: the girls who were being raped came from the families with the least respect and support, and the perpetrators assessed very carefully whom they could rape.

I suggested lots of modifications to their programme so that they could address these issues. Alas, I found out afterwards that what they really wanted was praise for the programme so that they could go back to the donors for more funding. They told me they wouldn't present my evaluation, as it might place that funding at risk. I wish they had told me that in the first place, so that I could have refused to do the evaluation.

The lack of desire to learn

Many of you will recognise such situations from your own experience. What drives me to write about this is the constant refrain I hear from so many colleagues that nobody seems to want to learn. This is explored more fully in the essay on Critical Thinking elsewhere on this site. Suffice it to say that many of us have seen exactly the same programmes regurgitated, under different names, again and again over the last 40 years. How many times have we heard the boring debate about paying or recruiting volunteers? How often have we heard the debates about 'participation', 'community action' or 'service provision'?

Most projects also end up the same way they always have. Rather than learn from this, how often have we heard that a project failed not because it was badly formulated, but because of all those escape clauses we put in the 'assumptions' column of the log frame?

So, next time you do an evaluation and they say they want to learn from it, make sure you ask whether they really mean it. It doesn't really matter if they don't, because you can still use the evaluation to do their bidding while setting it up so that some real learning is captured for the future. There are some good examples of how to do this in other essays on this site.

Tony Klouda