Making voices count - Gathering community feedback in times of remote evaluation
Introduction
In 2020, IOE undertook a project evaluation of the National Programme for Community Empowerment in Indonesia, a Community Driven Development (CDD) project in Papua and West Papua provinces in the easternmost part of Indonesia. These provinces are characterized by geographic remoteness, lack of infrastructure, scant presence of government structures, civil unrest in some parts and large indigenous populations.
Due to the Covid-19 outbreak, which occurred at the start of the evaluation process, international travel was not possible. The mobility of local consultants was also severely restricted, given the risk that physical visits could spread the virus among indigenous populations. As a result, the evaluation had to adopt a methodology for remote data collection, without any field visits. IOE recruited a team consisting of an international consultant highly experienced in CDD in the Papuan context and two national consultants who also had extensive experience of CDD programmes in the Papuan and wider Indonesian context.
Whilst the methodology yielded some useful findings, it limited the scope and inclusiveness of the evaluation. Most notably, the evaluation was not able to fulfil its original purpose of contributing to mutual learning and community empowerment.
The challenges of “remote evaluation”
Evaluation of community-driven development projects requires intensive and extensive interaction with stakeholders. The remoteness of the project area and the lack of cell phone coverage made it impossible to cover a large number of community groups. Representative sampling of community groups, based on the nature of interventions and social characteristics, was constrained by the poor quality of M&E data: there was no reliable participatory M&E, and no reliable output or outcome data that the evaluation could use. The only data available, albeit incomplete, was the list of project villages and the groups within these villages.
At the same time, data collection methods could not include direct observations, interactive group discussions, focus groups or in-person interviews, which would have enabled real-time triangulation of findings.
The lack of mobile coverage in Papua and West Papua was another challenge, which limited the sample size. Moreover, reaching even one respondent required multiple calls to establish a connection.
The size and representativeness of the sample of communities selected for this evaluation therefore had to be limited. The evaluation team adopted a vertical interviewing strategy and opted for depth rather than breadth of sampling, which helped ensure the internal validity of the findings.
The solution: a vertical sampling and data collection approach
The evaluation adopted a bottom-up approach: it examined the experiences and perceived benefits of community groups first, through interviews, and then triangulated these with the perspectives and views of project staff such as village, district and regency facilitators.
The interviewing and data collection strategy followed the project's vertical facilitation structure, comprising community groups and village, district, regency and provincial facilitators.
Villages were first selected through simple random sampling within each province, drawing on the available village database; the evaluation then selected a random sample of groups within each selected village. Once a community group was identified for interviewing, the village, district and regency facilitators supporting that group, directly or indirectly, were also identified and interviewed. The questions for each level of the facilitation structure were formulated only after the interviews for the level below had been completed. This vertical interviewing strategy allowed the evaluation to take issues emerging from interviews at one level of the facilitation structure and validate them during interviews at the next level up, and vice versa.
Figure: Vertical sampling approach following the project’s facilitation structure
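For illustration only, the two-stage selection described above could be sketched in code along the following lines. This is a minimal sketch, not the evaluation team's actual procedure; the database schema, facilitator labels and sample sizes shown here are hypothetical assumptions.

import random

def draw_sample(village_db, villages_per_province=5, groups_per_village=2, seed=42):
    # village_db: list of records, each with 'province', 'village' and a 'groups'
    # mapping from group name to its facilitator chain (hypothetical schema).
    rng = random.Random(seed)
    sample = []
    for province in sorted({rec["province"] for rec in village_db}):
        candidates = [rec for rec in village_db if rec["province"] == province]
        # Stage 1: simple random sample of villages within the province
        for village in rng.sample(candidates, min(villages_per_province, len(candidates))):
            group_names = list(village["groups"])
            # Stage 2: random sample of groups within each selected village
            for group in rng.sample(group_names, min(groups_per_village, len(group_names))):
                sample.append({
                    "province": province,
                    "village": village["village"],
                    "group": group,
                    # Interviews then move upward along this facilitation chain
                    "facilitators": village["groups"][group],
                })
    return sample

# Hypothetical record:
# {"province": "Papua", "village": "Village A",
#  "groups": {"Nutmeg growers": {"village_facilitator": "VF-01",
#                                "district_facilitator": "DF-03",
#                                "regency_facilitator": "RF-02"}}}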
Issues encountered in the remote evaluation process
Inability to reach selected community groups due to lack of cell phone coverage, combined with linguistic diversity in Papua and West Papua and limited fluency in the lingua franca (Bahasa Indonesia) – Given the lack of reliable mobile coverage, the local consultants made an introductory call to set up an appointment with the groups so that they could be present in an area with better mobile coverage. Given the linguistic diversity of the indigenous populations, the evaluation team also sent the interview questions in Bahasa Indonesia before the appointment, so that group members could prepare answers in Bahasa Indonesia. This ensured that the evaluation team made the most of the limited time and network coverage. In cases where calls were not sufficiently audible due to poor network coverage, video recordings of answers to selected questions were shared through WhatsApp.
Saturation of development interventions and inability to distinguish between programmes – Papua and West Papua host numerous large public programmes and donor-funded projects, and the evaluation team found that target groups were often unable to distinguish between them. To address this, the team used the introductory call to present the evaluation and clarify which project it would focus on.
Speaking to female community members – The evaluation tried to reach female community members by phone. In some cases, however, men took over the phone while women were speaking to the national consultants and insisted that the evaluation team speak to them instead. Given the high prevalence of domestic violence in Papua and West Papua, the evaluation team had to be careful to avoid any serious consequences for the women. In those instances, the team would continue the interview with the man, asking a question or two before closing the call. The community member was then replaced with another randomly selected female community member to preserve the integrity of the sample.
Final reflections on the evaluation methodology
CDD gives control of decisions and resources to communities. They are expected to make informed decisions about how they want to use local resources, who will benefit and how they will benefit. Therefore, they should participate in the evaluation from the outset. Participatory M&E would have been a key element of a mixed-methods suite of evaluation tools; ideally, people's own indicators should become the most important indicators of change. In this project performance evaluation (PPE), a number of statements of change were made by farmers themselves, such as: 'Students came to see us and learn from us'; 'Other villages saw we had knowledge on nutmeg cultivation and recognised this'; 'We want to form a co-operative next'; 'We got no benefits, we just know we need to be in a group to get future benefits'; and 'Nothing has changed, I still have to sell produce myself'. Both positive and negative statements provided important insights into the kinds of indicators that might be valued by the community.
To conclude, this evaluation has been able to provide valuable insights into this project, despite the extraordinary challenges. To some extent, it was also able to obtain people’s feedback on what has worked and what has not. The team concluded that, in a pandemic-constrained context, a remote evaluation is better than no evaluation. However, a remote evaluation cannot replace the need for face-to-face engagement with the communities and the people concerned.