Orality Breakouts – Chapter 18 – Evaluations and Oral Cultures

The following is a chapter from the book ‘Orality Breakouts – Using Heart Language to Transform Hearts’. A chapter will be posted here each week.

Chapter 18 – Evaluations and Oral Cultures

By J. Peter McLain

A Christian ministry’s quarterly newsletter arrived in the hands of its donors in July 2008. It profiled, complete with pictures, a training initiative launched earlier that year in rural India. One month later, a third-party team arrived to evaluate the project’s impact. The team discovered that very little training had actually taken place in the targeted communities. The local leaders responsible for the implementation had failed to start the project, yet they had reported to their international headquarters that it was going well.

In 2006, a large, international, non-governmental organization (NGO) received a grant to train secondary school teachers in Iraq. A report was required for additional funding. Someone was hired to travel from school to school collecting signatures from teachers verifying attendance at all of the training sessions, a post-dated “sign-in sheet,” so to speak. In reality, most teachers had not attended any of the training, given the volatile security environment at the time. They signed the forms anyway, hoping that it would make them eligible for future training. The organization got the additional grant.

These examples are commonplace in the non-profit and ministry world today. Donors want to know the impact of their investment in a given development or ministry project, yet it is hard to get accurate, reliable, or meaningful reports. Most ministries design projects without any thought of measuring end results. Many organizations try, but they measure the wrong factor. For instance, a signature does not verify that someone is a better teacher. All too often, a project has no results to measure because it failed, whether from flawed strategy, flawed implementation, or external circumstances beyond its control. Unfortunately, most donors do not know the project failed and repeat their investment. This can be avoided, however, by employing third-party evaluators to measure outcomes. In the case of T4 Global, our outcomes occur among oral-culture learners, adding further challenges and opportunities.

What are we measuring?

Donors are increasingly asking for greater accountability. Referring to their donations as “investments,” they want a return on their investment (ROI). Some have coined the terms “social return on investment” or “kingdom return on investment.”

The focus is no longer on inputs or even outputs. Inputs would be the money raised, the number of staff or volunteers, and the facilities, equipment, or supplies. Outputs are the number of classes taught, materials distributed, services provided, or patients treated. One ministry measured its output performance by the tons of Bibles it shipped. Understanding inputs and outputs can help determine strategy and implementation, but outcomes are the more relevant indicators of impact.

Outcomes include changes in knowledge, attitudes, values, beliefs, worldview, skills, behavior, condition, or status. Measuring outcomes determines if the program is actually making a difference in the life of an individual or a community. It is not how many Bibles are distributed, how many radio hours are broadcast, or how many training seminars are held. Instead, it is how well participants applied what they read, heard, or learned to their lives and communities. Are the lives and communities different as a result of the project?
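
To make the distinction concrete, here is a minimal sketch in Python; the field names and figures are invented for illustration, since the chapter prescribes no particular data model. It separates what a project consumes and produces from the outcomes donors actually care about:

```python
from dataclasses import dataclass, field

@dataclass
class ProjectRecord:
    """Hypothetical record separating inputs and outputs from outcomes."""
    inputs: dict = field(default_factory=dict)    # resources invested: money, staff, equipment
    outputs: dict = field(default_factory=dict)   # activity counts: classes taught, items shipped
    outcomes: dict = field(default_factory=dict)  # observed changes in knowledge, behavior, status

    def has_measured_impact(self) -> bool:
        # Outputs alone (e.g., tons of Bibles shipped) do not demonstrate
        # impact; only recorded outcomes do.
        return bool(self.outcomes)

# A busy project with no outcome data: the inputs and outputs look
# impressive, but nothing here tells a donor whether lives changed.
project = ProjectRecord(
    inputs={"grant_usd": 50_000, "trainers": 4},
    outputs={"seminars_held": 12, "materials_distributed": 300},
)
print(project.has_measured_impact())  # False
```

Only when the outcomes field is filled in, say with observed changes in teaching practice rather than signatures on a sheet, does the record say anything about impact.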

Independent evaluation

Not only are donors increasingly asking for evidence of impact and results, but they want to hear those results from a third party. Self-reporting does not carry the same weight and believability as a report from an independent organization. Engaging a third-party evaluator adds costs to a project, but it is a cost donors are willing to pay in order to get reliable data on their investment.

Yet many organizations hesitate because a third-party evaluation may report failure, thereby damaging their position with these donors. If the evaluation is accurate, the negative feedback can actually produce positive results. A third-party evaluation is not a risk to be avoided; it is an opportunity to learn whether a program is performing well or not. If it is not, adjustments can be made. A donor might actually fund a new project if its redesign is based on findings from the third-party evaluation.

The opening example in this chapter is true and personally verifiable. The senior leadership of an international organization designed a training program for rural India using their established grassroots network and T4 Global to implement the project. Three months into the project, a newsletter featured the project and reported its innovative success, while third-party evaluators simultaneously discovered that the network had not understood the project, much less implemented it. If T4 Global had not engaged a third-party evaluator, the project’s initial failure would have remained unknown. T4 Global was able to take corrective action, and the project eventually became a true success story. The donor expressed appreciation. Lessons learned from the India project were applied to a later project design on another continent. T4 Global believes the use of third-party evaluators is key to improving and refining our strategies and implementation plans.

Evaluating projects in oral cultures

In theory, evaluation in the context of an oral culture should be easy. After all, oral cultures understand and interpret the world experientially. People in an oral culture do not take what someone says at face value; they either have to experience it themselves or relate it to their own everyday life experiences. In general, they do not receive ideas in the abstract, nor do they accept an idea not connected to the community’s experience. Oral cultures have built-in mechanisms to validate whether a project is successful. Simply ask: has it changed their everyday life experience? If they are living life differently, then the project has had impact.

Other questions to ask include: Has their story changed? How do they now tell their story? Do they explain their world differently as a result of the project? An African proverb explains, “Everything important has its own song and dance.” Are there new songs or dances because the project was introduced into the community? If there are no songs or dances, if the story has not changed, then it is likely the project had no significant impact.

Challenges to evaluation in oral cultures

Evaluation in oral cultures, then, should be easy: assess whether the story has changed for an individual or community. Yet it is not that simple at all. Almost all evaluators approach their task with a literate culture bias. Evaluations are designed and implemented using literate culture assumptions and methodology. Those work well in literate culture environments, but they often fail when used with oral cultures.

Obviously, oral culture people cannot respond effectively to written questionnaires; they can’t read or write. But even verbally conveyed interview questions will not result in good data collection; you can’t simply read the questions aloud to them. Interviewees struggle to understand what is being asked. They tend to say what they think the interviewer wants to hear, or they refuse to answer the question altogether. The very act of being asked questions may not even be culturally appropriate. Answers given to questions may have nothing at all to do with what interviewees really think, believe, or experience.

In 2007, T4 Global engaged an outside evaluator to assess the impact of its oral training project in Southern Sudan. The evaluator used both quantitative and qualitative methods of evaluation. The quantitative part relied on pre- and post-surveys. A team of local Sudanese was hired to interview three hundred individuals using a written questionnaire. In the end, the evaluator was unable to use any of the data collected from the interviews. The evaluator concluded:

It is apparent that the quantitative methods of the pre-post survey have challenged the resources and capacity of the indigenous partners. The methods may not have been appropriately designed to meet the constraints of data collection in Southern Sudan to provide timely and accurate data.1

The interviewees were very unfamiliar with the questionnaire process. They did not understand the purpose, let alone the individual questions. The interview teams took an extraordinary length of time to complete the given task, indicating that the process itself was unnatural to the culture. Most of the answers collected were identical, suggesting either that the interviews were done in groups or that the interviewers led the interviewees (knowingly or unknowingly) to the “right” answer.

The qualitative part of the evaluation, consisting of focus group interviews, went much better. Even these were initially difficult, however, because of the question-and-answer format. An awkward silence fell upon the group; very few individuals were willing to talk, much less answer the questions. Oral culture learners do not use question-and-answer techniques. When the evaluator shifted to open-ended questions, some people began to speak. And when the team began asking people to tell their stories, the focus group discussions took off.

An oral approach to evaluation

The whole concept of evaluation is different in oral cultures. In his book, Orality and Literacy, Walter Ong describes a Central African evaluation of the village’s new school principal: “Let’s watch a little how he dances.”2 An American would want to look at changes in the school’s national test scores. But test scores do not fit into a villager’s everyday life experience. If they haven’t experienced it, they cannot evaluate it.

Furthermore, any “answer” to an evaluation question is also best given experientially. Oral culture people don’t sit around drinking tea in the village center saying “yes/no” or “strongly agree/disagree” to one another. A story, drama, song, or dance is a more natural and effective “answer.” In a recent group evaluation among the Samburu in Northwest Kenya, participants struggled to answer specific questions. However, they readily responded to open-ended questions by performing a drama or singing a song. Oral culture people rarely evaluate something individually; rather, they process collectively. The entire group or village has to agree for an evaluation to have any validity. Interviewing individuals is a literate fool’s errand; either the individual struggles to give an answer, or the group arrives to help the individual.

Keys to evaluation in oral cultures

Here are specific steps to follow when evaluating in an oral context:

  1. Establish the current community stories related to the issues, topics, or problems being addressed by the project. Where possible, seek out stories that reveal worldview (Why are things the way they are?), knowledge (How do they do things, or how do things work?), and behavior (What do they do, or how do they live?).
  2. After a project intervention, assess whether any of the worldview, knowledge, and/or behavior stories have changed, and if so, by what degree.
  3. Use indigenous people (preferably known and trusted) to collect data.
  4. Collect data in the local language. Do not use translators.
  5. Use oral methods of data collection. Do not use written questionnaires or individual surveys. Three suggested oral methods of data collection are:

    Observation: Observe the community in action. This is particularly helpful in determining behavior patterns.

    Focus groups: Focus groups are both a group interview and an observation technique. The evaluator should meet with groups of people in the community, not with individuals. Use open-ended questions. Get them to tell stories. The data can then be compared from group to group. If statistical analysis is still desired, treat each group as a unit; a sketch of this follows the list. Engage with enough groups to establish a good sample size.

    Tests: Conduct tests as part of the focus groups, or randomly in the community. This is not a written test or a questionnaire. Simply ask people to tell a story, sing a song, or act out various topics of interest (e.g., What is your creation story? Why/how do children get sick? How do you prevent malaria?).
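
To illustrate the group-as-unit idea from the focus-group item above, here is a minimal sketch in Python. The villages, story categories, and change judgments are invented for illustration; a story is marked as changed only when the group as a whole now tells it differently than at baseline:

```python
from statistics import mean

# Hypothetical post-intervention focus-group records. The group, not the
# individual, is the unit of analysis; each flag records whether the group
# as a whole now tells that kind of story differently than in the baseline.
focus_groups = [
    {"village": "A", "worldview": True,  "knowledge": True,  "behavior": False},
    {"village": "B", "worldview": False, "knowledge": True,  "behavior": True},
    {"village": "C", "worldview": True,  "knowledge": True,  "behavior": True},
]

def change_rate(groups: list, story_type: str) -> float:
    """Fraction of groups whose story of the given type has changed."""
    return mean(1.0 if g[story_type] else 0.0 for g in groups)

for story_type in ("worldview", "knowledge", "behavior"):
    rate = change_rate(focus_groups, story_type)
    print(f"{story_type}: {rate:.0%} of groups tell a changed story")
```

Comparing these group-level rates against the same tallies taken before the intervention gives a rough, countable picture of whether the community’s stories, and therefore its worldview, knowledge, and behavior, have changed.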

Following these simple keys to evaluation in an oral culture will generate effective and reliable data. Organizations seeking to be change agents can improve their strategies and project implementation. Donors can determine whether lives and communities are positively impacted as a result of their investment.

Notes

1 “MT4 Mobile School Pilot Project in Southern Sudan, An Interim Evaluation Report,” 5 February 2008, 13.

2 Walter J. Ong, Orality and Literacy: The Technologizing of the Word (London: Routledge, 2006), 55.

Biography

J. Peter McLain has extensive experience working cross-culturally in business, government, and non-profit settings. Since 2003, he has focused on training oral culture peoples, applying contextualized, indigenous-led solutions, first as the Executive Director at Voice for Humanity and then as the President of T4 Global. He has overseen dozens of orality projects implementing Christian leadership training, basic discipleship, and humanitarian efforts, and designed mixed-methods evaluations to measure impact among oral cultures in Afghanistan, India, Iraq, Kenya, Nepal, Nigeria, and Sudan.

 
