Walking the intersection of science and faith
Prepared by Adriana Donaldson, Charles Madinger, Nicholas Nickl, and Emily Pohl
A Primer for Research in Orality
Go ye therefore, and teach all nations
Matthew 28:19, KJV
Congratulations on having gotten this far in reading this primer. This shows you are not threatened by the word “research” in the title and therefore are open to the possibility that research may have some relevance to your ministry.
But there is a limit to your openness, of course. After all, you are a busy evangelizer who takes seriously your responsibility to “teach all nations.” You have scriptures to study, lesson plans to develop, trips to plan, meetings to attend, budgets to finalize, sermons to prepare. The day still contains only 24 hours, and the important work of evangelization means there’s not much of your attention to spare on non-essentials. At this point your openness to research only goes so far as being willing to be convinced that research is not a waste of time.
Not to mention that the idea of research gives you the willies anyway. Statistics was never your favorite subject way back when, and research means you have to know what ∫ and Σ and √ mean. If the ancient Hebrews thought that π = 3.0 not 3.14… (cf. 1 Kings 7:23) and still received God’s favor, why get obsessive about numerical precision now?
Perhaps that’s an outline of your thinking at this point. Or perhaps you really have been anxious to consider research in your ministry but aren’t sure how to go about it. Either way, this primer is provided with you in mind. We are keeping it short, with enough information to get you headed in the right direction. First we’ll examine why research is important in evangelization (Ch. 1), then we’ll talk about the methods that might be employed (Ch. 2 and 3). We’ll next make a brief excursion into the world of outcome measures and statistics (Ch. 4) before wrapping up with how to report your work (Ch. 5).
The goal of all this, of course, is to energize your evangelization, and lend credence to your methods. The scientific method was still centuries away when Jesus gave the Great Commission, but you can bet He knew all about it. He intended that we use every tool at our disposal to be as effective as possible in preaching the Good News to all nations, and He wasn’t kidding.
So let’s get started.
Chapter 1: Who Cares?
Why is research important in orality?
Let’s start by picking up where we left off at the introduction. We said that research can energize your evangelical ministry and make it more effective. Is that really true?
Here are a few objections to using research that many people in ministry may voice. You can probably add a few more of your own.
- We are already spread thin. Busy people involved in evangelization have plenty of challenges and things to occupy their time and attention. Resources are never enough, whether it’s time or materials or money. Why add additional complication by engaging in research?
- Yes, it’s neat, but will it benefit us? Is research really necessary, or just a distraction? Does it add anything? Is it worth the time and effort required? Is there any payoff to it?
- We don’t have the expertise or resources. Research is hard to do. It requires specialized techniques and training, and it’s expensive. It occupies resources better spent on the primary function of the project.
While these concerns seem prohibitive, let’s see if the benefits of orality research address and even outweigh the cost.
Identify best practices
First, research can identify the best practices that are most effective for specific circumstances and objectives, allowing ministry to be done more effectively. We recognize that “orality” is not a single method or technique or system, but a variety of tools and methods which focus on effective communication without dependence on literacy. Storytelling is certainly the most familiar of the orality methods, but there are others including music, art, dance, ritual, and environment. All these function in the context of specific cultures, and all these depend on the resources and media which are available.
However, some tools are better than others in specific circumstances or with specific messages. Having Bible stories danced out might be highly effective in one culture but elicit only laughter in another. Neighborhood oral networks might work well for health education but not for Bible study. Research can help us to identify which are the best practices for the right circumstances, the right message, and the right culture.
And here’s the important point: the more limited your resources, the MORE VITAL research becomes. If your finite resources are being stretched to the max, you can’t afford to waste them on things that don’t work. In other words, the smaller your budget or staff the more important research is to your project.
Wisely Steward Resources
Second, research allows you to appropriately allocate and utilize your resources to achieve your goals. Just like any other organizations, religious and evangelical organizations have finite resources which must be effectively utilized in the most appropriate circumstances to help achieve the best outcomes.
In stating “best outcomes,” we do not presuppose any particular type of goal. We’re not thinking like a business, which may target metrical goals like sales, profit, or number of customers. We recognize that many of our goals will be small, individual and personal: a single person who comes to faith is a perfectly good goal. Education and evangelization have more “fuzzy” objectives, which are harder to define and harder to quantify, but we can still work to identify the best practices which will achieve those objectives.
Achieve Organizational Goals Internally
Third, no matter what your organization, the reality is that, within your church, agency, or foundation, there are lots of good ideas about how to achieve the organizational goals. Somehow, the finite resources have to be allocated. The decision may be made by an individual leader or a committee, or simply by parishioners considering how much to put in the plate. It may be a new project being launched, or the continuation of an existing project whose effectiveness is being questioned. Inevitably, however, choices are being made.
In our better moments, we naturally desire the best idea to get the resources – even if it’s not our own. But how does anybody know which is, in reality, the best idea? Unsurprisingly, data, gathered through research, helps make the point.
That doesn’t mean that it’s all about graphs and pie charts. But when those kinds of data are coupled with individual stories and personal accounts, the message becomes far more compelling.
Interact Productively with External Organizations
Once the point is understood about how the process works within your organization, it’s easy to see how it works externally as well. It’s inevitable that your evangelization will interact with other individuals and organizations. In some cases, there may be a foundation or agency to which you are applying for grant support. In other cases, it may be a partnership with a local church or NGO in which, together, you will undertake some project.
• External Funding Agencies
If you are applying for a grant or funding support, data from a pilot project or preliminary experience will be critical to successfully competing. Funding agencies usually have a structured method for evaluating grant proposals, and data will inevitably be considered in ranking the proposals for funding. Those agencies have their own donors to answer to, and need to show they are getting results. Moreover, they often have armies of statisticians who evaluate proposals quantitatively. That means that even a proposal with poor quality data may well get ranked higher than one with no data at all. The fact is that “storying” may be an effective method in orality, but stories will rarely get grant proposals funded.
That doesn’t mean that big numbers always win; showing how you can effectively achieve your goal is more important than the size of that goal. A focused proposal to more effectively teach religious instruction for a dozen villages in a small region, if well documented, can easily be more compelling – and win more funding – than a grandiose and superficial proposal that plans to briefly encounter thousands. The key is to credibly show through the data that you can accomplish what you propose if the grant is approved.
• Collaboration with External Organizations
It may be, however, that you are simply proposing to collaborate with another organization to do a project together. Perhaps there’s a church group in the area that is already involved with the community, and you’d like to tack your project onto the work they’re already doing. Or perhaps you have an idea for an orality program, but need the collaboration of a tech or craft or art group to make it work. It’s a less formal request than a grant application to a foundation, but even here data will be important.
Such external organizations may well have a less formal or structured method for evaluating collaborative proposals, but they still have to decide how to allocate their resources and they still have to be responsible to their donors. Data not only shows that the collaborative goal is achievable, but will also demonstrate that you are holding up your part of the project. Again, storying is a good way to relate your results; but storying plus data is a better way.
But maybe none of these apply to you. You’re on your own, you’re doing your work in evangelization, and you’re seeing results. Does research have any role in your work?
Absolutely. The process of self-examination is essential to the life of every Christian, just as it is essential to the work of any individual or organization which undertakes the Lord’s work. It requires the confidence to know that self-improvement is always a worthwhile goal. Secular and business organizations cannot afford to avoid critical self-examination, whether the results are good news or bad news. Can we be any less stringent in the far more important work of evangelizing the nations? We can easily convince ourselves about what a terrific job we are doing, but data give additional insights that can’t be ignored. Did we do the best job possible? Did we discharge our mission well? Did we effectively utilize the resources (time, money, etc.) that others entrusted to us? Are we really changing lives?
Perhaps the answer is yes. In that case, self-assessment is self-encouragement, and our doubts are relieved about whether we were obedient to the Great Commission that Jesus entrusted to us.
But perhaps the answer is something less than yes. In that case, research will often identify an opportunity to be more effective in continuing the Great Commission.
Sharing and Innovation
As if that’s not enough, there’s a sixth reason to do research, and that’s to share the findings with others. Jesus sent the disciples out “two by two” (Mk 6:7) because partnerships are always stronger. We work together best when we share ideas, learn from one another’s successes (and failures), and foster innovation. After all, the orality methods we use in our ministry were developed and inherited from our predecessors. We certainly have a responsibility to those around us and those who will succeed us in the Great Commission to learn as much as we can from them and to share with them all that we have learned. That’s how mission works.
Hopefully these thoughts have convinced the reader that research constitutes a worthy allocation of attention and resources in every individual project. Whether big or small, whether hard numbers or fuzzy qualities, whether personal or intramural or extramural, if you are committed to doing the very best you can to evangelize through the Gospel, research is essential.
But perhaps you’re still concerned about one of the points raised earlier: that research is difficult to do and requires specialized training and skills and resources. This is a legitimate concern, but a case can be made that, while research does require an adjustment in the mind-set of any orality project, it can be incorporated into the structure of most projects with minimal disruption and without requiring burdensome specialized resources. In the next section we’ll explore an overview of methods and techniques which can be used in orality research, and after that we’ll talk about potential outcome indicators that can be measured as well as some basics of how those indicators can be statistically analyzed.
Chapter 2: Getting Started
Research Methods & Techniques
Now that you’re a new convert to the principle of research in your orality ministry, you’re ready to become familiar with the principles of how to develop and conduct meaningful research that will achieve the kinds of goals we’ve outlined: better realization of your ministry goals, productive interaction with collaborating organizations and funding agencies, self-improvement, and innovation.
What’s the Research Question?
Research is about answering questions, so of course the most important step in research is identifying the research question. Surprisingly, however, this is the part that often gets overlooked, or gets done incompletely. If you don’t know what question you are considering, it’s predictable that you won’t come up with any sensible answers. And it happens all the time: an unfocused study that hasn’t clearly framed the research question often ends up with an unruly mish-mash of unstructured observations, an unkempt lumberyard of random boards and planks. You’ll never build a house from such a mess.
Surprisingly, even though the research question is the most important part of the research project, it’s not the first thing to be considered. We have two other items to tackle before we get to the research question itself.
What’s the problem?
The first step in missiology – and therefore in your research project – is to understand exactly what problem you are trying to solve. When Jesus told the disciples to teach the Good News to all nations, the problem was implicit in His command: they don’t know the Good News yet. Jesus could be implicit, but you have to be explicit. Formulate the problem you are tackling as clearly and narrowly as you can. Narrowly means, in this case, focusing on just the aspect you are trying to address. “Nobody around here knows their scripture” is too broad; instead try, “Children aren’t familiar with the key Gospel stories.” That’s a bite-sized problem that you can tackle with expectation of success.
What’s the intervention?
Once you have a clear understanding of the problem you are trying to address (but not before), you’re ready to articulate how you plan to tackle solving it. Unfortunately, too often missionaries get the problem/intervention steps reversed. They come up with a terrific idea for a program or a method, but haven’t thought through exactly what problem it is supposed to address. Westerners are particularly prone to the bad practice of knowing the solutions before they understand the problems.
Here we’ll introduce a new term: intervention. The intervention is the thing you are doing in order to address the problem. In the case of children who don’t know their Gospel stories, it might be the use of a new video cartoon series, or perhaps a new innovative coloring book. The point is that you must be able to clearly articulate the direct connection between the problem and the intervention.
What’s the research question?
Now that we have clearly identified the problem and have clearly articulated why the proposed intervention is likely to address the problem, we’re ready to identify the research question. The research question is a technical term, and it means a very precise formulation of exactly the question your project is trying to answer. It often takes the form of a statement you are trying to prove to be true, such as: “The Acme Gospel Video Series for Children is a more effective way to establish Gospel familiarity among children living in rural west African villages.” You might well imagine a skeptic who hears that claim and says, “Oh yeah? Prove it.” And that’s just the point: the purpose of your research project is to provide the evidence which answers that challenge.
So far, our research design looks like this:
Problem ▶ Intervention ▶ Question
However, we’re not ready to move on yet. Let’s consider further some concerns about that all-important research question.
Keep your question simple.
Trying to do too much is a common mistake in research. There are so many possibilities, it’s tempting to go overboard. To avoid that pitfall, it’s best to formulate ONE primary research question, and make sure that one question drives the project. You may want to consider secondary questions as part of your project, but make sure they are directly related to the primary question, bolstering or clarifying the primary question, not distracting you from that primary question.
Keep your question directly and closely tied to the intervention.
Another related way of introducing unnecessary complexity into a project is to look at a result that’s too far downstream of your intervention. To understand this, consider the following example. You’ve developed a new orality-based format for conducting a revival, because the revivals in recent years have been boring and unproductive. In this case, the intervention is the new revival format. You want to know how effective the new format is. You might be tempted to think that the best measure of effectiveness is subsequent church membership, participation in prayer groups, and the like. But those outcomes are too far removed from the intervention, and too many other factors you can’t measure might affect them. Instead, ask your question about something that’s immediately downstream of the intervention. That might be a survey or poll about how much people felt moved by the end of the revival, or it might be how many people come back for day 2 of the series. So the research question might be: What is the effect of the new format on the number of participants who have a positive opinion about its quality at the end of the revival compared to prior years? The direct relationship between the intervention and the question will produce a credible answer.
Keep the question do-able.
Avoid the temptation to bite off more than you can chew by developing elaborate questions you can’t reasonably answer. Weigh your resources and be realistic about what you can finish (remember Jesus’ admonition in Luke 14:28 about the builder who couldn’t finish the tower he started). In the example of the revival we just considered, for instance, tracking church attendance the following Sunday may be impossible because the attendees might end up at any of a dozen churches in the vicinity. You need to think through whether the question you are considering can be reasonably answered using the resources you have.
Write it down in outline form.
Now that you’ve taken the trouble to formulate the problem, the intervention, and the question, take a few minutes to write them down clearly. You might assume that you (or, even worse, “we”) understand it all without obsessively writing it out, but that’s a big mistake. The process of writing it down forces you to zero in on the goal with precision, and forces you to discuss it with your colleagues to ensure that you’re all on the same page. Moreover, you can keep that written question in front of you every step of the way to ensure that you stay on target. It’s surprising, as the project moves forward, how often problems and new ideas evolve into tangents that threaten to take you off focus. Being able to go back to the research question for mid-course corrections becomes invaluable.
What’s the Outcome You Plan to Track?
So far we’ve considered two new terms: intervention and research question. Now let’s consider another: outcome. In research terms the outcome is the result that you plan to measure or tabulate that will answer the research question. This means that there must be a direct connection between the intervention and the outcome that answers the research question. It has to be clear that the intervention caused the outcome, so that collection of the outcome data will answer the research question.
Let’s go back to our revival example. Our project might look like this:
- Problem: Recent revivals have been poorly attended, and few people seem to experience meaningful revival through them.
- Intervention: A new orality-based format will be implemented for this year’s revival.
- Study question: The new revival format will produce positive responses regarding (1) immediate personal benefit of the program and (2) intention to deepen religious life in the short-term future, when compared to the previous year’s revival.
- Outcome: Interview responses immediately following the conclusion of the revival program among up to 50 volunteer individuals who attended this year’s and last year’s revival. Volunteers will be asked to indicate whether the current revival was better, the same, or worse than last year’s, and will also be asked to indicate whether they intend to deepen their religious life in the short-term future by choosing from among standardized options (church attendance, individual prayer, scripture study, etc.)
As this example makes clear, there is a very direct and natural progression among the elements of this study, in which each component is directly linked to the one before it. In particular, you’ll note that:
- The problem is important but focused.
- The intervention is an attempt to directly address the problem.
- The study question is focused on the direct results of the program, addressing factors that clearly speak to the problem of ineffective revivals.
- The outcomes being measured are focused, and include a study group of people who are able to compare the new revival format with the earlier format. By using volunteers who attended last year’s revival, a comparison of the new and old formats can be made.
Types of Outcomes to Track
We’re not going to attempt a comprehensive review of the kinds of outcomes that can be measured, but we will consider that they can be divided into two categories: qualitative and quantitative outcomes.
• Quantitative outcomes
Quantitative outcomes are the kind of outcomes that can be evaluated using numbers. Sometimes they are things that can be counted or measured: how many people attended a program, how much did knowledge scores improve on a test, etc. In the example of the revival, a quantitative outcome measure might be: How many people reported that this year’s program was (a) better, (b) equal, (c) worse compared to last year’s program with respect to personal religious or spiritual benefit? Since you can count how many respondents fall into each of those three categories, this is a quantitative outcome.
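A tally like this is simple enough to do by hand, but if the interview responses are typed into a list or spreadsheet export, a few lines of code can count them and compute percentages. The sketch below is a minimal illustration in Python, not a required tool; the response values shown are hypothetical.

```python
from collections import Counter

# Hypothetical responses, one per interview form: each respondent rated
# this year's revival "better", "equal", or "worse" than last year's.
responses = ["better", "better", "equal", "worse", "better", "equal"]

tally = Counter(responses)   # counts how many responses fall in each category
total = len(responses)

for category in ("better", "equal", "worse"):
    count = tally[category]
    print(f"{category}: {count} ({100 * count / total:.0f}%)")
```

Keeping the raw responses (rather than only the running totals) means the same list can later be broken down further, for example by age group or by village.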
• Qualitative outcomes
Qualitative outcomes come more naturally to us in our work as evangelists because they are in the nature of the work we do. Qualitative outcomes can be defined as outcomes that cannot be readily measured or counted, typically describing people’s opinions, knowledge, attitudes, or behaviors. Qualitative outcomes might look like this:
- At the last church general meeting, the parishioners expressed the greatest concern about . . .
- A focus group discussion among religion teachers identified two major problems in youth religious instruction . . .
- During a series of structured interviews, respondents generally agreed about . . . and disagreed most about . . .
This is probably a good place to point out that qualitative vs. quantitative outcomes is NOT the same as objective vs. subjective. It’s an important point, because conflating the two implies that qualitative information is merely subjective and therefore less reliable than objective information. The truth is that qualitative outcomes can – and should – be held to just as stringent standards of objectivity as quantitative outcomes. Qualitative information may not fit into a pie chart or bar graph, but it’s no less important. Indeed, since “teaching the nations” is a qualitative charge for all Christians, the best research will track both qualitative and quantitative outcomes.
It is a characteristic of orality research that the results are strongest when both quantitative and qualitative results are presented together. Quantitative results provide the hard facts, while qualitative results lend nuance and texture and personal insight to the data. Rather than just reporting the bare statistics, you can add subtlety and perspective with qualitative details, like this:
In the questionnaire survey, 78% of respondents reported being very concerned about youth disaffection. In the subsequent focus group in a very robust discussion of the issue, participants voiced a variety of concerns. Typical was Ms. P, who said: “I am heartbroken when I see the young people I knew since they were born drifting away from the faith. My own daughter hasn’t been to church in years, and she isn’t teaching my grandson any Christian faith at all.”
The first sentence, quoting the 78% statistic, is the quantitative result; but the subsequent information, with the quotation, provides qualitative outcomes that give a voice to the anonymous statistic. When these two outcomes are reported together, they build a more complete picture than either can alone.
How Do You Plan to Gather the Outcome Data?
The final step in planning your research project is to determine how you are going to gather the outcome data you identified in the last step. Obviously this will depend entirely on what kind of data you will be tracking. But you need to work out ahead of time exactly what that will look like and what the steps will be, so you can be prepared with the supplies, forms, and other materials you will need. Important considerations are:
- Who will be the subjects of your study? How will you find those individuals? In the case of our revival project, how will you identify 50 volunteers who were at last year’s revival, and how will you recruit them to be interviewed?
- Who will gather the data, and where will it happen? Will you have enough people? Again, using our revival example, where will you conduct the focused interviews, and how many individuals will each person interview?
- How will the information be documented and collected? Will there be paper forms? If so, you need to design the forms ahead of time with places to enter the data.
This is called the Methods part of your study, and it’s one of the last elements of the study design that you will plan ahead of time.
Let’s pick up where we left off in our revival study:
- Outcome: Interview responses immediately following the conclusion of the revival program among up to 50 volunteer individuals who attended this year’s and last year’s revival.
- Methods: Volunteers will be identified at the beginning of the revival among attendees as they enter. Immediately following the conclusion of the revival, volunteers will be interviewed in groups of 10 by five trained study coordinators, who will enter the results onto paper data collection forms (one form for each volunteer). Forms will be subsequently collected for aggregation and analysis.
So, to summarize, we have seen an organic flow of every step in our study design.
Problem ▶ Intervention ▶ Question ▶ Outcome ▶ Methods
The elements flow logically in a chain and give an integrated way to understand how research works.
You’ve now completed the basic elements of designing your study. It wasn’t that hard, only requiring a bit of clear step-by-step thinking. In the next section we’ll talk about some specifics for building a powerful research study that will lead you to draw meaningful conclusions.
Chapter Three: Asking Questions
Qualitative Data & Gathering Methods
As we discussed in the previous chapter, you need to clearly write out what you want to learn. You need to begin with the end in mind. What do you think is happening? How is it changing people’s attitudes or opinions? Have they learned anything? Often our gut instincts or one-off conversations with people give us the impression that something is going well or not. Sometimes we have read reports, seen older statistics, and wondered if what we learned in academic papers applies to the group we work with. We know a lot about the people we work with, but there is always more we can learn! We want to take what we intuitively know and back it up with data. Essentially, we want to test our hypothesis.
In this chapter we will discuss qualitative evaluations. Remember, qualitative data is information that is typically about qualities and difficult to capture in numerical form (although qualitative data can be quantified). Qualitative data typically consist of words and describe people’s opinions, knowledge, attitudes, or behaviors.
Qualitative Evaluation via Focus Groups
“Focus group” is a term used to describe a group of similar people who meet and share their opinions or experiences with an evaluator. This type of evaluation is helpful for getting the honest opinion of those involved in the work, project, or ministry. It provides an opportunity for people to answer open-ended questions and participate in a conversation. This conversation allows the evaluator to track themes over the course of multiple groups.
Setting Up a Focus Group Evaluation: The People
Once you have decided that a focus group evaluation is the best method for your work, you need to identify the participants and place them into groups. A sound focus group evaluation should have 3–5 groups of people with 5–8 people per group. A minimum of three groups is needed to notice trends and similar patterns across groups. Any fewer and it is hard to tell if it is a pattern or simply one group with an opinion. Keeping each focus group small, with a maximum of eight people, is intentional as well. Five to eight people per group keeps the group small and encourages everyone to participate. Too many in a group and you will not get to hear everyone’s opinion; you will just hear the unofficial leader’s opinion.
In non-western cultures, not being included can be offensive. Therefore, if it is easier to divide everyone who has participated in the ministry into many focus groups and interview everyone that way, you can. It will give you more information to sort through, won’t save time, and has no statistical advantage – but it might preserve relationships! Just keep in mind that you then must take into consideration the results from each group, not just a selection of them.
There are two ways to divide people into focus groups. The first is to list their names in a spreadsheet and randomly assign each a number which corresponds to a focus group. This will give you completely random groups of people. Many social scientists form groups in this way, and it is a good idea if the people participating are all similar (e.g., 30-year-old women with 5+ children). However, if you have a diverse group of people, you need to keep in mind social dynamics and power plays. It is advised to divide such people into their own like groups.
Do men usually speak up more than women? Then divide by gender. Do older women usually speak up more than younger women? Then divide by gender and age. Do the women in village #1 have more social standing than women in village #2? Then divide the different locations into different groups. Try to think through differences that could possibly influence others in the group. The goal is to hear from everyone, not just those who have the most social capital.
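If your participant list already lives in a spreadsheet, the random assignment described above can also be done with a short script. The sketch below is a minimal illustration in Python under stated assumptions: the names and the choice of three groups are hypothetical, and for a stratified design you would simply run the same assignment separately within each subgroup (e.g., men and women, or village #1 and village #2).

```python
import random

def assign_groups(names, num_groups):
    """Shuffle the names, then deal them out round-robin into num_groups groups."""
    names = list(names)          # copy so the caller's list is untouched
    random.shuffle(names)
    groups = [[] for _ in range(num_groups)]
    for i, name in enumerate(names):
        groups[i % num_groups].append(name)
    return groups

# Hypothetical participant list; in practice, read the names from your spreadsheet.
participants = ["Ama", "Kofi", "Esi", "Yaw", "Abena",
                "Kwame", "Akosua", "Kojo", "Adwoa"]

random.seed(42)  # fixed seed so the assignment can be reproduced and documented
for n, group in enumerate(assign_groups(participants, 3), start=1):
    print(f"Group {n}: {', '.join(group)}")
```

Recording the seed (or simply printing and saving the final group lists) lets you document exactly how the groups were formed, which is worth noting in your Methods section.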
Setting Up a Focus Group Evaluation: The Questions
The questions you ask in a focus group are critical, since you have limited time and attention to work with. It is vital to limit yourself to four to six good open-ended questions. Too many questions and you lose the attention span of the group. Your questions need to be open-ended to facilitate conversation. For example, you could ask, “Did you like learning this Bible passage in story form?” This is a yes-or-no question, and therefore doesn’t give participants an invitation to discuss what they are thinking in any detail. Such “closed” questions should be avoided in focus groups. Instead you could ask, “Tell me what you liked about learning the Bible in story form.” This gives room for the people in the group to offer their opinions.
Often evaluators have so many things they want to learn from focus groups that they try to squeeze more questions onto their list than they should. The goal of a focus group is to be focused on one topic. Over time you can ask more questions on different topics. For now, be clear on what you want to know and how that information will impact your work. What does a donor need to know? What do you need to know in order to improve your work? You must be focused on what you want. It is advisable to limit your questions to one or two categories or topics.
Once you have narrowed your questions down, you need to look at the order of the questions and how you will ask them. Order is important because it plays a role in how comfortable the participants feel within the group. We recommend that you start with the broadest question and work your way down. A broad opening question allows the participants to feel comfortable with the setting, one another, and the evaluator. If people are more comfortable, they are more willing to give their honest opinions; if you start with a hard or personal question, many will not be as honest as they could be. If language is a challenge, make sure you test your questions with a few individuals who will not be participating in the formal focus groups, to make sure translation and comprehension are in place. Finally, your questions must be the same for each group and asked in the same order, so write them down and use that same script for every group. You want to keep as many variables constant as possible so that you can compare answers across groups.
Focus Group Evaluation: The Actual Evaluation
Once you have your participants and the exact questions finalized, it is time to bring people together for discussion. However, leading a focus group is a specialized task, and there are a few things you should keep in mind. To facilitate a group, it is ideal to have two leaders: one who leads the conversation and one who is in charge of recording it in both written and audio form. The person who leads the conversation asks the pre-determined questions and makes sure everyone participates, encouraging conversation as needed. The person in charge of documentation should have an audio recording device to record the entire conversation so they can go back and listen later; they also take notes on the conversation as it occurs. The recorder shouldn’t jump into the conversation but rather sit back and record everything. Having the conversation recorded is critical, especially as you work with multiple groups, because it can be challenging to remember what was said in which group. This documentation will help you significantly when you analyze the conversation.
When you begin, start by explaining why you have called the participants together. Make it clear that you want their honest opinion and feedback. Communicate that what they say has no negative consequences and that your goal is to improve what you do. Then start with the first pre-determined question and work your way through the list. Don’t be afraid of silence or dead space; people often need time to think through the question, perhaps translate it into a different language, and formulate an answer.
After your group has gone through all of the questions and the participants have left, it is important for the leaders to discuss how it went. Often during this time, the evaluators will transcribe the recording. It is ideal to transcribe each group meeting before you move on to the next group. Repeat this process for all of your groups.
Congratulations! You have collected all of your qualitative data. Now the question is, what do you do with it?
Focus Group Evaluation: The Information
At this point you should have three to five sets of group transcripts. As you read or listen through each transcript, begin to notice themes. What are things that are repeated by everyone in the group? Typically in oral cultures, a focus group will come to a general consensus on a question. It might take a while to arrive at that consensus and so it is important to listen to the recording (or read the transcript) and see if the final answer really reflects the entire group. If not, record multiple answers for the group. Do this for each focus group, taking notes of the answers and themes that come through in the individual groups.
Now it is time to begin comparing answers between focus groups.
Did three out of four groups mention something while the others did not? Write short sentences to capture these themes, such as:
- Women thought it was hard to memorize the story
- Women thought that once they actually memorized the story, they could repeat it word for word.
- Older men thought memorizing was easy.
- Older men thought they could repeat it word for word
- Younger men thought it was hard to memorize.
- Younger men thought they were good at repeating the story word for word.
As you do this, you begin to see trends and can draw conclusions. This is where you get your data and learn a lot! With our example above, you could say that among those who participated in focus groups, two of the three demographic groups thought it was difficult to memorize the story, but once the story was memorized, every focus group found it easy to repeat word for word. Grouping like-minded results together lets you know whether something is a trend across the entire sampled population or just one group’s idea. Often through focus group discussions you get fantastic quotes that validate your work or validate your hunch about how to improve it. Those quotes are worth a lot when speaking with donors or team members to raise funds or to demonstrate the need to change the model of work.
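Tallying which groups mentioned each theme can be done by hand, but a short script keeps the counts honest. The theme labels and group names below are hypothetical stand-ins for the memorization example above:

```python
# Hypothetical notes from transcripts: which focus groups
# voiced each theme during the discussion
theme_mentions = {
    "hard to memorize": ["women", "younger men"],
    "easy to memorize": ["older men"],
    "word-for-word recall": ["women", "older men", "younger men"],
}
total_groups = 3

# Share of groups in which each theme appeared, so you can
# distinguish a cross-group trend from one group's opinion
shares = {theme: len(groups) / total_groups
          for theme, groups in theme_mentions.items()}
```

A theme voiced by most or all groups is a trend worth reporting; a theme from a single group may just be that group’s dynamic.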
Any data, whether quantitative or qualitative, can be manipulated to say what you want. As believers, it is important to have integrity and report what the data actually said. Therefore, as you sift through qualitative information from a focus group, it is critical to keep asking yourself whether your bias, hope, or opinion is influencing how you sort the data. Trust that what the individuals in the focus groups said needs to be heard to make your ministry more effective, even if it is hard to hear! Data are only as reliable as the person using them. As believers we have to answer to the highest authority for our integrity.
Working through a systematic way of asking groups of people questions can have a huge impact on how you are able to communicate your results to those of influence. It empowers individuals and shows that you care about their opinion while giving you information on how to improve (or find funding for) what you do.
Quantitative Evaluation via Questionnaires
At times, we need quantitative evidence or data to support a particular point. Conducting an evaluation with questionnaires is one of the easiest ways to obtain this information. In this section we will discuss how to develop a questionnaire and carry it out.
How to develop a questionnaire:
Putting together a questionnaire can feel overwhelming; however, these steps make it relatively easy.
- Identify indicators for which data is required
- Formulate questions to capture the data for the indicators
- Number the questions
- Record possible responses
- Code responses
- Translate and back-translate the questionnaire
- Test the questionnaire
- Finalize and print the questionnaire
For the purposes of this discussion we will assume that you are administering the questions verbally, since this is an orality-based project.
Identify indicators for which data is required
The purpose of the questionnaires is to collect information to be able to measure and track the indicators. An indicator is a data point which communicates where something stands.
An easy indicator to understand is the standard grade for an examination. An indicator could be “% of students who score an ’A’ on the mid-term examination”. If a student has a test with 100 questions, and they answer 90 – 100 correct, they will be given an “A”. If a student answers 80-89 correct out of 100, they would be given a “B”. To obtain the indicator information you simply calculate how many students received “A”s out of the total student population.
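That grade indicator reduces to a one-line calculation once the scores are in hand. The scores below are made up for illustration:

```python
# Hypothetical mid-term scores out of 100 for ten students
scores = [95, 88, 91, 72, 90, 85, 60, 99, 78, 93]

# Indicator: % of students who score an "A" (90-100 correct)
pct_a = 100 * sum(1 for s in scores if s >= 90) / len(scores)
```

Here five of the ten students scored 90 or above, so the indicator comes out to 50%.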
For a more relevant indicator, suppose you are working with a group of women on a storying project and want to know the percentage of women who have memorized your foundational biblical stories. An indicator could then be “% of women participants who can recite 3 foundational biblical stories.”
There is no limit to how indicators can be written and evaluated. The key is to establish which indicators will help validate your work and to define the components of each indicator (i.e., define what “foundational biblical stories” means). Then you need to track this information over time to see any improvement with your group. Once you have your indicators written and defined, you need to formulate questions to capture the information for each indicator.
Formulate questions to capture the data for the indicators
Each question in the questionnaire needs to be thought through carefully in order to capture the information needed for each indicator. For some indicators, multiple questions will be needed. Essentially, think through how you would calculate the indicator and what information you need from individuals to obtain it. For the indicator “% of women participants who can recite 3 foundational biblical stories” you could ask: “Do you remember any of the Bible stories you have heard from x group in the last two months?” If yes, then ask “What are the Bible stories you remember?” and then “Are you able to recite those stories for me?” Those three questions provide the information you need for this indicator. Alternatively, you could simply ask “Are you able to recite the Bible stories you remember for me?” and get your result. You need to think through how you want to calculate the information.
Number the questions
Each question must be numbered to allow for easy identification and data entry. Simply using one, two, three, etc. can work for your numbering system. This number will be then tracked through to your spreadsheet when you analyze the answers.
Record possible responses
As you write out your questions, you need to think through the possible answers to each question. You don’t want a surprise answer without a way to record it.
| # | Question | Possible responses |
|---|----------|--------------------|
| 1 | Do you remember any of the Bible stories you have heard from x group in the last two months? | Yes / No / Refused to answer |
| 2 | What are the Bible stories you remember? | Creation / Woman at the Well |
| 3 | Are you able to recite these stories for me? | Creation / Woman at the Well |
It is critical to think about how your data will be entered into a spreadsheet (such as Microsoft Excel) after it has all been collected. One way to ensure smooth data entry is to code each response on the questionnaire carefully at the design stage. Coding simplifies data entry: without coding, each answer has to be typed in full, while a code provides a set of standard numbered answers to be entered.
| # | Question | Coded responses |
|---|----------|-----------------|
| 1 | Do you remember any of the Bible stories you have heard from x group in the last two months? | 1 = Yes / 2 = No / 777 = Refused to answer* |
| 2 | What are the Bible stories you remember? | 1 = Creation / 2 = Woman at the Well |
| 3 | Are you able to recite these stories for me? | 1 = Creation / 2 = Woman at the Well |
*777 is the standard way to record a “no response” response.
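A code book like this maps each spoken answer to a number before data entry. A minimal sketch follows; the specific code numbers are illustrative, apart from the 777 “no response” convention mentioned above:

```python
# Hypothetical code book for question 1; 777 follows the
# standard "no response" convention described in the text
CODES_Q1 = {"Yes": 1, "No": 2, "Refused to answer": 777}

# Answers as recorded verbally, then converted to codes
# for entry into the spreadsheet
responses = ["Yes", "No", "Yes", "Refused to answer"]
coded = [CODES_Q1[r] for r in responses]
```

Entering `1`, `2`, or `777` is faster and less error-prone than typing each answer in full, and the codes sort and tabulate cleanly later.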
Translate and back-translate the questionnaire
It is common to develop a questionnaire in your native language and then translate it into the operational language. If English is your native language, you need to translate the questionnaire into the local language and then have it translated back again. Comparing the original version and the back-translated version allows any significant differences to be addressed in the translated version. This ensures that the meaning of the questions is maintained through the translation process.
Test the questionnaire
Once you feel you have a solid translation, test the questionnaire on a few individuals to find out whether it provides the desired information and to uncover any misunderstandings in the questions. Testing also gives you practice before you begin your evaluation.
Finalize and print the questionnaire
Once you are satisfied with your questionnaire, finalize it and print it! The questionnaire is ready to be used.
Carefully gathered data are an invaluable component of orality-based research, and we’ve covered two methods of obtaining them: focus groups (qualitative) and individual questionnaires (quantitative). Both provide organized ways of gathering data and ensure that you obtain a broad range of opinions from which consensus ideas or underlying themes can be harvested. In this respect, such research can be an extremely powerful method to achieve the goals we discussed at the outset, and it therefore forms an essential part of your methodological armamentarium.
Chapter 4: Getting the Results
Outcomes and Statistics
As already discussed, outcomes can be of two general types: qualitative and quantitative. Qualitative outcomes come naturally to orality-based workers, because qualitative expression is typical of oral (non-text) cultures. Since we discussed qualitative outcomes in the last chapter, we’ll spend this chapter talking about how to understand quantitative outcomes.
Descriptive Outcomes and Inferential Outcomes
Remember that we defined quantitative outcomes as those that can be counted and tabulated, and which can undergo statistical analysis. Those quantitative outcomes can be thought of in two ways: descriptive outcomes and inferential outcomes.
Let’s start with descriptive outcomes. They tabulate characteristics of a group and include things like number of individuals and proportions within the group. Examples:
- This church contains 243 members in 84 families.
- The average age of this church membership is 43 years.
- This high school has 412 children, 52% girls, 48% boys; the average GPA is 1.4.
- This village contains 43% Christians, 25% Muslims, 18% “ancestral” (traditional) believers, and the remainder hold other beliefs.
- In the survey of congregation members, 41% rated outreach as the most important new project the church should start, followed by youth ministry (34%), retiring parish debt (12%), and senior care (6%).

Note that descriptive statistics simply tell us facts about the group being considered. They don’t try to infer anything about relationships or causation, or attempt other deductions.
The statistics we just used as examples describe small groups, so you can count all the people in a church or village. But suppose the group is very large and you can’t count them all? What percentage of the people in Accra, Ghana (pop. 1.6 million) are Christian? To answer that question, you need to count a sample of the whole population. Sample is a technical term: it means that you gather data from a selected subset of the large population and then conclude that the rest of the group fits the same profile. So to find the percentage of Christians in Accra, you don’t ask all 1.6 million; you ask a sample of the people and infer that the proportion of Christians in the sample mirrors the proportion in the whole population.
The reason sample is a technical term is that there are two important technical considerations in dealing with samples. The first is that you need to ensure that your sample is representative of the whole population. In the famous “Dewey Defeats Truman” headline from the presidential election of 1948, the Chicago Tribune projected Dewey the winner based on a poll (another word for sample) of how people had voted. The problem was that the sample wasn’t representative of the voter population, because it relied on telephone surveys. Telephones weren’t common in 1948, so this introduced bias toward telephone owners (who tended to be wealthier) that invalidated the survey. A ministry-related version of this error is “preaching to the choir.” If you want to describe a characteristic of the church membership, you can’t just survey those who come to church on Sunday, since that leaves out those who attend less regularly but still are members. So a Sunday-morning satisfaction survey, for instance, can give you information about Sunday church-goer satisfaction, but don’t kid yourself that you have information about the whole membership; you’ve missed those who stayed home because they are dissatisfied. While methodologies to ensure a representative sample are more than we want to cover in this overview, you need to be aware of the problem and be creative in ensuring that the sample is typical of the whole group.
A second issue in dealing with samples is that the sample has to be big enough. If you want to find out the age distribution of a population, you have to survey enough people to get some from each age group. Determining the sample size needed to reach valid conclusions is more than we can cover in this primer, and you may need a statistician to help calculate how big your sample should be. For simpler outcomes (only two choices, for example, like male/female) small samples are fine; but for more complex outcomes (a larger number of categories, like Christian denominations) you’ll need a bigger sample.
Basic statistical measures
Now that we’ve discussed descriptive outcomes, we should give attention to some statistical measures that come in handy. These are statistical measures that you’ll use in both descriptive outcomes and inferential outcomes (which we’ll discuss next), so let’s check some basic definitions.
- Mean (or average) is a number that characterizes the “centrality” of a set of values. To calculate the mean of a list of numbers, add all the numbers and divide the sum by the count of those numbers. Example: There are seven people in the room, ages 18, 29, 34, 43, 52, 71, and 87. The mean (or average) age is the sum of those seven numbers divided by 7:
- 18 + 29 + 34 + 43 + 52 + 71 + 87 = 334
- 334 ÷ 7 ≈ 47.7
- Therefore, the mean age of people in the room is 47.7.
- Range is the distance between the lowest and highest numbers in a list. It’s often reported after the mean, in parentheses. In the case above, we could report: the mean age is 47.7 (range 18-87).
- Standard deviation (abbreviated std. dev.) is a measure of how closely a set of numbers clusters around the mean. A small std. dev. means most of the numbers are very close to the mean, while a big std. dev. means the numbers are spread out and many are very far from the mean. In the example of the ages in the room, the std. dev. will be large since the ages are widely spread apart (only 2 of the 7 people have ages near the mean of 47.7). But if the room were in a senior center, the std. dev. would be small because most of the people will have ages pretty close to the mean.
- Median is a number that represents the mid-point of a list of numbers arranged in numerical order. Example: In the list of seven ages above, the middle age is 43; three of the ages are less than 43 and three are above 43. Therefore 43 is the median age.
- Prevalence is the proportion of a group that has a specific characteristic. It is sometimes also called the frequency; when multiple categories are considered, it is also called the proportion. Prevalence, frequency, and proportion all describe a group at a single moment in time.
- The prevalence of atheism in this community is 12%.
- The proportions of age groups in this community are: 18% children, 23% teenagers, 48% adults, and 11% seniors.
- Incidence is the proportion of a group which develops a characteristic or transitions into the category over a specific interval of time. Note the difference between prevalence and incidence: prevalence is a statistic at a single moment of time, incidence is a statistic over an interval of time.
- The prevalence of atheism in this community is 12% (now).
- Each year approximately 6% of teens in this community lose faith and become atheists. The incidence of conversion to atheism among teens is 6% per year.
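These definitions map directly onto Python’s standard `statistics` module. A quick check, using the seven ages from the example above:

```python
import statistics

ages = [18, 29, 34, 43, 52, 71, 87]

mean = statistics.mean(ages)        # 334 / 7, about 47.7
median = statistics.median(ages)    # middle value in sorted order: 43
spread = statistics.stdev(ages)     # sample standard deviation; large
                                    # here because the ages are spread out
age_range = (min(ages), max(ages))  # (18, 87)
```

As the text notes, the standard deviation here is large (over 20 years) because only a couple of the ages sit near the mean; a senior-center roomful of 70-somethings would give a much smaller value.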
Inferential outcomes are among the most powerful outcomes in research, because they allow us to make inferences (draw conclusions) regarding association, correlation, and causality. Consider the following statements:
- The incidence of atheism among teens is 6% per year.
- The incidence of atheism among teens who attended a church youth program regularly is 3% per year, while the incidence among those who did not is 8%.
You may leap to the conclusion that youth groups prevent atheism among teens, but be careful. Let’s go through the two steps we take in reaching that conclusion.
The first step is to infer a correlation between two factors, in this case youth programs and atheism incidence. Correlation is a statistical term that means that two factors are associated or linked with each other. It’s not just random chance or an accident that the statistics about youth programs and atheism parallel one another; the two factors are linked together. Correlation refers to this association in the form of a mathematical computation that demonstrates that the apparent association is (almost certainly) a real association, not just random chance. Those mathematical analyses can be very complex, though in a moment we’ll come back to a couple of very common types of analysis which are both very powerful and easy to do. The point is that when we say two factors are correlated, we are saying something very specific: that statistically it’s highly probable that the two factors are truly associated (or linked) and it’s highly improbable that they appear associated because of random chance. (For math geeks: the statistical computation is to determine that there is a >95% chance that the two factors appear correlated because they truly are associated, and a <5% chance that the apparent correlation is just a random accident of the data.)
But we still need to examine the second step in the inference we made above. Our first inference was that youth groups and atheism incidence are correlated. The second (and totally separate) inference is that the correlation between them is causal: attendance at youth groups caused a lower incidence of atheism. This may seem to be a very straightforward and undeniable conclusion once you see that they’re correlated, but in fact it’s not so simple. Once we see a mathematical correlation between two factors, A and B, there are four possible reasons for this correlation:
- A caused B.
- B caused A.
- Both A and B were caused by another (unknown) factor, X.
- A and B are not truly correlated, it just looks that way by random accident, which is always a possibility.
So in the case of the correlation between youth group attendance (A) and atheism incidence (B), it’s possible that A caused B. But it’s also possible that B caused A, assuming that teens who are already atheists are not likely to attend a church youth group. And it’s also possible that both A and B were caused by an unexamined X. Perhaps when kids grow up in a religiously faithful home (call that X), they are both more likely to go to youth group and less likely to become atheists. In that case X (religiously faithful home upbringing) is the primary cause of both A (youth group attendance) and B (avoiding atheism). That explains why A and B are associated without one of them causing the other.
The point is to emphasize that correlation and causality are two completely separate conclusions to draw from inferential outcomes, and just because you’ve shown correlation doesn’t automatically mean causality. But this example has also demonstrated the primary characteristic of inferential outcomes: by examining the data we draw correlative and causal inferences.
Basic Methods to Analyze Inferential Outcomes
Have a look again at these examples of outcomes that were mentioned earlier in this chapter:
- The average age of this church membership is 43 years.
- This high school has 412 children, 52% girls, 48% boys; the average GPA is 1.4.
- This village contains 43% Christians, 25% Muslims, 18% “ancestral” (traditional) believers, and the remainder other beliefs.
- The proportions of age groups in this community are: 18% children, 23% teenagers, 48% adults, and 11% seniors.
You may have noticed that there are two different kinds of measures reported:
- Some measures are on a continuous scale which can vary across a range. The average age in the church survey is 43; ages of the members can be anything from 0 to 100 (or so), so the average can also be any of a continuous range of numbers. The same concept applies to GPA; the average is 1.4, and the actual values can span the range from 0 to 4. When outcomes can be any number across a range or span of values on a continuous scale, they are called continuous outcomes.
- Some measures are in discrete categories. By discrete categories we mean carefully defined non-overlapping groups into which the subjects fit. Girls vs. boys is one example; so is Christians vs. Muslims vs. ancestral believers. When outcomes are in discretely defined categories like this, they are called categorical outcomes.
- Categorical outcomes can be further defined. When there are only two possibilities (e.g. boy vs. girl) the outcome is sometimes referred to as dichotomous.
- When the categories are in a specific order (e.g. children, teenagers, adults, and seniors) they are called ordinal categorical outcomes (youngest to oldest). When the categories are not in any specific order (e.g. Christians, Muslims, ancestral) they are called non-ordinal categorical outcomes.
Why is this important? Because the statistical methods you will use to analyze the outcomes depend on what kind of outcomes you are measuring. Each type of outcome has its own kinds of analysis methods. While there are several kinds of statistical tools (some of which are quite complex), we will discuss two very powerful methods that are easily within the reach of even amateurs.
Comparing dichotomous outcomes
Consider the following example:
You have interviewed 84 adults in a village, and have discovered:
- 51 have attended a revival in the last year, of whom 37 (73%) are now regular churchgoers and 14 are not.
- 33 have not attended a revival in the last year, of whom 15 (45%) are now regular churchgoers and 18 are not.
There seems to be a relationship here, because 73% of those who attended a revival are churchgoers, while only 45% of those who did not attend are churchgoers. But with only 84 people surveyed, could a gap like that arise by chance? Can you validly claim that the two factors are related? How will you analyze this to know if there is a mathematical correlation between attending a revival and churchgoing?
One answer is a test called χ-square (also chi-square, because the Greek letter χ is chi). You can visualize the data graphically by creating a 2 x 2 table, in which one side is one outcome (revival: yes or no) and the other is the other outcome (churchgoing: yes or no). It’s conventional to put the outcome you think is the cause on the left, as the row headers, and the outcome you think is the effect across the top, as column headers.
| | churchgoer | not a churchgoer |
|---|---|---|
| attended a revival | 37 | 14 |
| did not attend a revival | 15 | 18 |
Now go to any on-line chi-square calculator (just do a search using your favorite search engine) and you will see a blank table like this where you can fill in the column headers, the row headers, and the numbers. Some sites ask for the “significance level”; the standard is 0.05 (either check the appropriate box or enter 0.05 in the correct field). Many spreadsheet programs, such as Microsoft Excel, can do this computation as well, but the formatting to make it work can be tricky, so an on-line calculator is easier.
When you hit “calculate” the calculator will show you two numbers. One is the χ-square value, and the other is the “p-value” (sometimes called the “significance” or just “p”). The p-value is the important number for our purposes; if the p-value is less than or equal to 0.05 then you have a confirmed correlation between the two factors (remember our definition of correlation above: a statistical measure which confirms that two factors are related or associated with each other). In this example, the p-value is 0.012. So our data have established a correlation between attending a revival and being a churchgoer.
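If you prefer to check the arithmetic yourself, a 2 x 2 chi-square (one degree of freedom) needs nothing beyond Python’s standard library. This sketch reproduces the p-value of about 0.012 for the revival table above; note it does not apply the Yates continuity correction some calculators use, so results can differ slightly:

```python
from math import erfc, sqrt

def chi_square_2x2(a, b, c, d):
    """Chi-square test of independence for the 2x2 table
    [[a, b], [c, d]]. Returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (observed - expected) ** 2 / expected
    # Survival function of the chi-square distribution with df = 1
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Revival table: 37/14 attended, 15/18 did not
chi2, p = chi_square_2x2(37, 14, 15, 18)
```

Since p is below 0.05, the correlation between revival attendance and churchgoing is confirmed, matching the on-line calculator result in the text.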
It’s pretty straightforward to calculate whether there is a significant correlation when you have two dichotomous outcomes by using this simple statistical method.
Comparing continuous outcomes
The χ-square method is useful for analyzing dichotomous outcomes, but what if your outcomes are continuous (numbers over a range)?
Consider this example:
You’re doing a study to determine whether Bible understanding correlates with participation in a youth group among high school students. You give 96 high school students a 10-question test (score range 0-10) and find that:
- Among 46 students who participate in a youth group, the average Bible test score was 7.3 (std. dev. = 2).
- Among the remaining 50 students who do not participate in a youth group, the average Bible test score was 7.1 (std. dev. = 2).
The second group had a lower test score, but not by much. Is the difference between the two scores meaningful?
To answer this question, you’ll use the “student’s t-test.” It’s a bit more difficult to calculate than the χ-square, but again there are on-line web sites that will do the calculation for you (as will spreadsheet programs like Excel, if you have the patience to set up the format correctly). On some web sites, you will enter all the values into two boxes or columns: one box or column will be the 46 scores of the youth-group students, the other box or column will be the 50 scores of the non-youth-group students. Other sites will ask for the number of cases in each group (in this example 46 and 50), the mean of each group (7.3 and 7.1), and the std. dev. for each group (use a spreadsheet to calculate this).
As was the case with the χ-square, you choose 0.05 as the level of significance and then hit “calculate.” Depending on the site there may be several numbers reported, but just as with χ-square the number you want is the “p-value” (or just “p”). If the p-value is 0.05 or less, you have shown a statistical correlation; otherwise there is no correlation. In this (fictional) study of the Bible students and youth groups, the p-value is 0.62, so we do not have a correlation. This means that you cannot validly claim the study shows less Bible knowledge for one group compared to the other, even though one score was lower. You’re still welcome to think that if you like, but your study has not proven it.
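For the curious, the t statistic itself is simple arithmetic. The sketch below uses a normal approximation to the p-value, since Python’s standard library has no t distribution; for samples this size (roughly 30 or more per group) the approximation tracks the exact t-test closely:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_t(n1, mean1, sd1, n2, mean2, sd2):
    """Two-sample (Welch) t statistic, with a two-sided p-value
    from a normal approximation. A sketch for large-ish samples;
    an exact t-test gives slightly different p for small groups."""
    standard_error = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    t = (mean1 - mean2) / standard_error
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Youth-group students vs. the rest: n, mean score, std. dev.
t, p = two_sample_t(46, 7.3, 2, 50, 7.1, 2)
```

The p-value comes out near 0.62, well above 0.05, matching the on-line calculator result: no demonstrated correlation between youth-group participation and Bible test scores.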
There are dozens of ways to analyze quantitative outcomes, many of which require advanced calculations. But we have seen here that the beginner researcher can do many powerful analyses of data using simple statistical methods. With these tools, many important types of research can be successfully completed.
Chapter 5: Tell Your Story
Reporting Research Results
Let’s go back to Chapter 1 and reconsider the reasons we decided that research was important in the first place:
- Identify best practices
- Appropriately allocate resources
- Achieve organizational goals internally
- Interact productively with external organizations
- Share and innovate
If you’ve completed a project with a research component that achieved any of these goals, then it’s likely that you’re excited about getting it all down on paper so that you can remember and learn from the experience and so that you can share it with others. Yes, research is exciting. It’s exciting to objectively see what you’re accomplishing, it’s exciting to know how far you’ve come, it’s exciting to see from your research where the next step should be, and it’s exciting to tell others about what you’ve done so that they can also learn and grow from it. It’s fundamental to Christianity that everything of real value only increases when you give it away: faith, love, peace, friendship, and joy all grow when you share them. What a nice coincidence – if it is a coincidence – that research fits that same pattern.
There are a number of ways to report your results, the most popular being oral presentations, written reports, and papers submitted to a journal for publication. But the outline for most reports will follow a typical format.
Baiting the Hook: The Problem
Your report tells a story, so you want to move as quickly as possible into the plot for your story. Remember from Chapter 2 that “What’s the problem?” was the first question in starting your project. It’s also the first element of your report. The beginning of your report is when you convince your hearer or reader that what you’re doing is important. Really important. Teaching the nations important. Kingdom of God important.
This is often called the Background section of your report. Describe in concise but compelling ways exactly what problem you’re tackling.
- Who: Who are the people involved? How many? What are they like?
- What: What challenge or obstacle are they facing? How big is the problem? What will happen if the problem is not corrected?
- When: What is the timeline for the problem, both past and future?
- Where: Not only what is the geographic location, but how does this location fit into the regional or global picture?
- Why: Why is all this important? What is at stake?
In bringing your hearer or reader into the story, use both qualitative and quantitative data to make the impact as clear as possible. This section doesn’t have to draw only on your own experience; use other sources of information as needed to make the point. Some statements in your background might look like this:
- A recent Barna Group poll among teens in the US showed that . . .
- Data regarding suicide rates obtained from the Bureau of Vital Statistics shows . . .
- Researchers at the Pontifical Catholic University of Ecuador working among indigenous peoples published a report which documented . . .
- Missionaries from our church who have worked in rural villages of southern India have repeatedly encountered . . .
- Anecdotal reports among workers in Native American communities have indicated that . . .
- At a recent ION meeting, participants voiced concerns about . . .
You will note that the first three examples are quantitative, while the other three are qualitative. Using both together when they reinforce each other is a powerful way to dramatize the problem. Illustrations and examples are also vital to making the problem clear. Remember your orality roots: charts and graphs lend clarity to quantitative data, while photos and quoted material provide depth for qualitative data. Conclude your Background discussion with a succinct restatement of the problem along with a clear (not overdramatized) assessment of the options: either we solve this problem, or else . . .
The Cavalry Comes Over the Hill: Your Intervention
Having drawn your hearer or reader into the problem, you’re now ready to unveil your solution. In fact, if you’ve done your job in describing the problem, your audience should be begging you to show them the way out. State in a few short but clear sentences your intervention which proposes to address the problem and why you think it should work. This section should be short and sweet; if the Background has described the problem clearly, the reason for your intervention should be obvious without much verbiage. This is also the time to clearly, in one sentence, state your research question. Since your research question was formulated back when you started the project, you should be able to use it unchanged at this point in your report.
What You Did: Methods
You are now ready to describe what you did. Be straightforward and factual in this section, usually titled Methods. There’s no need for dramatic illustrations here; you want your reader or hearer to understand your project. How much detail you provide will depend on whether you are preparing an oral or written report. For oral reports, be brief but clear. You want your hearers to understand what you did without getting bogged down in overmuch detail. How much detail is that? Enough so that they will believe that the results you are about to provide are credible. If you can get them to understand what you did, there’s every chance they will be convinced when you tell them your findings.
For a written report, more detail about the methods is usually needed. Especially if the report is to be submitted to a peer-reviewed journal for publication, you’ll need to include enough information so that reviewers and readers can understand your project thoroughly. In fact, the standard is that there will be sufficient information that somebody else could repeat the study based on your report. For a less formal report – say, to a church board or oversight committee – you can give an overview of the methods at this point and then include the smaller details in an appendix.
The Climax of the Story: The Results
By this point the readers or listeners should be on the edges of their seats. Now it’s time to give them the answers they’ve been waiting for. Logically and clearly provide the Results of your study. Stick to just the facts. How you organize it will vary from study to study and person to person, but be sure that someone who doesn’t know anything except what you’ve already presented can follow the results. A few principles that usually work:
- Give the broad overview results first, then drill down to more specific results next.
- If the problem follows a logical sequence (in time, by age group, or by some other factor) follow that same sequence in ordering results. If you’ve described a logical sequence in the Background part of your report, follow that same sequence in the Results.
- Intersperse quantitative and qualitative information when they support each other; but which comes first is up to you. For example:
- 72% of respondents said that the new revival format was either “better” or “much better” than the old format. Indeed, two senior church members said they’d been attending revivals for 28 years together, and had never seen better.
- Two senior church members particularly benefitted, saying they’d never seen a better revival in 28 years together. This view was validated by the survey, in which 72% said the new format was “better” or “much better” than the old format.
- Present the basic statistics first, then calculations regarding significance and correlation.
- 51% of regular churchgoers in the community had previously attended a revival, compared to 45% of non-churchgoers. A χ-square analysis showed this was statistically significant, with a p-value of 0.012.
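To illustrate where a number like that last p-value comes from, here is a minimal Python sketch of a χ-square test on a 2×2 table. The actual counts are hypothetical – the example above reports only the percentages – and the function name `chi_square_2x2` is ours. With one degree of freedom the p-value can be derived from the normal distribution, so the standard library suffices.

```python
from statistics import NormalDist

def chi_square_2x2(a, b, c, d):
    """Chi-square test of independence for a 2x2 table [[a, b], [c, d]].

    With 1 degree of freedom, the chi-square p-value equals
    2 * (1 - Phi(sqrt(chi2))), where Phi is the standard normal CDF,
    so no statistics package is needed.
    """
    n = a + b + c + d
    # Shortcut formula for the 2x2 chi-square statistic
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = 2 * (1 - NormalDist().cdf(chi2 ** 0.5))
    return chi2, p

# Hypothetical counts consistent with the percentages above:
# 446 of 875 churchgoers (51%) had attended a revival,
# versus 394 of 875 non-churchgoers (45%)
chi2, p = chi_square_2x2(446, 875 - 446, 394, 875 - 394)
print(chi2, p)  # p is about 0.013, below the 0.05 cutoff
```

Note that the same percentage gap (51% versus 45%) would not be significant with much smaller groups; the p-value depends on the counts, not just the percentages, which is why a report should give sample sizes alongside percentages.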
What It Means: Interpretation and Discussion
The Results section, interestingly, is not the most telling part of your story. The Discussion section is where you interpret those results, put them in the context of the Problem, self-critique your project, and speculate about future directions. Organize the Discussion section along these lines:
- First, give your direct interpretation of the Results, specifically with respect to inferences. For example: “We found that regular churchgoers were significantly more likely to have attended a revival than non-churchgoers. We believe that revivals are an important element of subsequent church attendance in this culture.” Note that the first sentence is the statement of correlation, and the second is your inference about causation (review Ch. 4 if you’ve forgotten what these terms mean and how they are different).
- Next, apply your results to the Problem you described at the beginning of your report. How do the results help or solve the problem? Or not? If possible, list specific elements of the problem and then describe how your results address those components (or don’t). Close the loop by returning back to where you started.
- Now self-criticize your study. What are possible flaws or errors? What might you have overlooked? What would you have done differently if you were to do it over again?
- Finally, speculate about where to go from here. If your solution is only a partial fix, what else is needed to extend your work? How might the fix be applied to the big picture? What further research might be helpful at this point?
In the Discussion you bring the story you’ve been telling to a solid close.
Wrapping it all up
You may have noticed that in this Primer we have been following much the same pattern that’s been advocated in conducting research. First we started with the problem: Why should anybody care about research? What can it contribute to the world of orality and evangelization? Second, we described our proposed intervention: familiarize those dedicated to the great project with what research is all about and how it might be done, so they can more effectively “teach the nations.” Then we covered methods. We talked next about analyzing results and making deductions from the findings.
So here we’ll circle back to the problem to close where we began. How does research address the problem we started with?
We live in the Age of Reason, the Age of Science. And we must realize that Reason and Science are not the enemies of Religion and Faith; on the contrary, they are complementary. Reason and Faith both converge on truth; and if truth is true, then Reason and Faith cannot contradict each other. Ever. They may converge on truth from different directions, but since truth cannot contradict itself it follows that Reason and Faith cannot contradict each other.
And what is the truth that they converge upon? Wrong question. Truth is not a what; Truth is a Who. Truth is the One Who said, “I am the Truth” (Jn 14:6). He has charged us to bring truth – in other words, to bring Him – to all nations. If we are to engage modern post-Enlightenment skeptics we need to use all the tools at our disposal to be as effective as possible. There are, after all, immortal souls at stake.
Research is fun, research is exciting. It’s intellectually stimulating, it’s energizing. It’s in fact revolutionary. Research changes everything it touches, makes it better and makes it stronger.
All of which is to say that research is part of the Kingdom of Heaven. And that’s, perhaps, the best reason of all to do it.