Interview Approach
This study employed techniques from cognitive task analysis (CTA), a field that specializes in understanding how “cognition makes it possible for humans to get things done and then turning that understanding into aids–low or high tech–for helping people get things done better” \cite[2]{crandall_working_2006}. Cognitive task analysis attempts to capture experts’ thoughts, what they pay attention to, their decision-making strategies, their goals, and what they know about how a process works. Its three components are knowledge elicitation, data analysis, and knowledge representation \cite[7]{crandall_working_2006}.
Structured interviews are a common method of knowledge elicitation in cognitive task analysis. They allow a diverse range of issues and skills to emerge, but the results are best treated as exploratory \cite[13]{crandall_working_2006}. An exploratory approach was appropriate because this project is an initial systematic look at textbook development across regions and languages. The interview structure was a modified version of the Critical Decision Method, an elicitation technique that organizes the interview around case-specific retrospection from the participant’s own experience \cite[257]{hoffman_use_1998}.
In the Critical Decision Method, the interviewer guides participants through a series of steps, starting with incident selection: the interviewer describes a situation and asks for an example in which the participant’s skills were challenged. Next, the interviewer asks the participant to tell the story of that incident, with the participant structuring the account rather than the researcher. This is followed by clarification through incident retelling, timeline verification, and identification of decision points, which allows the participant to add details and correct misunderstandings. Once clarity is achieved, the interviewer deepens the interview with probing questions about the participant’s decisions, goals, and actions. The interviewer may also ask “What-if?” questions to elicit reflections about what might have happened had the participant made different decisions.
This method has several strengths. The first is that experts love to tell stories and may even expect younger practitioners to learn from them \cite[269]{hoffman_use_1998}, so my position as a younger member of the field of international education establishes me as a natural listener, someone seeking to understand their experiences. The second is that structured interviews have been found to be somewhat more efficient than unstructured interviews, and their probe questions yield significantly more information about experts’ cognitive activities and the cues they pay attention to \cite[265, 269]{hoffman_use_1998}. The third is that allowing participants to structure the interview around a specific event in their own lives minimizes the researcher’s biases \cite[267]{hoffman_use_1998}.
This study used a modified form of the Critical Decision Method. The first two steps, incident selection and incident recall, proved very helpful in framing the discussion. Judging from the variety of conversations they elicited, allowing participants to direct the topic appears to have brought up a wider range of issues than more targeted questions would have. The next step, incident retelling, was omitted because the process of making materials is so involved that a full retelling would have taken up a significant proportion of the interview time. Instead, the researcher asked questions about any unclear points and proceeded to the progressive deepening stage, with probing questions that followed the topical threads that arose during the narration. Each interview concluded with the researcher asking whether the participant had any additional information to share. This helped minimize researcher bias by giving participants the freedom to raise other topics or additional details that had occurred to them during the course of the interview. Overall, most of the interviews differed considerably from one another. The benefit of this was the chance to explore issues that might not have arisen with a more rigid structure; the cost was that it limited the comparability of some of the data. This was addressed by e-mailing participants follow-up questions to ensure that data existed for certain key points and to clarify topics or details that might have been misunderstood.
Because of the geographic dispersion of potential participants, most interviews were conducted using video technology such as Skype or Google Hangouts to avoid large travel costs. Although the use of video interviews for academic purposes is relatively new, the body of research around it is growing \cite[604]{deakin_skype_2014}. In one study, several participants actually chose to be interviewed over Skype, indicating that its use was normal to them \cite[607]{deakin_skype_2014}. Video interviews are recommended in cases where participants are too far apart to allow for face-to-face interviews \cite[2]{seitz_pixilated_2015}, and they work well for projects on less personal topics, such as study abroad experiences \cite[5]{seitz_pixilated_2015}. Because this study involves mostly professional decisions rather than personal life experiences, video interviews were a good methodological choice.
Researchers note some weaknesses in the video interview medium. The most serious is that the limited contact inherent in video interviews can make it harder to establish rapport. Seitz suggests that slowing down and clarifying talk, repeating answers and questions, paying close attention to facial expressions, and e-mailing before the interview to establish rapport can all contribute to more successful interviews \cite[5]{seitz_pixilated_2015}. Deakin and Wakefield did find that a lack of rapport made Skype participants somewhat more likely to be absent at their interview time, and they suggest that this effect can be lessened by e-mailing participants several times before the interview \cite{deakin_skype_2014}. Following this advice, the researcher e-mailed back and forth with the participants beforehand, ensuring they knew what the research was about, why it mattered, and what they needed to do. Many of the participants already had Skype accounts, indicating that they were familiar with the platform.
In practice, most of the interviews were audio-only because using video degraded the sound to the point of unintelligibility. In two cases, bandwidth and accessibility issues made even audio impossible. Deakin and Wakefield found that when video and audio were both non-functional, they were still able to carry on with an interview by using Skype’s text function \cite[611]{deakin_skype_2014}. Writing back and forth did work as an alternative to audio, although time lags between messages made the process more difficult, and questions and answers did not arrive in sequential order. The data gathered was still helpful, but this method is not recommended unless absolutely necessary.
All of the interviews were recorded, which allowed the researcher to focus on the participant rather than worrying about capturing all of the information in notes. Brief notes were used instead, mainly to formulate and remember follow-up questions.
Three recording methods were used. In two cases, it was possible to interview education consultants in person, and those interviews were recorded with a microphone. In the cases where audio communication was not possible, the messages typed back and forth served as the record. For the remaining virtual conversations, the audio was captured using Call Recorder for Skype, an application that records Skype conversations, capturing both the researcher’s and the participants’ voices. This approach avoided setting up complicated hardware that might slow down the process or distract the participant. Although the audio in some cases contained segments that were difficult to make out, the problem was even worse in one of the in-person recordings; sound quality issues had more to do with the participants’ locations than with the recording methods used.
This approach does have some limitations. The interview format is what allows a large number of projects to be compared without surpassing financial and logistical limits, but it also restricts the type of data that can be collected. Although participants were in some cases drawn from the same project to allow for multiple perspectives and data triangulation, there was no way to verify how closely the participants’ descriptions corresponded to actual events. In addition, participants were located through my professional network, which is biased toward grassroots, non-governmental, language-based work, so it is possible that the responses obtained were not representative of the field as a whole. Using my professional network meant that education consultants with a more local focus and less connection to international networks were less likely to participate. Similarly, all interviews were conducted in English, which might have excluded potential participants and may have made it more difficult for those who did participate to communicate their experiences. Another weakness is that in some cases participants chose to narrate events that occurred long ago, which might have led to an overrepresentation of unusual or emotional occurrences and a neglect of the more mundane aspects of the project.
Analytical Approach
After the interviews, each recording was transcribed word for word and imported into NVivo, a qualitative research software program. Repeated listening to the interviews during the transcription process revealed a number of commonalities. Most of them were stages of the materials creation process resembling the process described in the literature review. Within those broad categories, other categories emerged. For example, in the "creation" stage, when teams are actively drafting materials, it became clear that pictures were a key part of the content, so the interviews were coded for information about pictures. Once each project had been coded for the various stages, each stage was examined for themes, commonalities, and differences. This allowed the research to be organized, in general, as a story woven out of 22 threads of experience, showing the different possible contexts and decisions involved in the process. In addition to this description of the procedure, technology emerged as a key element in the interviews, and a chapter analyzing its use is incorporated into the findings.
Finding Participants
This study used a purposive snowball sampling approach to find twenty-two participants with experience in creating teaching and learning materials in non-dominant languages. The population that this study addresses is small, dispersed, and very busy, so snowball sampling was the best choice for finding participants. Snowball sampling entails beginning a research project with a few selected participants and then asking those participants to recommend other potential participants. The primary benefit of this approach is that it makes it possible to assemble a sufficient sample of hard-to-find people. Another is that it increases the likelihood that potential participants will agree to take part in the study because they share a connection with the person who referred them. In this case, it also made it easier to find participants who had worked together, which provided different perspectives on the same project. That was simpler than expected because this community is small and well networked, a fact that became clear when participants referred to some of the same people in the course of their interviews.
One weakness of snowball sampling is that it biases the sample; in this case, there was no randomness at all in how the participants were contacted. To counter this issue, purposive sampling was employed to ensure that the participants and projects covered a range of organization types, regions, funding models, language families, and scripts. Through examining these variations, it was possible to detect overarching themes, groups of projects with common traits, and unusual aspects of particular projects.
The Sample
The sample consisted of 22 people who had worked on 19 projects. Some of these projects involved multiple language communities, so the total number of languages represented was 61. An analysis of these projects and languages demonstrates that the study covers sufficient diversity on a variety of factors, including characteristics of the languages, materials, projects, time frames, and funding. A description of these factors follows, both to demonstrate the wide range of the data and to set the scene for the analysis.
Geography
Participants described projects from 13 different countries and six world regions: East Asia, South Asia, Southeast Asia, East Africa, West Africa, and North America. Unfortunately, none of the participants chose to describe projects in Latin America, but enough regions are included to raise key issues and provide a clear outline of the processes involved in creating materials.
Language Characteristics
Language Status
The vast majority of the languages discussed in this study have strong language vitality, meaning that they are used by all generations and are not in danger of dying off. This is an important point: the researcher often encounters people who assume that non-dominant languages are endangered and that a major reason for making educational materials is to preserve them. Although creating materials can further strengthen a language, the main purpose of the materials creation projects in this study was to provide more effective education for the children who speak those languages.
Although all but five of the languages are used by all generations, there was considerable variation in how the communities used them. A total of 41 languages had little or no literature prior to the study. Eleven of the languages are used in education with institutional support but are not used more widely in their regions. Ten of the languages are used for communication at a broader level, with or without official status; of these, two are official national languages. This topic will be discussed in more detail in the section on language development, but the major point here is that most of the language communities in this project have only fairly recently begun to write their languages.