StoryEngine is influenced by narrative research methods (Global Giving’s Storytelling Project, CognitiveEdge’s SenseMaker, and TASCHA’s evidence narratives) and participatory principles. It draws on grounded theory and developmental evaluation. Our aim: to better understand what is different now (impacts), how change is happening, and how we might adapt our strategies and practices.
This is a snapshot of the StoryEngine methodology. Following the developmental evaluation approach, our design will evolve as we learn:
“Methods can be emergent and flexible; designs can be dynamic. Contrary to the usual practice in evaluation of fixed designs that are implemented as planned, developmental evaluation designs can change as an innovation unfolds and changes.”
— Michael Quinn Patton, Kate McKegg, and Nan Wehipeihana (editors), Developmental Evaluation Exemplars: Principles in Practice, Guilford Publications, 2016, p.12.
We use purposeful sampling, focusing on Mozilla Foundation partners, fellows, volunteers, and people and organizations in the broader Mozilla ecosystem. Mozilla Foundation staff can submit a tip, individuals can submit their own stories, and we also do some snowball sampling. We actively seek out respondents from different programs, initiatives, demographic groups, and geographies.
StoryEngine was also designed to analyze stories drawn from content on the open web, such as community-generated blogs, podcasts, or videos. At this stage, however, all of our data comes from interviews.
Interviews are conducted via Skype and recorded with the respondent’s permission. We then:
- Transcribe the recording and edit it for clarity
- Send the edited text to the respondent to see if they have any changes — we emphasize that we are interested in the final text expressing what they want to say, rather than an exact account of our conversation, especially since stories are posted publicly
- Ask for some high-resolution photos of the respondent to accompany their story
- Ask respondents to fill out a consent form, since stories are used in Mozilla communication efforts and may appear in media outlets
If audio recording is not possible, we follow the same process using detailed notes. We do not ask for consent to analyze community-generated content, although, depending on the license, we would request permission to reproduce it for communication purposes.
Coding and analysis
We’re using RQDA, an R package for Qualitative Data Analysis, to analyze StoryEngine texts. We selected this application because it is free and open source and has an active user community. And because it is built on R and SQLite, RQDA allows us to access the coding database directly and take advantage of R text-mining packages.
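Because an RQDA project is stored as a single SQLite file, the coding database can be queried outside of R as well. A minimal sketch in Python, using an invented in-memory database with a simplified subset of RQDA-style tables (a `freecode` table of code names and a `coding` table of coded segments; the real schema has more columns), showing the kind of summary query this makes possible:

```python
import sqlite3

# Toy in-memory database mimicking a simplified RQDA-style layout:
# `freecode` holds code names, `coding` holds coded text segments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE freecode (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE coding (cid INTEGER, fid INTEGER, seltext TEXT);
""")
conn.executemany("INSERT INTO freecode VALUES (?, ?)",
                 [(1, "Web Literacy"), (2, "Impact")])
conn.executemany("INSERT INTO coding VALUES (?, ?, ?)",
                 [(1, 10, "taught a class on open practices"),
                  (1, 11, "learned to remix web pages"),
                  (2, 10, "our reach doubled this year")])

# Count coded segments per code, as one might when summarizing analyses.
rows = conn.execute("""
    SELECT f.name, COUNT(*) AS n
    FROM coding AS c JOIN freecode AS f ON c.cid = f.id
    GROUP BY f.name
    ORDER BY n DESC
""").fetchall()
print(rows)  # [('Web Literacy', 2), ('Impact', 1)]
```

The same join works from R via its SQLite drivers; the point is simply that the coding data is plain SQL, not a proprietary format.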
Texts imported into RQDA are assigned a set of attributes:
- Date, Gender, Interviewer
- Internet Health Issue
- Mozilla Program
Attributes categorize entire texts, whereas codes are attached to specific passages of text. During the analysis phase, we are purposefully looking for issues, impacts, and artifacts (things people have made).
- Internet Health Issues — In order to serve Mozilla staff quickly, we first review the texts for content that illuminates current internet health issues (Decentralization, Digital Inclusion, Online Privacy & Security, Web Literacy), with the aim of collecting examples and quotes. View Mozilla’s Internet Health Report »
- Impact — We also look for reports of impact: What has changed for this person or their organization? How? This coding is iterative and will evolve as our work progresses. That said, we do have some ideas about potential impacts »
- Artifacts — Tools, approaches, methods, and other “things” network leaders make or use are coded so that they can be flagged as potentially useful to others.
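The three groups above can be tallied once texts are coded. A small sketch, with the codings and counts invented purely for illustration, of the kind of summary that feeds into our periodic releases:

```python
from collections import Counter

# Hypothetical coded segments: (text_id, category, code) tuples, where
# category is one of the three groups described above.
codings = [
    (1, "Issue", "Web Literacy"),
    (1, "Impact", "New skills"),
    (2, "Issue", "Online Privacy & Security"),
    (2, "Artifact", "Curriculum"),
    (3, "Issue", "Web Literacy"),
]

# Frequency of each category, and of codes within the Issues category.
by_category = Counter(cat for _, cat, _ in codings)
issue_codes = Counter(code for _, cat, code in codings if cat == "Issue")

print(by_category)  # Counter({'Issue': 3, 'Impact': 1, 'Artifact': 1})
print(issue_codes)  # Counter({'Web Literacy': 2, 'Online Privacy & Security': 1})
```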
The insights from our analyses are summarized and released periodically.
NOTE: We may at some point explore some of the techniques developed by CognitiveEdge (SenseMaker) and evolved by Marc Maxson and the team at Global Giving. We’re particularly interested in how these approaches involve participants, asking them to “signify” their own stories — indicating what the story is about and what is important. This is attractive because it is participatory, provides an extra layer of information, and removes some external coder bias. To learn more about this method, check out Global Giving’s Story Realbook.
StoryEngine attempts to be sensitive to and useful for participants. We are not interested in extractive research or evaluation. We also understand that an interview is an important touchpoint and opportunity for engagement. So we look for opportunities to make connections and support participants’ work.
All stories are shared under a Creative Commons Attribution-ShareAlike license, so that others can build on this work, including researchers who want to analyze this dataset further.
As with the rest of this project, the ways we give back will evolve as we hear from our respondents; we’ll update this section accordingly.