StoryEngine is influenced by narrative research methods (GlobalGiving’s Storytelling Project, Cognitive Edge’s SenseMaker, and TASCHA’s evidence narratives) and participatory principles. It draws on grounded theory and developmental evaluation. Our aim: to better understand what is different now (impacts), how change is happening, and how we might adapt our strategies and practices.
This is a snapshot of the StoryEngine methodology. Following the developmental evaluation approach, our design will evolve as we learn:
“Methods can be emergent and flexible; designs can be dynamic. Contrary to the usual practice in evaluation of fixed designs that are implemented as planned, developmental evaluation designs can change as an innovation unfolds and changes.”
— Michael Quinn Patton, Kate McKegg, and Nan Wehipeihana (editors), Developmental Evaluation Exemplars: Principles in Practice, Guilford Publications, 2016, p.12.
For Mozilla, we use purposeful sampling, focusing on foundation partners, fellows, volunteers, and people and organizations in the broader Mozilla ecosystem. We actively seek out respondents from different programs, initiatives, demographic groups, and geographies. Options for sampling include staff tip submissions, individuals submitting their own stories, and snowball sampling.
StoryEngine was also designed to analyze stories drawn from content on the open web, such as community-generated blogs, podcasts, or videos. At this stage, however, all of our data comes from interviews.
Interviews are conducted via Skype and recorded with the respondent’s permission; we ask that they fill out a “Consent to Interview” release in advance. Then, we follow these steps:
- Transcription — The audio file is sent for transcription
- Editing & photos — The transcript is edited for clarity and then sent to the respondent for any changes. We emphasize that we want the final text to express what they want to say, rather than serve as an exact account of our conversation, especially since stories are posted publicly. At this stage we also write a short bio for the respondent and ask for several photographs to accompany their story
- Approval & publishing — Once the story is approved and ready to publish, we ask the respondent to fill out a “Consent to Publish” form and then post it to StoryEngine.io
If audio recording is not possible, we follow the same process using detailed notes.
Coding and analysis
During the analysis phase, we use qualitative data analysis tools to look for emerging patterns and themes, and purposefully look for issues, impacts, and artifacts (things people have made).
- Internet Health Issues — To serve Mozilla staff quickly, we first review the texts for content that illuminates current internet health issues (Decentralization, Digital Inclusion, Online Privacy & Security, Web Literacy), with the aim of collecting examples and quotes. View Mozilla’s Internet Health Report »
- Impact — We also look for reports of impact: What has changed for this person or their organization? How? This coding is iterative and will evolve as our work progresses.
- Artifacts — Tools, approaches, and methods (the “things” network leaders make or use) are coded so that they can be flagged as potentially useful to others.
The insights from our analyses are summarized and released periodically.
StoryEngine attempts to be sensitive to and useful for participants. We are not interested in extractive research or evaluation. We also understand that an interview is an important touchpoint and opportunity for engagement. So we look for opportunities to make connections and support participants’ work.
Unless otherwise noted, all stories are shared under a Creative Commons Attribution license, so that others can build on this work, including researchers who want to analyze this dataset further.
As with the rest of this project, the ways we give back will evolve as we hear from our respondents; we’ll update this section accordingly.