Trevor Timm is a co-founder and the executive director of the Freedom of the Press Foundation. He is a journalist, activist, and lawyer who writes a weekly column for The Guardian on privacy, free speech, and national security. Dedicated to government transparency and accountability, he supports and defends journalists and whistleblowers. His work on SecureDrop exemplifies this passion, creating a space where sources and journalists can communicate and exchange documents safely and securely.
Start by giving me just a broad overview of your work and then perhaps highlighting some specific projects that you’re working on.
I’m the Executive Director of Freedom of the Press Foundation. We are a nonprofit based in San Francisco that supports and defends journalism dedicated to transparency and accountability. We really see ourselves as a press freedom advocacy organization for the 21st century.
We do a bunch of things related to protecting journalists and whistleblowers communicating with each other and getting important stories out to the public. A big way that we do this is through technology.
Almost all of our employees are technologists, engineers, and digital security experts who help journalists use digital security and encryption tools to protect both their privacy online and the privacy and security of the people they talk to.
The project we’re most known for these days is SecureDrop, which is an open-source whistleblower submission system that allows journalists to anonymously and safely accept documents and tips from whistleblowers.
It is in use at over three dozen major news organizations worldwide right now, including The New York Times, The Washington Post, The Guardian, ProPublica, The Intercept, The New Yorker, and others. So far, it has produced countless stories for news organizations on a whole variety of subjects in the public interest.
We think projects like SecureDrop are important because of the increased use of surveillance capabilities by the US government and governments worldwide, which make it very easy to compel third parties like Google, AT&T, Verizon, or Facebook to hand over all sorts of data they have on communications.
Sources may be risking their livelihoods to get information to the public, and using email or phone calls to talk to reporters can sometimes put them at risk of prosecution, arrest, or worse. SecureDrop tries to provide a safer space for those communications to happen — so it’s harder for governments or anybody else to have access to that information.
Along with SecureDrop, we also train journalists to use a variety of encryption tools — whether it’s encrypted texts and phone calls, encrypted email, or anonymous web browsing — and to use basic techniques to secure online accounts so that they are less likely to be hacked or have their information stolen.
We also have a few smaller software projects that we’re working on that will, hopefully, help solve a few other niche problems that journalists have in securing communications in their work.
This seems to be more urgent under the Trump administration — people are worried about what the US government can do with its vast surveillance powers. There’s been a huge debate over the last three years, thanks to Edward Snowden, on what the US government actually has and what it can do.
I think it’s become very clear to people that that power can be abused in a lot of ways that maybe people didn’t think about before. Journalists are really on the front lines of this and have some of the most sensitive communications out there. We want to be there to protect them.
Thinking about this work, can you home in on a specific example? I’m looking for an anecdote here, a time where you really felt a sense of success.
For decades in the 20th century, journalists were largely able to protect their sources in court and prevent prosecution of their sources because of the concept of reporters’ privilege and their willingness to, if it comes to it, go to jail rather than give up their sources to prosecutors or police officers.
About 8 or 10 years ago, the government realized it no longer needed to rely on reporters to testify against their sources because it could go to Google or AT&T and get this type of information in secret, without journalists ever coming into the equation.
A classic example of this is James Risen, a Pulitzer Prize winning reporter for The New York Times, who was under subpoena during the Bush administration and then the Obama administration. They subpoenaed him to testify against an alleged source of his, Jeffrey Sterling, who was a former CIA officer.
The legal battle went on for years. It went up to an appeals court, where James Risen was protesting the order for him to testify. He said he would go to jail rather than give up his sources. As it turns out, the government didn’t actually need James Risen to testify.
Instead, they went after his and his alleged source’s phone records, email records, financial records, and travel records, and put together this huge, detailed picture of the activities both of them were engaged in. Then they dropped the subpoena against James Risen and used the surveillance data to convict his source.
We’ve seen this happen over and over again in the past eight years. The Obama administration prosecuted more leakers and whistleblowers than all previous administrations combined. They were able to do this because of these increased surveillance powers. Occurrences like these demonstrate the urgent need for things like SecureDrop and other encrypted communication tools.
As far as particular successes with SecureDrop, there’s a Columbia Journalism School study that looked at 8 to 10 organizations who’ve been using SecureDrop over the past year. While they didn’t talk about specific stories they published because of it, basically to protect their sources, they did talk about how it’s been very useful for their journalism. There have been a few stories published by news organizations since then that credit SecureDrop for the communications behind them. The Intercept received leaks from people inside the government — which became important stories that needed to be told.
Unfortunately, we can’t know the full story about how successful SecureDrop really is. It’s sometimes frustrating on our end because we would like to tell the world about how successful it can be — but even we don’t know, because of the way it’s designed. We developed the software so that we don’t know what news organizations are using it for, for the explicit purpose of protecting sources. It really is a catch-22: we don’t want them to say anything out loud because we want them to protect their sources, but we would also like them to give us an idea of how it’s helped so we can spread the word.
We know it works in the sense that it’s been a robust project for about two years now. None of the news organizations that have installed and used it over the past two years have stopped using it, and they have been trying to onboard more journalists into using it. Also, there’s been an increase in the number of news organizations that want to install it — so in this sense we can say it’s been successful.
How about an example of a challenge? I’m thinking about one that is persistent for you or top of mind.
I think the challenges, when we’re talking about digital security or privacy, are awareness — number one — and two, the ability to learn and utilize new techniques and tools. For a long time, there’s been a problem in this space: open-source security tools are often considered the best to use for security, but they can be difficult to use. The classic example is PGP encrypted email, which has been around for decades.
We know it works in the sense that, if it’s used correctly, it can protect the content of your communications — yet it is incredibly annoying to set up and to use, people often make mistakes when doing so, and it’s seen as not worth the effort.
I think the challenge in this space is to be able to teach these tools so that people can use them correctly and understand when they should be using one tool and when they should be using another. There’s also a challenge when developing these tools, to be able to develop them in a way that can actually make them easier to use so there isn’t such a high bar for entry.
At this point, everybody is programmed to use email, phone calls, and regular text messages. Many people don’t think their communications are safe anymore but they don’t know what to do about it. We need to reprogram how people think about this — and that’s a constant challenge.
The ultimate goal for any of these tools is to have them be so easy to use that people actually aren’t even using them for privacy. They’re just using them because they are the best way to communicate with their friends and family or whoever they want to communicate with, and the privacy and security features are just an added benefit.
I think we’ve seen a couple of tools moving in that direction. Signal, for example, is an encrypted application that does secure messaging. WhatsApp, another example of secure messaging, is end-to-end encrypted by default so that users don’t need to do anything at all. There are a billion users of WhatsApp. They’re just using it like they always would, yet their communications are protected much better than a regular SMS communication. I think that’s the ultimate goal for all of these projects — at least it should be.
How do you deal with vulnerabilities in the tools themselves?
I think that’s a concern for whatever communication platform you’re using. It’s important for people to realize communications won’t ever be 100 percent secure on any level — even if you’re using end-to-end encrypted communications. If somebody is able to hack into your computer or into your phone so they can see what’s on your screen, then encryption isn’t going to matter because they can just read it off of your device.
The good thing is, even though ordinary non-technical people can’t look at the code and determine vulnerabilities, when a project is open-source, there are often many security experts who can look at it and give recommendations on which tool they think is most secure.
We shouldn’t be looking at tools on a binary level. It shouldn’t be, “This is 100 percent secure. This is zero percent secure.” It’s a sliding scale of what is more secure and what is less secure. There will always be vulnerabilities found in one product or another, but we can make informed decisions based on expertise.
Turning now to the broadest issue in the Mozilla universe, internet health. What, for you, is a healthy internet?
There are two main components to a healthy internet as far as I see — an open internet that is resistant to censorship and a secure internet where people feel safe using it. When I say resistant to censorship, I mean it needs to be free from government censorship or censorship by large corporations, so that people have the ability to express their feelings, and their thoughts, and their ideas without being stifled.
Then there is the privacy and security aspect — people should feel secure using the internet so that they don’t feel like they’re constantly being watched by governments, or criminals, or anybody in between. Surveillance has a chilling effect on what people do online. They might be afraid to read certain news articles or afraid to talk about certain subjects. Whether they’re talking about those on a public forum or a private forum, they might feel that, by saying something wrong, they might be put on a list or a police officer might knock on their door.
Having both the ability to say what you want and also feel secure in saying what you want are the two most important aspects of a free and open internet.
Shifting now to the concept of working open, what does that mean for you?
It means a lot of things for a lot of different people. For the Freedom of the Press Foundation, and all of the projects that we work on — whether it’s software projects or websites — we make all of the code free and open source so that people can take our work and remix it, create new projects with it, and use our ideas to both further our goals and improve on them.
The same thing goes for the advocacy writing we do on our blog. We license everything via Creative Commons so it can be reposted elsewhere and our message can spread far and wide. This philosophy is very helpful in spreading our message — it encourages people to be more engaged in the work we do.
Do you have examples of people forking your code or remixing your content?
When we post a blog post, it’s often reposted within a day on three to five different sites, depending on the subject. With SecureDrop, we’ve had dozens of open source contributions from people we have never met in real life but who were interested in the project and decided to contribute, give feedback, or add new features.
It’s exciting to see because it means that not only are journalists interested in this type of project, but engineers and people who care about these issues are also interested in helping. Anytime this happens, it’s very heartening to see.
Getting more specific about Mozilla, how did you get involved with them, and what has that been like for you?
We’ve had a relationship with Mozilla from nearly the beginning. When we first adopted the SecureDrop project, Mozilla gave us one of our first seed grants to run a few hackathons — it was only a prototype at that time. It gave us a nice boost in getting the word out about SecureDrop and attracting a bunch of open source contributors.
Since then, we’ve worked with them in a few different capacities. At the moment, we have a Ford Mozilla Fellow who is working with us for a year, full-time. She’s working on SecureDrop and a couple other technical projects that we have. She’s been an amazing addition to our staff and we’re lucky to be a part of the fellowship program.
More recently, we received a large grant from Mozilla to work on a redesign of SecureDrop that seeks to improve its ability to accommodate hundreds of news organizations.
Other than the boost to SecureDrop and additional resources to help scale it up, are there any other details or any other impact you’d like to mention?
Those are the two major impacts — I don’t think there’s anything that can beat those.
Do you have any feedback to give Mozilla?
I don’t know if I have any criticisms offhand or feedback for Mozilla.
Lastly, I have a hypothetical question for you — If you had access to 10 skilled volunteer collaborators and/or contributors, what would their skills be and what would you ask them to do?
In general, we are interested in developers who are interested in security, as well as in design and user experience (UX). We have this dual goal of making our tools more secure, but also easier to use — sometimes those ideas are in conflict.
We need subject matter experts that can dig down on one issue or another, and look at it from an outside perspective that maybe we don’t have. Sometimes we’re so focused and involved in something that we can’t actually see what other people can see. We have so much demand for the things that we make, and so many ideas for what we want to do next, that we could use all the help we can get.