Ten years ago, WIRED published a news story about how two little-known, slightly ramshackle encryption apps called RedPhone and TextSecure were merging to form something called Signal. Since that July in 2014, Signal has transformed from a cypherpunk curiosity—created by an anarchist coder, run by a scrappy team working in a single room in San Francisco, spread word-of-mouth by hackers competing for paranoia points—into a full-blown, mainstream, encrypted communications phenomenon. Hundreds of millions of people have now downloaded Signal. (Including Drake: “Cuban girl, her family grind coffee,” he rapped in his 2022 song “Major Distribution.” “Text me on the Signal, don’t call me.”) Billions more use Signal’s encryption protocols integrated into platforms like WhatsApp.
That origin story is, perhaps, a startup cliché. But Signal is, in many ways, the exact opposite of the Silicon Valley model. It’s a nonprofit funded by donations. It has never taken investment, makes its product available for free, has no advertisements, and collects virtually no information on its users—while competing with tech giants and winning. In a world where Elon Musk seems to have proven that practically no privately owned communication forum is immune from a single rich person’s whims, Signal stands as a counterfactual: evidence that venture capitalism and surveillance capitalism—hell, capitalism, period—are not the only paths forward for the future of technology.
Over its past decade, no leader of Signal has embodied that iconoclasm as visibly as Meredith Whittaker. Signal’s president since 2022 is one of the world’s most prominent tech critics: When she worked at Google, she led walkouts to protest its discriminatory practices and spoke out against its military contracts. She cofounded the AI Now Institute to address ethical implications of artificial intelligence and has become a leading voice for the notion that AI and surveillance are inherently intertwined. Since she took on the presidency at the Signal Foundation, she has come to see her central task as working to find a long-term taproot of funding to keep Signal alive for decades to come—with zero compromises or corporate entanglements—so it can serve as a model for an entirely new kind of tech ecosystem.
Whittaker has been based in Paris for the summer, but I met up with her during a quick visit to her home city of New York. In a Brooklyn café, we ended up delving deepest into a subject that, as outspoken as the privacy exec may be, she rarely speaks about: herself, and her strange path from Google manager to Silicon Valley gadfly.
Andy Greenberg: Is it OK to say here that we had planned to talk on the actual 10th anniversary of Signal but had to reschedule because you were hospitalized with food poisoning?
Meredith Whittaker: Yeah, that’s fine.
OK. So you’re not quite a privacy person like [Signal Foundation cofounders] Moxie Marlinspike or Brian Acton …
No, I’m a woman, for one thing.
True! But also, there’s no way that either of them would let me mention something personal like that. They’re much more guarded in the way that they present themselves. It seems like you’re a different kind of leader for Signal.
I think the Venn diagram of our beliefs has some significant overlaps. We all have a clear analysis of surveillance capitalism and the stakes of mass surveillance in the hands of the powerful. But in terms of my personal guardedness around my own life: I am a private person. There’s not that much on the internet about me, because from a young age, I’ve had a fundamental instinct not to tell too much. But I think it comes more from just a long-standing tendency—and thinking about the stakes—than a position of ideological purity.
You’re also much more out there in public than anybody from Signal has ever been before.
Yeah. That’s true. We’re at a different phase of Signal right now, as well.
How so?
Well, to begin with, Signal started 10 years ago as this virtuosic hacker project that was pushing against a dominant paradigm that was almost universally celebrated at the time.
What paradigm would that be?
Surveillance. The surveillance business model.
Right. And what phase is Signal in now?
Now Signal is established critical infrastructure for militaries, for dissidents, for journalists, for CEOs, for anyone who has private confidential information.
So I think we’re in a different place, where we need to be out there. We can’t have our story told by proxies. It’s time to define it for ourselves.
Well, before we get to that story: You’ve been spending the summer in Paris. Why Europe? Why France? Is that a Meredith thing, or is that a Signal thing?
It’s a Signal thing. We’re focusing on the EU, and growing our market, and figuring out who potential partners could be.
I think it’s good for any tech company right now to be thinking, how can we be flexible, given that we’re looking at a very volatile geopolitical environment.
Are you saying you’re looking for an escape route, in the event of a second Trump administration?
It’s more than that. There are a lot of possible futures on the table right now.
Let me just ask it this way: There’s an election coming up in the US. Are you thinking about a new administration, Democrat or Republican, and the possibility that Signal needs to find a new home?
My answer to that would be, I think we’re always aware of shifting political sands. Given that governments in the US and elsewhere have not always been uncritical of encryption, a future where we have jurisdictional flexibility is something we’re looking at.
Does it really make sense to look for that kind of jurisdictional flexibility in Europe when Telegram founder Pavel Durov was just arrested in France? Does this give you pause about Signal’s future in the EU?
Well, to start: Telegram and Signal are very different applications with very different use cases. Telegram is a social media app that allows an individual to communicate with millions at once and doesn’t provide meaningful privacy or end-to-end encryption. Signal is solely a private and secure communications app that has no social media features. So we’re already talking about two different things.
And as of today [August 27, 2024] there are simply too many unanswered questions and too little concrete information about the specific rationale behind Durov’s arrest for me to give you an informed opinion. On the broader question, let’s be real: There’s no state in the world that has an unblemished record on encryption. There are also champions of private communications and expression everywhere in the world—including many in the French government and in Europe beyond. Those of us who’ve been fighting for privacy for the long term recognize that this is a persistent battle, with allies and adversaries everywhere. Trying to prioritize flexibility is not the same thing as idealizing one or another jurisdiction. We’re clear-eyed about the waters we need to navigate, wherever they are. We see a huge amount of support and opportunity in Europe.
And there are really big differences between states, even in Europe. Germany is considering a law mandating end-to-end encryption, while Spain has been at the tip of the spear in pushing to undermine encryption. So again, it’s not a monolith.
What does the US election mean for Signal, its operations, and its growth?
Everything is up and to the right. I think general cultural sensitivity to privacy has never been more acute, and it gets inflamed—you see a lot of people joining—in moments of political volatility. So Ukraine used to be a market that was near the bottom. It’s now one of our top markets, following the Russian invasion. That’s just one example. We also see growth in response to things like what we call a Big Tech Fuckup, like when WhatsApp changed its terms of service. We saw a boost in desktop use after Zoom announced that they were going to scan everyone’s calls for AI. And we anticipate more of those.
We also see a boost when a big organization mandates the use of Signal. In 2020, for instance, the European Commission endorsed Signal as the only messenger it recommended for its members. With things like that, effectively everyone switches at once.
Elections can be moments where that happens. But often, those moments are less for us to predict and more for us to be prepared for. Forty years of history seem to be happening every other week.
Going back to your sense of Signal’s new phase: What is going to be different at this point in its life? Are you focused on truly bringing it to a billion people, the way that most Silicon Valley firms are?
I mean, I … Yes. But not for the same reasons. For almost opposite reasons.
Yeah. I don’t think anyone else at Signal has ever tried, at least so vocally, to emphasize this definition of Signal as the opposite of everything else in the tech industry, the only major communications platform that is not a for-profit business.
Yeah, I mean, we don’t have a party line at Signal. But I think we should be proud of who we are and let people know that there are clear differences that matter to them. It’s not for nothing that WhatsApp is spending millions of dollars on billboards calling itself private, when the load-bearing privacy infrastructure is the Signal protocol that WhatsApp uses.
Now, we’re happy that WhatsApp integrated that, but let’s be real. It’s not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who’s the gold standard for privacy? It’s Signal.
I think people need to reframe their understanding of the tech industry, understanding how surveillance is so critical to its business model. And then understand how Signal stands apart, and recognize that we need to expand the space for that model to grow. Because having 70 percent of the global market for cloud in the hands of three companies is simply not safe. It’s Microsoft and CrowdStrike taking down half of the critical infrastructure in the world, because CrowdStrike cut corners on QA for a fucking kernel update. Are you kidding me? That’s totally insane, if you think about it, in terms of actually stewarding these infrastructures.
So your focus is on preserving this role for Signal.
Preserving and growing. This is not a sclerotic kind of museum piece. This is an adolescent animal that is about to double, triple in size. Our user base has been steadily growing, and I think it’s going to keep growing with the geopolitical volatility of the world and a new generation that is much more hip to the perils of Big Tech controlling infrastructure.
My job is to make sure Signal has the room to do that and to make sure that people who need to be paying attention, who need to be paying up, who need to be putting their shoulder behind the wheel of this vision, are lined up to do that.
As Signal becomes a mainstream app, what about that hacker scene that was once the core audience? It’s become very stylish, among some of the hackers that I talk to, to say “Signal’s blown. There’s a backdoor in Signal. Or the intelligence agencies have cracked Signal. We need to move to my preferred, obscure, ultra-secure messaging platform.” But how do you answer that, and how do you live with this issue of proving a negative all the time, that there’s not a vulnerability or a backdoor in Signal?
I would push back on it being stylish among hackers. On the whole, we love and work very well with the security research community. You’re talking about a few loud, callow security researchers, some of them “security researchers” in quotes. But it’s very disappointing to me to see that kind of discourse. It shows, to me, a kind of abdication of responsibility.
Where I get really frustrated is when that over-claiming and selfish fame-seeking behavior collides with an information environment where there are state actors trying to move people away from private communication onto less private communication platforms. We have desperate civil society groups, desperate human rights groups, journalists, immediately calling us after one of those things goes viral saying, “Oh my God, is there a problem with Signal? We’re moving all our people to some alternative”—which is less secure.
There are actual existential stakes here for people around the globe, 99 percent of whom can’t actually validate random security researchers’ claims but nonetheless are taking it seriously, because it’s a life-or-death issue.
I think we’re talking in some part about Elon Musk here. He contributed to this recently when he vaguely alluded to “known vulnerabilities” in Signal in a post on X.
It concerns me to see the Elons of the world jumping on that bandwagon. Elon’s been a longtime supporter of Signal. He tweeted in 2021 he used Signal, right? He’s been a fan. So I don’t know what changed. What I do know is that, as far as we know, the claim was completely baseless. There’s no serious report that backs it. But that was two nights of me not sleeping, just dealing with Twitter stuff because we had to take it seriously, because it freaked out a lot of people.
As Signal becomes more mainstream, I increasingly find that I’m using Signal in completely trivial, normal, everyday communications with people and sending them videos, sending them entire slideshows of images of my kids or whatever. And I keep thinking, I’m costing Meredith so much money right now.
Andy, it is an honor. [Laughs.] It is an honor to send your slideshows and videos.
But this is all very expensive for a nonprofit. WhatsApp, of course, would love you to just post as much data as you can on their platform. They can stomach the cost, because they’re making money. But Signal—I worry, in terms of the cost of all that data, are you the dog that caught the car at some point?
It’s a net positive. Encryption requires a network effect. Our goal is that everyone, everywhere can easily pick up their device and use Signal to talk to anyone else.
We’re well supported. We are a nonprofit—not because we want to exist on coins thrown at us in a hat. We’re nonprofit because that kind of organizational structure is, at this point in history, critical to focusing on our mission. In our industry, profit is made through monetizing surveillance or providing goods and services to those who do. There isn’t a business model for privacy on the internet.
Signal is a nonprofit because a for-profit structure leads to a scenario where one of my board members goes to Davos, talks to some guy, comes back excitedly telling me we need an AI strategy for profit. Or another one of my board members comes in, gets really nervous that our revenue model, whatever it is, isn’t bringing in something that meets our goals and says, “Well, maybe we can start collecting metadata. Maybe we can reduce the focus on privacy, because of course our primary objective function, as a traditional for-profit, is revenue and growth.” And privacy in an economy powered by surveillance will necessarily hamper those.
So we’re looking now at how we grow the model Signal is building into something sustainable. And the type of money we’re talking about isn’t huge for tech—it’s huge for other sectors, but we’re pretty lean for tech. And how do we extend that model as a template that others can adopt in service of building infrastructure, applications, and alternatives to the concentrated surveillance business model at the heart of the tech industry?
This is a very rude question, but on this subject of being lean, I looked up your 990 and you pay yourself less than some of your engineers.
Yes, and our goal is to pay people as close to Silicon Valley’s salaries as possible, so we can recruit very senior people, knowing that we don’t have equity to offer them. We pay engineers very well. [Leans in performatively toward the phone recording the interview.] If anyone’s looking for a job, we pay very, very well.
It feels taboo to even be talking about this. But it really captures the weirdness of Signal.
Well, look, it captures that we’re doing what we can to build a model that works in opposition to a near-hegemonic model that we are up against. Right? It’s going to look weird because the norm is not what we’re about.
I wouldn’t imagine that most nonprofits pay engineers as much as you do.
Yeah, but most tech is not a nonprofit. Name another nonprofit tech organization shipping critical infrastructure that provides real-time communications across the globe reliably. There isn’t one.
This is not a hypothesis project. We’re not in a room dreaming of a perfect future. We have to do it now. It has to work. If the servers go down, I need a guy with a pager to get up in the middle of the fucking night and be on that screen, diagnosing whatever the problem is, until that is fixed.
So we have to look like a tech company in some ways to be able to do what we do.
If I could get into the actual story of your career, you said in your initial blog post when you took the president role that you’ve always been a champion of Signal. I think you said you used RedPhone and TextSecure?
I did.
I tried those at the time, enough to write about them. But they were pretty janky! I’m impressed or maybe a little weirded out that you used them back then.
But I was in tech. Right? All the cool people in tech were already using them.
And you were at Google at that time?
Yeah. I was with Google then.
What was somebody like you even doing at Google, honestly?
Have you ever heard of needing money to live and pay rent, Andy? [Laughs.] Have you heard of a society where access to resources is gated by your ability to do productive labor for one or another enterprise that pays you money?
I get that! But you are now such a vocal anti-Silicon-Valley, anti-surveillance-capitalism person that it’s hard to imagine—
I’m not anti-tech.
Yeah, I didn’t say that. But how did you end up at Google?
Well, I have a degree in rhetoric and English literature from Berkeley. I went to art school my whole life. I was not looking for a job in tech. I didn’t really care about tech at that time, but I was looking for a job because I graduated from Berkeley and I didn’t have any money. And I put my résumé on Monster.com, which, for Gen Z, was like old-school LinkedIn.
I was interviewing with some publishing houses, and then Google contacted me for a job as something called a … what was it, consumer operations associate?
Consumer operations associate?
Yeah. What is that? None of those words made sense. I was just like, that sounds like a business job.
So I set up a Gmail account to respond to the recruiter. And then I went through, I think, eight interviews and two weird sort of IQ tests and one writing test. It was a wild gauntlet.
What year was this?
I started in July of 2006. Ultimately what a “consumer operations associate” meant was a temp in customer support. But no one had told me that. And I was like, what is this place? Why is the juice free? The expensive juice is free. I’d never been in an environment like that. At that point, Google had hit an inflection point. They had a couple of thousand employees. And there was a conviction in the culture that they had finally found the recipe to be the ethical capitalists, ethical tech. There was a real … self-satisfaction is maybe an ungenerous way to put it, but it was a weird exuberance. I was just really interested in it.
And there were a lot of blank checks lying around Google at that time. They had this 20 percent time policy: “If you have a creative idea, bring it to us, we’ll support it”—all of this rhetoric that I didn’t know you shouldn’t take seriously. And so I did a lot of maneuvering. I figured out how to meet the people who seemed interesting. I got into the engineering group. I started working on standards, and I was just, in a sense, signing my name on these checks and trying to cash them. And more often than not, people were like, “Well, OK, she got in the room, so let’s just let her cook.” And I ended up learning.
What were you working on? I don’t actually know the last job you had at Google, but it was not in customer support.
My God, no. No. I founded a research group.
So it wasn’t a fantasy, the 20 percent thing. It sounds like you actually really lived that Google dream. You made those side hustles and explorations your whole job, eventually. This all sounds very pro-Google, pro-Silicon-Valley. It’s, like, the dream of every young person who wants a job at Google.
If I only fucked with my own success, I would be an SVP at Google right now with five houses.
I was working with some of the smartest people I’ve ever worked with. I shared an office with the coauthor of the C programming language! And people were really generous with their time and expertise. So all of that was great.
And I can hold that in a balance with the fact that ultimately the business model, intentionally or not, is deeply toxic. And we’ve seen the derivatives of that over the past 10 years play out over and over and over again.
Yeah. Not to make this sound like Dave Eggers’ The Circle or something, but at what point did this utopia start to sour for you? How did you make this shift to who you are now and what you’re doing now?
I cofounded an effort called Measurement Lab around that time, the world’s largest source of open data on internet performance. At the time it was a hypothesis project: Can we put some teeth on the net neutrality debate by creating a numerical benchmark for “neutrality” and begin to hold internet service providers to that standard? It was really where I cut a lot of my technical teeth, got deep into networking. We were able to show through this mass data collection, through years of work, that there were actual issues happening at interconnections.
So all of that was right around the time when machine learning was becoming a new hot thing.
There’s an inflection point in 2012 that I’m sure you’re familiar with: There’s this paper that got published, the one that introduced AlexNet, that basically brought a bunch of ingredients together and ignited the current AI moment after a long winter. What it showed is that with massive amounts of data and powerful computational chips, you could make old algorithmic techniques—techniques that dated from the 1980s—do new and impressive things.
OK … I guess I maybe see where this is going.
I am hypersensitive to data. I’ve been in the measurement wars. So I’m like, “Wait, what is machine learning? Oh, so you’re taking trashy data that you claim represents human sentiments—or things that are much more difficult to measure accurately than the low-level network performance data that I was very familiar with—and you’re putting that into some statistical model, and then you’re calling that intelligence?”
I was like, “Wait, no, you can’t do that.” So that animated a lot of my concerns around AI.
And of course throughout this time I’m learning more and more about what the business model actually is. I’m situated in the technical infrastructure group, and what I began to realize is: That’s where the money is. I’m looking at the balance sheet: The Measurement Lab server infrastructure, more than 10 years ago now, cost $40 million a year just in uplink connectivity.
It gave me a lot of sensitivity to just the capital involved. I’m like, “Oh, this is not innovation. This is capital.”
$40 million is basically Signal’s entire annual budget right now.
It’s a little under that. But yeah, I think the capital intensiveness of tech and the consolidation of tech infrastructure was something I was sensitized to pretty early.
What was new to ignite this AI boom right then? It was the presence of massive amounts of data—training data and input data—and powerful computational chips, the more of them strung together, the better. Now, what are those? Those are exactly the affordances that have accrued to the early platform companies that have built out their social media networks, built out their data centers. With artificial intelligence, we’re basically relaundering a lot of this shit through broken models that are giving Google more and more authority to claim intelligence when what they’re actually doing is issuing derivatives of the shitty data they have. And what was AI used for? Why were they into it? Because it’s really good at tuning ad algorithms, at targeting ads. It’s not an accident that the three authors of this AlexNet paper were immediately hired by Google.
Through a number of paper cuts, I was becoming sensitized to the problems with surveillance, the problems with this mass-scale approach, the platform approach—where poison salts the earth for any other competitor—and the problem with that concentrated power.
Was there any single turning point for you?
No, there was no one moment. There are a lot of sedimentary layers as I learn these things: That seems weird. That seems iffy. That doesn’t seem aboveboard.
By 2017 I’d already cofounded the AI Now Institute. I was pretty well known in the field and within the company as a vocal critic. My job was very cool. I could say whatever I wanted. I thought I had found the magical formula.
Then I realized, yeah, everyone loves it because you’re not actually in the room informing decisions. You’re just providing, well, in the most cynical sense, a pretext that Google can point to and say, “We listen to heterogeneous voices across the spectrum. We’re a very open company.”
But in 2017, I found out about the DOD contract to build AI-based drone targeting and surveillance for the US military, in the context of a war that had pioneered the signature strike.
What’s a signature strike?
A signature strike is effectively ad targeting, but for death. So I don’t actually know who you are as a human being. All I know is that my system has flagged a data profile that matches whatever example data profile we were able to sort and compile, one we assume to be Taliban-related or terrorist-related.
Right. Like, “We kill people based on metadata,” as former NSA and CIA director Michael Hayden said.
That’s it, exactly.
Google had vocally, many times in the past, disavowed doing military work. Because yoking the interests of a massive surveillance corporation to the world’s most lethal military—which is what the US military calls itself, not my term—is a bad idea. And the marriage between overclassification on the government side and corporate secrecy on the tech industry side would be a disaster for any accountability around the harms of these systems.
That was the point at which I was like, look, I can’t make my reputation and my money on offering an analysis of why this might be bad without actually pushing back using a little bit more muscle.
We’re talking about Project Maven now, the DOD contract that led to your organizing walkouts at Google.
I mean, it wasn’t just me. I was somebody who put my reputation on the line and did a lot of work for this, but it was thousands and thousands of people within Google. It was a sustained effort. It was many of the most senior AI researchers coming out and saying “fuck this.” One, it doesn’t work. Two, I don’t want to contribute to it. And three, this is a bad path to go down.
Because let’s be clear: It doesn’t work. Bloomberg’s reporting in Ukraine recently was pretty categorical on this. The people who picked up that contract, the Maven contracts, have built out the systems, and they’re buggy, they’re fallible, they’re overly complex.
What does it feel like to have been looking at that in 2017, and now not only is AI the buzzword of the moment but also we’ve seen evidence that the IDF is bombing Gaza based on the output of AI tools?
Well, I don’t feel like I was wrong! I mean, if being right were a strategy, we would’ve won a million times over.
I think one of the things we see in Gaza is the interlocking of mass surveillance and these targeting systems. The latter is reliant on the former.
In order to create data profiles of people, in order to even have the pretext of targeting them algorithmically, you first need data. And Gazans are some of the most surveilled people in the world. That then becomes the fodder for training these models—however that’s done—to determine that if a given data profile looks enough like the profile that’s been flagged as a terrorist profile, you should then bomb them.
It’s a tragic example of at least part of what we were warning about then.
On this question of how surveillance and AI are intertwined: Do you have people say to you, “Meredith, please stick to your job, your focus is supposed to be privacy. Why are you talking about AI all the time? Aren’t you the encryption person?”
Maybe people say that about me. I would say I’m well established enough in my position that few people say it to me. If you were going to say that, you’d have to back it up with, why do you think those are unrelated?
The short answer here is that AI is a product of the mass surveillance business model in its current form. It is not a separate technological phenomenon.
When I go back and I listen to your congressional testimony on AI, you were talking more about the ability of AI to do scary things for surveillance. But what you’re talking about now is the ways that surveillance provides the data and the infrastructure for AI. It sounds like a chicken and egg thing.
Well, AI is the narrative. It’s not the technology. Surveillance and infrastructure are the material conditions.
So you’re saying that AI and surveillance are self-perpetuating: You get the materials to create what we call AI from surveillance, and you use it for more surveillance. But there are forms of AI that ought to be more benevolent than that, right? Like finding tumors in medical scans.
I guess, yeah, although a lot of the claims end up being way overhyped compared to their actual utility in clinical settings.
What I’m not saying is that pattern matching across large sets of robust data is not useful. That is totally useful. What I’m talking about is the business model it’s contained in.
OK, say we have radiological detection that actually is robust. But then it gets released into a health care system where it’s not used to treat people, where it’s used by insurance companies to exclude people from coverage—because that’s a business model. Or it’s used by hospital chains to turn patients away. How is this actually going to be used, given the cost of training, given the cost of infrastructure, given the actors who control those things?
AI is constituted by this mass Big Tech surveillance business model. And it’s also entrenching it. The more we trust these companies to become the nervous systems of our governments and institutions, the more power they accrue, the harder it is to create alternatives that actually honor certain missions.
Just seeing your Twitter commentary, it seems like you’re calling AI a bubble. Is it going to self-correct by imploding at some point?
I mean, the dotcom bubble imploded, and we still got the Big Tech surveillance business model. I think this generative AI moment is definitely a bubble. You cannot spend a billion dollars per training run when you need to do multiple training runs and then launch a fucking email-writing engine. Something is wrong there.
But you’re looking at an industry that is not going to go away. So I don’t have a clear prediction on that. I do think you’re going to see a market drawdown. Nvidia’s market cap is going to die for a second.
So it’s not a self-correcting thing. Are you arguing that regulation is the solution?
Regulation could be part of it. Things like structural separation, where we begin to separate ownership of the infrastructure from the application layer, would perturb these businesses. I think meaningful privacy regulations could go a long way.
Stopping the collection of massive amounts of data, curtailing the authority these companies have claimed for themselves to define our world based on the data they collect: All of that becomes really interesting territory, because it’s curbing the tributaries of infrastructural power that is animating this boom.
You can see Signal is doing that in some sense. We don’t collect any data. We are effectively creating a system where, instead of all your metadata going to Meta, your metadata is going to no one.
Yeah, but it’s hard to point to Signal as the solution. You’re an advocate for structural change, but really you’re leading an organization that is so sui generis in the world. How do those things work together? I think you’re saying that you’re providing a model that hopefully other people will adopt? Because it’s not like Signal alone is going to solve surveillance capitalism.
No, no, no. Signal is not a solution to the problem. It is proof that we can do things differently, that there’s nothing natural about the paradigm that exists.
The Signal model is going to keep growing, and thriving and providing, if we’re successful. We’re already seeing Proton [a startup that offers end-to-end encrypted email, calendars, note-taking apps, and the like] becoming a nonprofit. It’s the paradigm shift that’s going to involve a lot of different forces pointing in a similar direction.
We need to build alternatives, and this is something I’m working on with a coalition in Europe and folks in the US and elsewhere. But what does significant capital poured into building independent tech look like? How do you disarm the massive platforms, draw down their cloud infrastructures, the fact that they control our media ecosystem, the entire nesting-doll of toxic technologies that we’re seeing from this, while building alternatives that actually interconnect the world?
What do communications networks that support this vision look like? What does an independent cloud infrastructure look like? How do we openly govern technologies that have been closed and captured by these large companies? And how do we do that at the level of actually building things? I’m really invested in that, because I think we’re going to need it for the world.
Does that mean that, in another 10 years, there’s going to be Signal Search, Signal Drive, Signal whatever?
There’s no road map for that. We don’t have to do everything. Signal has a lane, and we do it really, really well. And it may be that there’s another independent actor who is better positioned to provide some of those services. As a nonprofit, we’re not looking to poison the ground for others and do it all ourselves. We’re looking for a teeming ecosystem of people who are actually innovating, not just providing financializable startup infrastructure to be acquired by Big Tech at some point.
It’s almost a bad habit to think of a single company needing to dominate everything.
Yeah.
But still, Signal serves as a model for all of these things you want to see in the world. So what will Signal itself look like in 10 years?
I see Signal in 10 years being nearly ubiquitous. I see it being supported by a novel sustainability infrastructure—and I’m being vague about that just because I think we actually need to create the kinds of endowments and support mechanisms that can sustain capital-intensive tech without the surveillance business model. And that’s what I’m actually engaged in thinking through.
I think most people will use Signal. I think Signal will be known almost like the power company. And I think we will see that model take off enough that it’s common sense to not talk about tech as Big Tech but to talk about a much more heterogeneous landscape with many, many more privacy-preserving options.
That’s a lovely vision. I guess basically no one but Signal has been actually making this work, though. So far the for-profit model just keeps winning with this one exception.
Keeps winning what?
Keeps winning users.
So a monopolistic hegemony is a really good way to do that. But it does not win hearts and minds. And we have now fully turned in terms of public sentiment toward Big Tech. People have to use it because you can’t participate in society without it, but that’s not winning users. That’s coercion. We’re talking about lock-in, where other options have been foreclosed by state abandonment or monopoly. The demand for an alternative has never been stronger.
Signal is a heroic example that evolved in a moment of historical contingency and happened to involve some genuinely genius individuals who were committed and had a work ethic that carried it over that period of time. So I know that tech done differently is possible. I don’t think it’s fair to say other alternative models just haven’t worked because people prefer Big Tech. No, these alternative models have not received the capital they need, the support they need. And they’ve been swimming upstream against a business model that opposes their success.
It’s not for lack of ideas or possibilities. It’s that we actually have to start taking seriously the shifts that are going to be required to do this thing—to build tech that rejects surveillance and centralized control—whose necessity is now obvious to everyone.
Correction: 8/28/2024, 5:25 pm EDT: A question in a previous version of this interview incorrectly characterized Whittaker’s income and how it compares to the income of her engineers.
This interview has been edited for length and clarity.