The Annoying Path to Privacy Herd Immunity

In this issue:

  • The Annoying Path to Privacy Herd Immunity—The problem with privacy-preserving technologies is that they're often inconvenient, and this means that using them indicates having something to hide. So the best public service you can perform, if you care about privacy on principle, is to arrange your digital life as if you have more secrets worth keeping than you really do.
  • Robots—Humanoid robots have been a sci-fi trope for a while, but one reason they're interesting is that by default they gather information about the real world from a human perspective.
  • Competition—Brutal competition in the delivery app business.
  • Model Moderation—OpenAI promises a less fussy model.
  • Eggs—A market with fewer speculators is a market with more volatility.
  • Sentiment—xAI demonstrates how fast companies can catch up in AI, but also demonstrates why catching up is different from getting ahead.
The Diff February 18th 2025

The Annoying Path to Privacy Herd Immunity

Online privacy was very nice while we had it, and is, for the most part, something that no longer exists for economic, social, and technological reasons. To the extent that it does exist, it's the result of ongoing inconvenience, and it has an annoying decay function—information-leaking behaviors get retroactively more irresponsible as long as analysis tools improve faster than the rate of link-rot, and for the last few years, that's been the case.

Talking about privacy means talking about the perception of privacy. Ask someone how much they'd have to be paid in order to share detailed information about their likes, dislikes, or interests with a stranger who's trying to make a buck, and you get a high number. Give people the opportunity to barter exactly the same information away in exchange for getting access to a search engine, a social network, or endless short videos, and you get the real market price of privacy.

But that perception argument cuts both ways: we mostly don't feel like we're always being watched. There are a few edge cases like seeing an ad for a product you were just talking about but hadn't searched for.[1] Big tech companies have a massive financial interest in preserving the appearance of privacy, and the best way to do that is to preserve actual privacy, at least in the sense of another person looking at personal information that's tied to a personal identifier. They have the data, and they use the data, but they don't have a business case for manually looking at it—and they have a strong interest in building systems that make it hard for them to do this. Being reluctant to work with law enforcement, or designing end-to-end encryption that precludes them from assisting law enforcement, is both a side effect of this and a very effective ad; if Tim Cook isn't going to violate the privacy of a dead terrorist even at the request of the FBI, he's probably not reading your iMessages.

But there are some cases where privacy is hard to preserve, slightly by design, and there are inconvenient countermeasures people can take to achieve it anyway. If you really don't want anyone knowing what you're up to, you can use PGP, only communicate on end-to-end encrypted apps, only browse while running Tor, etc. The problem with this approach is that you basically fit the profile of a drug dealer or terrorist. You also fit the profile of a political dissident, but some of the time the "dissident" versus "terrorist" continuum is blurred, at least in the eyes of the political establishment.

All of these privacy-preserving behaviors end up being red flags; someone who made a bomb threat to Harvard using Tor was caught because he was the only person using Tor on campus at the time, Bitcoin transactions have eliminated plenty of alibis, "X set disappearing message time to 1 hour" is a message that shows up in Signal even after the hour has passed, etc. (Disappearing messages alone don't implicate people—the default for an in-person conversation is that the disappearing message time is set to instantaneous—but they do indicate something to hide, particularly if that setting is toggled after they've been informed of an obligation not to destroy evidence.) This is just folk Bayesianism, i.e. updating your opinion in a reasonable way in the face of new evidence: people with nothing to hide have little incentive to inconvenience themselves to hide things, so doing this is evidence of misbehavior.
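
To put a number on that update, here's a minimal sketch of the inference an observer might run when they see someone using a privacy tool. Every probability below is a made-up assumption for illustration, not an estimate of real base rates.

```python
# Toy Bayes update: how much does "uses a privacy tool" move the needle?
# All numbers are illustrative assumptions, not empirical estimates.
p_bad = 0.001            # prior: share of people with something seriously to hide
p_tool_given_bad = 0.50  # assumed: half of them put up with Tor/PGP/etc.
p_tool_given_good = 0.01 # assumed: 1% of everyone else tolerates the inconvenience

# Bayes' rule: P(bad | uses tool) = P(tool | bad) * P(bad) / P(tool)
p_tool = p_tool_given_bad * p_bad + p_tool_given_good * (1 - p_bad)
p_bad_given_tool = p_tool_given_bad * p_bad / p_tool
print(f"posterior: {p_bad_given_tool:.1%}")  # ~4.8%, a ~48x update from a 0.1% prior
```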

But that standard, broadly applied, makes it very hard for people who have a legitimate interest in privacy and good reasons to stay cautious: political activists, people fleeing from abusive situations, journalists working on a big story, the whistleblowers in communication with them, people dealing with potentially embarrassing mental or physical health issues, etc. There is a form of herd immunity for these groups, and it comes from a different set of people: the ones who insist on maintaining incredibly tight opsec as a matter of ideology or personal preference rather than any legitimate reason to hide things. It's an archetype that's existed for a long time; a decade ago XKCD was joking about people who had elaborate security protocols in order to protect their personal data, which consisted mostly of emails about cryptography. As long as there are beliefs that 1) might be true, 2) have unknown popularity, and 3) could be subject to a preference cascade if compelling arguments were made for them, it's prosocial to contribute to society's privacy herd immunity by acting very squirrely about your digital paper trail even if you have nothing whatsoever to hide.
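
The herd-immunity claim is just the same calculation run with a larger pool of innocuous users: the more people with nothing to hide who adopt the tools, the less the tools say about any individual user. Continuing the illustrative numbers from the sketch above:

```python
# How P(bad | uses tool) decays as innocuous adoption rises.
# Same illustrative prior and likelihoods as the sketch above; the adoption
# rates being swept over are hypothetical.
p_bad, p_tool_given_bad = 0.001, 0.50

for p_tool_given_good in (0.01, 0.05, 0.20, 0.50):
    p_tool = p_tool_given_bad * p_bad + p_tool_given_good * (1 - p_bad)
    posterior = p_tool_given_bad * p_bad / p_tool
    print(f"innocuous adoption {p_tool_given_good:>4.0%} -> posterior {posterior:.2%}")
# 1% -> ~4.77%, 5% -> ~0.99%, 20% -> ~0.25%, 50% -> ~0.10%:
# once enough bystanders use the tools, using them says almost nothing.
```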

Early 2025 is an absolutely fantastic time to talk about political dissidents, since the perceived political affiliation of tech has flipped so recently, and because norms about what speech gets people fired have flipped, too. So it's a lot easier to see pseudonymous political speech as a general principle rather than something that happens to be to the advantage of one side for the moment. Pseudonyms are useful, and one reason they're useful is that the market for information about optimal policy is inefficient; it's inefficient because supporting specific policy proposals is a way to signal membership in political coalitions, so criticizing those proposals as bad means to achieve desirable ends sounds a lot like criticizing the intended ends. That's not true, of course. There's broad agreement about many end goals: people want poor people to have more material wealth (and disagree over whether the best way to do that is a generous safety net or a more fluid labor market); people want nice neighborhoods (and differ over whether that means single-family homes with big yards and ample parking or apartment buildings that can cram far more families into a given desirable area). It even applies to social issues, where the steelman of each side is "let them do what they're in a position to know they want to do" and "set up broad incentives to cooperate rather than defect."[2]

All of this gets more important in an environment where tools for analyzing text at scale are more widely available. It's harder than it used to be for someone to maintain pseudonymity, or to get away with being a little more freewheeling in some of their earlier podcast appearances. All of those comments are part of the corpus that models are trained on, and it will only get easier to identify linguistic quirks.
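
For a sense of why linguistic quirks are hard to scrub, here's a toy version of stylometric matching: compare character n-gram frequency profiles with cosine similarity. Real attribution pipelines are far more sophisticated (and the names and text samples below are invented), so treat this as a sketch of the idea rather than a working deanonymizer.

```python
# Toy stylometry: match an anonymous text to candidate authors by comparing
# character 3-gram frequency profiles with cosine similarity.
from collections import Counter
from math import sqrt

def profile(text: str, n: int = 3) -> Counter:
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical corpora: known writing samples vs. an anonymous post.
candidates = {
    "alice": "Frankly, the incentives here are misaligned, and frankly it shows.",
    "bob": "lol yeah idk, seems fine to me tbh, gonna ship it and see",
}
anonymous = "Frankly, the incentive structure is misaligned, and it shows."

scores = {name: cosine(profile(anonymous), profile(sample))
          for name, sample in candidates.items()}
print(max(scores, key=scores.get), scores)  # "alice" scores higher on this toy data
```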

The saving grace for privacy is that increasingly, big corpuses of text or images are valuable assets whose owners auction them off to the highest bidder, so different models will be trained on different texts. So preserving privacy will mean having different personae that participate in different ecosystems: you might live your professional life on OpenAI-aligned LinkedIn, your family life on Llama-friendly Instagram, and your slightly embarrassing hobby on Gemini-ready Reddit. This is a weirdly feudal way to think about the world, but feudalism is a system that made more sense when property rights were unstable and needed to be actively defended, and AI certainly has that effect.


  1. Which has some reasonably innocent explanations: friends tend to be similar to one another, and purchase data (or models built on it) is very effective for targeting ads. So if your friend talks about a new set of Bluetooth headphones they love, and suddenly you see an ad for the same headphones, there's a decent chance that their purchase informed the ad, and a good chance that you and your friend are in similar enough demographic buckets that you're both more likely to see the same ad in a different context. It's possible that some knowledge about human behavior exists entirely as a feature of some ad-targeting system, and isn't explicitly known by anyone. ↩︎

  2. This is neutral in the sense that whether to cooperate or defect depends very much on whether the equilibrium you're in is good or bad. But it's worth biting the bullet and saying that the rules social conservatives push are only worth pushing if there's an incentive not to follow them, so breaking them is defection, and it's important to recognize this: a lot of harm has been done by elites realistically assessing the personal impact they'll face from e.g. divorce or drug use, and extrapolating this to people who simply have less margin for error. The behaviors that can ruin your day when you have a trust fund can ruin your life when you live paycheck-to-paycheck, but the people most in a position to set norms and narratives about life decisions tend to skew towards the trust fund set. ↩︎


The Diff’s trade/investment-pitching contest is coming up soon. If you’re interested, please sign up here, and let us know if you’re part of a group that would like to participate. We’re aiming to finish the contest in late March and would like to get a finalized list of participants over the next week.

Diff Jobs

Companies in the Diff network are actively looking for talent. See a sampling of current open roles below:

  • A diversified prop trading firm with an HFT bent is looking for experienced traders. (Singapore or Austin, TX preferred)
  • A premier proprietary trading firm is looking for smart generalists to join their investor relations team, working with external investors, rating agencies, and the internal finance team. Investment banking and/or investor relations experience preferred. Quantitative background and technical aptitude a plus. (NYC)
  • Ex-Ramp founder and team are hiring a high-energy full-stack engineer to help build the automation layer for the US healthcare payor-provider ecosystem. (NYC)
  • An OpenAI backed startup that’s applying advanced reasoning techniques to reinvent investment analysis from first principles and build the IDE for financial research is looking for software engineers and a fundamental analyst. Experience at a Tiger Cub a plus. (NYC)
  • A hyper-growth startup that’s turning customers’ sales and marketing data into revenue is looking for a forward deployed engineer who is excited to work closely with customers to make the product valuable for them. (SF, NYC)

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

Robots

Meta is planning to invest in humanoid robots, mostly by offering software and sensors to other robot companies. There's an interesting symmetry here with Meta's investments in both AI and the Metaverse: the Metaverse is a bet that replacing normal sensory input with something else is the next interface, whereas a humanoid robot is a tool for gathering entirely human-optimized sensory input. The question on robots is: aren't augmented reality sunglasses a lot cheaper, both for Meta and for end users? There's some upside to reaching a different market, and to tracking the set of tasks people want done but don't do themselves rather than the set of tasks that are visible from a first-person camera angle. Meta's implicit bet is some combination of believing that there's a real market for robots and believing that there's a real market for reality, and that having humanoid robots is the best way to avoid being left behind.

Disclosure: Long META, though I admit that "Let's make I, Robot real" was not a key part of the thesis.

Competition

In a two-sided network, one of the risks that early winners run is that they'll charge high prices to connect each side, those prices will induce friction, and someone else will beat them on price and rebuild the same network. Uber is suing DoorDash over the latter's efforts to avoid this. DoorDash is, at least according to Uber, asking restaurants for exclusive access and threatening higher fees for those that don't comply. This is especially interesting because the Uber of not-too-long-ago would have been on the other side of this: since Uber could source drivers for both food delivery and ferrying passengers, they could theoretically price their delivery service lower and still come out ahead. DoorDash got to the scale where delivery alone gave them this option, and part of what they're doing is amortizing the cost of acquiring new delivery customers over the restaurants they work with. Exclusive access to a restaurant is worth more to a delivery platform than shared access to the same restaurant, and there's a price that sets an equilibrium. But it might not be a price that the biggest delivery app is allowed to charge.
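
As a back-of-the-envelope illustration of that equilibrium price, here's a toy calculation with entirely made-up numbers (nothing below is sourced from the Uber or DoorDash filings): an exclusivity deal clears whenever the platform's gain from keeping orders off a rival exceeds the margin the restaurant gives up by single-homing, and the fee discount can land anywhere in that gap.

```python
# Back-of-the-envelope on when an exclusivity deal clears. Every number is a
# made-up assumption; nothing is sourced from the Uber/DoorDash dispute.
orders_multihomed = 1_100     # monthly orders if the restaurant lists on both apps
orders_exclusive = 1_000      # monthly orders if it goes exclusive with one app
avg_ticket = 30.0             # average order value, $
restaurant_margin = 0.15      # restaurant margin on an incremental delivery order
platform_take = 0.25          # platform commission per order
orders_kept_from_rival = 300  # orders the platform assumes it keeps off the rival

# Restaurant's cost of exclusivity: margin on the orders it gives up.
restaurant_loss = (orders_multihomed - orders_exclusive) * avg_ticket * restaurant_margin
# Platform's gain: commission on orders that would otherwise flow to the rival.
platform_gain = orders_kept_from_rival * avg_ticket * platform_take

print(f"restaurant gives up ~${restaurant_loss:,.0f}/mo; platform gains ~${platform_gain:,.0f}/mo")
# Any fee discount between ~$450 and ~$2,250 a month leaves both sides better off;
# that gap is where the equilibrium price lives, and the antitrust question is
# whether the largest platform is allowed to offer (or demand) it.
```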

Model Moderation

In the last few months, a few companies have vocally adopted a more relaxed attitude towards moderation. One place this hadn't extended to was AI models, where the market was 1) the big US labs, which had fairly consistent restrictions on which questions they'd answer, 2) DeepSeek, which had a different set of taboos, and 3) Grok, which ostentatiously ignored them. OpenAI recently announced updated model specs, which seem to offer a more relaxed attitude and are also explicit that the responses the model gives are partly shaped by legal and financial constraints. The new spec also promises that the model won't deliberately try to change users' minds. One reason for this evolution is the general political shift in tech, but another reason is more prosaic: when a product takes off in the US, but mostly among tech people, there's a consistent set of norms that everyone expects it to enforce (even people who don't disagree with those norms are probably bored of having them pointed out every single time). But a global product can't afford this luxury, and its choices are either bespoke moderation policies for different user groups or an expectation that if everyone's roughly equally offended, nobody's really offended at all.

Eggs

Egg prices have been spiking recently, which has been an especially popular topic because they went up about as much as they did during the post-Covid inflation, so a lot of people have, well, egg on their faces because of things they said about how responsible the President is or isn't for the price of eggs. (The actual reason they're up this time has little to do with policy and a lot to do with avian flu-induced culling.) One reason egg prices have such extreme moves is that eggs mostly aren't traded on speculative exchanges, with the vast majority sold on long-term contracts and only 5% traded on the spot market, through a specialized exchange, The Egg Clearinghouse ($, WSJ). Because the marginal eggs are the ones being priced, the market is unusually sensitive to supply and demand imbalances. And because the exchange connects real users to producers, it's harder for other market participants to temporarily warehouse risk. The existence of this exchange also demonstrates how fragile some markets can be; eggs used to be traded on what's now the NYMEX (in the late 19th century, it was briefly known as the Butter, Cheese, and Egg Exchange), but once a network effect unwinds, it's hard to reassemble and may come back into existence somewhere else.
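
To see why a thin spot market amplifies shocks, a quick back-of-the-envelope helps; the ~5% spot share comes from the article above, while the size of the supply shock and the assumption that contracted volume gets delivered first are illustrative.

```python
# Why a ~5% spot market makes egg prices jumpy: a shock that's small relative
# to total supply is large relative to the freely traded margin. The 5% spot
# share is from the article; the shock size and delivery priority are assumed.
total_supply = 100.0   # index: normal national egg supply
spot_share = 0.05      # ~5% of volume trades on the spot market
cull_shock = 0.03      # assume flu culling removes 3% of total supply

spot_before = total_supply * spot_share
lost = total_supply * cull_shock
# Assume long-term contracts get filled first, so the shortfall hits spot supply.
spot_after = max(spot_before - lost, 0.0)
print(f"spot volume falls {1 - spot_after / spot_before:.0%}")  # a 3% total shock -> 60% drop in spot supply
```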

Sentiment

xAI has released its latest model, Grok 3, which has impressive benchmark results, and xAI is in talks to raise $10bn at a $75bn valuation. One surprise in AI is how many companies have managed to stay competitive, but one reason for that is that they're graded on a curve: the biggest companies have the most surface area for model misuse, or for customer complaints if they launch something half-baked. xAI is smaller, and they have some control over how much usage they get because they can always make Grok less prominent in the Twitter interface. So one reason the industry feels so competitive is that catch-up growth is more straightforward than extending the frontier, which keeps the relentless commoditization of older models moving.