Risk: Too Much, Too Little, the Wrong Kind, or the Wrong Measure?

Plus! OpenAI's Oops D'Etat; Meta; Unit Economics and the Japanese Labor Force; Cruise; Growth and Corruption

In this issue:

  * Risk: Too Much, Too Little, the Wrong Kind, or the Wrong Measure?
  * OpenAI's Oops D'Etat
  * Meta
  * Unit Economics and the Japanese Labor Force
  * Cruise
  * Growth and Corruption

Quick note to readers: we're now offering a better RSS experience, including full-text RSS feeds of paywalled posts for paying subscribers. You can access your unique RSS link here.

Risk: Too Much, Too Little, the Wrong Kind, or the Wrong Measure?

Risk is full of paradoxes: we take too much, in the sense that casinos and zero-days-to-expiration options are both thriving businesses, and we take too little, in the sense that insurance, too, is a huge business. More money than is optimal gets deposited in FDIC-insured savings accounts (instead of being put to more useful work), and, among those who invest, more money than is optimal gets invested in a) equities in general, and b) specific high-risk cult stocks.

More prosocially, many people who could earn better risk-adjusted returns working at a big company choose to quit and start their own thing, and many investors who could just buy QQQ with a little leverage instead go through the arduous process of making early-stage, high-risk investments.

So the world exists in a muddle of risk, at every conceivable level, from people biking without helmets while wearing masks to nation-states fixating on one set of existential risks while ignoring or worsening others.

There's even a fractally fraught relationship with risk in how people weigh career advancement against personal enrichment. Years ago, Berkshire Hathaway's heir apparent, David Sokol, left after recommending that Berkshire acquire a company he'd made an undisclosed personal investment in. He put his potential to become CEO of one of the world's most respected companies on the line in order to compound his net worth a bit faster.[1]

We tend to argue that we're too risk-averse, in books like The Complacent Class, The Decadent Society, etc. Stocks for the Long Run even extends this argument to asset allocation, noting that over sufficiently long periods, stocks outperform bonds and an unlevered investor is actually taking more risk by choosing fixed income.[2] There are three broad arguments for why investors are irrationally risk-seeking:

  1. The "Lottery-Ticket" Effect: if you slice up a given asset class by riskiness, you'll often find that there's a fair risk/reward tradeoff most of the time, but that the riskiest tranche has a worse risk-adjusted return, and sometimes a worse absolute return. This is especially striking in bonds, because there's a kink in the risk/reward graph: the worst-rated investment-grade bonds have poor risk-adjusted returns, as do the worst-rated junk bonds. This makes sense if there's a large population of bond investors who have a mandate to do either investment-grade or junk bonds, but not both, and who can't use leverage. If they want to stand out, they have to take more risk within their mandate, and there's a lot of competition to do so.
  2. In The Missing Billionaires, Victor Haghani and James White make the striking observation that if the US's richest people circa 1900 had put all of their money into a standard medium-risk diversified portfolio, spent normally, and paid the usual taxes, their heirs would include roughly 16,000 billionaires. The US doesn't have that many billionaires. Clearly, some combination of high spending and bad investing is taking a big chunk out of returns (a toy version of the compounding arithmetic appears after this list).[3]
  3. Finance always has a data problem. For long-term returns, one problem is that only a fairly narrow range of bad outcomes pushes equity values to zero without also destroying most of the relevant records. The German CDAX index was trading close to its all-time highs when prices were frozen, and that market still outperformed Russian stocks, which hit an all-time high during the First World War before getting wiped out. More broadly, we have better records from the civilizations that survived, and it's entirely possible that the long-term return on the global equity-like portfolio is -100% because every market eventually gets zeroed by some unpredictable catastrophe. Equity markets didn't really start developing until German mines sold equity interests in the 15th century, but ancient Rome had a sort of local index fund in the form of tax collection rights, which were bought in advance and represented a claim on local economic output. The Visigoths and the like were not especially interested in the sanctity of Roman contracts, so these, too, were a long-term zero.

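On the second point, here's a minimal sketch of the compounding arithmetic, using invented round numbers rather than the book's actual inputs:

```python
# Toy version of the Missing Billionaires arithmetic. Every input here is an
# illustrative round number, not a figure from the book.

def compound_fortune(initial_wealth, years, real_return, spend_rate):
    """Compound a fortune at a real (inflation-adjusted) return, with a
    fixed fraction of wealth consumed (spending plus taxes) each year."""
    wealth = initial_wealth
    for _ in range(years):
        wealth *= (1 + real_return)  # diversified portfolio return
        wealth *= (1 - spend_rate)   # spending and taxes, as a share of wealth
    return wealth

# A hypothetical $100m Gilded Age fortune, a 6.5% real return, and 2% of
# wealth spent per year, with the fortune split among heirs roughly every
# thirty years:
terminal = compound_fortune(100e6, years=123, real_return=0.065, spend_rate=0.02)
heirs = 2 ** (123 // 30)

print(f"${terminal / 1e9:,.1f}bn split among {heirs} heirs: "
      f"${terminal / heirs / 1e9:,.1f}bn each")
```

With these made-up inputs, a single 1900 fortune compounds into sixteen billionaire heirs; multiply by the number of comparable fortunes at the time and the 16,000 figure stops looking surprising.
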
So there's a good statistical argument that risk assets are riskier than they look, and that most people—even very rich people who can afford sophisticated advice—will do badly in the long run when they manage their own money. One response to this is to manage money with a specific focus on worst-case scenarios. But try talking to goldbugs, crypto maximalists, or people who split their asset allocation between cash and index puts, and you'll start to think that, considering their view of the distribution of outcomes, their portfolio is overweight optimistic bets like "the monetary system will collapse and only my chosen asset class will retain any purchasing power," and under-allocated to ammunition, compounds in remote areas, a personal militia, etc.

The upshot is a good statistical case that, on average, people take more risk than they should, given their long-term goals and explicit preferences: we ought to be happier with a lower return at a lower standard deviation, or with a steady job and a good health insurance plan over an uncertain startup. But all that stability is contingent on, and basically subsidized by, the risk-takers. Every company in your index fund was once an idea that likely had a lower ex ante risk-adjusted return than the founder working a regular job. And if you do invest in an index fund, the reason those prices are fairly accurate is that there's an enormous effort, with lots of time, stress, office space in expensive cities, etc., devoted to generating alpha, even though, by definition, total alpha is zero before taxes and transaction costs. Risk-seeking is good because it creates the positive externalities that the rest of the world free-rides on. Markets will equilibrate if the financial reward for risk-taking goes up, but the market for status is less efficient. So one of the great sources of positive externalities in the world is to praise risk and risk-taking; we won't necessarily get to the socially optimal amount, but given the financial headwinds, more of it is better.


  1. Warren Buffett himself has been criticized on similar grounds, since leaked tax documents indicate that he'd sometimes trade stocks that Berkshire Hathaway owned. In this case, there's only imperfect circumstantial evidence, and certainly not any evidence that Buffett bought ahead of Berkshire accumulating a stock and then flipped it for a profit, the specific thing trading policies like Berkshire's are meant to discourage. It's not a great look, even if it's nothing worse than forgetting to do some paperwork. ↩︎

  2. Two things to note here: 1) this summary is probably a bit unfair to the nuances of the argument, and 2) as presented, the argument is not true, because an investor who holds a mix of asset classes and rebalances between them can buy stocks when they're cheap by selling appreciated bonds, and vice-versa, and, ignoring taxes and transaction costs, will compound faster with the diversified portfolio than with either asset alone; the toy simulation after these notes illustrates the effect. ↩︎

  3. In general, it's hard to square the idea of r>g with the low persistence of extreme wealth. There are still plenty of rich Rockefellers running around, but they're not at Musk or Arnault levels of wealth. Presumably in a century or two, the typical Musk will be objectively quite rich, but the richest people won't be Musks. Of course, predicting a negative that can't be checked until after my death is pretty safe, so please take this incentive incompatibility into account if you're making multi-century investing decisions predicated on some level of Musk family mean reversion. ↩︎
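
On the rebalancing point in footnote 2, here's a minimal simulation, with invented return sequences rather than historical data: volatile stocks that go nowhere, steady bonds, and a 50/50 mix that compounds faster when rebalanced than when left to drift.

```python
# Minimal sketch of the rebalancing effect from footnote 2. The return
# sequences are invented for illustration, not historical.

def grow(stock_returns, bond_returns, stock_weight, rebalance):
    """Terminal value of $1: either rebalanced back to the target weights
    every period, or left to drift (buy-and-hold)."""
    stocks, bonds = stock_weight, 1 - stock_weight
    for rs, rb in zip(stock_returns, bond_returns):
        stocks *= 1 + rs
        bonds *= 1 + rb
        if rebalance:
            total = stocks + bonds
            stocks = stock_weight * total
            bonds = (1 - stock_weight) * total
    return stocks + bonds

# Volatile stocks that go nowhere (+50%, then -33.3%, for a geometric
# mean of zero) alongside bonds that steadily earn 3%:
stock_returns = [0.50, -1 / 3] * 10
bond_returns = [0.03] * 20

print(grow(stock_returns, bond_returns, 0.5, rebalance=True))   # ~2.03
print(grow(stock_returns, bond_returns, 0.5, rebalance=False))  # ~1.40
```

In this toy run the rebalanced mix ends up ahead of pure bonds (1.03^20 ≈ 1.81) as well as pure stocks (exactly 1.0), which is the sense in which an investor who refuses to diversify, in either direction, leaves compounding on the table.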

Diff Jobs

Companies in the Diff network are actively looking for talent. A sampling of current open roles:

Even if you don't see an exact match for your skills and interests right now, we're happy to talk early so we can let you know if a good opportunity comes up.

If you’re at a company that's looking for talent, we should talk! Diff Jobs works with companies across fintech, hard tech, consumer software, enterprise software, and other areas—any company where finding unusually effective people is a top priority.

Elsewhere

OpenAI's Oops D'Etat

A few people in the AI world had a more interesting weekend than usual. Blow-by-blow reporting is a) widely accessible, and b) often obsolete before it comes out, but the short version is that OpenAI announced on Friday afternoon, during market hours, that the company was undergoing a "leadership transition," i.e. Sam Altman was fired, effective immediately (he was "not consistently candid in his communications with the board"), while Greg Brockman would be leaving the board but remaining at the company. A few hours later, Brockman quit outright. Meanwhile, in further annals of inconsistently candid communications, the board had informed Microsoft of the decision with one minute's notice. On Saturday, the board was discussing bringing Altman back following high-profile defections ($, The Information). On Sunday night, they named Twitch cofounder Emmett Shear interim CEO, while Altman and Brockman announced they were joining Microsoft. And then, earlier this morning, a majority of employees signed an open letter threatening to leave for Microsoft if the board didn't step down.

The part of this process where the board was driving the action is a good launching point for the Candor Question. OpenAI's board has a very good defense for giving Microsoft one minute's notice: if the board views Microsoft as part of the Altman faction, then giving Microsoft advance warning means giving Sam Altman advance warning, which makes executing a quick firing difficult. (If you invite someone to the Google Meet in order to fire them, and their lawyer signs in instead, you're probably losing.) Clearly, there are times when perfectly consistent candor is inconsistent with broader business goals.

It's very unclear what specific issue the board fired Altman over. There are many plausible theories, from business disputes to ideological disagreements about the pace of AI development to personal ones. The main thing to keep in mind is that, while all the speculation is fun, any specific theory is a bet either that the board of directors didn't see the same pre-defenestration tweets you did, or that some widely publicized issue had smoking-gun evidence of dishonesty that was only revealed in the last few days. Assume that any confident claim about what really happened will look foolish in a few days (or weeks, or, if we're unlucky, in a couple of decades, when everyone involved is retired and anxious to get their side of the story out before they die).

We tend to assume a certain level of professionalism and planning from announcements made by multi-billion dollar institutions. For example, it's usually safe to think that when they make a big decision, they've run it past the other institutions they collaborate with that might have feedback. If, to take a more specific example, a company puts out a press release announcing that someone is changing their job title but staying at the company, it might be safe to assume that they'll be quietly leaving in a year—but not in a few hours.

One argument to emerge from this is that OpenAI's complex governance structure ($, The Diff) made something like it inevitable. A weird system of checks and balances can create a strange nexus of power—if you find this topic interesting and have a hundred or so hours to spare, I recommend the complete works of Robert Caro for further elaboration. But these complicated systems are often better for incumbents; the tenth-largest media conglomerate in the world, Paramount, is technically controlled by a private holding company, National Amusements, that originated as a chain of drive-in theaters, and at one point the linchpin of the Lee family's voting control of Samsung was their ownership of Samsung C&T, the parent company of, among other things, their theme park business. It's not so much the complexity of a structure that makes it risky; it's the asymmetry in how well different parties can understand and manipulate that complexity.

OpenAI's board is the board of a nonprofit, and it operates under a completely different set of assumptions from a for-profit board. There are plenty of for-profit companies that have some kind of mission, with varying levels of commitment, and one way to model them is that the mission is easier to accomplish with money, so the social goals set a hurdle rate for business decisions. The company is implicitly treating the mission as a residual claimant on company profits—not the lowest priority, but the last one, because that's synonymous with saying that it gets all of the upside. Giving a nonprofit veto power over a for-profit entity is trickier, especially when the nonprofit's goal is to avoid the downsides of the for-profit entity's behavior. It's as if an oil company were controlled by a charity committed to accelerating net zero, or tobacco companies were owned by a group trying to eliminate lung cancer. It's no wonder this leads to weird outcomes. If nothing else: the optimal number of cigarettes to sell in that scenario isn't zero, as long as the contribution profit from one more cigarette can pay for research or lobbying that produces more than 11 minutes of aggregate global life expectancy. But even if the math works out, it's a bad look, and boards that are explicitly nonprofit have to care about appearances, whereas for-profit boards just need to make sure the numbers add up. One thing this weekend demonstrated is that governance policies are, in extreme cases, more suggestions than enforceable mandates: if the board follows proper procedure and decides that the company will do X, and the company's biggest supplier, most famous employees, and two thirds of its entire staff disagree, then the motion to do not-X carries.
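
To make that breakeven concrete, here's a stylized sketch; the ~11 minutes of life lost per cigarette is the widely cited estimate, and every other number is invented for illustration:

```python
# Stylized version of the mission-as-residual-claimant breakeven. The ~11
# minutes of life lost per cigarette is the widely cited estimate; every
# other number here is invented for illustration.

MINUTES_LOST_PER_CIGARETTE = 11

def sell_one_more(contribution_profit, minutes_gained_per_dollar):
    """Sell the marginal cigarette only if its profit, routed to research
    or lobbying, buys back more life expectancy than the cigarette costs."""
    return contribution_profit * minutes_gained_per_dollar > MINUTES_LOST_PER_CIGARETTE

# Hypothetical: $0.10 of contribution profit per cigarette, and advocacy
# that buys 150 minutes of aggregate life expectancy per dollar:
print(sell_one_more(0.10, 150))  # True: 15 minutes gained > 11 minutes lost
```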

Disclosure: Long MSFT.

Meta

Speaking of tech companies going through rough patches, The Pragmatic Engineer has a good piece on the turnaround at Meta ($). He quotes an employee who was working on ad targeting models after Apple started restricting tracking: "A research team would bring a new model to us, and we had to productionize it. It was a pressure-cooker environment as they needed to test it quickly. We worked around the clock; set it up and deployed it. And every month or two, one of these models would bring results like 'increased the relevancy of ad targeting by 20%.'" It's a good reminder that any smooth growth trajectory, or even a not-so-smooth trajectory that still leads to good outcomes, is the result of lots of continuous effort; any ongoing research process that periodically yields 20% improvements in a key metric is also a research process that usually yields no improvement at all. Meta's problems at their low point were magnified by morale issues, and managing morale is essential in power law-driven fields, where incrementing the sample size by a bit can radically improve outcomes but each increment is likely to lead to a demoralizing result.
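
The sample-size point can be made with a toy simulation (the payoff distribution here is invented): if most experiments produce nothing and the rare wins are heavy-tailed, adding a few more attempts keeps improving the best result, even though the typical attempt remains a demoralizing zero.

```python
# Toy model of research in a power-law field: 90% of experiments produce no
# lift, and the rest draw from a heavy-tailed (Pareto) distribution. The
# distribution and parameters are invented for illustration.

import random

def average_best(n_experiments, trials=10_000):
    """Average, over many simulated research programs, of the single best
    result from running n experiments."""
    total = 0.0
    for _ in range(trials):
        best = 0.0
        for _ in range(n_experiments):
            if random.random() < 0.1:  # only ~10% of experiments produce any lift
                best = max(best, random.paretovariate(1.5))
        total += best
    return total / trials

for n in (10, 20, 40):
    print(n, round(average_best(n), 2))  # the best outcome keeps climbing with n
```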

Disclosure: Long META.

Unit Economics and the Japanese Labor Force

Companies pay attention to unit economics at many levels because aggregate measures can be misleading. For example, here's a playbook that any growing SaaS company can use to improve margins next quarter without any meaningful impact on sales: stop hiring salespeople. What will happen, in the first quarter, is that you're not paying for unproductive new people, and that your more experienced staffers are spending more time talking to prospective customers and less time training new people. What happens in two years, of course, is that many of your experienced salespeople have left, and there isn't anyone new to replace them. The memory of the margin beat will always be with you, of course.
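
Here's a toy cohort model of that playbook, with invented numbers for ramp time, quota, compensation, and attrition. Freeze hiring and the next few quarters genuinely look better, because ramping reps mature while costs fall; after that, revenue capacity quietly decays at the attrition rate:

```python
# Toy cohort model of the "stop hiring salespeople" playbook. Ramp time,
# quota, compensation, and attrition are all invented round numbers.

def sales_team(quarters, hires_per_quarter, attrition=0.05):
    """Track sales headcount by tenure: reps produce nothing during a
    four-quarter ramp (the drag on senior reps' time isn't modeled), then
    carry a full quota; a fixed share of reps leaves every quarter."""
    cohorts = [10.0] * 12  # headcount by tenure in quarters, newest first
    for q in range(1, quarters + 1):
        cohorts = [c * (1 - attrition) for c in cohorts]  # quarterly churn
        cohorts.insert(0, hires_per_quarter)              # new cohort starts
        revenue = sum(cohorts[4:]) * 300_000              # ramped reps at quota
        cost = sum(cohorts) * 50_000                      # fully loaded comp
        print(f"Q{q}: revenue ${revenue / 1e6:.1f}m, "
              f"sales cost ${cost / 1e6:.1f}m, "
              f"margin after sales cost {1 - cost / revenue:.0%}")

sales_team(8, hires_per_quarter=0)  # margins improve first, revenue decays later
```

Run the same model with hires_per_quarter=10 and margins look a few points worse every quarter; that ongoing drag is exactly what the hiring freeze quietly harvests.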

Something similar is happening at large Japanese employers, which are seeing growing attrition as employees leave for smaller startups ($, FT). At first, this isn't a huge problem, since the big companies are very structured and don't give their newest employees much room to maneuver. But if churn is low overall and highest among potential CEO successors, it's a bad sign long before the results show up in financial statements.

Cruise

This weekend was perhaps the best time since March 2020 to quietly drop disappointing news. So, Kyle Vogt of Cruise has resigned. This is a good demonstration of how much of a momentum bet new tech companies are: an autonomous vehicle business that doesn't have cars on the road is continuously weakening its data advantage relative to competitors, so it has a short window in which to fix its model and get back to collecting data before those competitors build an unbeatable lead. But during that wait, the company is also losing talent, and in the absence of good news from the operating business, the only news left is the bad kind.

Growth and Corruption

The investment-and-export-driven growth model has had impressive results in many countries, but it doesn’t work forever. One problem it runs into is that large investments financed by bank loans create a temptation to steal some of the money being invested, which is a particularly tricky problem to deal with because in such an economy, the banks are closely tied to the state and aren’t strictly trying to maximize profits. There can be a period where corruption is growing faster than GDP in percentage terms, but slower in incremental dollar terms; compound interest means that eventually, the level of corruption will be unsustainable and the economy will need reforms. Vietnam seems to be hitting that point right now: a property developer may have embezzled a total of $12bn, or about 3% of the country’s GDP.
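
As a toy illustration of that dynamic (growth rates and starting levels invented), corruption can lag GDP in incremental dollars for decades while its share of the economy quietly explodes:

```python
# Toy illustration of corruption compounding faster than GDP. Growth rates
# and starting levels are invented round numbers.

gdp, corruption = 100.0, 1.0  # corruption starts at 1% of GDP
gdp_growth, corruption_growth = 0.06, 0.15

for year in range(0, 51, 10):
    print(f"year {year}: corruption is {corruption / gdp:.1%} of GDP; "
          f"incremental theft {corruption * corruption_growth:.1f} vs "
          f"incremental GDP {gdp * gdp_growth:.1f}")
    gdp *= (1 + gdp_growth) ** 10
    corruption *= (1 + corruption_growth) ** 10
```

In this run, incremental theft doesn't overtake incremental GDP until the fifth decade, but corruption's share of output is clearly unsustainable well before then, which is why the reform moment tends to arrive before the dollar crossover.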