516: Perplexity Financials + Apple M&A, Intel Passed on 30% of OpenAI, TSMC, Zuckerberg, Microsoft, China's Starlink, Oil & Gas, and Psilocybin
"too many soothsayers benefit from the broken clock effect"
Those who bring sunshine into the lives of others cannot keep it from themselves.
–James Matthew Barrie
🛫🍁🌊⛱️🐟🦞 I’m back from vacation. I hope you’re having a nice summer. I can’t believe it’s already late August. It’s a cliché to complain about how fast time flies, but man… subjective time perception is indeed logarithmic —
When you’re 8, a year is 1/8th of your life, or more like 1/4th since you probably don’t remember much about the early years. But when you’re 40, a year is a much smaller fraction of your subjective experience. Add to that the “U” shape of memory, where we tend to remember well our early, formative years, along with the more recent years, but everything in the middle gets a bit mashed together…
— anyway, it was nice to focus on my family for an extended period and let go of the 57 other projects I usually have swirling around my brain (although I couldn’t stay entirely offline…).
We went to Nova Scotia. We spent a few days walking around downtown Halifax, then we visited Peggy’s Cove and spent a couple of days in Lunenburg.
It was our first time there, and the first plane trip for my kids. We weren’t sure what to expect, but it went well — in fact, my oldest actively *liked* turbulence! It was like a rollercoaster ride to him (wheee!).
I like maritime places. The proximity to a large body of water changes the local culture in countless ways, and to an inlander like myself, that feels exotic and romantic. A friend of mine who was in the Navy and worked on ships told me she’s the opposite, so it may be a case of ‘the grass is always greener…’
🚫📲✈️ The flight got me thinking about how the request by the flight crew to put mobile devices in airplane mode was probably antiquated at this point.
It would be *crazy* to rely on compliance for actual safety. Modern avionics must be able to handle all this RF just fine. Knowing how slowly regulations move, the rule must come from the era when cellphones were bricks that transmitted at much higher power due to the cruder signal processing technology of the time.
But I learned that there’s another reason for this, one that I hadn’t considered:
When phones are not in airplane mode during flights, they can cause significant disruption to ground-based cellular networks. As the plane moves at high speeds, phones constantly attempt to connect to multiple cell towers, rapidly switching connections. This behavior can overload ground networks, potentially affecting service quality for users on the ground. The Federal Communications Commission (FCC) cites this as a primary reason for maintaining airplane mode regulations, as the cell towers are designed for static communities rather than fast-moving aircraft passengers.
There’s also an impact on battery life.
When trying to reach a faraway tower, phones ramp up signal strength, which uses more power. So if you don’t want your phone to be drained by the time you get to where you’re going, you know what to do.
🔮📄✍️🤖 Here’s an idea sparked by a discussion between Morgan Housel and Jim O’Shaughnessy (💚 🥃 🎩).
Morgan: “Read more history and fewer forecasts.”
Jim: “As an addendum, you might want to read old forecasts to see how many were wildly wrong. This could potentially inoculate you against current ones.”
I think it would be great to create an AI system that scours the internet and media and keeps track of predictions and forecasts.
It would compile them in real time, and then keep track of what eventually happens (whenever possible). It would give success rates by individuals and organizations, and try to establish rough base rates for various categories of events. 📊
Just having some tracking and accountability would be a new feature for public discourse. Currently, there's pretty much none and too many soothsayers benefit from the broken clock effect 🕰️ (they make all kinds of predictions, and once in a while when it finally works, they coast for a long time on that "correct call" while everybody forgets all the incorrect ones).
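To make the idea concrete, here's a minimal sketch of what the core of such a forecast tracker could look like. Everything here is hypothetical and illustrative (the class, the pundit names, the claims are all made up): log predictions as they're made, resolve them when the outcome is known, and compute per-forecaster hit rates so the broken clocks can't hide their misses.

```python
from collections import defaultdict

# Hypothetical sketch of the forecast-tracking idea: log predictions in real
# time, record outcomes later, then compute per-forecaster success rates.
class ForecastLedger:
    def __init__(self):
        self.predictions = []  # each entry: forecaster, claim, resolved, correct

    def log(self, forecaster, claim):
        self.predictions.append({"forecaster": forecaster, "claim": claim,
                                 "resolved": False, "correct": None})

    def resolve(self, index, correct):
        # Called whenever reality delivers a verdict on prediction #index.
        self.predictions[index]["resolved"] = True
        self.predictions[index]["correct"] = correct

    def hit_rates(self):
        # forecaster -> [number correct, number resolved]
        tally = defaultdict(lambda: [0, 0])
        for p in self.predictions:
            if p["resolved"]:
                tally[p["forecaster"]][1] += 1
                if p["correct"]:
                    tally[p["forecaster"]][0] += 1
        return {f: c / n for f, (c, n) in tally.items() if n}

ledger = ForecastLedger()
ledger.log("Pundit A", "Recession by Q4")
ledger.log("Pundit A", "Oil at $200 this year")
ledger.log("Pundit B", "Rate cut in September")
ledger.resolve(0, False)
ledger.resolve(1, False)
ledger.resolve(2, True)
print(ledger.hit_rates())  # {'Pundit A': 0.0, 'Pundit B': 1.0}
```

The hard parts in practice would be extracting falsifiable claims from vague punditry and deciding when a prediction has actually resolved, but the accountability layer itself is simple.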
💚 🥃 🚢⚓️ Ideas matter to you. 💡
You recognize that ideas are often undervalued, even among those whose job it is to make decisions, solve problems, or have the right idea at the right time.
Consider this: Even just one good idea per year can be INCREDIBLY valuable.
The impact of ideas extends far beyond the financial: Making better decisions with your spouse or about your kids or health can have huge lifelong positive effects. 💪
This newsletter provides exploration as a service and a world of diverse ideas. We can’t know in advance what will be useful, but we can have fun learning and thinking things through together.
Your next favorite thing might be just one newsletter away. Thank you for your support! 💡
🏦 💰 Liberty Capital 💳 💴
🔍🤖 Perplexity Usage up 7x This Year + Valuation Tripled Since April to $3bn 💰💰💰 + Apple M&A Speculation 🍎
This is from a tiny base, of course, but it does show some product-market fit. And while purely anecdotal, for me, it’s probably the LLM-based product that I use the most (or maybe #2).
One thing I love about Perplexity is that (for paid subs) they allow you to pick which model you want to use, and they very rapidly add new versions. I was able to test Llama 3.1 405bn almost as soon as it came out (I used it for a couple of weeks, and then switched back to Claude 3.5 Sonnet, still my favorite).
Back to the news:
Perplexity AI, an artificial intelligence search start-up, has increased its monthly revenues and usage seven-fold since the start of the year, after closing a new $250mn round of funding.
The AI-powered search engine answered roughly 250mn questions in the last month, compared with 500mn queries for the whole of 2023 [...]
Perplexity recently closed a new $250mn investment from investors including SoftBank’s Vision Fund 2, said people familiar with the deal, tripling its valuation from $1bn in April to $3bn.
Google probably performs 250 million queries in the time it takes me to type this, but Perplexity doesn’t have to reach Google's scale overnight. There are many paths to success (though there are always *many more* paths to failure 🫣).
I wouldn’t be surprised to see them acquired, but I’m kind of hoping that they stay independent and mature into their own thing. It gets a bit boring when everything in tech is controlled by a handful of companies — especially as they transition from founder-led to more bureaucratic — even if there are customer benefits to plugging new technologies into their massive distribution pipes, thus speedrunning the adoption curve.
Perplexity is still early when it comes to monetization. At first, they just had paid subscriptions, but now they’re going more heavily into advertising.
At least revenue seems to be going up somewhat in sync with usage:
Perplexity started the year with $5mn in annualised revenues — a projection of full-year revenues based on extrapolating the most recent month’s sales — and is now making more than $35mn on the same basis, according to a company insider.
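The "annualised revenue" metric in that quote is just the most recent month's sales extrapolated over 12 months, and it's worth sanity-checking the figures, which line up neatly:

```python
# Sanity-check arithmetic on the quoted figures: "annualised revenue" is the
# most recent month's sales multiplied by 12.
def annualised(monthly_revenue):
    return monthly_revenue * 12

start_monthly = 5_000_000 / 12   # ~$0.42mn/month implied at the start of the year
now_monthly = 35_000_000 / 12    # ~$2.9mn/month implied now

growth = now_monthly / start_monthly
print(round(growth, 1))  # 7.0 -- consistent with the reported 7x usage growth
```

So revenue has grown roughly in lockstep with the seven-fold usage growth, which suggests monetization per query hasn't degraded as they scale.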
As LLMs get optimized, hopefully the cost to serve a query goes down, or at least remains stable (as new, larger models eat up the efficiency gains from algorithmic and hardware improvements).
The money they’re raising seems to be going into building infrastructure to be less reliant on others:
Perplexity’s search engine was originally powered by a licensed version of Microsoft’s Bing index of the web, like many would-be Google competitors. But Shevelenko said it no longer uses Bing as its core system.
“We have our own proprietary search index and ranking system,” Shevelenko said. “We use signals from all kinds of engines but we have our own crawler and ranking system.”
That’s good to see.
Wouldn’t it be interesting if regulators struck down Google’s search deal with Apple, and this made Apple turn around and acquire Perplexity, invest a few billions into its infrastructure, and then have it installed as a default on every iPhone and Mac out there in a few years? 🤔 🍎
🐜 Seven years ago, Intel passed on a 15-30% stake in OpenAI for $1bn 🤔💸
Back when Intel wasn’t struggling so much and OpenAI was a small non-profit (right?), this happened:
About seven years ago, [Intel] had the chance to buy a stake in OpenAI, then a fledgling non-profit research organization working in a little-known field called generative artificial intelligence, four people with direct knowledge of those discussions told Reuters.
Over several months in 2017 and 2018, executives at the two companies discussed various options, including Intel buying a 15% stake for $1 billion in cash, three of the people said.
They also discussed Intel taking an additional 15% stake in OpenAI if it made hardware for the startup at cost price, two people said. [...]
OpenAI was interested in an investment from Intel because it would have reduced their reliance on Nvidia's chips and allowed the startup to build its own infrastructure, two of the people said.
In a cruel twist of fate, Intel — long one of the world’s most important companies — now has a market cap about the same as OpenAI’s (both around $80bn) 😬
Because of worse-than-expected results lately, Intel is laying off 15,000 employees and stopping work on all kinds of “non-essential” projects 😬 😬
So why didn’t they do the deal?
[Intel] ultimately decided against a deal, partly because then-CEO Bob Swan did not think generative AI models would make it to market in the near future and thus repay the chipmaker's investment
Short-term financial orientation vs long-term engineering-based vision.
🏦 vs 🛠️
It reminds me a bit of Boeing.
Not that it was necessarily obvious at the time — hindsight is 20/20. But this could’ve been so good for Intel, especially if they had designed accelerator chips for that extra 15%, putting them in a better position to compete against Nvidia and AMD, and possibly making more of the industry standardize on their platform (though that’s also a long shot, since CUDA had been in development for a long time and was already well-established by then).
💵🦖 Is AI Capex a Dollar Auction? 🤖💰⚖️💰
Friend-of-the-show and supporter Doug at Fabricated Knowledge (💚 🥃) has a great analogy for the situation that Big Tech CEOs find themselves in.
It’s based on this classic webcomic from 2009:
Doug writes:
As Sundar said, the risk of underinvesting is much higher than that of overinvesting. If you are locked out of the next platform because your competitor owns it, your business will be relegated to taking a secondary seat in the AI wave. [...]
The problem is that everyone is racing to that next big thing. [...]
Competitive dynamics create many adverse impacts if viewed from a loss mitigation perspective. As the five largest tech companies in the world have everything to lose and not much more to gain, this can distort the auction dynamics. The “risk” of underinvesting is seen as losing the dollar auction, and a guaranteed win you might have overpaid for is much better than not getting anything at all.
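Doug's analogy maps onto Shubik's classic "dollar auction" from game theory, and the escalation trap is easy to simulate. A toy sketch (all numbers illustrative, in cents): a prize is auctioned, but *both* the winner and the runner-up pay their bids, so the trailing bidder always prefers one more bid over eating a sure loss:

```python
# Toy model of Shubik's dollar auction: a prize goes to the highest bidder,
# but BOTH the top bidder and the runner-up pay their bids.
def dollar_auction(prize=100, increment=5, budget=300):
    bids = [0, 0]  # current bids of players 0 and 1
    turn = 0       # whose move it is (always the trailing bidder)
    while True:
        trailing, leading = bids[turn], bids[1 - turn]
        next_bid = leading + increment
        # Quitting forfeits my current bid (a sure loss of `trailing`);
        # topping the bid costs next_bid but wins the prize if the rival quits.
        # Escalate while (next_bid - prize) < trailing -- which, under
        # both-pay rules, stays true all the way up to the budget cap.
        if next_bid - prize >= trailing or next_bid > budget:
            break
        bids[turn] = next_bid
        turn = 1 - turn
    return bids, sum(bids)

bids, total = dollar_auction()
print(bids, total)  # [295, 300] 595 -- jointly paying 595 cents for a 100-cent prize
```

Each individual bid is locally rational, yet the two players together end up paying nearly six times the prize's value. Swap "bid" for "capex" and "budget" for "balance sheet" and you have the Big Tech dynamic Doug describes.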
🐜🏗️🇹🇼 TSMC’s Massive Capex
Speaking of big bucks for small chips:
In terms of capital budget, TSMC said that it approved a scale of US$29.615 billion for a long-term capacity plan formulated in response to market demand and technology development blueprints, and approved an increase in its Arizona subsidiary in the United States within a quota of US$7.5 billion. [...]
Compared with the capital budget of US$17.356 billion approved by TSMC's previous board of directors on June 5 and the capital increase of TSMC's Arizona subsidiary of no more than US$5 billion, the capital budget has increased by US$14.75 billion (about NT$478.4 billion) in just two months [...]
The total capital budget and capital increase of overseas factories approved by the board of directors are 37.1 billion US dollars (Source)
TSMC says it’s responding to market demand signals.
As Dan Nystedt points out: “The $37.1 billion is not necessarily a new capex figure. The money is for long-term capacity plans and it is unclear what will be booked in 2024. TSMC guided $30-$32 billion”
🗣️🎙️ Interview: Mark Zuckerberg
I thought this was a pretty good one:
I listened to it while on vacation, so I don’t have detailed notes on it. I just remember it was good!
🤖 Status Update: Microsoft’s AI Diversification Strategy 🤖
After the clusterf**k with OpenAI last November, Satya Nadella clearly decided to diversify and become less dependent on OpenAI. Here’s the progress made in the past 8+ months:
In February, Microsoft announced a multiyear partnership and investment into French AI start-up Mistral; the following month it paid another peer Inflection — led by Google DeepMind co-founder Mustafa Suleyman — $650mn to license its technology and hire most of its talent; and then in April invested $1.5bn in Abu Dhabi AI group G42.
That same month, it also announced it had built its own family of generative AI models known as Phi-3 — software that is smaller in size and complexity, and cheaper to run than so-called large language models such as OpenAI’s GPT-4. Microsoft has said its Phi-3 models are being used by the likes of BlackRock and Epic, and have outperformed GPT-3.5, an earlier version of OpenAI’s model, which ran its chatbot ChatGPT. (Source)
But the biggest change since then has probably been that Meta has released a series of open Llama models that are competitive with proprietary frontier models. This further commoditizes LLMs.
Microsoft has other advantages — its various distribution channels like GitHub, Office, Windows, and its infrastructure with Azure — but a lot of what they wanted out of the OpenAI partnership was likely access to cutting-edge models that were ahead of everyone else’s.
Of course, the field is highly dynamic, and everything could be very different in a few months.
Maybe GPT-5 will be much better than the competition and include advances that take a while for others to figure out and replicate. Or maybe GPT-5 is a big 🥱 and by then Anthropic’s new large Opus model will be the leader of the field. Maybe Grok 3 comes out of nowhere thanks to Elon’s giant training cluster, or maybe Llama 4 is another big leap ahead (though that won’t be out for a while). We’ll see!
🧪🔬 Liberty Labs 🧬 🔭
🇨🇳 China to Launch 15,000-Satellite Mega-Constellation to Compete with Starlink 📡🛰️🛰️🛰️🛰️🛰️🛰️🛰️
A few days ago, China successfully launched the first batch of 18 satellites for what will be a constellation that they hope will rival Starlink (and Amazon’s Kuiper, if they succeed with it):
The Long March 6A upper stage deployed 18 flat panel Qianfan (“Thousand Sails”) satellites into polar orbit for Shanghai Spacecom Satellite Technology (SSST).
SSST has *big* plans:
108 satellites this year
648 satellites by the end of 2025
15,000 satellites deployed before 2030
They hope to provide global coverage by 2027.
It’s kind of ironic that China, the country known for strict limits on the internet via its Great Firewall, is trying to offer internet access to the rest of the world.
Who will trust this Big Brother in the sky? ¯\_(ツ)_/¯
Clearly, the main goal is military:
Chinese researchers in the People's Liberation Army (PLA) have over the past two years studied the deployment of Starlink in the war in Ukraine and repeatedly warned about the risks it poses to China, should the country find itself in a military conflict with the United States.
But if they can convince other countries to use their constellation, they could also turn it into an offensive weapon (i.e., get people dependent on the infrastructure, then retain the ability to turn it off selectively during conflicts to exert pressure).
SSST’s “Thousand Sails constellation” is one of three “ten-thousand star constellation” plans China is hoping will allow it to close the gap with SpaceX.
I can’t help but wonder how many satellite constellations can simultaneously be up there safely.
Don’t get me wrong, space is a big place, but tens of thousands of high-velocity objects whizzing around in low-Earth orbit, many of them managed by different companies and different countries… I’m pretty sure that risk doesn’t scale linearly.
There’s got to be a point at which things become dicey up there!
Digging around a bit, I found that Starlink satellites performed “26,037 collision avoidance maneuvers” over a two-year period, and it’s expected that by 2027-2028 the Starlink constellation might need to perform “hundreds or thousands of avoidance maneuvers daily.” 🤯
Now imagine in the 2030-2040s when there are many more tens of thousands of sats up there. It’ll take a powerful navigation overseer AI just to keep everything safe.
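The intuition that "risk doesn't scale linearly" can be made concrete with a crude back-of-the-envelope model (ignoring orbital shells, altitudes, and coordination, so the absolute numbers mean nothing; only the scaling matters): if conjunction risk scales with the number of satellite *pairs* rather than satellites, it grows quadratically.

```python
# Back-of-the-envelope: conjunction opportunities scale with the number of
# unordered satellite PAIRS, i.e. quadratically in the satellite count.
def close_approach_pairs(n):
    return n * (n - 1) // 2  # n choose 2

# Roughly: Starlink today, then two plausible 2030s fleet sizes.
for n in (6_000, 20_000, 60_000):
    print(f"{n:>6} satellites -> {close_approach_pairs(n):,} pairs")
# Tripling the fleet from 20,000 to 60,000 multiplies the pair count by ~9x.
```

So a 3x increase in satellites means roughly a 9x increase in potential close approaches, which is why the maneuver counts are projected to explode rather than grow in proportion to the fleets.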
🇨🇳🔌⚛️ China approves 11 new nuclear reactors (and other countries should follow)
China's State Council, the country's cabinet, on Monday greenlighted five new nuclear power projects, a record, as the country is revving up nuclear power construction [...]
The newly approved five nuclear power projects included 11 nuclear power units, which utilize a mix of China's self-developed third- and fourth-generation nuclear technologies, with total investment expected to surpass 240 billion yuan ($33.3 billion) [...]
These new reactors will be located in coastal provinces including East China's Jiangsu, Shandong and Zhejiang provinces, and South China's Guangdong Province, and South China's Guangxi Zhuang Autonomous Region
We keep hearing about how electricity demand is going to rise significantly for the first time in decades as AI datacenters keep multiplying, some industries are moving some production out of China, EVs displace ICEs, and heat pumps displace some natural gas heating.
We need a lot more reliable electricity if only to stay at the forefront of AI — but the other less sexy things are just as important to keep growing the pie for everyone. A world of clean energy abundance is way better for all than one of scarcity.
One of the most interesting aspects of this announcement is this reactor:
One key project is the nuclear energy heating power plant in Xuwei, Jiangsu Province, the world's first to combine high-temperature gas-cooled reactors with pressurized water reactors. It will provide low-carbon industrial steam to the Lianyungang petrochemical base and advance the decarbonization of energy-intensive industries, operator China National Nuclear Corp said.
Industrial heat is a HUGE sector that we rarely hear about.
Even if we moved the power grid and transportation away from fossil fuels (which is a gargantuan undertaking), we’d still need ways to produce a lot of high-grade heat for various industrial processes.
Right now most of it comes from natural gas. We’re not going to displace that with solar panels, so micro-reactors are probably the best bet (there are already a few projects for that underway in the US, I wrote about them in the past).
🛢️🌎 Podcast: A World Tour of Oil & Gas
I really enjoyed this interview with Jimmy Fortuna of Enverus, a company that provides data and software to the energy industry:
It’s a very sober look at all the main players (OPEC, Russia, Canada, USA, etc.) and where things stand right now. Interestingly, while in decades past the most impactful player was OPEC, lately I find Canada and the US to be where the action is.
If Canada can get more egress for its oil, it can be a dependable source for the US, helping counter-balance some of the less friendly nations out there. And the US’ fracking revolution has been nothing short of incredible, making the country swing from decline to the biggest producer in the world in a relatively short period of time.
We still need to move away from fossil fuels, but it’ll take a while. During that period, we need to ensure security of supply or it’s the most vulnerable people on the planet who will suffer the most (energy is life, never forget it).
🧠 How Psilocybin Resets Neural Networks 🍄
Very cool to be able to start to understand what is going on:
Psychedelic drugs can reliably induce powerful changes in the perception of self, time and space via agonism of the serotonin 2A receptor (5-HT2A receptor). In clinical trials, a single high dose of psilocybin has demonstrated immediate and sustained symptom relief in depression and addiction [...]
Persisting effects of psychedelics include an increase in the expression of genes that contribute to synaptic plasticity, and an increase in the growth of neurites and synapses in vitro and in vivo [...] Rodent models have suggested that the burst of plasticity in the medial frontal lobe and anterior hippocampus may be key to psilocybin’s antidepressant effects [...]
Psilocybin acutely caused profound and widespread brain network changes [...]
Psilocybin-driven desynchronization was observed across association cortex but strongest in the default mode network (DMN), which is connected to the anterior hippocampus and thought to create our sense of self. [...]
The acute brain effects of psilocybin are consistent with distortions of space-time and the self. Psilocybin induced persistent decrease in functional connectivity between the anterior hippocampus and cortex (and DMN in particular), lasting for weeks but normalizing after 6 months.
Persistent suppression of hippocampal-DMN connectivity represents a candidate neuroanatomical and mechanistic correlate for psilocybin’s pro-plasticity and anti-depressant effects.
🎨 🎭 The Arts & History 👩🎨 🎥
🇺🇸🪦 The Tomb of the Unknown Soldier has been guarded continuously, 24/7, since 1937 🫡
The Tomb of the Unknown Soldier is a historic funerary monument dedicated to deceased U.S. service members whose remains have not been identified. It is located in Arlington National Cemetery in Virginia, United States.
It has been guarded *continuously* for 87 years:
A civilian guard was first posted at the Tomb on November 17, 1925, to prevent, among other things, families from picnicking on the flat marble slab with views of the city. A military guard was first posted on March 25, 1926. The first 24-hour guard was posted on midnight, July 2, 1937.
It’s a high honor to guard the tomb. Fewer than 20% of volunteers are selected for training, and only a fraction of those become guards:
Since 1948, the tomb guards, a special platoon within the 3rd U.S. Infantry Regiment (The Old Guard), work on a team rotation of 24 hours on, 24 hours off, for five days, taking the following four days off. A guard takes an average of six hours to prepare his uniform—heavy wool, regardless of the time of year—for the next day's work. In addition to preparing the uniform, guards also conduct physical training, tomb guard training, participate in field exercises, cut their hair before the next workday, and at times are involved in regimental functions as well. Tomb guards are required to memorize 35 pages of information about Arlington National Cemetery and the Tomb of the Unknown Soldier, including the locations of nearly 300 graves and who is buried in each one.
The Tomb of the Unknown Soldier Guard Identification Badge is the third least-awarded qualification badge of the United States Army; as of December 26, 2023, they number 868, including 26 which have been revoked and 9 "administrative errors".
It is preceded by the 103 Military Horseman Identification Badges and the 17 Astronaut Badges.
Another cool bit of trivia:
The soldier "walking the mat" does not wear rank insignia, so as not to outrank the Unknowns, whatever their ranks may have been.
🎥 Why ‘Lawrence of Arabia’ Still Looks so Good 62 Years Later 🎞️
I have to admit that I still haven’t seen it. I’ve been meaning to for a long time, and I know it’s a big hole in my film knowledge.
I intend to rectify that soon and I will share my thoughts about it with you when I do.
The dollar auction analogy for AI capex is very interesting. As an investor with an unlimited universe of options, wouldn’t the logical approach be to sidestep this dangerous game? And yet the vast majority of investors are doing the opposite…