514: Zuck’s AI Strategy, SearchGPT vs Google, GE & Danaher Follow-Up, Neutron Star, Deep Sea Dark Oxygen, And Howl’s Moving Castle
"Have I accidentally started a multi-generational family tradition?"
Your life is purchased by where you spend your attention.
-James Clear
🇫🇮 🥱😴💤🏊♂️ Humanity is full of wonderful local traditions.
Here’s one from Finland that I recently learned about:
National Sleepy Head Day (Finnish: Unikeonpäivä; Swedish: Sjusovardagen) is a yearly celebration in Finland observed July 27. This holiday is related to the legend of the Seven Sleepers of Ephesus, but rather than a religious festival, it is more of an informal celebration.
The tradition of the Sleepy Head Day traces back to the Middle Ages, when the belief was that the person in the household who slept late on this day would be lazy and non-productive for the rest of the year.
Here’s the part I love:
In the old days, the last person sleeping in the house (also dubbed as the "laziest") could be woken up by using water, either by being thrown into a lake or the sea, or by having water thrown on them.
Thrown in a lake. That’s real stakes!
In the city of Naantali, a Finnish celebrity is chosen every year to be thrown into the sea from the city's port at 7 a.m. The identity of the sleeper is kept secret until the event. People who are chosen have usually done something to the benefit of the city. Every city mayor has thus far been thrown into the sea at least once.
We should start doing this in more places. Even if silly and arbitrary, these shared moments matter for community building!
✍️🦙🤖🕵️♂️ Perplexity now has Llama 3.1 (405bn) as an option, so I’ve made it my default to get some first-hand experience with this new frontier model. It’s replacing Anthropic’s Claude 3.5 Sonnet as my primary model.
Benchmarks can only tell you so much about a model.
They measure certain things, but it’s always hard to know if some models have been “tuned to the benchmarks” more than others. Of course, all the AI labs will claim that they would never do such a thing… at least not knowingly.
Only first-hand experience will help you learn what the ‘flavor’ and ‘personality’ of a model is, and whether it works with your own style and needs.
A benefit of Perplexity is that you can easily rerun queries in multiple models. It’s a good way to A/B test models while keeping as many variables as possible the same.
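If you'd rather script this kind of head-to-head than click through a UI, here's a minimal sketch. It assumes an OpenAI-compatible chat-completions endpoint (Perplexity exposes one at api.perplexity.ai); the model identifiers below are placeholders, so substitute whatever names your provider actually lists.

```python
# Minimal A/B test: same prompt, same settings, two models.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                # placeholder
    base_url="https://api.perplexity.ai",  # assumption: OpenAI-compatible endpoint
)

PROMPT = "Explain 'commoditize your complements' in three sentences."
MODELS = ["llama-3.1-405b", "claude-3.5-sonnet"]  # hypothetical identifiers

for model in MODELS:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # fix sampling so the model is the main variable
    )
    print(f"--- {model} ---\n{reply.choices[0].message.content}\n")
```

Pinning the temperature (and a seed, where supported) keeps sampling variance from muddying the comparison, so differences in the outputs mostly reflect the models themselves.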
In my early tests, I think I prefer Claude 3.5 Sonnet's writing style over Llama 3.1's. But it could be that I'm more used to Claude and need some time to get used to Llama.
Sonnet is also significantly faster, but that’s to be expected from a smaller model.
🙅♂️🤷♀️ I should make a list of times in my life when I would’ve done better by doing nothing rather than doing whatever it is I did.
I think it would be a long list 🤔
🍞🍽️ Family habits are fun to think about, especially when they are things that you don’t notice until you compare them with what others are doing.
A random example: when I make toast for my kids for breakfast, I always cut each slice in half and put something different on each piece (e.g. cream cheese on one half and peanut butter on the other). I started doing this because I figured they'd enjoy the variety.
I’ve been doing it since they were very young, so to them it’s just the way things are.
It makes me wonder if they’ll keep doing that all their lives and do it for their kids who will in turn pass it on... Have I accidentally started a multi-generational family tradition?
What else am I doing as a parent that could end up sticking around for a long time? What have I picked up from my parents? Anything that I’ve been doing unthinkingly forever but that is bad and I should drop?
🏦 💰 Liberty Capital 💳 💴
🦙 Zuck’s AI Strategy: Commoditize Your Complements 🤖
In 2002, Joel Spolsky came up with “commoditize your complements” in a blog post. He wrote:
demand for a product increases when the price of its complements decreases. In general, a company’s strategic interest is going to be to get the price of their complements as low as possible. The lowest theoretically sustainable price would be the “commodity price” — the price that arises when you have a bunch of competitors offering indistinguishable goods. So:
Smart companies try to commoditize their products’ complements.
If you can do this, demand for your product will increase and you will be able to charge more and make more.
He gives an example of it in practice at the time:
Understanding this strategy actually goes a long, long way in explaining why many commercial companies are making big contributions to open source. Let’s go over these.
Headline: IBM Spends Millions to Develop Open Source Software.
Myth: They’re doing this because Lou Gerstner read the GNU Manifesto and decided he doesn’t actually like capitalism.
Reality: They’re doing this because IBM is becoming an IT consulting company. IT consulting is a complement of enterprise software. Thus IBM needs to commoditize enterprise software, and the best way to do this is by supporting open source.
Today’s version would read as follows:
Headline: Meta Spends Billions to Develop Open Source LLMs
Myth: Mark Zuckerberg loves open source and wants to fight back against proprietary and closed AI.
Reality: Meta makes money from ads. People use its products to look at content and to share their own. By making content production as inexpensive and low-friction as possible with AI tools, Meta hopes to increase usage of its platforms. It also uses AI for custom timelines, moderation, and ad targeting, so by commoditizing that software, it hopes to leverage the contributions of others to make these tools (complements to its main business) better and cheaper. By giving models away, it's also cutting off the oxygen supply of potential competitors who need to monetize LLMs directly. It wants to avoid the emergence of a new platform that it doesn't control.
To be clear: I’m not saying it’s purely cynical!
However, the fact that the strategy benefits Meta's bottom line makes it hard to determine the true motivations behind it. If this open-source AI move were bad for Meta and Zuckerberg did it anyway, it would be pretty obvious that he was doing it because of some high-minded ideals.
As things stand, there’s a good chance he believes in what he’s saying AND it’s good for business, but to paraphrase Upton Sinclair, it’s generally not hard to make someone believe in something when their wallet is aligned with that thing.
The biggest factor for this strategy going forward will be whether most of the improvement in LLMs in the coming years comes from scaling, or if there are new algorithmic breakthroughs — what’s the next Transformer? — that are not easy to copy/fast-follow.
For example, if GPT-5 or Claude 4 is significantly better than the competition due to some non-obvious innovation and it takes a few years for Meta to catch up, it could be a moment of de-commoditization/differentiation.
📃📃📃🔍🤖 OpenAI Announces SearchGPT (Should Google Worry? 🤔)