350: Constellation Software, Nvidia, Twitter Ads & Paywalls, Microsoft Copilot, FTX, China, and Severance
"None of this is static."
The only normal people are the ones you don't know very well. —Alfred Adler
🛀💰💸 I wonder what % of people who make it to being billionaires ever lose more than 90% of their personal wealth over their lives.
Have you ever seen stats on this? Has this info been compiled?
Recent developments remind me of Eike Batista. His reported net worth went from US$35 billion in 2012 to around $200 million just a year later… and in 2017 he was sentenced to 30 years in prison 😬
👨🏻⚕️🩸 There’s an old medical joke that also applies to the markets.
It goes something like this:
All bleeding eventually stops.
🦔🕳 Bear with me for a second, this is kinda conceptual:
Human society has a “shape”.
Some people have personality types (the output of the ‘nature + nurture + life experience/choices’ equation) that fit well in society’s shape. The well-rounded pegs in the round hole, so to speak.
Others have personalities that are quite spiky, with large deviations from the norm in intensity, in one or multiple directions.
Some of these are big disadvantages with few upsides, like being someone prone to chronic severe depression, for example.
Other spikes have the potential to be tremendous gifts, but they nonetheless don’t fit well in society. In other words, “your biggest strengths can be your biggest weaknesses”.
Many creative people — in the broadest sense of the term: artists, entrepreneurs, scientists, etc — just can’t turn it off, so many areas of their lives will suffer, but they’re obsessed enough with their craft and with creating that they’ll get the reps in, build up the skills, get the shots on goal, and can end up doing great things.
Others are so anti-authority that they’re constantly fighting the system at school and work, so they start their own thing and create things that are very useful for the society that tried to tame them.
None of this is static.
People can change. You can work on sanding off your weaknesses and accentuating your strengths. But society’s shape also changes over time, with the rate of that change also modulating (mostly accelerating in the modern era).
Take the very same person, have them be born today, 200 years ago, or 5000 years ago, and the “spikes” will get very different mileage.
Maybe 5000 years ago they made you a bit more interesting around the campfire and, on average, this favors you doing well. Or maybe you have spikes that others dislike, decreasing your chances of survival at a time when ostracism can mean death. No wonder we naturally try so hard to fit in.
Maybe 200 years ago some spikes mean that you start your own ranch out in the Western US or whatever. Maybe today some spikes mean that you start an online business with millions of customers, or that you end up in prison.
So many combinatorial permutations! Ain’t life something? ¯\_(ツ)_/¯
🛀 If it’s true that the limits of my language are the limits of my thinking, then how do I get better at language?
What’s the deliberate practice here? What’s embedded in the languages that I know (French, English), and how could I think differently in other languages? 🤔
🧀 You know I’m a parmesan fan. I mean, yea. How can I not be, it’s the best!
Great video by Ethan Chlebowski, going in-depth on The Cheese, the real Italian stuff and various imitations, how each does in sauces, as a garnish, whether it’s worth paying up, etc:
💚 🥃 👩🎨 I'm thinking of it kind of like we're sitting in a pub somewhere, and I'm talking about various things that interest me or that I've learned about recently.
If you like it, once in a while, you buy me a drink to keep me talking and show appreciation. That’s what becoming a paid supporter is. 🍻
Liberty’s Highlights is reader-supported. To support my work, consider becoming a free or paid supporter. 👩🎨
📊 A Word From Our Sponsor: Koyfin 🕵️♂️📈
Koyfin gives you professional-grade data and analytics with a user interface from the 21st century to research companies like never before. New features are frequently added, so it gets better all the time!
Quickly stay on top of what's happening in your portfolio and understand how the market is changing with Koyfin. 🕵️♂️📈
And because you’re awesome, you deserve a discount:
⭐️ Use this link to get 10% off the 1st year on any paid plan* ⭐️
*Offer only valid for new users
🏦 💰 Liberty Capital 💳 💴
✨ Constellation Software’s yearly pace of acquisitions since 2015 ✨
It’s a grand, long-term experiment. Can you take something that used to be done by a handful of people at HQ, and then create a fractal machine that over time reproduces that function at recursive levels of a fast-growing organization?
Each portfolio manager inside a business which is inside an operating group is now analogous to what Constellation was in its earlier days, with Mark and Bernie and a few others keeping track of hundreds of VMS businesses, waiting for the occasion to buy the right companies at the right price.
So far the grand experiment seems to be working, and it looks like we’re also about to find out how counter-cyclical their M&A is, with rapidly falling software valuations and a deteriorating economy.
🇨🇳 🔐 Nvidia has a new datacenter chip for China
Looks like they created a version of the A100 with slower interconnects:
At least two Chinese websites by major server makers offer the A800 chip in their products. One of those products previously used the A100 chip in promotional material.
A distributor website in China detailed the specifications of the A800. A comparison of the chip capabilities with the A100 shows that the chip-to-chip data transfer rate is 400 gigabytes per second on the new chip, down from 600 gigabytes per second on the A100. The new rules restrict rates of 600 gigabytes per second and up.
“The A800 looks to be a repackaged A100 GPU designed to avoid the recent Commerce Department trade restrictions,” said Wayne Lam, an analyst at CCS Insight, basing his comments on the specs shared by Reuters, and noting that eight is a lucky number in China.
A little surprised they went with 400 and not something closer to 600, though I guess you also don’t want to push it — regulators may go 🤨 if you come out with a chip that does 599.999 GB/s (it reminds me of 9.9 hp boat motors, because some regulations apply to motors of 10 hp and up…).
I guess there’s probably a technical reason why 400 makes sense if you want to modify things really fast and not have to redesign too many parts of the system.
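For what it’s worth, the compliance check really is as simple as it sounds. A quick sketch using the numbers from the Reuters quote (the 600 GB/s cutoff is from Reuters’ description of the rules, not an official spec I’ve verified):

```python
# The quoted chip-to-chip transfer rates, checked against the export threshold.
THRESHOLD = 600  # GB/s; the rules restrict rates of 600 GB/s and up (per Reuters)
chips = {"A100": 600, "A800": 400}  # chip-to-chip transfer rate in GB/s

for name, rate in chips.items():
    status = "restricted" if rate >= THRESHOLD else "exportable"
    print(f"{name}: {rate} GB/s -> {status}")
```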
🐦 Twitter Blue’s “half the ad load” problem 💸
Casey Newton writes:
Other employees have warned about a secondary feature of the new Blue that Musk added at the last minute: reducing ad load in the Twitter app by half. Estimates showed that Twitter will lose about $6 in ad revenue per user per month in the United States by making that change, sources said. Factoring in Apple and Google’s share of the $8 monthly subscription, Twitter would likely lose money on Blue if the ad-light plan is enacted.
This makes sense if you consider that most people signing up for Twitter Blue will be power-users (aka addicts), likely from Twitter’s wealthier markets and demographics.
So while the ad ARPU for the *average* Twitter user may be X, the ad ARPU for the *average Twitter Blue* user is no doubt higher, possibly multiples of it.
I wouldn’t be surprised if the “half the ad load” thing was pulled, or if to get it you had to sign up for the service outside of Apple and Google’s mobile stores so that the economics make a bit more sense…
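To make the back-of-the-envelope math concrete (the 30% app-store cut is my assumption; the article only mentions “Apple and Google’s share” without a number):

```python
# Back-of-the-envelope Twitter Blue economics, per US subscriber per month.
# The 30% Apple/Google cut is an assumption; the $8 price and the ~$6
# ad-revenue estimate are from Casey Newton's reporting.
subscription = 8.00
store_cut = 0.30                    # assumed standard app-store fee
ad_revenue_lost = 6.00              # estimated ad revenue lost by halving ad load

net_subscription = subscription * (1 - store_cut)   # $5.60 reaches Twitter
net_change = net_subscription - ad_revenue_lost     # -$0.40 per subscriber/month

print(f"Twitter keeps ${net_subscription:.2f}, gives up ${ad_revenue_lost:.2f} in ads")
print(f"net: ${net_change:+.2f} per Blue subscriber per month")
```

So at those estimates, each in-app Blue subscriber is slightly underwater before you even count the cost of running the feature.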
🔑 Paywalling all of Twitter? 🔓
It also sounds like Musk and his advisors have been listening to Ben Thompson (💚 🥃 🎩) podcasts recently, because they’re apparently considering putting the whole enchilada behind a paywall:
all of that could be a prelude to the biggest change of all: charging most or all users a subscription fee to use Twitter.
Both Musk and Sacks have discussed the idea in recent meetings... One such plan might allow everyone to use Twitter for a limited amount of time each month but require a subscription to continue browsing, the person said.
It could not be learned how serious Musk and Sacks are about the paywall; Twitter did not respond to a request for comment. It also does not appear imminent, as the Blue team is wholly occupied with the launch of expanded verification.
While it’s an interesting idea, if you think about Twitter as a whole and not just sub-communities like Fintwit or media Twitter, I think it would probably be a fatal blow to the company.
Twitter isn’t primarily a tech company, it’s a community, and the main task of the company is fostering that community.
While users *are* addicted, it’s very hard to get civilians/‘normies’ to pay for anything online, and putting up a paywall may just be the excuse that many are looking for to break their addiction…
👩🏻⚖️ ‘Microsoft sued for open-source piracy through GitHub Copilot’ 🖥 ⌨️🤖🤓
Programmer and lawyer Matthew Butterick has sued Microsoft, GitHub, and OpenAI, alleging that GitHub's Copilot violates the terms of open-source licenses and infringes the rights of programmers. […]
The tool was trained with machine learning using billions of lines of code from public repositories and can transform natural language into code snippets across dozens of programming languages. [...]
Open-source licenses, like the GPL, Apache, and MIT licenses, require attribution of the author's name and defining particular copyrights.
However, Copilot is removing this component, and even when the snippets are longer than 150 characters and taken directly from the training set, no attribution is given. (Source)
I’m not sure what I think of this.
I have more clarity when thinking about training vs inference of generative AI models when it comes to things like visual art.
But for code, the atomic units of information are, in some ways, less malleable than color palettes and brushstrokes. You can make a digital painting in the style of Van Gogh that is unique.
But if an AI coding assistant spits out a bunch of tokens in a row that recreates exactly someone’s code snippet from an open-source project, is that a problem?
(these models “think” in patterns and “write” an output by guessing which series of such pattern “tokens” are most likely to go together)
Because some bits of code contain very unique information, such as API keys, if one of those makes it from the training set into the trained model and gets reproduced verbatim in the output, is that recognizable API key covered by fair use?
What’s fair use and what isn’t seems to be the crux.
Do humans who learn to code by looking at other people’s code sometimes accidentally reproduce open-source code they’ve seen somewhere without proper attribution? Is that fair use?
Is it the same or completely different when it comes to Copilot and Ghostwriter and all these AI coding assistants?
If courts decide that this isn’t kosher, what’s the solution?
To expunge all open-source code from training sets, making the tools worse (which isn’t going to make coders happy)? To put together some pool of money and pay open-source projects based on usage of their code (as a % of the training set? By tracking output that hashes to something that already exists out there?)?
Is this manageable, or just too much overhead and legal complexity? There’s an endless number of open-source projects out there — some with just one contributor — and with a legal precedent, any of them could decide to sue at any moment…
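That hash-tracking idea could look something like this, as a purely hypothetical sketch (the 150-character window mirrors the figure from the lawsuit coverage; none of this reflects how Copilot actually works):

```python
import hashlib

# Hypothetical sketch of "hash the output against known open-source code":
# normalize whitespace, hash fixed-size windows of known snippets, then flag
# any generated span that matches. Thresholds and normalization are
# illustrative only.
WINDOW = 150  # chars; mirrors the 150-character figure from the lawsuit coverage

def normalize(code: str) -> str:
    return " ".join(code.split())

def index_snippets(snippets):
    """Hash every WINDOW-length slice of each known open-source snippet."""
    seen = set()
    for s in snippets:
        s = normalize(s)
        for i in range(max(1, len(s) - WINDOW + 1)):
            seen.add(hashlib.sha256(s[i:i + WINDOW].encode()).hexdigest())
    return seen

def flag_verbatim(output: str, index) -> bool:
    """True if any WINDOW-length slice of the output appears in the index."""
    out = normalize(output)
    return any(
        hashlib.sha256(out[i:i + WINDOW].encode()).hexdigest() in index
        for i in range(max(1, len(out) - WINDOW + 1))
    )
```

Even this toy version hints at the overhead problem: you’d need to index every window of every open-source repo you trained on, and whitespace normalization alone wouldn’t catch renamed variables or reordered lines.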
More questions than answers for now… ¯\_(ツ)_/¯
h/t Friend-of-the-show Muji (💾)
What the hell is going on in cryptoland?
For context: Read Byrne’s Very Good Explanation™️
🇨🇳 China’s Electricity Problem 🔌⚡️ 😬
This makes it very clear why China uses *so much coal*, and why the *only* thing that can displace coal on a massive scale in the country is nuclear.
The math just doesn’t add up with anything else.
Sure there’s solar and wind and hydro too (though the best spots for these are located far from population centers), but if you want to remove lots of reliable, baseload, on-demand coal, nuclear is the only realistic option.
You won’t run China on batteries any time soon.
🧪🔬 Liberty Labs 🧬 🔭
🕸 The DPU Era 🕸
Jensen Huang has been bullish enough on the SmartNIC/Data Processing Unit in recent years that he bought Mellanox and turned Nvidia into a ‘3-chip company’ (making GPUs, CPUs, and DPUs), but he’s far from alone in being focused on the space.
How did we get here? Nextplatform has a good piece:
The fact that chips are pressing up against reticle limits and CPU processing for network and storage functions is quite expensive compared to offload approaches have combined to make the DPU probable. The need to better secure server workloads, especially in multitenant environments, made the DPU inevitable. And now, the economics of that offload makes the DPU not just palatable, but desirable.
There is a reason why AWS invented its Nitro DPUs, why Google partnered with Intel to create the “Mount Evans” IPU, why AMD bought both Xilinx and Pensando (which both have a DPU play), and why Nvidia bought Mellanox Technology.
The way I look at it, the modern networked environment is getting way more complex and compute-intensive.
We’re in a world of Zero Trust security, where each connection is getting separately checked for permissions and identity, everything is encrypted multiple times (tunnels within tunnels), and services and apps are getting more complex and less monolithic, with micro-services, serverless, API calls everywhere…
Not only is the traffic going in and out of data-centers going up rapidly, but horizontal, East-West traffic *inside* data-centers is exploding too…
All this results in a ton of traffic that needs to be properly managed and secured, and doing all this on CPUs is costly — best to accelerate it on systems designed to be very very good at doing just this one thing.
Given the state of electricity pricing in the world – it is going up faster in Europe than in North America – the savings in power from using DPUs will be more or less. But according to John Kim, director of storage marketing at Nvidia who put together the price/performance comparisons of clusters using and not using DPUs, even at the 15 cents per kilowatt-hour that prevails in California, the addition of DPUs to systems in a cluster more than pays for itself with the power saved through server footprint shrinkage as cores are released from the server nodes that had been running network and storage functions.
As Jensen would say, “The more you buy, the more you save!”
What is immediately obvious from this comparison is that the addition of the BlueField-2 DPU to each of the 10,000 nodes pays for itself in the reduction of nodes needed to support the IPSec encryption and decryption workload. By Nvidia’s math, the capital expenditures on the server hardware are actually 2.4 percent lower.
On top of that, there is a $13.1 million savings in server power and, assuming a power usage effectiveness of a very modest 1.5, there is another $6.6 million in savings in cooling in the datacenter from the cluster consolidation engendered by the DPU. Add up the capex and power savings, and the whole shebang shaves $22.2 million in savings over three years. That’s a 15% savings in total cost of ownership – and that is without taking into account any difference in performance or savings in datacenter real estate or having fewer servers to manage.
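The quoted numbers hang together, by the way. A quick sketch of the arithmetic (the capex dollar figure and the implied baseline TCO are my inferences, not stated in the article):

```python
# Reproducing the quoted DPU savings arithmetic (10,000-node cluster, 3 years).
power_savings = 13.1      # $M saved on server power (from the quote)
pue = 1.5                 # power usage effectiveness (from the quote)
total_quoted = 22.2       # $M total savings (from the quote)

# Cooling overhead avoided scales with PUE: every watt of IT load saved
# also saves (PUE - 1) watts of cooling/overhead power.
cooling_savings = power_savings * (pue - 1.0)   # 6.55, which the article rounds to $6.6M

# The article doesn't state the capex savings in dollars; inferred from the total.
capex_savings = total_quoted - power_savings - cooling_savings   # roughly $2.5M

# "15% of total cost of ownership" implies a baseline three-year TCO of roughly:
implied_tco = total_quoted / 0.15   # roughly $148M

print(f"cooling ~${cooling_savings:.2f}M, capex ~${capex_savings:.2f}M, "
      f"implied 3-yr TCO ~${implied_tco:.0f}M")
```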
And this is with the current generation.
Nvidia has previewed its roadmap for DPUs, and we’re still in a steep part of the curve when it comes to improvements. This isn’t a slow-moving, mature tech yet.
🇺🇸 The U.S. Coal Tide… 🏭🪨🏭🪨🏭🪨🏭🪨🏭🪨🏭🪨🏭🪨
…tide goes in, tide goes out 🌊
🎨 🎭 Liberty Studio 👩🎨 🎥
✄🧠 Early thoughts on ‘Severance’ Ep. 1-2-3 🍎📺
This is spoiler-free on the plot of the show.
My wife and I watched the first three episodes of ‘Severance’ (2022, Apple TV).
Great start, but I’ll reserve judgement until I know more.
I do love the visuals. Very Kubrick, with Wes Anderson symmetry.
I love the set design and the color palette. The attention to detail in everything is amazing. I wish I could’ve been a fly on the wall during the production of this show — especially in the writers’ room and storyboarding/design meetings.
The mixed technology is very intriguing. The way they create a weird out-of-time world that feels familiar and alien at the same time is disorienting.
I’m trying to avoid spoilers, so I won’t research it, but it does feel very much like a Philip K. Dick type of story.
Episode 2 is still setting the stage, getting more unsettling and foreboding. I’m reminded just a little of the universe of the film The Lobster (2015, very disturbing).
In episode 3, I started getting a bit of a BioShock vibe to some of the world-building.
I hope there are satisfying payoffs to all the mystery boxes and this isn’t a show that can plant seeds but not harvest them (*cough* Lost *cough*).