344: Apple 1984 Ad, Munger & China, Samsung Foundry, Tiktok Crackdown, Foxconn & Pegatron, Africa Population, and Military Logistics
"In the Bizzaro world of a few years ago"
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. —Samuel Beckett
🇨🇳 🤔 I wonder if Munger is changing his mind on China, at least a little, because today's China is certainly not the China of 15 years ago...
My perception is that he’s a fan of the technocratic Singapore model (Lee Kuan Yew) and thought that this was mostly what China was doing for a while. But it’s hard to argue that this is still what they’re doing.
To be clear, hindsight is 20/20. China could’ve taken a different path some years ago and become a kind of big Singapore. Alas, that’s not the world we live in.
I wouldn't want to be Elon Musk: he just wants to make cool EVs and focus on battery chemistries and large aluminium casting presses (yes, I write AL with the extra “i” 🇨🇦), but instead he has to deal with Chinese politics...
Owning Twitter (🐦) certainly won’t reduce the amount of politics in his life, that’s for sure. 😬
📖 After a hiatus during which we read other things (like the Amulet series), I finished the second Warrior Kid book by Jocko Willink with my 8yo boy.
Such great books, really timeless lessons about discipline, growth mindset, leadership, frugality, resourcefulness, constant learning, healthy living, etc.
👩‍🔬👩‍💻🤖📑 Many scientific publications are reviews of existing studies that don’t feature any new original research. They’re meta-studies that look at previous papers to try to summarize the state-of-the-art in a field, or draw stronger conclusions from more data, or expose contradictions, or whatever.
It seems to me that because these reviews/meta-studies are entirely based on existing documentation and generally don’t require any new lab work, they’re particularly well-suited for AI.
In the same way that some LLMs can summarize complex texts, we can probably expect science-tuned models to produce machine-generated reviews of various fields at some point. And while they may have downsides vs. human reviews, there are upsides too: the ability to review vast quantities of data and literature quickly and inexpensively (meaning many reviews may exist that simply would never have been written by humans), and the ability to detect statistical anomalies, fraud, or hidden correlations that human pattern-seekers may have missed.
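To make the idea concrete, here’s a toy sketch of what a machine-generated mini-review could look like with today’s tools, using the OpenAI Python client. The model name, prompt, and the `abstracts` list are placeholders of mine; a real system would need retrieval of the full papers, proper citations, and verification.

```python
# Toy sketch (my assumptions, not a real product): ask an LLM to draft a
# mini-review from a handful of paper abstracts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstracts = [
    "Abstract of paper 1 ...",
    "Abstract of paper 2 ...",
    "Abstract of paper 3 ...",
]

prompt = (
    "You are drafting a short literature review. Summarize the state of the "
    "art, note where the studies agree or contradict each other, and flag "
    "anything that looks statistically anomalous:\n\n" + "\n\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for whatever science-tuned model you'd use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The hard parts, of course, are everything around this call: gathering the right papers, keeping the model honest, and checking its claims.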
🛫📱 Noah Smith (🐇):
"We wanted flying cars, instead we got 140 characters"...Yeah because that's what people actually wanted. If you had a flying car, you'd look out the windows for about a week, then you'd be back to tweeting.
I know the previous tweet is true because we've had flying buses all my life, and everyone ignores the fact that they're flying, pulls the window shade down, and watches a crappy movie they can barely hear.
Touché!
(not to mention that if people had flying cars, cities would be terrifying. Between the noise generated by the amount of force required to lift vehicles and the fact that one bad driver or accident could ruin a lot of people’s day, including by falling on your house…)
💚 🥃 🧝🏻 Thanks to J.L., A.B., F.C., and A.L. for becoming paid supporters since the last edition!
You’re making me fire up Pixelmator Pro and add some digital orange paint to that telethon thermometer! 🌡
🏦 💰 Liberty Capital 💳 💴
🍎 That time in 1984 when Apple bought every single ad in Newsweek to explain the Macintosh 💡
You can see every page on this site.
All in all, they bought 40 full pages to explain their new product — at the time, pretty much every single feature was new and alien to the public, so they had to explain what a computer mouse was, what a graphical interface was (“To tell Macintosh what to do, all you have to do is point and click.”), etc.
What’s most interesting is that it’s half ad, half ‘how-to’ manual.
It works because you learn both that the thing exists in the first place and how to use it — and it sounds simple enough that you can imagine yourself doing it, unlike the command-line interfaces of computers at the time.
You can be sure that Steve Jobs himself went over and approved every single word and image on those pages. Watch and learn. 🪄 AAPL
🇰🇷 Samsung’s foundry expansion plans to compete with TSMC 📈
Samsung’s unit for contract chipmaking, or foundry, plans to increase its mature nodes and specialty nodes by more than 10 units by 2024, according to industry sources on Sunday. The South Korean tech giant’s production capacity of the mature and specialty nodes will rise by 2.3 times by 2027 from the level of 2018. [...]
The mature node is for older chip production technologies such as 10 nanometers (nm), 14 nm, 28 nm, 65 nm and 180 nm processes to be used in vehicles, consumer electronics components and other products that do not require state-of-the-art processes. The specialty node is the customized legacy node to meet customers’ requests. The mature and specialty nodes account for more than half of the global foundry business.
TSMC is known to use “more than half of its capacity of 28 million wafers a month for the legacy and specialty nodes.” TSM
Could a crackdown on TikTok make it even more competitive with Meta? 🤔
It’s easy to assume — because of growing US-China tensions — that at some point, TikTok will be banned in the US and that this would help Meta.
But is an outright ban the most likely scenario?
In the Bizarro world of a few years ago, we almost saw a spin/sale of TikTok ex-China to Oracle and Walmart (?!?), and also to Microsoft, which would’ve made a bit more sense…
It seems like this kind of forced spin is still possible. While it would be distracting for the company for a while, once TikTok is fully extricated from China and operating as a US-based company, it could be an even fiercer competitor for Meta, because it won’t be as easy to attack (and a US TikTok will probably be able to hire talent in North America and Europe more easily than it can now).
I could imagine them spinning out the global app *without* the ByteDance algorithms for tens if not hundreds of billions, and then having the newco recreate the algorithms (possibly with partners like Microsoft). So you’d get just the users/brand/graph, and possibly raw usage data to retrain models on.
Of course, it’s kind of a long shot that everything would line up, but it’s not entirely implausible, I think 🤔 META
Shutterstock partners with OpenAI & DALL-E for AI-generated stock images 🎨📷🤖
stock image giant Shutterstock has announced an extended partnership with OpenAI, which will see the AI lab’s text-to-image model DALL-E 2 directly integrated into Shutterstock “in the coming months.”
This makes sense.
In addition, Shutterstock is launching a “Contributor Fund” that will reimburse creators when the company sells work to train text-to-image AI models. This follows widespread criticism from artists whose output has been scraped from the web without their consent to create these systems. Notably, Shutterstock is also banning the sale of AI-generated art on its site that is not made using its DALL-E integration.
This makes a lot less sense to me.
I mean, I understand the *optics* of the first part, how it may be a good *political* move, but as I wrote in edition #327, I don’t think that learning from something should be gatekept, paywalled, red-taped, or made high-friction and bureaucratic.
We wouldn’t want to do this to humans (imagine if musicians couldn’t learn from the music they hear, or painters from the paintings they see), and we shouldn’t do it to our AI tools.
I get that the scale is different, but when it comes to copyright and compensation, the place to apply rules is on the outputs, not the inputs (i.e., if you generate images of Pixar characters with a generative model and try to sell them, you can get sued, but you shouldn’t get sued because the model was trained on billions of images that include Pixar characters).
And the second part, about an exclusive partnership… It seems to me that whoever can source images from multiple models will have better ones, or at least have the *potential* for images that better meet their customers’ needs.
🇹🇼 Foxconn and Pegatron
Am I the only one who just realized that both Foxconn and Pegatron are Taiwanese companies? I’m sure I’ve read it many times before, but the fact never quite registered until now.
Taiwan certainly punches way above its weight in electronics, and it’s not just TSMC.
🌬 Wind Power — A Tale of Two (Windy) Cities
As a follow-up to what I wrote in edition #337 about GE cutting 20% of its wind-power employees:
I’m guessing that if this graph included the US manufacturers, it would look more like the yellow line than the blue one…
Thanks to reader Generalist Lab for sharing.
🔩⚙️ Freedom’s Forge 🛠
During World War II:
the U.S. managed to out-produce the rest of the Allies combined, while devoting a much smaller portion of its economy to the military than other nations, and even increasing civilian consumption by the war’s end.
This is from Noah Smith’s review of the book Freedom’s Forge.
The people who oversaw the effort learned mass production at companies like GM.
Fascinating story, kind of the public side of the coin vs. the (at the time) top-secret Manhattan Project.
🇺🇸 The U.S. Department of Defense’s Global Transportation Logistics Network (Airlines, Hotels, etc) 🌐🛫🏨🪖
The DoD even operates golf courses!
The section on “bases in a box” and global locations of equipment stockpiles, including the floating APS-3 cargo ships, was fascinating.
It makes a ton of sense, but I had never thought about that part of the logistics system.
🧪🔬 Liberty Labs 🧬 🔭
🍼 Population trends for Africa, China, and India 🌏🌍
This year, the populations of Africa, India, and China are almost exactly the same (around 1.4 billion each), but they will diverge dramatically in the coming decades.
There’s a lot baked in with demographics because there’s so much inertia in the system.
But we’ve also seen how fast fertility can change in some countries (Iran is a good example: From 6.5 children per woman in 1980 to less than 2 about 20 years later).
I wonder if Africa’s curve will peak earlier than expected 🤔
🇺🇸☢️ ‘America's new nuclear power industry has a Russian problem’ 🐓🥚🐣
U.S. firms developing a new generation of small nuclear power plants to help cut carbon emissions have a big problem: only one company sells the fuel they need, and it's Russian. [...]
HALEU is enriched to levels of up to 20%, rather than around 5% for the uranium that powers most nuclear plants. But only TENEX, which is part of Russian state-owned nuclear energy company Rosatom, sells HALEU commercially at the moment.
😬
That's why the U.S. government is urgently looking to use some of its stockpile of weapons-grade uranium to help fuel the new advanced reactors and kick-start an industry it sees as crucial for countries to meet global net-zero emissions goals. [...]
The U.S. government is in the final stages of evaluating how much of its inventory of 585.6 tonnes of highly enriched uranium to allocate to reactors, the spokesperson said.
It’s a chicken & egg problem:
without a reliable source of the high assay low enriched uranium (HALEU) the reactors need, developers worry they won't receive orders for their plants. And without orders, potential producers of the fuel are unlikely to get commercial supply chains up and running to replace the Russian uranium. [...]
"Nobody wants to order 10 reactors without a fuel source, and nobody wants to invest in a fuel source without 10 reactor orders"
Quick look at H.266/VVC, a next-gen video compression standard 📼 📺
I’ve always loved learning about compression algorithms. I don’t know why. It must trigger something in my brain related to my love of efficiency and optimization.
Doing more with less through clever problem-solving and math: is there anything more elegant?
Why do we need compression anyway? Well, streaming a 1080p 30fps video would use 1423.82 Mbps without compression, and a 4k video stream would take around 6 Gbps.
Yikes!
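For the curious, here’s the back-of-the-envelope math behind those numbers as a quick Python sketch. The assumptions (24-bit RGB color, 30 fps, no chroma subsampling, and dividing by 1024² for the “Mbps” figure) are mine, chosen because they reproduce the figures above.

```python
# Rough uncompressed-bitrate math (assumptions: 24-bit RGB, 30 fps,
# no chroma subsampling).
def raw_bitrate_bps(width: int, height: int, fps: int = 30, bits_per_pixel: int = 24) -> int:
    return width * height * bits_per_pixel * fps

hd = raw_bitrate_bps(1920, 1080)   # 1080p
uhd = raw_bitrate_bps(3840, 2160)  # 4K

print(f"1080p: {hd / 1024**2:.2f} Mbps")  # ~1423.83 when dividing by 1024^2
print(f"4K:    {uhd / 1e9:.2f} Gbps")     # ~5.97, i.e. 'around 6 Gbps'
```

Compression is what turns those numbers into the few Mbps that streaming services actually use.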
H.265/HEVC is the current standard codec used to compress a lot of high-def video (like Netflix 4K or when an iPhone records video — YouTube tends to use AV1 and VP9). It’s still great, but it came out in 2013, and since then there’s been progress in software and in the CPUs (and specialized accelerators) used to compress/decompress video, so we can do better:
H.266/VVC is MPEG's next-generation video encoding and decoding standard. It is designed as the successor to High Efficiency Video Coding (HEVC/H.265) for further alleviating stress on large data transmission like 4K, 8K and even 16K UHD videos [...]
Now you know VVC is likely to save up to 50% bit rate while maintaining the same quality, compared to its predecessors
It’s unclear how fast VVC will be adopted, and it has some pros and cons vs. the competition (patents, for one). But whatever gets widely adopted (AV1?), I just like the idea that we’ll be able to get more video quality for the same bandwidth as we switch over to these next-gen formats.
🎨 🎭 Liberty Studio 👩🎨 🎥
🐲⚔️🛡🧙‍♂️🧝‍♀️ Dungeons & Dragons + Generative AI Art 🎨🤖
I’ve been having fun using Stable Diffusion to illustrate scenes and characters from an ongoing D&D game that I play with friends.
We alternate between two DMs, so there are two groups of characters, but they’re all set in the same world and may eventually meet. In one of the groups, my character is an aging fighter who used to be a soldier, but now he’s the old guy who trains the new recruits and teaches them to fight. He has bad knees and is basically a raging alcoholic.
The first two images are how I picture him!
In the second DM’s group, I’m playing a ranger who was raised in a family of merchants, with various businesses all around the city (inns, tailors, shipping warehouses, general stores, etc), but he decided that urban life wasn’t for him, so he headed to the woods.
Whenever our group of adventurers comes back to the city, we always drop by my various relatives to eat and sleep — my character’s favorite is smoked meat and he’s constantly sending everyone to eat at his cousin Schwartz’s.
That’s why I had the AI picture him eating what looks to be a cubic foot of smoked meat! (image on the right)
It’s so fun to be able to do this.
There’s a scene we played where a knight tries to murder me in my sleep as we’re detained for reasons that are too long to explain. A servant I had tipped well (so she’d bring my character more beer… 🍺) gives me a warning so I see it coming and we end up fighting the knight in the hallway by a stone staircase.
My friend plays a spellcaster, so he conjured up some magical grease on the stairs and I shoved the knight… Here’s how Stable Diffusion pictured it:
A bit surreal, but it made me laugh, and the fact that I can even create this is pretty magical in the first place! 🪄
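If you want to try something similar, here’s a minimal sketch using the open-source diffusers library. The checkpoint name, prompt, and settings are illustrative assumptions of mine, not the exact setup behind my images, and it assumes you have a CUDA GPU.

```python
# Minimal Stable Diffusion sketch with Hugging Face's `diffusers` library.
# Checkpoint, prompt, and settings are illustrative; adjust to taste.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint works
    torch_dtype=torch.float16,
).to("cuda")  # assumes an NVIDIA GPU; use "cpu" (very slowly) otherwise

prompt = (
    "portrait of a grizzled old fighter with bad knees, former soldier, "
    "training young recruits in a castle courtyard, fantasy oil painting"
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("old_fighter.png")
```

Most of the fun is in iterating on the prompt until the character looks like the one in your head.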
Re the Apple Ad: I've had similar thoughts when watching the original iPhone presentation recently. Steve Jobs isn't just telling people how awesome the new phone and touchscreen are, he's also deliberately showing the audience how to use it: swipe to scroll, pinch to zoom, etc. Those touchscreen gestures aren't as intuitive as one might think. I read an article a while ago where they tested whether people who had never used or encountered a smartphone (rare to find these days) would use the gestures intuitively without being taught. And IIRC, the result was that very few of them came up with the idea.
I'd love to see your inputs for the AI. Maybe you could post them next to the images.