428: Zuckerberg's Universe, TSMC Dominance, Ford CEO Jim Farley, Toyota's EV Plans, AI Watermarking, and Hearing Loss & Dementia
"Progress is an ungrateful ratchet"
We are what we repeatedly do.
Excellence, then, is not an act, but a habit.
🤖🎨👨‍🎨📸 I’m trying *really* hard not to lose my sense of wonder at what generative AI can do.
Humans get used to things really quickly.
Subjectively, things go from amazing to “meh” too fast.
Progress is an ungrateful ratchet. ⚙️
It’s like that Louis CK bit about wifi on airplanes. He’s making a joke, but it’s not far-fetched to imagine someone who learns about the existence of something and then takes it for granted in the next breath. I understand how that helps us move on to the next thing, but I don’t think we’d lose our hunger if we stayed amazed just a bit longer.
So I’m trying to maintain a beginner’s mind about our AI tools, to keep a sense of childlike play.
I’ve been having fun with various creative projects.
You can see one of them above: I took some digital paintings I generated and asked the AI to “imagine” the “photo” versions of the same people. (I posted more on my Reddit account)
Nobody in these images is a living person, they are figments of an AI’s imagination.
It still blows my mind that the transformer model can look at the images on the left, somehow extract all kinds of patterns that are meaningful to our human perception from them, and then translate those into images in totally different styles that still have those patterns expressed in a way that we can recognize them.
For an AI, an image is not a unified scene on a screen but rather a sequence of binary data (0101001010011101), consisting of millions of individual pixels, each containing specific color information. Yet from that giant pile of pixels, AI can extract highly abstract concepts such as styles, people, objects, clothing, locations, lens distortion information, body posture, hairstyles, overall mood, facial expressions, and composition.
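To make that concrete, here’s a toy sketch in plain Python (a made-up 2×2 image of mine, not anything from a real model; a 12-megapixel photo is the same idea with ~36 million numbers):

```python
# A hypothetical 2x2 RGB image: each pixel is three numbers (red, green, blue),
# each 0-255. At this level, this is all an image is: a long run of integers.
pixels = [
    (255, 0, 0), (0, 255, 0),     # top row: red, green
    (0, 0, 255), (255, 255, 255), # bottom row: blue, white
]

# Flatten to the raw byte stream a file or tensor actually holds.
raw = bytes(channel for pixel in pixels for channel in pixel)

print(len(raw))       # 12 bytes for a 2x2 image
print(list(raw[:3]))  # the first pixel is just [255, 0, 0]
```

Nothing in that byte stream says “face” or “oil painting”; every abstract concept the model recovers is inferred from patterns across those numbers.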
Our brains are highly attuned to human faces. The fact that a generative model can create faces that don’t look strange to us is in itself a really high bar to clear.
It also remains very impressive to me how these models can derive extensive meaning from very few bits of information, such as a few keywords or a brief description. Their intelligence lies not only in generating the actual images but *also* in comprehending the request being made (which is no easy feat — we’ve been working on natural language understanding for decades).
You can change the whole mood of an image by changing one word in the description from “sunny” to “cloudy” or “somber” to “joyous”.
This single word influences the color choices for millions of pixels, yet all these decisions and adjustments somehow maintain coherence, allowing the end result to align with our intended outcome. 🤯
🛀💭⛷️🏔️🚡 The first people to do downhill skiing must’ve gotten quite the workout.
Nowadays, we have motorized ski lifts, but back then, I guess you had to walk up the mountain every time…
I feel I have an idea of what that must have been like because last summer, my wife and I hiked all the way to the summit of Mont Tremblant and it was quite the adventure. 😮‍💨
💚 🥃 🐇 This is the free edition of Liberty’s Highlights with 18,200+ subscribers.
You can get 1-2 extra editions/week full of juicy stuff + access to the private Discord 🗣🗣🗣 community by becoming a paid supporter (it’s quick & easy).
Imagine this: If you get just one good idea, discover one new hobby or favorite piece of media, or develop a deeper understanding of the world, it’ll be worth many times the modest cost:
🏦 💰 Liberty Capital 💳 💴
🗣️ Zuckerberg on Meta’s Universe 📁
The rolling thunder of tech/AI announcements and advances keeps hitting us from all angles these days. It’s hard to keep track or to know what will matter most a year or three from now.
What a change from 5-8 years ago when the common refrain was that tech had become boring and stagnant!
I’d venture a guess that the Apple Vision Pro will be looked back on as an inflection point in AR/VR, largely because of the combination of a few things: The very high-quality screens + the extremely precise eye-tracking + the real-time (under 12ms) rendering pipeline + the hybrid model of doing AR inside a VR headset.
But the world is big, and there’s enough space for more than one player, for more than one approach. I’m also very interested by what I’ll call the John Carmack approach to VR of going lower-end and more affordable.
My friend MBI (💎🐕) puts it well:
I personally admire Apple's ability to serve their customers in aesthetically and technically superior ways, but it is indeed often underappreciated that Meta and Google may be the only two companies in the history of capitalism to make most of their products affordable and usable for almost everyone in the world, thanks to their ad-driven business model, which is unfortunately much derided in the consensus discourse in the West.
Here is Zuckerberg’s memo to Meta employees about Apple’s headset where he contrasts his approach with Apple’s.
There are also reports, and confirmation by Zuck, that Instagram is working on a text-centric social network that sounds pretty close to a Twitter clone:
The forthcoming app, which, in the meeting today, Meta chief product officer Chris Cox called “our response to Twitter,” will use Instagram’s account system to automatically populate a user’s information. The internal codename for the app is “Project 92,” and its public name could be Threads, based on internal documents also seen by The Verge.
This could be to have something ready if Twitter gets into more trouble and implodes for whatever reason (is that so hard to imagine? 😬).
Another big Meta (pre)announcement is how they’ll add generative AI to almost every one of their platforms/products.
Basically, anywhere a text prompt, an image uploader, or a chatbot could fit, some generative model will assist users.
To get an overview of all this, the recent interview Zuckerberg gave to Lex Fridman (at the top) was pretty good. He’s clearly becoming better at these over time (and I don’t just mean Lex).
And his Jiu-Jitsu mindset is something you and I could probably learn from… 🥋
🇹🇼 TSMC: 60.1% Market Share Among Global Foundries 🐜
Talk about dominance!
Worth noting: Samsung had a huge decline and donated a bunch of market share, mostly to TSMC…
The foundry industry has been following a downward trend since the second half of 2022. Second and third-tier foundries, constrained by process technology limitations and high product overlap, face intense competition and lack bargaining power. [...]
TrendForce expects a continued decline in revenue for the top 10 foundries in Q2, although at a slower rate than in the first quarter. While supply chains are expected to gradually build inventory in response to peak season demand in the second half of the year, the accumulation of inventory and slow consumption have currently dampened customer attitudes toward stockpiling.
This is according to TrendForce’s data.
🔌🛻🔋 Interview: Jim Farley, Ford CEO ⚡️
I’ve heard a few interviews with Farley over the past few years, and every time I’ve come away impressed by how much he seems to get it.
This one is no exception:
Ford is a big company with a lot of baggage, so having a talented CEO may not be enough, but it’s certainly a good start.
I like some of the moves that he made, such as splitting the EV division from the rest of the company so that incentives are better aligned and they can move faster.
His discussion of how software works at most automakers was very interesting — how can you move fast and control the customer experience when you have 150 suppliers each writing their own software and you then have to somehow piece it together, Frankenstein-like?
I also like how he’s trying to double-down on Ford’s strengths and avoid creating commodity EVs. We’ll see how that turns out…
🇯🇵 Toyota wants “full EV lineup” by 2026 & 600+ miles of range 🚘🔋
Speaking of EVs, Toyota is signaling some changes under its new CEO:
In a plan that Toyota boldly claims "will change the future of cars," the Japanese company shared its goal for future cars to reach a range of 1,000 km (~621 miles).
According to Toyota, it will achieve this goal through the "integration of next-generation batteries and sonic technology" and plans to launch a full EV lineup by 2026. It already offers the bZ4X all-electric SUV, which can go about 270 miles on one charge and starts at $42,000. Plus, it plans to release a "next-generation" EV for Lexus, its luxury brand, in the same timeframe. (Source)
WTF is “sonic technology”?
After almost 20 years of watching hybrids and EVs, I have learned to only truly believe things once they are in mass production and available for sale.
I still think Toyota wasted a golden opportunity to be a leader in EV.
They were far ahead of everybody in the early days of electrification with their HSD tech (for hybrids), and they could have translated a lot of this work to full EVs many years ago to take full advantage of that lead, but instead they wandered off in Hydrogen Land and wasted all that time, allowing others to surpass them.
Don’t get me wrong: hybrids are fine for increasing fuel economy and reducing smog-forming emissions. But the mechanical complexity is inelegant, and the way things are going with batteries and electric drivetrains, it won’t make sense to have hybrids around forever. (i.e., batteries and EV drivetrains are getting better and cheaper at a much faster rate than internal combustion engine components)
🇬🇧 UK Productivity Growth — The Engine has Stalled 😬
Now imagine how much worse the UK would be doing if it didn’t have all kinds of advantages vs many other countries…
h/t Alec Stapp
📰🤝 High-Quality Classifieds: Public Equities Analyst
Long-time friend-of-the-show WTCM posted this on Twitter:
I’m looking to join a new team (public equities); currently based in NYC but flexible. DMs open / wtcm2023 [at] gmail and thanks for the help.
I think anyone would be lucky to have them on their team!
🧪🔬 Liberty Labs 🧬 🔭
⚛️ The West is ceding leadership on Nuclear Power 😕
Russia and China are building up an outsized presence in the field of nuclear power, with the countries accounting for nearly 70% of reactors under construction or in planning worldwide.
Meanwhile, construction plans in Japan, the U.S. and Europe were largely put on hold after the 2011 disaster at the Fukushima Daiichi nuclear power plant, resulting in a stagnation of related industries in those countries.
🌆🖼️ ‘Evading Watermark based Detection of AI-Generated Content’ 🔓🤖
As I suspected, it turns out to be really hard to detect whether text or images were generated by an AI or not, and watermarking methods have fairly limited effectiveness if someone is actively trying to evade them (which the most dangerous adversaries will do).
I was expecting this to be the case for text at first and for images eventually, but even images are getting harder to watermark now.
This paper studies how robust watermarking is:
A generative AI model, such as DALL-E, Stable Diffusion, or ChatGPT, can generate extremely realistic-looking content, posing growing challenges to the authenticity of information. To address the challenges, watermark has been leveraged to detect AI-generated content. Specifically, a watermark is embedded into an AI-generated content before it is released. A content is detected as AI-generated if a similar watermark can be decoded from it.
In this work, we perform a systematic study on the robustness of such watermark-based AI-generated content detection. We focus on AI-generated images.
Our work shows that an attacker can post-process an AI-generated watermarked image by adding a small, human-imperceptible perturbation to it, such that the post-processed AI-generated image evades detection while maintaining its visual quality. We demonstrate the effectiveness of our attack both theoretically and empirically.
Moreover, to evade detection, our adversarial post-processing method adds much smaller perturbations to the AI-generated images and thus better maintains their visual quality than existing popular image post-processing methods such as JPEG compression, Gaussian blur, and Brightness/Contrast.
Our work demonstrates the insufficiency of existing watermark-based detection of AI-generated content, highlighting the urgent need for new detection methods.
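The paper’s adversarial attack is much more sophisticated (it targets robust watermarks), but the basic fragility is easy to illustrate with a toy scheme of my own invention: a hypothetical watermark hidden in each pixel’s least-significant bit, which an invisible ±1 perturbation destroys. None of this code is from the paper:

```python
import random

def embed_lsb(pixels, bits):
    # Toy watermark: overwrite each pixel's least-significant bit with one
    # watermark bit (imperceptible: changes each value by at most 1).
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

random.seed(0)
watermark = [random.randint(0, 1) for _ in range(1000)]
image = [random.randint(0, 255) for _ in range(1000)]

marked = embed_lsb(image, watermark)
assert extract_lsb(marked) == watermark  # the "detector" sees a perfect match

# "Attack": nudge every pixel by +/-1, far below what the eye can notice.
attacked = [min(255, max(0, p + random.choice((-1, 1)))) for p in marked]

matches = sum(a == b for a, b in zip(extract_lsb(attacked), watermark))
print(matches / len(watermark) > 0.9)  # False: detection no longer works
```

Real watermarks spread their signal across many pixels precisely to survive this kind of noise; the paper’s contribution is showing that a carefully optimized (rather than random) perturbation still defeats them.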
Text has way fewer bits of information to play with than images, especially short text, making it harder to watermark or probabilistically detect with confidence.
On top of that, by asking the AI, you can usually get it to write in a way that removes some of the typical markers of AI-generated text, or to rewrite a few times to randomize the style some more, leaving most detection tools with enough false positives that they can’t be used effectively.
🦻🏻🧠 Hearing Loss & Dementia Risk
Something to note, knowing this could change your or a loved one’s life:
most people are unaware that 40% of dementia cases are preventable and attributed to modifiable risk factors [...]
With a focus on optimising cardiovascular risk factors, the risk of dementia also decreases.
But there are other factors that are not related to cardiovascular disease that also play a role in dementia risk.
Reduced hearing is an underappreciated modifiable risk factor for dementia.
More than 60% of adults over 70 years of age have some degree of hearing loss.
What might be more concerning, however, is that by 45 years of age, 26% are impacted by some degree of hearing loss.
With an average lifespan of close to 80 years, that is multiple decades of reduced hearing ability.
I’m not aware of having any hearing loss, but maybe I should do a test 🤔
I certainly am glad that my younger self wore earplugs when I saw all those very loud underground metal shows… 🤘
Reduced hearing has been associated with a range of adverse outcomes, including:
Reduced social and emotional interactions
A decline in social activities [...]
In addition to the near-term reductions in quality of life, hearing loss has also been linked to the earlier onset of dementia.
Some studies suggest a relative increase of 90% compared to those without hearing impairment.
Hearing loss accounts for just over 20% of the modifiable risk factors for dementia.
There are many theories as to why (the Cognitive Load Hypothesis, the Sensory Deficit Hypothesis, and Social Isolation), which you can read more about here.
But my point today is just to make you aware of this correlation so that you can let your loved ones know, and react quickly if you ever notice that something may be wrong with your hearing.
🎨 🎭 Liberty Studio 👩‍🎨 🎥
🎶🎤 ‘The Beatles Come Together Using AI For ‘Last Record,’ Paul McCartney Says’ 🤔
With Peter Jackson’s help:
More than 50 years after the group’s final studio album, Paul McCartney says he has used artificial intelligence to create what he called “the last Beatles record.”
“We just finished it up and it’ll be released this year,” McCartney said in an interview with the British Broadcasting Corp. on Tuesday.
McCartney said Hollywood director Peter Jackson, who directed the 2021 documentary epic “The Beatles: Get Back,” used AI technology to isolate the voice of John Lennon from an old demo tape.
“He was able to extricate John’s voice from a ropy little bit of cassette where it had John’s voice and a piano,” McCartney said. “We were able to take John’s voice and make it pure through AI and you were able to mix the record as you would normally do.”
McCartney didn’t reveal what the song was, but the BBC said it was likely to be a 1978 Lennon composition called “Now and Then.” (Source)
If this is a way to recover audio from a demo that didn’t sound too good and polish it up enough to release, I’m all for it.
It’s not so different from Peter Jackson taking shaky old black-and-white footage and cleaning it up, colorizing it, stabilizing it, and showing it to us so that we can better see the reality behind the limitations of the medium.
This is very different from what a similar headline could imply, which would be to make John Lennon sing a song that he never wrote or sang…