416: Bank Failures, AI Analogies, Hollywood Writers, IBM, Twitter, US Grid, BYD, Neural Physics, and Wes Anderson Star Wars
"decisions big and small create reality"
The difference between good and great is often an extra round of revision.
The person who looks things over a second time will appear smarter or more talented, but is actually just polishing things a bit more.
Take the time to get it right. Revise it one extra time.
⚠️🚗 A few days ago, I was taking a walk around my neighborhood and noticed this nail on the ground right in front of someone’s driveway.
As I picked it up, it made me think of how life is constantly branching out in front of us — decisions big and small create reality, for us and others.
Maybe I walk by without seeing the nail and someone gets a flat tire. Maybe I don’t pick it up, but someone else does. Maybe someone gets a flat tire and they are late to drive to work that day, but they would’ve gotten into a car accident if they had been on time. Or maybe they get in an accident that they would’ve avoided if they hadn’t been driving fast because they were so late…
Some branches are more probable, others less so, but you never know in advance where small things may take you.
I met my wife at a summer outdoor party about 20 years ago. She knew the brother of my friend. There are multiple possible worlds where she doesn’t get invited, or I don’t go because I’m sick, or we’re both there but never talk to each other.
My whole life would be different. My kids wouldn’t exist.
How many small decisions with big impacts took place in the life of my parents, my grandparents… My unbroken chain of ancestors all the way up to unicellular early life? 🦠
Our existence is so improbable, we may as well get the most out of it!
🎭 🥰😂😭🤔 Nothing is stopping you from having a deeper appreciation of the things you love.
Take your time and savor every bite of your favorite meal. 🍱🥘
Take the time to listen to your favorite album without doing anything else at the same time. Turn off the lights, put on headphones, and really listen — don’t think about unloading the dishwasher or that email, REALLY listen.
Take a hike on your favorite trail without listening to a podcast and really take in the sights and smells and details. Touch the tree trunks with your hand as you pass by, smile at birds, not because the birds care, but because smiling will make you feel good. 🌳🦉🌳🌳🚶🏻♂️🌳🌳🐿️🌳
Appreciation isn’t about quantity, it’s about intensity and quality!
💚 🥃 🐇 This is the free edition of Liberty’s Highlights with 17,100+ subscribers.
You can get 1-2 extra editions/week full of juicy stuff + access to the private Discord 🗣🗣🗣 community by becoming a paid supporter (it’s quick & easy).
Paid posts since last week 🐇:
🔒 414: Microsoft + Activision (RIP), AI Spam on Amazon, China vs India, Sam Zell, Google, AWS Nitro, Nat Friedman, NASA, and Dune: Part 2
🔒 415: Cloudflare Q1, Apple VR Headset, Tinder for Friendships, James Dyson, Palmer Luckey, AI Fools Bank & Family, and Diplomat
If you click the link above, you can see the intro for free and there’s a link to get a risk-free 7-day free trial of the paid version. You can only gain from this!
🏦 💰 Liberty Capital 💳 💴
🏦💥💸 Two Decades of Bank Failures
h/t Jake Taylor (via MBI! Thanks 👋)
🦜🤖🤔🧠 Updating our analogies for AI
Here’s a great post by Prof. Ethan Mollick on how analogies help us think, but can also mislead us and blind us to what things really are:
The ability to think with analogies may be the key to how humans have been able to collectively create entirely new things. [...]
But analogies can also be dangerous or limiting, oversimplifying complex issues. While analogies can be useful in making a point, they are not always accurate representations of the issue at hand.
By relying on an analogy, one may overlook key differences between the two things being compared, leading to flawed conclusions. For example, my colleague Prof. Natalya Vinokurova has shown that the 2007-2008 Financial Crisis was due, in part, to bad analogies. The creators of mortgage-backed securities were able to get investors to make analogies between those risky instruments and standard bonds, which led to flawed models about risk.
Analogies are indeed a double-edged sword and should be used with care, especially when they’re about new things and haven’t been battle-tested and proven useful.
So what analogies are we using to think about the crop of AI tools that have recently emerged? How are these mental models pushing us in the wrong direction, or blinding us to what is there?
Science fiction has provided us with a very clear picture of AIs. They are robots - logical, calculating machines that never make a mistake unless there is a flaw in their programming. They are incapable of creative thought, and, in fact, trying to get them to understand something illogical or emotional often makes them explode. [...]
more technical people focus on the ways that LLMs actually work. Since they are only predicting what comes next in a sequence of letters based on an initial prompt, they are nothing more than “autocompletes on steroids.” Alternately, researchers have compared an AI to a parrot because it mimics human language without understanding its meaning or context [...]
Search engines. This is the big analogy I see everywhere. When people see a machine that can give you information on request, they think of the search engine.
These analogies are not wrong per se, but they are incomplete:
AI is very good at complex and creative processes - exactly the kinds of things the analogies would lead us to think are the weaknesses of AIs. [...]
I think the value of AI for creative thought is a surprise for many of us, given the dismal performance of early LLMs which would churn out unoriginal dreck.
I totally agree with his conclusion:
Practical generative AI is a new thing in the world, and it will take us a while to sort out what it really is. We need to be careful, until then, about the analogies we use. AI is not like anything we have seen before, so our analogies will be limited. Plus, the technology itself is constantly evolving, so even good analogies will quickly become obsolete.
Otherwise, we risk falling into the same traps that humanity has fallen into countless times, seeing a truly new thing as just some old thing with a bigger engine and a turbo.
The internet wasn’t like any of the communication networks that came before, the atomic bomb wasn’t like any of the bombs that came before, and the scientific method wasn’t like any of the methods for acquiring knowledge that came before.
So let’s figure out what this new thing is together!
🤖 Looking over the horizon: 40 AI use cases 🚀
I’ve written a lot about AI in the past year. It has been hard just to keep up in real-time with the deluge of new advances and tools. However, it’s worth taking a step back occasionally and looking further ahead at where all this may be going.
This is exactly what Jim O’Shaughnessy (my new boss!) and Ed William wrote about yesterday:
It’s a broad overview of pretty realistic use cases (nothing super sci-fi) across consumer products and interactions, computer programming, scientific research, education, environmental issues, medicine & social, financial services, transportation… A really broad swath of society!
Here are a few highlights:
3.2 Adaptive learning
AI-powered adaptive learning systems can analyze students' learning patterns, strengths, and weaknesses and tailor educational content to meet their individual needs.
What this could mean: A more personalized and practical learning experience.
3.4 Intelligent tutoring systems
AI-driven tutoring systems can provide students with real-time feedback, guidance, and support, simulating the experience of a one-on-one tutor and enhancing their learning process.
What this could mean: Affordable, accessible, personalized tutors for every child. A democratization of personalized education.
ChatGPT is already having an impact: Chegg, an American education company that provides “homework help, digital and physical textbook rentals, textbooks, online tutoring, and other student services,” saw its stock fall by about 50% in a single day after it announced the impact that ChatGPT was having on demand for its services.
But this is just the beginning, and purpose-built models that are more reliable and hallucinate less will no doubt be made specifically for that market.
6.4 Null Hypotheses
There is a benefit to knowing what doesn’t work. Unlike humans, AI doesn’t mind working on things that don’t end up working. AI could generate null hypotheses and machine-publish them directly in repositories so that other scientists and AI scientists could plug in and see what has already been tried to inform their own research.
What this could mean: More time for scientists to allocate to productive research.
6.8 Uncovering Fraud
AI can find anomalies in published research to uncover potential fraud or honest mistakes. For example, samples and results that have been tampered with may show patterns that would be very unlikely to happen by chance, or some results may be re-used, images edited by software that leaves artifacts, etc.
What this could mean: Less fraudulent research.
Since science has such huge leverage for helping humanity, if we can use these tools to make science better, we’ll get *massive* benefits! 👩‍🔬
🪧🎥🎬 Hollywood writers on strike (first time since 2007)
I still remember the previous strike, because many shows had crappy seasons with fewer episodes 😬
If you’re curious about the impact that this will have on content, especially if it lasts a while, Vulture predicts:
Late night will be the first genre to go bye-bye. Because these shows are written on an extremely tight schedule in order to stay topical, they can’t have any episodes banked for future use. [...]
Network TV is the next to go. The strike would strike while most major-network TV shows (Abbott Elementary, the Chicago procedurals, Ghost, etc.) are wrapped for the summer. However, if the strike extends for as long as the 2007–8 run did, that would eat into the prep time for these shows before they return in the fall. [...]
Well, this is the big question mark. Streamers such as HBO Max, Amazon Prime, Hulu, and Netflix tend to bank a lot more shows than network TV does, so it would take longer for them to feel the real effects.
An interesting idea for Netflix would be to release its best shows on a weekly cadence, rather than dropping all episodes at once, to stretch out its banked content.
I argued in the intro of Edition #309 that they should do this for other reasons, but maybe the strike will be the last straw.
✂️ IBM CEO claims AI could replace ~8,000 of its workers
IBM CEO Arvind Krishna said the company expects to pause hiring for roles it thinks could be replaced with artificial intelligence in the coming years. [...]
These non-customer-facing roles amount to roughly 26,000 workers, Krishna said. “I could easily see 30% of that getting replaced by AI and automation over a five-year period.” [...]
More mundane tasks such as providing employment verification letters or moving employees between departments will likely be fully automated, Krishna said. (Source)
Maybe IBM could get an AI CEO. Is Watson being groomed for the role yet?
⚡️ US Power Grid: We could be close to reform to the interconnection queue process 🔌
In Edition #409, I wrote about the 18-year approval process for a transmission line in the US. Not construction, just approval!
Something needs to be done, and apparently, the Federal Energy Regulatory Commission (FERC) agrees:
The FERC is closing in on reforming the interconnection queue process and setting minimum levels of interregional transfer capacity, according to acting Chairman Willie Phillips.
These reforms are aimed at removing barriers to regional and interregional transmission investment, bolstering reliability during extreme weather, and supporting the development of grid-enhancing technology to beef up capacity on existing lines.
Phillips stressed the importance of reducing delays and litigation by engaging with communities early on in proposed infrastructure projects. He also highlighted the role of regional transmission organizations and independent system operators in fostering diverse generating resources and bringing transparency to transmission planning and operations.
Phillips believes that transmission is the best path to providing reliability to the grid.
Sounds good, but the devil’s always in the detail.
Can they execute these plans in a way that makes a big positive difference, or is this just going to change a few things at the margin and leave the regulatory process extremely long and expensive (so that many projects are not even attempted, an invisible casualty of too much red tape)? ¯\_(ツ)_/¯
🐦 Twitter’s strange courtship of creators 🤨
It’s been a while since I wrote about Twitter, not because nothing’s happening, but because after a while of living in the new normal (SNAFU), it seems less notable.
The latest strange development hits close to home:
For the past few weeks, Twitter has been encouraging creators to sign up for its subscription product, through which users pay creators a few dollars a month to receive content that’s behind a paywall, as well as a badge denoting their subscriber status.
So, after pissing off creators by blocking and throttling Substack, and by tuning the algorithm so that tweets with links get less engagement (Twitter really wants to keep you on the platform so you see more ads), making the whole platform a worse tool for discovering cool things from around the internet…
Twitter is now trying to woo creators with a few new features cobbled together with duct tape and bubble gum.
I’m not sure how successful it will be, because *trust* is very important if you’re going to decide to pour your sweat and blood into building something on a platform. It takes a long time to build trust, and it can be lost in a moment.
Twitter has already shown that it doesn’t really care if it hurts the livelihood and reach of small creators, that anything can change at any time for any capricious reason, and that they don’t have much of a roadmap to keep developing these new features (remember Revue?).
There’s also a strong vibe coming from Musk that creators should feel lucky to even be allowed on Twitter and that they owe their success to Twitter, so they better give Twitter a cut or risk being blocked from the platform.
This is backward: Creators make content for Twitter for free. They create most of the engagement on the platform and are one of the main reasons to use the platform. Twitter should thank its power-users and do everything it can to keep them on the platform creating compelling tweets, rather than feeling entitled to their content.
In any case, none of this inspires confidence in Twitter as a platform for creators 😬
Substack and other platforms aren’t perfect, but at least we can know that their success is largely aligned with the success of their users. Twitter’s success mostly depends on advertising and on Elon Musk not chasing away too many of the power users.
🇨🇳🚘 The most popular car company in China is now… BYD
🧪🔬 Liberty Labs 🧬 🔭
💇🏻♀️ Neural Physics to simulate human hair!
It may sound like a small thing, but getting the physics of hair right isn’t trivial.
We devise a local–global solver dedicated to the simulation of Discrete Elastic Rods (DER) with Coulomb friction that can fully leverage the massively parallel compute capabilities of modern GPUs. We verify that our simulator can reproduce analytical results on recently published cantilever, bend–twist, and stick–slip experiments, while drastically decreasing iteration times for high-resolution hair simulations. Being able to handle contacting assemblies of several thousand elastic rods in real-time, our fast solver paves the way for new workflows such as interactive physics-based editing of digital grooms. (Source)
Pixar worked very hard on this, and I wouldn’t be surprised to see them adopt these kinds of tools if they can do a better job and/or use fewer compute cycles than their current techniques.
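For a rough intuition of what these simulators compute, here’s a toy sketch (mine, not the paper’s): a single strand modeled as a chain of point masses connected by stretch springs, integrated with semi-implicit Euler. The real DER model also handles bending, twisting, and Coulomb friction, and the paper’s actual contribution is a GPU-friendly local–global solver; every constant below is made up for illustration.

```python
# Toy strand: point masses + stretch springs, pinned at one end (the "scalp").
# This is a crude stand-in for Discrete Elastic Rods, which also model
# bending and twisting stiffness along the rod.

N = 20          # points along the strand
REST = 0.05     # rest length between neighbors (m)
K = 500.0       # stretch stiffness
MASS = 0.01     # mass per point (kg)
G = -9.81       # gravity (m/s^2)
DT = 0.001      # time step (s)

# strand starts horizontal; point 0 is pinned
pos = [[i * REST, 0.0] for i in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]

def step():
    # gravity on every point
    forces = [[0.0, MASS * G] for _ in range(N)]
    # Hooke's-law stretch force along each segment
    for i in range(N - 1):
        dx = pos[i + 1][0] - pos[i][0]
        dy = pos[i + 1][1] - pos[i][1]
        length = max((dx * dx + dy * dy) ** 0.5, 1e-9)
        f = K * (length - REST)
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx; forces[i][1] += fy
        forces[i + 1][0] -= fx; forces[i + 1][1] -= fy
    # semi-implicit Euler with crude velocity damping; point 0 stays pinned
    for i in range(1, N):
        vel[i][0] += DT * forces[i][0] / MASS
        vel[i][1] += DT * forces[i][1] / MASS
        vel[i][0] *= 0.999; vel[i][1] *= 0.999
        pos[i][0] += DT * vel[i][0]
        pos[i][1] += DT * vel[i][1]

for _ in range(5000):  # ~5 seconds of simulated time
    step()

print("tip position:", pos[-1])  # the free end sags under gravity
```

Multiply this by several thousand interacting strands, add friction contacts between them, and you can see why doing it in real time on a GPU is a genuine achievement.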
This is just one of about 20 new papers that Nvidia is presenting at SIGGRAPH, the year’s most important computer graphics conference. Here’s another highlight below ⬇️
🗜️Neural Texture Compression
This neural compression technique can deliver “up to 16x more texture detail” without taking additional GPU memory:
Neural texture compression can substantially increase the realism of 3D scenes, as seen in the image below, which demonstrates how neural-compressed textures capture sharper detail than previous formats, where the text remains blurry.
🎨 🎭 Liberty Studio 👩🎨 🎥
🍿 Star Wars by Wes Anderson 🎬
Great trailer for a fake film. Now I kind of want to see it!
I’m sure I would find it more fun and interesting than the recent ‘real’ Star Wars films (I have not seen the ‘Andor’ series, but I hear really good things about it, so it may be the exception).
h/t friend-of-the-show Kevin Holloway
Images generated by the same prompt *one year* apart 🤯
I think most would agree on which is better, except maybe the Salvador Dalí uber-fans out there.