492: Zuckerberg Interview, Ed Thorp, Streamers, CEO Transitions, Llama 3, Gameboy Engineering, Energy Scale, Dark Matter, and Reznor
"it subtly leaks into every decision you make"
Sinners often speak the truth. And saints have led people astray. Examine what is said, not the one who says it.
—Anthony de Mello
🛀💭☠️ Zero-sum thinking is the belief that in any situation, one person's gain must come at the expense of someone else’s loss. It's the idea that there is a fixed amount of success, wealth, or happiness available, and for someone to get more, someone else must get less.
Zero-sum thinking is corrosive because it subtly leaks into every decision you make.
As a worldview, it eventually colors every emotion and every social interaction.
It’s self-defeating, because even the success that you do have will be diminished by the belief that it came at the expense of making others worse off. That eventually screws with your sense of self-worth and the well-being of those around you: The worse you feel, the worse you make others feel (your family and loved ones).
Don’t go down that route!
🤖 Upon seeing the new Boston Dynamics robot, this was my initial reaction:
(yes, a 1980s industrial music reference — why not? Now I’ve got that song stuck in my head…)
🏴☠️🎣🚐 My parents came this 🤏 close to falling for an online scam.
They would like to sell their RV, so they listed it on a few online marketplaces.
They rapidly got messages from “interested buyers” claiming they wanted to make an appointment to see it the next day… but first, they wanted to see a report on the vehicle’s history, so “please go to [site that looks legit but isn’t Carfax]”.
I’m sure that countless people go to these sites, enter their credit card info to buy a “report” and at least lose that amount of money because the buyer isn’t real, and probably also get their credit card info stolen.
Lucky for my parents, a few details seemed suspicious so they asked me about it.
Extrapolating to the near future — this type of social engineering will become more effective when LLMs are chatting with potential victims in any language. They'll never tire of trying to close the deal and will be significantly more convincing than a static script. Sophisticated scammers could even have the LLM scan a potential mark's Facebook page and tailor the pitch to them (i.e., make local references, mention things the person likes, etc.).
I’m not sure what the best counter-measure for that is 🤔
You can train people to recognize common scams, but you’ll never train *everybody* to recognize *every* scam. If thieves can scale their attempts, they just need a small % success rate to make billions.
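The economics above can be made concrete with some back-of-envelope math. The numbers below are purely illustrative assumptions (not from the newsletter), but they show why scalable scams only need a tiny hit rate:

```python
# Illustrative assumptions only: automated scam outreach at scale.
attempts_per_day = 10_000_000   # messages an automated operation might send
success_rate = 0.0005           # 0.05% of targets actually pay
avg_loss_per_victim = 60.0      # dollars for a fake "vehicle report"

daily_take = attempts_per_day * success_rate * avg_loss_per_victim
print(f"${daily_take:,.0f}/day, ~${daily_take * 365:,.0f}/year")
# Even at 1 in 2,000, that's $300,000/day — over $100M/year.
```

Defenses that catch 99% of attempts still leave a profitable business, which is why per-attempt cost, not awareness alone, is what keeps this in check.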
Trust is SO VALUABLE in our society that it would be a huge loss to have everyone default to zero trust… But what’s the right balance? ¯\_(ツ)_/¯
🏦 💰 Business & Investing 💳 💴
🗣️ Interview: Mark Zuckerberg on Llama 3, AI Energy Bottlenecks, Open Source, and Meta’s Strategy 🤖💰💰💰
Friend-of-the-show Dwarkesh Patel — one of the best interviewers in podcastland — had a very interesting convo with Mark Zuckerberg.
Mark dodged a few questions, but most of the time he nerded out and went into a fair amount of depth about the trade-offs required when training vs deploying models, the strategic considerations of open sourcing (commoditize your complements? leverage third parties to go faster up the learning curve?), and where he sees bottlenecks moving next (hint: energy, as I’ve been writing about for a while).
Here’s a highlight on the sizes of Llama 3:
We're training three versions: an 8 billion parameter model and a 70 billion, which we're releasing today, and a 405 billion dense model, which is still training.
405 billion dense! Not a mixture-of-experts (MoE)!
It’ll be really interesting to see how it compares to the state of the art, especially since Mark said this about the leap in performance between v2 vs v3:
Mark: The 8 billion is nearly as powerful as the biggest version of Llama-2 that we released. So the smallest Llama-3 is basically as powerful as the biggest Llama-2.
This doesn’t necessarily mean that things will scale the same way with the bigger models, and we’ll still need some real-world testing to confirm his claims, but this sounds very intriguing and makes me think that some really good small models that can run locally on computers and phones are just around the corner (RAM is one of the big bottlenecks with trying to do inference on larger models).
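To see why RAM is the bottleneck, a common rule of thumb (my assumption, not from the interview) is that inference memory is roughly parameter count × bytes per parameter, ignoring activation and KV-cache overhead:

```python
# Rough sketch: weight memory needed to load a model for inference.
# Ignores activations and KV cache, so real numbers run somewhat higher.
def inference_ram_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

for params in (8, 70, 405):
    fp16 = inference_ram_gb(params, 2.0)   # 16-bit weights
    int4 = inference_ram_gb(params, 0.5)   # 4-bit quantized
    print(f"{params}B: ~{fp16:.0f} GB fp16, ~{int4:.0f} GB int4")
```

By this estimate, an 8B model quantized to 4 bits fits in under 4 GB — phone territory — while a 405B dense model needs hundreds of gigabytes even when quantized, which is why the small models are the ones headed for local devices.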
Speaking of Meta, I guess Perplexity is on their radar:
It’s not a complex design, but simplicity can be just as hard. The three rows of suggested searches that scroll horizontally with emojis feel pretty distinctive. When I saw Meta AI, I immediately thought of Perplexity, but when I first saw Perplexity I wasn’t reminded of anything else.
(to be clear: I’m not saying it’s the end of the world or unprecedented. I opened Meta AI for the first time yesterday, the resemblance immediately jumped out at me, so I’m pointing it out. Just because something is legal and probably good business doesn’t mean it’s above criticism. Personally, I'd rather see copying when a category is pretty mature and the best form factor has been identified through a lot of experimentation and iteration, rather than in the early days when so little has been tried)
✂️ 🔪 Television: Removing Friction Cuts Both Ways 🗡️ ↔
Every time an industry discovers a new lower-friction way of doing business (usually thanks to some technological innovation), there’s a big celebration as incumbents dream of how much more money they can make by making it easy for customers to buy their products and services.
However, it’s not unusual for many of them to start yearning for the “good ol’ days” rather quickly: