481: Microsoft Goes Poly with Mistral, Apple Cancels Car, Google SEO Wars, Apple + Bing?, OpenAI, Helicopters, and Plutarch
"If you want quality, you need thousands of no’s for every yes."
The strategies that made you successful in the past will, at some point, reach their limit.
Don't let your previous choices set your future ceiling. The willingness to try new ideas allows you to keep advancing.
—James Clear
📰5️⃣0️⃣0️⃣🗓️ 🚢⚓️ We’re approaching Edition #500!
On one hand, it’s an arbitrary milestone. But on the other, it’s good to look back and self-reflect once in a while, so whatever triggers that can still be worthwhile, even if it’s arbitrary.
The first reflection that comes to my mind is about shared context.
I wonder how many people have read all Editions so far. Not necessarily every word, but at least opened them and scrolled through, reading the parts that caught their interest.
That number can’t be too big because few people go back through the archives when they discover an ongoing publication. And the bigger the archives, the more intimidating they are even to those who do.
I wonder if there’s even ONE.
By definition, if there’s someone reading everything… you’re reading this, so 👋🫡
Do you remember Mothy McFly? Man, that was a stupid joke… 😬
Otherwise, it’s just me. I’m the only one who has read every Edition. Which is ok.
But that triggers a different thought:
When talking about *anything* with someone else, we may think we’re talking about the same thing, but we never *really* are because we have different shared context about that thing, which compounds the fact that we’re different people. So whatever we do have in common is being interpreted differently by our unique lens on life.
Take two people discussing, say, Warren Buffett. One of them says they hate him and the other says that they love him.
How far apart are their internal models of who Buffett is?
If they could somehow be made to share the exact same knowledge (through some kind of scifi Neuralink upload, let’s say), I imagine that would bring their positions closer together, though not necessarily fully because they will still have a personal way of interpreting all of that information (different values, different life experiences, different interests, etc).
How tf did I get from Edition #500 to Neuralink and Buffett? ¯\_(ツ)_/¯
That’s part of the fun here, you never know what you’re gonna get (which is *not* like a box of chocolates — sorry Forrest, they’re labeled on the back of the box 💝).
🥱🛏️😴✂️💭 The more I think about the discontinuity in consciousness that happens every time we sleep, the weirder it is.
👨💻🕸️👩💻🏗️🚧 How different would the internet be if fiber optic cables worked the same way as most other technologies, and to upgrade their speed you had to fully replace them with something brand new?
We're very lucky that by just changing the equipment at both ends (and probably the repeaters along the way), we can get a lot more bandwidth out of existing lines.
🚨🤞 🎟 📚🎁✍️🏆 Last call on the signed book giveaway:
My friend Jimmy Soni (📝📚) wrote a book called ‘Founders’ about PayPal and the incredible agglomeration of talent in the early days of the company. He and I and David Senra discussed the book on this podcast.
The paperback version just came out!
I’m giving away *10 signed copies* to supporters (💚 🥃 📖)
Here’s how to throw your name in the hat:
If you’re a paid supporter of the newsletter, send me an email to let me know you want to participate (you can reply to any of the newsletter emails). Send this before tomorrow, February 29th at 11:59 PM EST. That’s it.
I’ll compile a list of participants and draw a name out of a hat, or buy one of those bingo machines with the balls bouncing around, or use randomizing software, whatever, and pick 10 winners. I’ll email the winners for shipping details.
If you’re NOT a paid supporter but would like to become one before the 29th, no problem. Hit the subscribe button, and then drop me an email to say you’re interested in the book.
Not all supporters will participate, so if you do, your odds of getting a signed book are excellent.
🏦 💰 Liberty Capital 💳 💴
🤖💞🤖 Microsoft Goes Poly — Makes AI Deal with Mistral 🤝 + Mistral Large & ChatGPT Competitor 🐈
OpenAI isn’t the only one in Satya’s heart anymore.
Mistral was *founded in April 2023* and they’ve already struck a deal with Microsoft to help with distribution, compute, and R&D:
We are announcing a multi-year partnership between Microsoft and Mistral AI [...]
Microsoft’s partnership with Mistral AI is focused on three core areas:
Supercomputing infrastructure: Microsoft will support Mistral AI with Azure AI supercomputing infrastructure delivering best-in-class performance and scale for AI training and inference workloads for Mistral AI’s flagship models.
Scale to market: Microsoft and Mistral AI will make Mistral AI’s premium models available to customers through Azure AI Studio and Azure Machine Learning model catalog.
AI research and development: Microsoft and Mistral AI will explore collaboration around training purpose-specific models for select customers, including European public sector workloads.
Not bad for a 10-month-old startup (although, to be fair, it was founded by people who were previously at DeepMind and Meta).
Microsoft is also taking an equity stake in Mistral, investing €15 million (~$16.2 million USD), which will convert to equity during their next funding round (the valuation will be set then).
It’s a small equity investment, but I suspect that if things go well and Microsoft becomes one of the main distribution channels and partners for Mistral, they’ll have plenty of leverage and be well-positioned to deepen the partnership later.
The previous round, in December 2023, valued Mistral at $2bn. I can imagine that this new partnership will help make the next round significantly higher. They’ve raised a bit over $500m so far — it’s kind of crazy to think how fast they went from a blank page to this.
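For illustration only (the next round's terms aren't public, and such investments often have discounts or caps that aren't disclosed here), here's the basic math of how a convertible investment like Microsoft's turns into equity at the next priced round. The valuation figure below is a made-up assumption, not a prediction:

```python
# Sketch of convertible-investment math (all figures illustrative).
investment = 16.2e6  # Microsoft's ~$16.2M USD investment

# Hypothetical pre-money valuation at the next round (NOT a real figure).
pre_money = 5e9

# The investment converts at the round's price, so ownership is roughly
# investment / post-money (ignoring any discount or valuation cap).
post_money = pre_money + investment
ownership = investment / post_money
print(f"Ownership: {ownership:.4%}")  # a fraction of a percent
```

Which is why I'd read the €15M less as a financial bet and more as a foot in the door for deepening the partnership later.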
As an aside, this story is a testament to the cloud era.
In the early days of the internet, even if you raised hundreds of millions on day 1, it would take months just to buy servers, get them delivered, rack ‘em & stack ‘em somewhere, etc.
The fact that a startup’s infrastructure can be abstracted away is a huge advantage when it comes to moving fast.
But that’s not all!
This news is timed with Mistral releasing a new model, Mistral Large, which is their first model to compete with GPT-4 and Claude 2.1:
Of course, benchmarks are only worth so much, and real-world testing will tell the real story.
The company is launching a new flagship large language model called Mistral Large. When it comes to reasoning capabilities, it is designed to rival other top-tier models, such as GPT-4 and Claude 2. [...]
By default, Mistral AI supports context windows of 32k tokens (generally more than 20,000 words in English). Mistral Large supports English, French, Spanish, German and Italian. [...]
In addition to Mistral Large, the startup is also launching its own alternative to ChatGPT with a new service called Le Chat. This chat assistant is currently available in beta.
Cute. In French, “Le Chat” means “The Cat” 🐈
🍏🚗 Apple Cancels ‘Project Titan’ (their Electric Car)
Mark Gurman has the scoop:
Apple is canceling a decade-long effort to build an electric car, abandoning one of the most ambitious projects in the history of the company. [...]
Apple made the disclosure internally Tuesday, surprising the nearly 2,000 employees working on the project
The two executives told staffers that the project will begin winding down and that many employees on the team working on the car — known as the Special Projects Group, or SPG — will be shifted to the artificial intelligence division under executive John Giannandrea. Those employees will focus on generative AI projects, an increasingly key priority for the company.
A lot of people will say that this is a failure, a defeat for Apple, etc.
If we knew what it was we were doing, it would not be called research, would it?
(attributed to Albert Einstein, though who knows who actually said it; all these quotes end up attributed to the same 5 people...)
Another useful way of thinking about it comes from Jeff Bezos, who wrote about how the size of a company's bets has to scale with the size of the company, which means the size of its failures will scale too. The Fire Phone seemed like a big failure at the time, but who remembers it now that some of the other bets have paid off spectacularly?
Back to Apple kiboshing the car project:
I think it’s great. It’s how things should work. You have an idea, you try it really really hard. You do your best and you solve as many of the problems standing between you and that idealized vision as possible. But if it doesn’t work, or it works but it’s not good enough, you recognize it and move on.
Apple has craploads of prototypes in its labs that never become products.
If you want quality, you need thousands of no’s for every yes.
This one was a particularly big project, so it couldn’t be kept a secret. Once things get out, some people feel like that product is owed to them, so when it doesn’t happen, it’s almost like you took something away from them.
But Apple never promised a car to anyone.
While this may not be a failure for Apple, I think it’s probably making Elon Musk feel pretty good. I'm surprised Tesla's stock wasn’t up much on the news, if only because it should lower pressure when it comes to the poaching of talent... Maybe Mr. Market interprets it as being a sign that EV demand will be low, but I don’t think this is why Apple is making this decision.
It’s also not a bad idea for Apple to redeploy some of that R&D talent to generative AI. Apple is already using machine learning in a lot of its products — often in a less flashy way than the competition, like in computational photography — but they could do so much more. Just a Siri that doesn’t suck would be a great quality-of-life improvement! 📲🗣️
Search Spam, the SEO Wars, and Web Pollution
I was discussing this topic in a group chat with some friends (hey guys 👋).
I don’t want to write a big thing about this today. I’ll probably revisit it in the future… But I still want to organize some thoughts and plant some seeds here, so that you can think about it on your side and start noticing what’s going on, if it wasn’t already on your radar.
Google has been fighting SEO spammers from day one (ok, maybe day two).
It’s an arms race between those trying to rank better to drive traffic to their websites (which they then try to monetize somehow), and the search quality teams at Google trying to keep search quality high (hopefully it’s not the same team who did RLHF on Gemini 😬 but I digress…).
Spammers invent new techniques, Google invents new counter-measures. Rinse and repeat. 🔁
Old story, right?
But large language models (LLMs) are not just like any other weapon in this war. I think they’re a qualitative change.
To fight spam, you have to be able to differentiate spam from real content.
Imagine that it’s 2010 and a spammer wants to rank highly in Google, but they want their stuff to be *impossible* for Google to detect as being spam and remove from search results. What is the surest way to do it?
It’s to produce high-quality unique content. It’s to not be spam, because anything else can eventually be ID’ed as spam and flagged. If you just copy someone else’s real content, Google can detect the duplication, know that they were first, and penalize you. If you pay a bunch of kids in a poor country to churn out unique content, it’ll likely be low quality and wouldn’t rank highly on merit. If you try to get a bunch of inbound links in an inorganic way (link farms and paid links), Google can detect the pattern and penalize you.
The only sustainable way to not get caught is to make unique, quality content that gets organic inbound links and clicks. That’s how Google wants it.
But with LLMs, spam has a Turing Test moment that changes the game.
You can now make lots of unique, quality content very cheaply. It may not be great, but most writing isn’t great either. It won’t easily get caught because it’s close enough to what a human could have written that, unless you get other signals (i.e. the spammers use some of their other tricks, like buying inbound links or duplicating content), it will be increasingly impossible to differentiate from ‘real’ web content.
I know that everyone just thinks we need “a detector for AI content”, but it’s not as easy to implement as it sounds. Such a detector needs to be able to find some pattern that differs between LLM spam and human text. That diff will get increasingly small, until you reach a point where you get so many false positives and negatives that it’ll hurt search quality either way.
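To make that false positive/negative tradeoff concrete, here’s a toy sketch (not any real detector; the scores are made up): imagine a detector that outputs a probability-like score that a text is LLM-generated, and you have to pick a flagging threshold. As the human and LLM score distributions overlap, no threshold avoids both kinds of error:

```python
# Toy illustration of the detector tradeoff: made-up scores from an
# imaginary "is this LLM-generated?" classifier, in [0, 1].
human_scores = [0.1, 0.2, 0.35, 0.45, 0.55]   # real human texts
llm_scores   = [0.4, 0.5, 0.6, 0.75, 0.9]     # LLM-generated texts

def error_rates(threshold):
    """Flag anything scoring above `threshold` as LLM content."""
    false_positives = sum(s > threshold for s in human_scores)  # humans wrongly flagged
    false_negatives = sum(s <= threshold for s in llm_scores)   # spam that slips through
    return false_positives, false_negatives

# A strict threshold punishes humans; a lenient one lets spam through:
for t in (0.3, 0.5, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold={t}: {fp} false positives, {fn} false negatives")
```

The closer the two distributions get, the worse this gets at every threshold, which is the core of the problem.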
Let’s think about the next steps together in a future Edition.
🔍 🍎 Microsoft offered to sell Bing to Apple in 2018 🪧🤝
A very interesting court filing by Google from their antitrust case:
Since 2009, Microsoft has approached Apple a number of times to propose that Bing become Safari’s default search engine. Each time—in 2009, 2013, 2015 to 2016, 2018, and 2020—Apple declined to do a default deal with Bing out of concerns regarding Bing’s product quality. [...]
Microsoft approached Apple again in 2018, representing to Apple that its search quality had improved since the 2015 to 2016 discussions, and offering to sell Bing to Apple or enter into a joint venture regarding Bing. [...]
[Eddy] Cue summarized Apple’s conclusion: “Microsoft search quality, their investment in search, everything was not significant at all. And so everything was lower. So the search quality itself wasn’t as good. They weren’t investing at any level comparable to Google or to what Microsoft could invest in. And their advertising organization and how they monetize was not very good either.”
In a recently unveiled document, the Justice Department disclosed that, over two decades, Microsoft has invested close to $100 billion in Bing and its previous search products (where my Windows Live bros at? 😬).
On one hand, the direct ROIC on that $100bn is likely hideous.
But if we zoom out and look at the bigger picture, Microsoft’s web service expertise largely began with search and is the seed from which all of Azure’s cloud grew, so maybe it was *very* successful, just not in the way they expected. ☁️
OpenAI’s Venture Fund is owned by… Sam Altman?! 🤔
This is a bit weird:
OpenAI Startup Fund was launched in late 2021 to invest in other AI startups and projects.
By last May it reported $175 million in total commitments, and a portfolio that
included video editor Descript and legal tool Harvey.
What set OpenAI Startup Fund apart, however, was that it wasn't (and isn't) owned by OpenAI. Nor even by its affiliated nonprofit foundation. Instead, it's legally owned by Altman. (Source)
I mean, it’s fine for Altman to own a venture fund. But why is it called the OpenAI Venture Fund..? If he had been fired (and not “nevermind” rehired) as CEO on November 17, 2023, would he have just walked away with the fund?
In a company where everything is kind of strange, this is one more layer of WTF 😐
The company claims this was a temporary measure, but how long is temporary? It’s been a few years now… I suspect they’ll want to figure out other aspects of governance and fill out the board first, but this should probably be on the list.
🧪🔬 Liberty Labs 🧬 🔭
⬇️ 🚁 What happens if a helicopter engine fails? Autorotation FTW! 🚁 ⬆️
This is really cool, and good to know if you’re ever going to fly in a chopper (GET TO DA CHOPPA!!!)
Here’s a demonstration of what that looks like in the real world (a simulated engine failure and then a landing in a mountain range! It looks beautiful too 🗻).
🇨🇳 China Stockpiles Semiconductor Equipment for the Next Stage in Chip Wars 🐜🐜🐜🐜🐜 🦅
Imports of semiconductor equipment to China rose 14% year on year in 2023 to nearly $40 billion, according to the country’s customs data. [...]
Chinese chip makers have been rushing to stockpile equipment in anticipation of tougher Western export restrictions. That is particularly true for lithography machines, which use light to print tiny circuits on silicon wafers [...]
China’s imports of lithography machines from the Netherlands almost quadrupled year on year in 2023, according to China’s customs data. Net system sales to China from Dutch company ASML, the market leader in lithography equipment, tripled in 2023. China made up 29% of ASML’s total net system sales last year—up from just 14% in 2022.
🇪🇺⚛️🇫🇷 EU lectures France on renewable energy
The large European country with the cleanest power grid is being lectured by EU bureaucrats on its ‘renewable energy commitments’.
The EU’s energy commissioner, Kadri Simson, has urged France to raise its renewable energy target to “at least 44%” by 2030, warning it would consider taking “steps” at EU level in case of persistent shortcomings. [...]
According to Brussels’ calculations, France needs to achieve at least 44% renewables in its gross final energy consumption by 2030 in order to contribute to the EU target of 42.5% set out in the Renewable Energy Directive, which was updated last year.
Basically, in the name of the environment, they would have to shut down nuclear reactors providing large amounts of 24/7 clean power and replace them with intermittent wind and solar. They would stop using existing structures and instead build new ones, with all the embedded energy involved in manufacturing and installing these things (did you know that the polysilicon used in solar panels comes from very hot natural gas furnaces in China?).
This is so stupid.
If Brussels really cares about the environment, it should instead work to convince Germany to restart the 26% of its grid that used to be nuclear and is now largely natural gas and coal.
🎨 🎭 Liberty Studio 👩🎨 🎥
🏛️ Plutarch 📜
Wish me luck, I’ve decided it’s time to go directly to the source on classical antiquity.
Over the years, I picked up lots of bits & pieces about the era and the influential thinkers and doers of the time, but I feel like I’m missing a big piece of Western Civilization’s Foundation and need to remedy that. 📚🧠
What made me pull the trigger was discovering an excellent podcast called Cost of Glory. I had a great chat with Alex Petkas, which I’m hoping was the first of many.
If you want to get started, I can recommend his series on Sulla (I don’t know if it’s the best place to start, but I enjoyed it). Here’s part 1-3:
Re fiber optics and only needing to upgrade the equipment at each end: for long point-to-point links like undersea cables, the 'repeaters' tend to be Erbium-Doped Fiber Amplifiers (EDFAs). They don't actually repeat the signal by converting light to electrical and back to light; instead, they amplify it directly in the optical domain, without needing to know how the original signal was modulated, even by future ultrafast technology or future higher-density wavelength-division multiplexing used to upgrade the link bandwidth.
A relatively high-power, high-reliability pump laser operating at about 980 nm excites the electrons in erbium that has been added to the glass in a loop of fiber inside the amplifier, allowing stimulated emission to optically amplify the lower-energy (longer-wavelength) photons that arrive in the 1550 nm band. It can even work bidirectionally, for signals passing both ways along the same fiber, and can sometimes include dispersion compensators to correct for the temporal spreading of signals of different 'colors' within the 1550 nm band over thousands of kilometres, though this can also be corrected on dry land at the end of the cable. The electrical power needed to run the 980 nm pump lasers is why so much copper is needed in subsea fiber optic cables.
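To put rough numbers on why those amplifiers are needed at all (the figures below are generic ballpark values, not the specs of any actual cable): standard fiber loses on the order of 0.2 dB per km at 1550 nm, so over a transoceanic distance the loss is astronomical without periodic optical amplification:

```python
# Back-of-the-envelope link budget for a long undersea fiber span.
# All numbers are rough, generic assumptions (not a real cable's specs).
import math

attenuation_db_per_km = 0.2   # typical loss of standard fiber at 1550 nm
link_length_km = 6000         # rough transatlantic distance
amp_spacing_km = 80           # plausible EDFA spacing

total_loss_db = attenuation_db_per_km * link_length_km
print(f"Total fiber loss: {total_loss_db:.0f} dB")

# 1200 dB of loss means the signal power would be divided by 10**120,
# utterly unrecoverable. With an EDFA every 80 km, each amplifier only
# has to make up ~16 dB of loss per span:
num_amps = math.ceil(link_length_km / amp_spacing_km) - 1
loss_per_span_db = attenuation_db_per_km * amp_spacing_km
print(f"{num_amps} amplifiers, each compensating ~{loss_per_span_db:.0f} dB")
```

That's dozens of pump lasers strung along the seabed, which is exactly why the cable has to carry so much electrical power (and copper).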
Apple could probably have come up with a car human interface better than Tesla's, and maybe gotten Foxconn to build a factory for EVs, but a non-autonomous EV doesn't have a big enough margin (see Tesla's price cuts and Ford, GM, and Volkswagen's troubles) to make it worth the big capital expense. Cruise has big troubles, Waymo isn't improving as fast as predicted, and laws are still in the way, so even autonomous cars are ahead of their time, and Apple would be playing catch-up.
Working on good on-device AI would be great, especially since they already control their own processors. I would also love to see them improve the comfort and battery life of the Vision Pro; it's great for 3D movies, but not 3-hour ones.
It's funny they called the car developers the Special Projects Group. Before I left Apple over 30 years ago, I worked in the Special Projects Group that was then working on replacing the Motorola CPUs in Macs with RISC processors, but we actually shipped some products. Before the SPG, I worked in ATG, the Advanced Technology Group, which was more advanced but less specialized, and we worked on lots of cool, but unshipped, products.