76: Intel + TSMC & Samsung?!, Google Cloud Too Early & Too Late, OpenAI's DALL·E, Product Features of Weibo, Intangibles in S&P 500, Signal, Gmail, and #ShroomBoom
"Common thread: engineering hubris."
The world breaks everyone and afterward many are strong at the broken places.
—Ernest Hemingway, A Farewell to Arms
What can I say?
Investing & Business
Homesteading Ideas
Another great post by Byrne Hobart, really well-written as usual, and thought-provoking. I particularly like the first 4 paragraphs. Some highlights:
A surprisingly useful piece of advice about writing online, which generalizes beyond that, is that you should never get tired of repeating your best ideas. [...] So if you're a single-source supplier for an analytical input that repeatedly produces novel outputs, you'll only get full credit if you draw attention to it.
This is easiest to see in online media: "Aggregation theory" is synonymous with Stratechery, "Everything is securities fraud" is straight out of Money Stuff, "Charge More!" means Patrick McKenzie, "Software is Eating the World" means Marc Andreessen, "Signaling," is Robin Hanson, "X, Explained" is Vox, etc. In no case was the concept completely invented by the person who now owns it—but in every case they've refined it, found many new applications for it, and consequently owned it. It's the IP version of homesteading, or Lockean property rights: if you put enough effort into an un-owned idea, it's yours. This dynamic produces something that's normally rare in knowledge work fields: a form of intellectual property rights that are valuable to the creator but hard to abuse.
These topics are not quite a beat, but might be closer to a metabeat; there are some topics these writers and publications cover that other writers know to stay away from, both because of expertise and because of branding. There are some stories that basically belong to these writers, and anyone else who writes about them is going to be swimming upstream.
You can read the whole thing here.
I think this is fascinating. I know there are some ideas I really like that I constantly apply to lots of things, but I have no idea how visible they are to those who have been reading me for a while, and if they associate them with me or not.
I’d be quite happy if over time some of these meta-beats emerged organically — I certainly don’t want to force it, because the point for me is to write about whatever interests me — in part because I think it would provide extra context (meta-context?) and clarity, and also possibly make it easier to go deeper on certain things, as you build up shared context.
By that I mean something similar to what Eugene Wei said about Jeff Bezos’ ability to compress ideas (I wrote about that in edition #74): once Jeff had established what “Day 1” meant, he no longer had to explain it every time he mentioned it. Once you’ve established a certain framework or foundation, it’s easier to go straight to the more subtle ideas rather than start from zero every time.
Anyway, it’s all a bit theoretical, because I’m not sure I’m homesteading any ideas so far… Maybe I am, but don’t realize it because my favorite ideas are to me like water to a fish, so ever-present that I barely notice them anymore..? ¯\_(ツ)_/¯
‘Intel Talks With TSMC, Samsung to Outsource Some Chip Production’
Intel has talked with Taiwan Semiconductor Manufacturing Co. and Samsung Electronics Co. about the Asian companies making some of its best chips, but the Silicon Valley pioneer is still holding out hope for last-minute improvements in its own production capabilities. [...]
Any components that Intel might source from Taiwan wouldn’t come to market until 2023 at the earliest and would be based on established manufacturing processes already in use by other TSMC customers [...]
Talks with Samsung, whose foundry capabilities trail TSMC’s, are at a more preliminary stage (Source)
This is one of those headlines that, if you could send it back in time 10-15 years, a lot of industry observers wouldn’t believe.
Intel is facing a Catch-22, because if it sells its best chips on non-leading process nodes (i.e. not as small and power-efficient as what competitors have), even a really good design will have a hard time competing.
And if it outsources the fabbing of its best chips to TSMC, it’s giving more scale to a competitor and reducing its own fabbing scale, in a business that is all about scale, with extremely high fixed costs (so the more volume you can push through, the better).
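To make the scale point concrete, here’s a quick sketch with completely made-up numbers (illustrative only, nothing to do with Intel’s actual costs):

```python
# Toy illustration of fab economics: with huge fixed costs, cost per
# wafer drops as volume rises, which is why losing volume to TSMC
# hurts twice. All numbers below are hypothetical.
fixed_costs = 10e9       # hypothetical: $10B to build and equip a fab
variable_cost = 3_000    # hypothetical: $3k per wafer in materials/labor

for wafers in (500_000, 1_000_000, 2_000_000):
    unit_cost = fixed_costs / wafers + variable_cost
    print(f"{wafers:>9,} wafers -> ${unit_cost:,.0f} per wafer")
```

Every wafer that moves to TSMC shifts that fixed-cost burden onto fewer remaining wafers, while improving TSMC’s own unit economics at the same time.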
A decision to outsource some fabbing buys some time, but it doesn’t solve the real problems. Intel’s future will depend on whether it can get its own process back on track and catch up to TSMC, and whether it can do some really great design work to compensate for being behind on process (but AMD has been doing great work lately too, so that’s harder than it would be if AMD were less competitive).
Google’s Cloud: Too Early & Too Late
Good thread by ex-Googler Hemant Mohapatra on his experience inside GCP. Some highlights:
~8yrs ago (Dec’12) I got a job @Google. Those were still early days of cloud. I joined GCP @<150M ARR & left @~4B (excld GSuite). [...]
By 2008, Google had everything going for it w.r.t. Cloud and we should’ve been the market leaders, but we were either too early to market or too late. What did we do wrong? (1) bad timing (2) worse productization & (3) worst GTM [go to market].
We were 1st to “containers” (lxc) & container management (Borg) - since '03/04. But Docker took LXC, added cluster management, & launched 1st. Mesosphere launched DCOS. A lot of chairs were thrown around re: google losing this early battle, though K8s won the war, eventually 👏
We were 1st to “serverless” (AppEngine). GAE was our beachhead -- it was the biggest revenue source early on but the world wasn’t ready for serverless primitives. We also didn’t build auxiliary products fast enough. Clients that outgrew GAE wanted “building block” IaaS offerings.
1st to hadoop (map-reduce ‘04) but our hosted Hadoop launched in ‘15. AWS EMR was ~200M ARR by then. 1st to cloud storage (GFS ’03), but didn’t offer a filestore till ‘18! Customers were asking for it since 2014. Didn’t launch archival storage or direct interconnect till v. late.
Common thread: engineering hubris. We had the best tech, but had poor documentation + no “solutions” mindset. A CxO at a large telco once told me “you folks just throw code over the fence”. Cloud was seen as the commercial arm of the most powerful team @ Google: TI (tech infra).
TI “built” stuff == good; Cloud “sold” stuff == bad. This unspoken hierarchy led to GTM decisions that cemented our #3 position in a market where we had far superior tech. Another reminder that BETTER TECH RARELY WINS AGAINST BETTER GTM
I can’t tell what was just his experience, what was more general, what has changed since, etc., but it adds more data points to help understand the story, and it mostly fits with the other pieces of the mosaic I’ve gotten from other sources.
You can read the full thing here. h/t Gavin Baker
Interview: Sean Stannard-Stockton
I mean, the man has Stock in his name!
Another very good interview by Bill Brewster. Sean is president and chief investment officer of Ensemble Capital, a firm that has always been generous in sharing its ideas.
Sean’s the kind of guy where almost everything he says makes me go “yeah, that makes sense, that’s smart, he seems pretty well calibrated”. So let’s all be more like Sean, I guess.
The interview is less personal and more straightforward about investing than some of Bill’s other episodes, but that’s not a bad thing (I only care that it’s good, not about the specific format).
‘Value of Intangible Assets in the S&P 500’
Another one from Visual Capitalist.
‘product features of Weibo aka Chinese Twitter’
Good thread by Lillian Li on Weibo. Some highlights:
It’s double the size of Twitter in users, and it did all the things that Twitter didn’t by providing content creators with multiple streams of revenue. [...]
2) So Weibo has no 280-character limit; you can, and people do, post pretty long articles with up to 9 pics, as well as short video, live-streaming, and polls. It’s much more interactive and multimedia-based
3) Accounts’ about-me sections are much lengthier. You can also buy a VIP subscription that allows you to personalise your page even further, and also follow more accounts. [...]
6) For the big Vs / content creators: they have a lot more monetisation streams on platform - they advertise, open up e-commerce shops (Weibo has a strategic alliance with Alibaba, remember?), run a subscription offering, and also get tips from fans.
7) As the saying goes, Substack is how Twitter monetises. Weibo has already incorporated that into its product offering. When I buy a subscription for an account I follow, I get exclusive content and unique access to that creator, as well as access to their community [...]
9) The next generation of social media platforms will put monetisation for creators first. You’ve got TikTok already showing up with this ethos; others have to follow or they will lose out on talent.
h/t Fat Tail Capital
‘$TSLA traded an unbelievable $62b worth of shares today’
$TSLA traded an unbelievable $62b worth of shares today; that's more than the next 10 most active stocks combined and double $SPY. Outside of "Inclusion Day" (which arguably shouldn't count), I think this is an all-time record for a stock.
Fun Fact: $TSLA has traded $1.2T worth of shares over the last six weeks. That's as much as $FB traded all of last year.
Also fascinating is that $TSLA is now trading more than $SPY on a regular basis (shown by the white in this chart). Nothing has traded more than $SPY (even for a day) since the '90s, basically.
Science & Technology
Gmail Rant
Gmail can be so stupid sometimes.
I found a few real emails in my spam folder, things that newsletter readers had sent me a few weeks ago, so I marked them as “not spam”. That action moved them to the inbox but marked them as ‘read’, so now there’s no way for me to find them again; they’re lost in a sea of email... 🤬
If you wrote me in recent weeks and never heard back, sorry.
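If I ever get desperate enough to dig them out programmatically, something like this might at least narrow the haystack. A rough sketch, assuming Gmail API access is already set up (OAuth credentials, google-api-python-client installed); the function name and date window are made up:

```python
from googleapiclient.discovery import build

def list_recent_inbox(creds, after="2020/12/15", before="2021/01/15"):
    """Print sender and subject for inbox mail in a date window."""
    service = build("gmail", "v1", credentials=creds)
    resp = service.users().messages().list(
        userId="me", q=f"in:inbox after:{after} before:{before}").execute()
    for msg in resp.get("messages", []):
        # Fetch just the headers we need, not the full message body.
        meta = service.users().messages().get(
            userId="me", id=msg["id"], format="metadata",
            metadataHeaders=["From", "Subject"]).execute()
        headers = {h["name"]: h["value"] for h in meta["payload"]["headers"]}
        print(headers.get("From", "?"), "-", headers.get("Subject", "?"))
```

As far as I know there’s no “was previously in spam” search operator, so this can only list everything in the window to skim, which is exactly the sea-of-email problem.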
OpenAI: DALL·E, Creating Images from Text
This is really cool. Everybody has heard about GPT-3, the massive autoregressive language model by OpenAI. This is a very impressive and intriguing variant that mixes input and output mediums:
DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images.
So the images that you see at the top are what came out with the prompt of “an armchair in the shape of an avocado” (who thought of that one? I could see it in some corner of an IKEA showroom, though…)
GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks. Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach.
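For the technically curious, my understanding from OpenAI’s post is that the core trick is treating the caption and the image as one stream of tokens, with a GPT-style transformer trained to predict the next token, so generating an image amounts to continuing the sequence past the text. Here’s a toy sketch of that single-stream objective (assuming PyTorch; every size below is a made-up toy value, and the real model uses a discrete VAE to turn images into tokens):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB = 1000             # toy text (BPE) vocabulary size
IMAGE_VOCAB = 512             # toy codebook size for discrete image tokens
TEXT_LEN, IMAGE_LEN = 16, 64  # toy sequence lengths
D_MODEL = 128

class TinyTextToImage(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared embedding over a combined vocabulary: image token ids
        # are offset by TEXT_VOCAB so both modalities live in one stream.
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)
        self.pos = nn.Embedding(TEXT_LEN + IMAGE_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):
        n = tokens.shape[1]
        # Causal mask: each position only attends to earlier positions.
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        x = self.embed(tokens) + self.pos(torch.arange(n))
        return self.head(self.transformer(x, mask=mask))

# One training step on random data, just to show the objective:
model = TinyTextToImage()
text = torch.randint(0, TEXT_VOCAB, (2, TEXT_LEN))
image = torch.randint(TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB, (2, IMAGE_LEN))
stream = torch.cat([text, image], dim=1)  # the single text+image stream
logits = model(stream[:, :-1])            # predict token t+1 from tokens 0..t
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                       stream[:, 1:].reshape(-1))
loss.backward()
```

Sampling would then work just like GPT-3 text generation: feed in the caption tokens and repeatedly sample the next image token until the image is complete.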
Things can get pretty surreal, and I’d love to see what some artists could come up with if they had this tool:
You can see more examples of the model’s capabilities here, as well as more details on how it works.
MIT’s Technology Review also has a piece on it.
File Under: #ShroomBoom
A few interesting ones recently:
Highlights from this write-up:
“Ketamine is exciting because of its potential to both treat, and better understand depression. This is largely because ketamine doesn’t work the way ordinary antidepressants do – its primary mechanism isn’t to increase monoamines in the brain like serotonin [...]
“One of the major candidates for the mechanisms underlying ketamine’s antidepressant properties is how it increases neural plasticity. Neural plasticity is the brain’s ability to form new connections between neurons and ultimately underlies learning and memory in the brain.”
Highlights from this write-up:
Combining the psychedelic drug psilocybin with supportive psychotherapy results in substantial rapid and enduring antidepressant effects among patients with major depressive disorder, according to a new randomized clinical trial. [...]
“After both groups had received treatment, 71% of participants had a clinically significant response to the treatment (greater than 50% decrease in depression scores) at 4-weeks post-intervention and 54% were in remission from depression at 4-weeks post-intervention. This represents a large effect of this treatment among people with major depressive disorder, approximately 4 times larger effect compared to studies of antidepressant drugs.”
Here’s someone on Reddit describing their experience with psilocybin in Phase II of Compass Pathways’ study. It’s worth reading if you’re curious about what the experience is like.
Happy Birthday Wikipedia
20 years old. I’m so glad that this project, which seems like it shouldn’t-work-in-theory-but-works-in-practice, exists.
I used to be a pretty active Wikipedian in the early 2000s. Probably around 2003-2007..? I’m not the best at remembering exact dates. My account had thousands of edits and I created a few common word pages on both the English and French wikis… fun times. There were fewer editor turf-wars on the Talk pages back then (every page has a Talk page where editors can argue about changes and edits).
Wikipedia hosts more than 55m articles in hundreds of languages, each written by volunteers. Its 6.2m English-language articles alone would fill some 2,800 volumes in print. [...] With over 20bn page views a month, it has become the standard reference work for anyone with an internet connection. [...]
It defies the Silicon Valley recipe for success. The site has no shareholders, has generated no billionaires and sells no advertising. [...]
One study in 2018 estimated that American consumers put a value of about $150 a year on Wikipedia. If true, the site would be worth around $42bn a year in America alone. Then add indirect benefits. Many firms use Wikipedia in profitable ways. Amazon and Apple rely on it to allow Alexa and Siri, their voice assistants, to answer factual questions. Google uses it to populate the “fact boxes” that often accompany searches based on factual questions. Facebook has started to do something similar. This drives traffic to Wikipedia from those keen to learn more. AI language models of the sort employed by Google or Facebook need huge collections of text on which to train. Wikipedia fits the bill nicely. (Source)
Deep Dive into the State of COVID-19
Signal’s Poet-in-Residence
Kudos to whoever wrote the Signal 5.1 release notes…
As Turner Novak wrote: “I can’t stop reading this in Eminem’s voice”
The Arts & History
‘You want to be on vacation, Pete? 'Cause I can make that happen.’
I’ve been alternating re-watching Mad Men and Gilmore Girls episodes with my wife for the past few weeks. Televisual comfort food. Tonight was 'The Jet Set' (S2E11), where Don goes to California. Great one.
How many of you have watched ‘Palm Springs’ (2020) since I recommended it? Hopefully without any spoilers… Curious what you thought (you can reply to this email).
It’s worse than you know…
I made a new one-off GIF from ‘Serenity’ (2005 — has it already been that long?!).
I figure that’s going to be a useful one…