375: AI's Intelligence Ladder, Sweden's Rare Earths, Cancer, Nvidia, AI Crimes, Energy Crisis, Twitter, and Princess Bride
"the design space of all possible minds is very large"
I never lose. I either win or learn. -Nelson Mandela
📺 📼 📼 📼 📦📦📦📦 🍏 If you cannot cure your Obsessive Compulsive Disorder, you may as well harness it for a good purpose.
This is what Marion Stokes did. Hoarding that was beneficial to society:
[She was a] prolific archivist, especially known for her compulsive hoarding and archiving of hundreds of thousands of hours of television news footage spanning 35 years, from 1977 until her death in 2012, at which time she operated nine properties and three storage units
Some of Stokes's tape collection consisted of 24/7 coverage of Fox, MSNBC, CNN, C-SPAN, CNBC, and other networks—recorded on up to eight separate VCRs in her house. She had a husband and children, and family outings were planned around the length of a VHS tape. Every six hours, when the tapes ran out, Stokes and her husband switched them out—even cutting meals short at restaurants to make it home to switch out tapes in time. [...]
The archives grew to about 71,000 VHS and Betamax tapes (many up to 8 hours each) stacked in her home and apartments she rented just to store them
Stokes became convinced there was a lot of detail in the news at risk of disappearing forever, so she began taping.
The Internet Archive (at Archive.org) agreed to digitize the collection, preserving it for posterity.
Another noteworthy fact about Stokes is that she was into Apple computers, which led her to preserve 192 Macs *and* purchase Apple stock early, which is how she financed the video tape hoarding:
Stokes bought many Macintosh computers since the brand's inception, along with various other Apple peripherals. At her death, 192 of the computers remained in her possession. Stokes kept the unopened items in a climate-controlled storage garage for posterity. [...]
Sensing the immense potential of the Apple brand during its infancy, Stokes invested in Apple stock while the company was still fledgling, with capital from her in-laws. Later, she encouraged her already rich in-laws to invest in Apple, advice they took and profited greatly from, increasing their wealth even further.
Stokes then allocated part of her profits to her recording project, which was important for her work, especially for the first few years when videotapes were a new, expensive technology.
There’s a documentary about her called ‘Recorder: The Marion Stokes Project’. I haven’t seen it, but it seems interesting.
h/t to friend-of-the-show Alex S. for emailing me about the Apple connection
🕵️♀️🤖🗣️🎙️✍️📕 A conversation I had with my friend Jimmy Soni about the hundreds of interviews he did for his books got me thinking about what’s involved in writing a non-fiction book, particularly the research and the interviewing of sources.
Perhaps there’s a potential use case for AI to augment human writers. 🤔
For example, imagine an AI ‘research assistant’ that could *simultaneously* contact hundreds of people who were involved with the thing you’re writing about (ex-employees, witnesses, the families of important individuals, etc.), have a chat with them via text or voice to explain the project, ask for their help, and take down their stories and recollections, collating them into a massive corpus that could then be used as a reference.
Anything particularly interesting that needs further investigation could be followed up on by the writer. I doubt it’d be perfect, and it wouldn’t replace good interviewing skills and instinct — but as an additive thing, to reach out to people who might not otherwise be questioned at all? I think it would help.
Are non-fiction writers next on the AI’s list?
In an ideal world, we’d have an infinite number of writers with infinite time to dig.
But as things are, there are so many good books that never get written, so many interesting stories that never get shared because nobody reaches out to person XYZ and they take what they know to their grave… It would be neat to have a tool to massively increase the productivity/lower the cost of reaching out to potential sources. To cast a wider net.
The result could be more in-depth books, and many books that wouldn’t exist otherwise.
🔥😲 Wanna get a taste of Dhaka street food? MBI is in Bangladesh and shared this video. You have to watch to the end to get it — there are levels to this thing!
🖼️🎨🤖✋🚫 One major problem with the whole “generative AI shouldn’t be trained on the work of artists!” — aside from the fact that it’s a standard we don’t apply to human artists, who constantly train on copyrighted work, as I explain in more detail in My Thoughts on the Ethics of AI Art — is that most humans simply don’t have much vocabulary to describe art without referring to existing art.
For example, if I tell you to imagine music that sounds like a mix of Metallica, the Beatles, Pink Floyd, and Daft Punk, you may never have heard ANYTHING like it. The combination may be highly original. But you can get a feel for what I mean with just a handful of words.
If I tell you that something looks like a mix of Van Gogh, Rembrandt, and MC Escher, you can kind of understand what I mean.
But what words could I use to convey these ideas without referring to existing artists? How can you possibly guide an AI in the direction that you want without these referents?
It’s generally a bad idea to try to block anyone (or anything) from learning from something (Can’t Google’s search engine be trained on copyrighted works?! Can’t social scientists study literary works?).
Copyright law should regulate the commercialization of things, not how we learn from them.
🏦 💰 Liberty Capital 💳 💴
🤖🧠👩🔬Climbing the intelligence ladder without understanding human consciousness 🪜
I really enjoyed this interview with Replit CEO Amjad Masad by Patrick O’Shaughnessy.
I think Replit is a very interesting company doing great work to make coding more accessible. People gaining new skills and educating themselves is one of the highest-leverage things our civilization can facilitate.
But there’s this thing he said that I have to disagree with. First I’ll provide the excerpt, and then explain my alternative view:
I think there are limits to how intelligent computers can get. I think they can get pretty raw intelligence, but I think we will not be able to cross the chasm into a human-like agent until we actually understand the nature of consciousness and they understand the nature of human intelligence.
It's almost like you expect to get abs without going to the gym, which is, I'm going to sit around and get abs. It's like how do you expect to build a humanlike agent if we don't understand human agency to start with. We don't understand the nature of consciousness. We barely understand the brain. So I feel like we're so far away from that question that it's inconceivable for me to create things that act like humans in the world until we understand what humans are.
I am convinced that we don’t need to understand the nature of consciousness or know everything there is to know about the human brain to create highly intelligent AIs, eventually even smarter-than-human ones.
From first principles, we know that evolution created human intelligence without any understanding of sentience, consciousness, or human brains.
Simple processes can create very complicated end results if they have a good optimization/feedback mechanism. For evolution it was natural selection, which is very slow, but very effective if you leave it to run for a few billion years.
We can iterate and ratchet up improvements much faster — we’re improving our algorithms, our training data, and our hardware in parallel (more compute, more memory, more storage, more bandwidth between nodes, etc.). We can also apply more nuanced selective pressure than just who survives to reproduce.
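To make the point concrete, here’s a toy sketch (my illustration, not something from the interview): a Dawkins-style “weasel” loop, in which blind random mutation plus a simple scoring signal reliably reaches a target string that pure chance essentially never would — a dumb process plus a feedback mechanism producing a structured result.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "


def score(candidate: str) -> int:
    """Feedback mechanism: count positions that match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(parent: str, rate: float = 0.05) -> str:
    """Blind variation: randomly change some characters."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in parent
    )


random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while best != TARGET:
    # Selection: keep the best of a small brood of mutants
    # (including the parent, so progress is never lost).
    brood = [mutate(best) for _ in range(100)] + [best]
    best = max(brood, key=score)
    generations += 1

print(best, generations)
```

Nothing in the loop “understands” English or the target sentence; variation plus selection is enough. Evolution ran a far slower version of this for billions of years, and the argument above is that we can run much faster, more nuanced versions of the same basic ratchet.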
So even if all we have is a relatively dumb process, I don’t see why we couldn’t bootstrap something very smart — and we’re already seeing hints of this with large language models (LLMs), which show early signs of generalization. In other words, they can do all kinds of tasks that they were never designed to do. These skills emerge from throwing enough data and compute at sufficiently sophisticated algorithms.
If we keep improving every one of those elements — the trinity of algorithms, data, and hardware — who knows how far that can go?
Another reason why I think it’s entirely possible to get very far without much understanding of the human brain is that I don’t think human minds are the only possible minds with that level of intelligence. There are many paths up the mountain.
I think the design space of all possible minds is very large, and the approach that we’re taking is likely to create a very non-human intelligence (things may have been different if the only way we had been able to progress in AI was by scanning human brains, modeling them in software, and trying to emulate biology more closely… but that’s not what we’re doing).
I don’t think the fact that this intelligence is built on a different architecture than the human brain means that it is inherently limited in how smart it can get (and we may find that an intelligence very different from us can exist when it comes to things like sentience, feeling like an individual, emotions, motivations, or agency).
Don’t get me wrong, we may need plenty of other breakthroughs to get there. Who knows what will be the next step forward, the equivalent of Google researchers introducing the Transformer architecture in 2017?
But I don’t think we necessarily need to circle back and solve human consciousness and the mysteries of the brain to avoid getting stuck on a plateau at a sub-human level of general intelligence.