Regarding your point on the social/interest potential of AI, I can definitely see it being a step change from what’s currently available.
You also mentioned everyone having their own tutor — and while it might sound a bit hand-wavy, it really feels like OpenAI is edging close to becoming all of these things at once.
The “tutor” could recommend communities to join — I’ve even seen o3 say something like, “Would you like me to highlight a few forums where this topic is being discussed?” It wouldn’t all have to live in-house, though there would be obvious advantages if it did.
The same idea could extend to jobs. How much more pleasant (and efficient) would it be to have ChatGPT suggest roles, or even make an introduction?
Lately, I’ve noticed ChatGPT taking more initiative, asking if I’d like to move on to the next logical step. I often find myself responding, “Yes please, that would be great!”
I really liked your framing around “returns on agency.” But I wonder: will self-agency be the key differentiator in this new paradigm, or will it matter more that we trust the ghost in the machine to steer us well? Maybe it will turn out to be the ultimate AI agent…
Interesting. I think at the root of *all* this is the fact that AI is a general technology. It's almost perfectly horizontal, and so it can be almost anything and everything, or at least be useful with almost anything.
It's fascinating to think about potential impacts as things further develop 🤔
Your point on Meta combining all its apps, or OpenAI’s social media interest/matching potential, made me think of Elon and Grok. I know he wants X to become the “everything app”. If he succeeds, which I wouldn't put past Elon, could we see Grok skyrocketing up there?
I already see so many users using Grok to verify tweets. I feel like it hallucinates a lot, but is there untapped potential there?
I think a lot of people looked at WeChat a few years ago and thought, “we gotta do that here too!” But I'm not sure the rest of the context is similar enough for it to make sense.
Grok does get a lot of usage by being available right there with tweets, but I'm not sure if that makes a lot of people use it as their main AI and want to pay more for it. I almost never hear anyone talk about Grok usage the way they talk about ChatGPT (or even Claude or Perplexity). It feels more like a Twitter feature than a real contender for mainstream consumer AI, at least so far.
Have been thinking all day about RoA...
In my head, I am wondering if RoA was actually higher in the "hunter gatherer" era because those with the most agency controlled everything -- the food supply, the social hierarchy, etc.
In that time, lack of agency could have easily meant death, whereas now I think a lot of people can get away with a comparative lack of agency and still have good living standards because of societal advancements... what do you think?
I guess the difference in my thinking is that I'm measuring return as quality of life, while you're defining it as intelligence, optionality, or the sheer quantity/quality of knowledge that agency makes abundant in today's era of accessible information.
Honestly, the quality-of-life upside of agency is probably still similar today, but I think the downside of lacking agency is somewhat lower now... I'll have to think about it more 😅
I think both ways of looking at it are very valid, just measuring different things (more relative vs absolute returns?).
In both cases it's definitely better to be on the extreme right of the curve on agency, that's for sure! 💚 🥃
Two comments:
First, on the path forward for AI social media: attention is king, and relationships are the way to get it. Hence what I'm proposing: a single “journal” built around one broad topic - information - that monetizes authors and attracts readers from all areas of human interest.
The eye story is also very interesting. It turns out my neurobiology mentor, Dr. Peter Sterling, ran a lab at U Penn focused on the retina. Below is the reply I left on X. Our eyes are our highest-bandwidth sense by far. If we want better machine/brain interfaces, this is where to look. That's what drives TikTok, though not for the better.
Equally amazing is what happens after the pulse streams leave the eye. The brain sees only contrast edges, not grayscale (think compression), and these shapes are sent to more than 20 specialized “what” neural networks - five just for faces. As shapes are processed, they are sent to higher memory centers for possible matches, and those are fed back into the stream as positive feedback. Color is sent to a completely different part of the brain, close to the amygdala, which adds emotion. The stream also goes to the “where” cortices to build a model of the environment and coordinate with your proprioception so you don't trip on things. 90% of your balance is maintained by your eyes, not your ears. After the scene is processed, the color is painted back in and everything is sent to your cognitive cortex to tell you what it thinks you have seen. Pretty freaking cool for sure.
Fascinating! And TIL about balance (though I intuited some of it, since I got the balance board and I sometimes try with my eyes closed).
Very cool! Thanks Mark! 💚 🥃
Interesting. I have to say, the more I use ChatGPT (o3 and other models), the more I like Perplexity and Gemini, largely because they're more accurate. I find ChatGPT interesting but much less likely to be correct, with more factual errors, which is pretty important to me. Can you say why people are moving?
It's hard to be sure why people are making the switch, and your experience will depend on your use cases, but in general, people just really seem to like the product and the answers they're getting.
I was a huge fan of Perplexity but I'm using it a lot less now. I tend to use a mix of Claude/GPT-4o/o3/4.1 via the API, and sometimes Gemini 2.5 Pro these days.
It may just be an impression, but I kind of feel like Perplexity has gotten worse recently. The answers tend to be less interesting somehow. Sometimes too terse, almost like they want to save on compute (brevity can be good, but too brief and you feel like you're not getting the full picture).
In Perplexity, don't you just pick which of the modern models you want? And Perplexity provides cites, which is important for verification. Is GPT-4.1 through Perplexity different from 4.1 not through Perplexity?
Perplexity steers the models with its own meta prompts and RAG implementation, so you don't get quite the same thing. That's my understanding, anyway.
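To make that concrete, here's a minimal, purely illustrative sketch of why "GPT-4.1 through a RAG wrapper" differs from the raw model: the wrapper retrieves sources and injects them, along with its own instructions and citation format, into the prompt before the model ever sees your question. This is not Perplexity's actual internals; every name and document here is made up for illustration.

```python
# Toy RAG wrapper: retrieve -> wrap in a meta prompt -> (then you'd call the model).
# All corpus contents and function names are hypothetical.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query and return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, sources: list[tuple[str, str]]) -> str:
    """Wrap the user's query in a meta prompt with numbered, citable sources."""
    cited = "\n".join(f"[{i + 1}] {text}" for i, (_, text) in enumerate(sources))
    return (
        "Answer using ONLY the sources below and cite them like [1].\n"
        f"Sources:\n{cited}\n\n"
        f"Question: {query}"
    )

# Tiny fake corpus standing in for web search results.
corpus = {
    "doc_a": "Perplexity wraps models with retrieval and citations",
    "doc_b": "Balance relies heavily on vision",
}
sources = retrieve("why does Perplexity add citations to models", corpus)
prompt = build_prompt("Why do answers differ?", sources)
```

The point of the sketch: the text the underlying model receives is `prompt`, not your question, so the same model answers differently than it would over the bare API, which is one plausible reason the wrapped and unwrapped experiences feel different.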