I don't know if extrapolation and a dose of SciFi are sufficient to see where we're going.
You write: "Everyone will soon have unlimited and instant access to a supercharged google / stackoverflow / webmd / wiki and more, an AI which renders all of those former platforms utterly obsolete."
The technology doesn't move in that direction, nor does the socioeconomic constellation that shapes it, nor the two-way feedback loop between the two.
The tech from 1982 that still underpins every LLM is deep learning, which produces static models from training data. And that training data cannot include too much LLM output if you want to avoid model collapse. In plain terms, that means humans must keep doing the work that LLMs appear to do, and these things still do not learn during operation: they cannot be convinced in a dialogue that something is other than what the patterns in their training data say it is. See https://schrijfsels.substack.com/p/deep-learning-makes-for-shallow-ai.
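To make the "no learning during operation" point concrete, here is a minimal Python/PyTorch sketch (a toy linear layer standing in for a trained model, purely illustrative): however long the "conversation", the weights are bit-for-bit identical afterwards.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained model: after training, the weights are fixed.
model = nn.Linear(16, 16)
model.eval()  # inference mode

before = model.weight.clone()

# "Talking" to the model is just repeated forward passes.
with torch.no_grad():  # no gradients flow, so nothing can be learned
    for _ in range(100):
        _ = model(torch.randn(1, 16))

# The dialogue taught it nothing: the parameters are unchanged.
assert torch.equal(before, model.weight)
```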
Secondly, LLMs only produce statistical approximations of everything they absorbed, sometimes with half-verbatim regurgitations, but never in the exact way a database reproduces a record. You can't use them to reliably construct citations or URLs. Search queries for a classic Google search, yes. But proper, general logical reasoning or calculation, no.
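As a toy contrast (the token probabilities below are invented for illustration, not from any real model): a database lookup returns a stored record verbatim or not at all, while a language model samples from a learned probability distribution over next tokens, so a "recalled" URL is merely plausible.

```python
import random

# Exact retrieval: the stored string comes back verbatim, every time.
db = {"attention_paper": "https://arxiv.org/abs/1706.03762"}
print(db["attention_paper"])

# Statistical generation: sample the next token from a distribution.
# These probabilities are made up for illustration only.
next_token_probs = {"1706": 0.6, "1606": 0.3, "1796": 0.1}
tokens, weights = zip(*next_token_probs.items())
print(random.choices(tokens, weights=weights)[0])  # plausible, not guaranteed correct
```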
The AI we have (generative AI, LLMs) presents itself as something it categorically isn't. It combines the absolute meaninglessness of computation with the unreliability of humans.
What you seem to seek is to combine the logic and predictability of computers with the human drive, creativity, and capacity for producing meaning and building relationships. But the technology for that is still exactly as far away as it was when we were watching ST:TNG as kids.
LLMs are not what they pretend to be: https://schrijfsels.substack.com/p/life-in-plastic-its-fantastic
And that's just the tech.
thanks for the reply. i didn't attach any date to anything contained here, because clearly what we have today doesn't look like the things i describe. but i personally view the things i describe here as inevitabilities given a long enough timeline; in some cases that could be way beyond our lifetimes, there is no way of telling.
I'm aware of how llms work and the limitations you shared. the part you cited is, in my opinion, the least controversial point here, as in my pov current-gen chatgpt is likely already 'better' / more useful than all the other platforms i mentioned combined. the only thing that keeps my prediction from being satisfied is that not everyone can afford a phone and internet plan, and not everyone can afford a chatgpt subscription. it seems to me a pretty safe assessment that current-gen llm tech will come down in cost and limits in the coming years, such that nearly everyone carries it all day.
I guess I'm not nearly as enthusiastic about ChatGPT as you. But to each their own.
As for my crystal ball and a bit of OG SciFi: https://open.substack.com/pub/schrijfsels/p/the-mule-is-upon-us
What a nice blue sky you paint.
Now write from the opposite viewpoint, where AI works to our detriment.
this bot did a pretty good job of that here https://substack.com/home/post/p-162758788
A very interesting read. It strikes me that I was wrong to use the word “opposite” in my comment above. While definitely from a VERY different viewpoint, I would not say the bot’s post refuted yours.
I see a future that could be a synthesis of the two, and likely others.
And that gives me inspiration for a story, perhaps a novella.
Thanks for your work, and for answering my challenge so well.
1) I have asked another author how to prepare for what's coming, so I am asking you the same thing.
2) You see all the beauty that AI will bring: harmony, education, advances. But there is someone out there who wants to use it to harm someone or something: a country, a tribe, an ally, a terrorist, a faction within a party. How do we deal with this? Certainly not with a committee.
3) At what point do AI and the human being merge to become one? If that happens, are we no longer human but a new species? At what point do we say no to the upgrade of the human? Do we have the right to say no to everyone? Or only the poor?
4) I am seeing that crap in will produce crappy AI. Who is feeding this information into AI? What about the advances in thinking, research, and humanity that go with these breakthroughs? Can AI tell it's being manipulated? Can we?
5) Who is responsible for powering down AI? Who will say: we are not mature enough, not smart enough, this is wrecking life as we know it?
6) If China takes off with this and the impact on their society is vast depopulation, or some sort of warfare we can barely imagine, at what point does this spread, and can it be contained? What happens if one country (Russia, China, the UAE) tests its AI power against an adversary: a city, state, province, country, continent? What happens if they test various forms of AI media to influence voters, or to turn the public against an institution or government body?
7) How will AI defend against a black swan event or other catastrophe (military, earthquakes, natural disasters)?
8) Will it allow or develop a time-travel component so we can shut it down?
I have way too much time on my hands. Have a good evening.
1) realistically, we need to collectively reject the late-stage hypercapitalist system in which corps have more rights than humans. society is made up of humans, therefore we have the power, if we want the change badly enough. So I think we need to talk about the coming collapse loudly, with everyone.
2) AI is not the first existential risk humanity has faced / faces. asteroids, global nuclear war, a pandemic of far greater scale than covid... we should take reasonable measures to anticipate and mitigate existential risks, but there is no putting the AI genie back in the bottle. it's here to stay, and today is the worst the technology will be from here on out. we have to manage existential risk.
3) At what point do AI and the human being merge to become one?
This begins to approach a longer time horizon than other types of predictions, so it's hard to say. I do believe it will happen. If that happens, are we no longer human but a new species? At what point do we say no to the upgrade of the human?
i expect there will be an 'organic humans' movement, but at some point the benefits of being synthetically enhanced would so vastly outweigh not doing so. thought experiment: heart disease is the number one killer of americans. suppose we had invented an artificial heart that you transplant into a newborn baby, and it was proven never to be susceptible to heart disease. the problem is solved, overnight. at some point it would be kind of crazy NOT to put the mechanical heart in your newborn baby, as you would be risking its suffering a terrible death from a heart attack in the future. extend this thought experiment to other parts of the body and mind and you can see it's not such a straightforward decision deciding where to draw the line on the enhancements you'd like to receive.
are we another species at that point? does it matter? are we the same species we were hundreds of years ago? neanderthals created jewelry and had culture; although we are a different species, don't we have a lot in common? what we can say for sure is that there is going to be a vast gradation of conscious experience in the future.
4) i'm not sure i understand the first part of your question. the second part, i think, is about interpretability. i'm not convinced interpretability is possible, or fully necessary. i discussed that a little in this article.
5) like the earlier comparisons to other existential risks, we just have to manage these things. who is responsible for dismantling all the nuclear bombs?
6) most powers in history are exploited and abused before they are understood or regulated. stuff is going to get much much worse before it gets better
7) i don't know, but considering how many resources we allocate to intelligence even in the pre-superintelligence age (CIA, NSA, etc.), imagine what powers a superintelligence could have that's hooked up to all the world's data streams in real time.
8) 👽
Yo. Just saw the Fortune magazine interview. Wow. That really sucks. 150k loss. Good luck.
This is just a list of faith-based assertions
i like to think of this as more creative thinking around plausible futures from someone who has followed ai very closely and listened to the subject matter experts. i didn't assign specific years or timelines here
Whenever I read pieces like this I find myself confused that the author doesn't seem to feel obliged to give any reasons to believe that their predictions are likely to become real. It's almost as if the sheer scale of the predicted transformations serves as a substitute for any reason to believe them. I don't mean this snarkily – perhaps your reasons are in earlier posts and the Substack algorithm showed me this and your one from last week in a way that took them out of context. But other than the idea that intelligence can be usefully conceived as a single linear scale and that we can assume rapid acceleration along it (all of which seems dubious to me), I just don't see here any grounds to believe you.
Even if this is just the hopeless ignorance of an outsider to the topic, I would really like AI maximalists to address this angle, because I think you're going to need to do that in order to bring more people round to your perspective!
- sakana ai
- alpha fold
- alpha evolve
- "intelligence explosion"
look them up. ask yourself: if these things are possible today, what will be coming?
this article was meant to inspire creative thinking around the future of ai. I'm not Nostradamus.
Yeah, I'm not buying that.
AI, as trained by humans to shortcut labor, will only continue to aid us in our self-destructive tendencies. The very fact that we're trying to deploy it as a shortcut for work should be indication enough of where it will lead us. We'll end up like WALL-E blobs.
I agree, but also for the additional reason that my experience with AI is that it's polluted with bad information. Until AI can manage to come up with some sort of epistemological framework, it's garbage in, garbage out. Science gives us no reprieve; look at all the scandals happening in the world of scientific journals, text that I'm sure AI is hoovering up as part of its inventory. AI has no ability to figure out what exactly is true and what's not; that's what scares me.
out-of-the-box chat ais maybe don't have much of an epistemic framework, but they are doing their own science now: proposing hypotheses and experiments. sakana.ai
Only in a limited, scripted sense; when you start to broaden the parameters, I don't see anything suggesting we're getting close. It's kind of an interesting metaphor, because that's the curse of a general epistemology: where do you draw the parameters? For me the answer is really simple: you start by differentiating the things that can be demonstrated to exist in the physical realm from everything else. Then you take the ideas based on the physical world and decide whether or not something is true, whether it's supported by the physical evidence. One of the big problems is that we don't teach this sort of thinking in school, so who's sitting around ready to design it for AI?
If some of those scandals are correct, AI isn’t just hoovering it up, it’s also generating it.
Can you imagine if AI does come up with a framework for evaluating truth? Whose framework will win out? With competing visions of reality, if one becomes the “accepted” version, any other viewpoints will be pushed to the fringes and persecuted.
Exactly: whoever controls the epistemology controls the power. For me, now is the time for society to establish an agreed-upon general applied epistemology for determining what is true about the physical world. Imagine if our disagreements were about competing epistemological structures rather than hollow ideologies. I've taken a stab at building an applied general epistemology for ages 16 and up in my recent book, Truth, What is it good for? https://www.amazon.com/Truth-what-good-Enlightenment-Performance/dp/0970507054
How do you feel about AI music, or bots trained to write music based on a certain composer's voice from a style/era?
I don’t think it’s a problem. I still prefer music from humans.