NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges well beyond that.
[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about the use of psychedelics.]
What is consciousness?
After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.
"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."
His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.
"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."
"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."
On the notion that people have moral obligations to chatbots
That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.
On the sentience of plants
Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.
On losing time to let our mind wander
I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.
On writing a book that grapples with unanswerable questions
There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.
It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.
(Score: 3, Insightful) by VLM on Thursday February 26, @04:02PM (1 child)
The AI critique is more polite than mine, but I think less insightful and much less focused: it was aiming more generally, while I put a spotlight on one area. The AI critique avoids looking at real-world impact, which I think misses the point of publishing a book "in public"; if you publish a piece of art that fails to make an impact on the world, or makes a negative impact because it's so awful, it fails as a piece of art. The book, as such, is an artistic failure, so I was harsher on it than the AI was; the AI is wishy washy. AI seems wishy washy in general and as such is not useful for non-wishy-washy stuff. It's fine for funny cat videos, in the same sense that the reviewed book is probably fine for wrapping fish or lining bird cages. Perhaps I was a bit harsh, as there are a few good points.
I did laugh at the end of the article: "embrace the wonders". What a load of rationalizing BS. "I had one job, to learn about something and write insights on that topic, but it was hard, so I gave up early and felt like daydreaming about pretty things and happy feels instead; do you like unicorns and balloons, because I like unicorns and balloons." It's not a book about consciousness, it's an instruction manual for becoming a lazy quitter.
It may not come through, but I don't think I would like the book if I invested the time to read the dude's work. Fun to look at the spectacle, I guess. Nope, not a fan. Sometimes reviews are just as helpful when they describe what not to get/view/buy as when they describe what to get/view/buy.
It's unfortunate that what's out there tangentially related to AI is not worth the read. I sure would like to read a GOOD book on the topic. I'm aware of Orlov's "Shrinking the Technosphere" from a decade ago. That guy ain't entirely wrong, but it's pre-AI, so without the keyword it gets ignored.
I think something like a podcast where Orlov and Pollan hung out and talked would be pretty interesting to listen to. They would agree on a lot and disagree on a lot... I think. Orlov is kind of a Kunstler, but with an engineering degree. Kunstler writes a lot about stuff he doesn't deeply understand, but from taking advice or just good luck he seems to be about on the right track most of the time. He really doesn't understand the oil business, but is still kinda correct about it; I know that for sure. He gets the big picture but misses the immense impact of the real little picture: he doesn't realize forests are actually made of trees, but he correctly sees they're a shade of green. Anyway, Orlov is a better Kunstler, and it would be interesting to hear this Pollan guy talk to either of those dudes.

Asking an AI to generate a hypothetical debate/convo between Orlov and Pollan was kind of fun. I will not cut and paste the result because Gemini's response was very long, but it boiled down to zero insight: it was just their book "back cover biographies" turned into parallel conversations, alternately talking past each other. That's a political debate, not a two-person conversation, which was underwhelming and an interesting meta-commentary on the quality of AI "thinking". As a human analysis, I think Orlov's reaction to Pollan would be that it would probably be better if AI failed sooner rather than later, for reasons that would take a while to explain. I think I generated more insight in 15 seconds than Gemini did in about three pages of ... output. Even if I were wrong, it's still more insight in totality.
(Score: 2) by JoeMerchant on Thursday February 26, @04:26PM
> AI is wishy washy. AI seems wishy washy in general and as such is not useful for non-wishy-washy stuff.
I think that's baked into the nature of its construction: statistically predict based on a wide variety of input. I wouldn't expect it to carve out extreme stances, unless the developers pushed it to do that. I believe Grok is attempting to pioneer that part of the field.