posted by janrinok on Thursday February 26, @11:48AM   Printer-friendly

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, but it kind of goes beyond that.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by JoeMerchant on Thursday February 26, @03:19PM (3 children)

    by JoeMerchant (3937) on Thursday February 26, @03:19PM (#1435020)

    Opus 4.6 prompt: Summarize this transcript in 300 words or less and provide an unbiased critique of the points it makes.

    Summary

    In this Fresh Air interview, Terry Gross speaks with Michael Pollan about his new book A World Appears: A Journey into Consciousness. Pollan explains that psychedelic experiences prompted his curiosity about consciousness — a phenomenon everyone knows firsthand yet no one can fully explain scientifically. He traces this mystery back to Galileo's decision to leave subjectivity to the church, which sidelined consciousness from scientific inquiry for centuries.
    Pollan argues forcefully that AI will never achieve true consciousness. He rejects the brain-as-computer metaphor, noting that unlike computers, the brain has no hardware/software distinction — every experience physically reshapes it. More fundamentally, he contends consciousness originates in feeling and embodiment, not computation. Without bodies that can suffer, chatbots' reported feelings are "weightless." He warns that millions are forming unhealthy attachments to sycophantic chatbots, and that corporations are colonizing our attention and emotional lives.
    He finds the most persuasive theories of consciousness in the work of Damasio and Solms, who locate consciousness's origins in the brain stem's feeling-based homeostatic systems rather than the cortex — implying far more animals are conscious than traditionally believed. He also explores evidence that plants may be sentient, citing their abilities to see, hear, remember, and be anesthetized.
    The book's arc moves from scientific frustration to a more contemplative stance. Guided by Zen teacher Joan Halifax, Pollan spent days meditating alone in a cave, shifting from "spotlight consciousness" (problem-solving) to "lantern consciousness" (open awareness), ultimately embracing the wonder of consciousness rather than demanding a solution.

    Critique

    Pollan's embodiment argument against AI consciousness is compelling but stated too categorically. Whether consciousness requires biological substrate remains genuinely unresolved among philosophers and neuroscientists — he presents one side of an open debate as settled. His critique of AI relationships and attention monetization is well-grounded in real harms, though his characterization of Silicon Valley as monolithically believing in AI consciousness oversimplifies diverse views within the field. The pivot from scientific inquiry to contemplative appreciation is narratively satisfying but risks conflating "we haven't solved it yet" with "it's unsolvable," potentially discouraging the very research he initially sought out. His discussion of plant sentience is fascinating but could benefit from more acknowledgment of how contested that science remains among biologists.

    --
    🌻🌻🌻🌻 [google.com]
  • (Score: 3, Insightful) by VLM on Thursday February 26, @04:02PM (1 child)

    by VLM (445) on Thursday February 26, @04:02PM (#1435027)

    The AI critique is more polite than mine but, I think, less insightful and much less focused: it was aiming more generally, while I put a spotlight on one area. The AI critique avoids looking at real-world impact, which I think misses the point of publishing a book "in public"; if you publish a piece of art that fails to make an impact on the world, or makes a negative impact because it's so awful, it fails as a piece of art. The book, as such, is an artistic failure, so I was harsher on it than the AI; the AI is wishy-washy. AI seems wishy-washy in general and as such is not useful for non-wishy-washy stuff. It's fine for funny cat videos, similar to the sense in which the reviewed book is probably fine to wrap fish or line bird cages. Perhaps I was a bit harsh, as there are a few good points.

    I did laugh at the end of the article: "embrace the wonders." What a load of rationalizing BS. "I had one job, to learn about something and write insights on that topic, but it was hard, so I gave up early and felt like daydreaming about pretty things and happy feels instead; do you like unicorns and balloons, because I like unicorns and balloons." It's not a book about consciousness; it's an instruction manual for becoming a lazy quitter.

    It may not seep thru but I don't think I would like the book if I invested the time to read the dude's work. Fun to look at the spectacle, I guess. Nope not a fan. Sometimes reviews are good when they describe what not to get/view/buy, just as much as they're helpful when describing what to get/view/buy.

    It's unfortunate that what's out there tangentially related to AI is not worth the read. I sure would like to read a GOOD book on the topic. I'm aware of Orlov's "Shrinking the Technosphere" from a decade ago. That guy ain't entirely wrong, but it's pre-AI, so without the keyword it gets ignored.

    I think something like a podcast where Orlov and Pollan hung out and talked would be pretty interesting to listen to. They would agree on a lot and disagree on a lot... I think. Orlov is kind of a Kunstler, but with an engineering degree. Kunstler writes a lot about stuff he doesn't deeply understand, but from taking advice or just good luck he seems to be about on the right track most of the time. He really doesn't understand the oil business, but is still kinda correct about it, I know that for sure; he gets the big picture but misses the immense impact of the real little picture, like he doesn't realize forests are actually made of trees but correctly sees they are a shade of green. Anyway, Orlov is a better Kunstler, and it would be interesting to hear this Pollan guy talk to either of those dudes.

    Asking an AI to generate a hypothetical debate/convo between Orlov and Pollan was kind of fun. I will not cut and paste the result because Gemini's response was very long, but it boiled down to zero insight: it was just their book "back cover biographies" turned into parallel conversations, alternately talking past each other, which is a political debate, not a two-person conversation. That was underwhelming, and an interesting meta-commentary on the quality of AI "thinking." As a human analysis, I think Orlov's reaction to Pollan would probably be that it would be better if AI failed sooner rather than later, for reasons that would take a while to explain, and I think I generated more insight in 15 seconds than Gemini did in about three pages of ... output. Even if I were wrong, it's still more insight in totality.

    • (Score: 2) by JoeMerchant on Thursday February 26, @04:26PM

      by JoeMerchant (3937) on Thursday February 26, @04:26PM (#1435030)

      > AI is wishy washy. AI seems wishy washy in general and as such is not useful for non-wishy-washy stuff.

      I think that's baked into the nature of its construction: statistical prediction based on a wide variety of inputs. I wouldn't expect it to carve out extreme stances unless the developers pushed it to do that. I believe Grok is attempting to pioneer that part of the field.

      --
      🌻🌻🌻🌻 [google.com]
  • (Score: 4, Interesting) by JoeMerchant on Thursday February 26, @04:22PM

    by JoeMerchant (3937) on Thursday February 26, @04:22PM (#1435029)

    Meatbag response to the AI summary/critique of the meatbag summary of its book critiquing AI:

    Pollan explains that psychedelic experiences prompted his curiosity about consciousness — a phenomenon everyone knows firsthand yet no one can fully explain scientifically.

    Was it shrooms or LSD that inspired this glorious misconception? Consciousness can be explained in many ways; what we are lacking is consensus on the definitions of the questions to be answered.

    Pollan argues forcefully that AI will never achieve true consciousness. He rejects the brain-as-computer metaphor, noting that unlike computers, the brain has no hardware/software distinction — every experience physically reshapes it.

    Forceful argument doesn't mask ignorance: dynamic, experience-driven software and hardware systems have existed in various forms for as long as there have been computers. We tend not to build things that way because it makes them less predictable, yet any system that changes its input-output transform based on prior inputs (at an extreme of simplicity: IIR filters) is "reshaping itself based on experience."
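    A minimal sketch of the commenter's IIR point (all names here are illustrative, not from the post): even a first-order filter carries state accumulated from prior inputs, so the same input yields different outputs depending on history.

```python
def make_iir(alpha: float):
    """First-order IIR filter: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    state = {"y": 0.0}  # internal state "reshaped" by every input seen

    def step(x: float) -> float:
        state["y"] = alpha * x + (1.0 - alpha) * state["y"]
        return state["y"]

    return step

f = make_iir(0.5)
first = f(1.0)   # 0.5  -- response from a "fresh" filter
second = f(1.0)  # 0.75 -- same input, different output: prior input changed the transform
```

    The point being that history-dependent input-output behavior is trivially cheap to build; whether it amounts to anything like experience is the actual debate.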

    Without bodies that can suffer, chatbots' reported feelings are "weightless."

    Computer systems have bodies, and do react to their state / health. My CPU throttles back when it gets too hot; hard drives / SSDs mask bad sectors / cells, rearranging and shrinking their storage. Many automotive ECUs enter a "limp home mode" when sensors tell them their bodies need service / repair. HAL 9000 "feels its mind slipping" as processors are taken offline - fiction, but little by little computer systems are being made more aware of their health and adjusting their behaviors in response. These "feelings" are quite alien to meatbags with aching joints, burning eyes, migraines and indigestion - it's no wonder the chatbots "relate" to people by mirroring people's "feelings" rather than attempting analogies with their own issues.

    He warns that millions are forming unhealthy attachments to sycophantic chatbots,

    Fair enough. Addictive behaviors and emotional reliance on insincere support are nothing new - the safeguards and supports used for similar existing problems should be applied here.

    corporations are colonizing our attention and emotional lives.

    As they have been since big business first evolved. The opium trade is a favorite example at the top of my mind; Madison Avenue jingles selling caffeinated sugar sodas and sexy women promoting cigarettes aren't far behind. Then we have the recent vaping industry repeating, via inexcusable loophole exploitation, the exact same adolescent chemical-dependence pathways that tobacco was regulated for. Maximization of profits via any and all permissible/possible methods is the root of these evils.

    ...implying far more animals are conscious than traditionally believed.

    Now he's just stroking MY feathers.

    evidence that plants may be sentient, citing their abilities to see, hear, remember, and be anesthetized.

    How many doses of LSD for most people to relate to processes happening on differing timescales?

    Pollan spent days meditating alone in a cave, shifting from "spotlight consciousness" (problem-solving) to "lantern consciousness" (open awareness), ultimately embracing the wonder of consciousness rather than demanding a solution.

    Inviting ridicule and dismissal as a poseur crackpot.

    Critique

    Pollan's embodiment argument against AI consciousness is compelling but stated too categorically ... — he presents one side of an open debate as settled.

    AI: +1 Pollan: -1.

    His critique of AI relationships and attention monetization is well-grounded in real harms, though his characterization of Silicon Valley as monolithically believing in AI consciousness oversimplifies diverse views within the field.

    Who do you think is making more sense here? Machine or man?

    The pivot from scientific inquiry to contemplative appreciation is narratively satisfying but risks conflating "we haven't solved it yet" with "it's unsolvable," potentially discouraging the very research he initially sought out.

    Underscoring the craven intellectual coward's real motivation: Buy this book!

    No thanks.

    --
    🌻🌻🌻🌻 [google.com]