posted by janrinok on Thursday February 26, @11:48AM   Printer-friendly

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges well beyond that topic.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants", about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Funny) by Snotnose on Thursday February 26, @01:34PM (5 children)

    by Snotnose (1623) Subscriber Badge on Thursday February 26, @01:34PM (#1435007)

    Do submarines swim?

Someone was using that .sig line on Usenet back in the 80s.

    --
    Trump's Grave will be the world's most popular open air toilet.
  • (Score: 2) by Rich on Thursday February 26, @02:00PM (4 children)

    by Rich (945) on Thursday February 26, @02:00PM (#1435010) Journal

    Fortunately, we can relegate such petty philosophical questions to AI now:

    > Do submarines swim?
      What if the submarine had little arms as a propulsion system?
    > If a submarine used little arms as its propulsion system, then yes — in a playful, biological sense, it would be much closer to swimming.

    There you go. :)

    That said, TFA mentions "the voice in your head". Reading that, I thought "the voice in my head just talks to me in English, therefore it seems to work much like the token feedback of a reasoning LLM".

    • (Score: 3, Funny) by Rich on Thursday February 26, @02:02PM (1 child)

      by Rich (945) on Thursday February 26, @02:02PM (#1435011) Journal

      I messed up the dialog with angled brackets for the comm direction. Here it is fully corrected:

      ME: Do submarines swim?
      LLM: Submarines don’t swim — they navigate or operate underwater. [...]
      ME: What if the submarine had little arms as a propulsion system?
      LLM: If a submarine used little arms as its propulsion system, then yes — in a playful, biological sense, it would be much closer to swimming. [...]

      • (Score: 3, Funny) by krishnoid on Thursday February 26, @05:04PM

        by krishnoid (1156) on Thursday February 26, @05:04PM (#1435037)

        ME: What if the submarine had little arms as a propulsion system?

        The submarine would then still be propelling herself -- only, swimmingly so. What part of this is so difficult to understand? :-)

    • (Score: 2) by istartedi on Thursday February 26, @05:31PM (1 child)

      by istartedi (123) on Thursday February 26, @05:31PM (#1435042) Journal

      Maybe the voice in your head talks in English, but when I go to sleep mine speaks an incomprehensible language of sight, sound and feelings that are neither words nor pictures. Feelings are a bit like that too, but generally align with what other people report so they have names in English but subjectively they're not any language in the traditional sense. Those weird states of mind before you doze off? I call them "sing-song thoughts".

      • (Score: 2) by Rich on Thursday February 26, @08:12PM

        by Rich (945) on Thursday February 26, @08:12PM (#1435056) Journal

It would be short-sighted (pun intended) to assume the LLM mechanism works only on text. It isn't that ASCII values are passed into the big neural networks straight away; they undergo some sort of encoding first. The same goes for most generative visuals, which operate in a coarse voxel space framed by "autoencoders". I can easily imagine that some boffin unifies the encodings so they get fed into the same predictive neural net. If you have seen how LoRAs work, it might even be possible to train different input and output classes separately for the core network. You'd then overlay models for text language, audio, visuals, and Kung-Fu dances (as lately seen). When this unified model does reasoning-style feedback, it would not only reason over text, like I did when reading, but also over everything else, like you described. The only thing left is some reward/punishment system for self-reinforcement, and maybe a sleep-like garbage collection, and then may any deities help us if that machine is bent on maximizing the production of paperclips, at all costs.
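
        To illustrate the "encoding" point above: here's a toy sketch (my own invented vocabulary and numbers, not any real model's tokenizer or weights) of how text gets mapped to integer ids and then to embedding vectors before the network ever sees it. Any modality encoded into the same vector space could, in principle, share the downstream net.

        ```python
        # Toy sketch: text -> token ids -> embedding vectors.
        # VOCAB and EMBED are made-up illustrations, not real model data.

        VOCAB = {"do": 0, "submarines": 1, "swim": 2, "?": 3}
        DIM = 4  # toy embedding width

        # toy embedding table: one small vector per token id
        EMBED = [[(i * DIM + j) / 10.0 for j in range(DIM)] for i in range(len(VOCAB))]

        def tokenize(text):
            """Map whitespace-split words to integer token ids."""
            return [VOCAB[w] for w in text.lower().replace("?", " ?").split()]

        def embed(ids):
            """Look up each token id's vector -- the 'encoding' step."""
            return [EMBED[i] for i in ids]

        ids = tokenize("Do submarines swim?")
        vectors = embed(ids)
        print(ids)              # [0, 1, 2, 3]
        print(len(vectors[0]))  # 4
        ```

        The network downstream only ever sees those vectors, which is why nothing stops an audio or image encoder from feeding the same space.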