posted by janrinok on Thursday February 26, @11:48AM

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges well beyond that.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants", about psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by srobert on Thursday February 26, @04:37PM (5 children)

    by srobert (4803) on Thursday February 26, @04:37PM (#1435031)

    While you're distracted by whether or not this thing is "conscious" (whatever that means), it's still going to drastically reduce the need for human labor, meaning either that you won't have to pay people as much for work, or that you'll have a much harder time earning a living, depending on whether you're on the buyer's or seller's side of the labor market.
    I was born in the '60s, and I've always thought that the works of science fiction I've seen and read all my life, the ones focused on whether or not artificial intelligence was conscious, were missing a critically important economic consequence of the technology.
    It's not so important whether or not Mr. Data is a member of the Enterprise crew whose rights as a living being are respected, as it is that the biological members of that crew become expendable. The original Star Trek touched on that issue with the M-5.

  • (Score: 2) by krishnoid on Thursday February 26, @04:50PM (1 child)

    by krishnoid (1156) on Thursday February 26, @04:50PM (#1435034)

    Sure ... then there's the dark side [rifters.com] of consciousness and intelligence.

    • (Score: 1) by khallow on Thursday February 26, @05:52PM

      by khallow (3766) Subscriber Badge on Thursday February 26, @05:52PM (#1435046) Journal
      While it's a cool story, it's based on broken premises, like the idea that becoming more intelligent means becoming less sentient.
  • (Score: 3, Touché) by JoeMerchant on Thursday February 26, @04:57PM (2 children)

    by JoeMerchant (3937) on Thursday February 26, @04:57PM (#1435036)

    For a universe that so readily solves the annoyance of transit time from orbit to planetary surface and back with a magical matter-energy-matter transporter, the plot-preservation device of Data's brain being an un-reproducible artifact is bizarre. And necessary, because if you could just replicate Data's brain (say, using a transporter?), there's virtually no reason why the universe isn't filled with starships crewed almost entirely by androids.

    --
    🌻🌻🌻🌻 [google.com]
    • (Score: 3, Interesting) by istartedi on Thursday February 26, @05:44PM (1 child)

      by istartedi (123) on Thursday February 26, @05:44PM (#1435044) Journal

      Perhaps Data's brain was unique but self-aware machines weren't. The Exocomps [fandom.com] became self-aware by accident. When they requested a holodeck simulation that could match Data [wikipedia.org] this also resulted in a self-aware being with problematic tendencies. In fact, I think often of their solution to the Moriarty problem when it comes to social media: Why don't I just step away and let them live in their own conspiratorial world?

      Perhaps Starfleet had a directive not to intentionally create new artificial life because while it worked out for Data, it was understood there was a potential for it not to work out. This would be particularly true after they encountered the Borg.

      • (Score: 3, Insightful) by JoeMerchant on Thursday February 26, @07:32PM

        by JoeMerchant (3937) on Thursday February 26, @07:32PM (#1435053)

        As demonstrated amply by seven of nine, Star Trek was concerned first, last, and mostly, with achieving profitable audience share.

        --
        🌻🌻🌻🌻 [google.com]