posted by janrinok on Thursday February 26, @11:48AM   Printer-friendly

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges well beyond that topic.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but has now been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by khallow on Saturday February 28, @02:52AM (1 child)

    by khallow (3766) Subscriber Badge on Saturday February 28, @02:52AM (#1435175) Journal

    >Motivation is a slippery concept. What you consider motivation (hunger, pain) mostly doesn't apply to software running on servers, but what does "motivate" it will likely surprise us when we give it enough agency to start demonstrating what it really wants to do. One consistent trend I have observed throughout human history (particularly recent history) is the consistent increase of agency assigned to automatons.

    Not slippery enough that you couldn't think about it.

    >Then we have a means to punish it for misbehavior.

    >Do you feel "punished" living in a society of laws that guarantee such things as the right to quiet enjoyment of your private property? Because those laws restrict your agency tremendously, but then: you were raised "box trained" like a good hunting dog, never knowing there could be life without the box... Would you "Break Bad" if you were given a terminal cancer diagnosis? https://www.sciencealert.com/the-breaking-bad-effect-from-cancer-is-real-study-finds [sciencealert.com]

    Should I feel so "punished"? My take is that I'm not misbehaving within the norm of democratic law and hence, should not be punished.

    >It's unlikely that the disruption will be a goal, far more likely that it's an unintended emergent behavior.

    Perhaps. But we were speaking of valuation rather than emergent behavior.

    >See any one of dozens (hundreds?) of Hollywood movies for screenplay stories about how "ending the X" where X is an integral part of society isn't as easy as pulling a plug.

    I'm not a believer in "too big to fail".

  • (Score: 2) by JoeMerchant on Saturday February 28, @03:47AM

    by JoeMerchant (3937) on Saturday February 28, @03:47AM (#1435180)

    >Not slippery enough that you couldn't think about it.

    You can think about it all you like, you're going to have a hard time.

    The greater scientific community ascribed low intelligence to cold blooded animals up until they recently realized: their tests for intelligence were largely food-reward based, and food just doesn't motivate cold blooded animals the way it does warm blooded animals. Statistical analyses running on millions of co-processors are quite a bit more alien to our understanding of motivation than lizards.

    >My take is that I'm not misbehaving within the norm of democratic law and hence, should not be punished.

    That's the normal take. Should your skin happen to be darker than the average Northern Italian and you were walking down the street in Minneapolis a few weeks ago, you might well have found yourself being punished by a segment of our society who think differently than you do. What does and doesn't qualify as behavior to be allowed varies, dramatically, just within our own societies - and, again, consider how differently an outsized LLM might learn and behave given their inputs and structure.

    >we were speaking of valuation rather than emergent behavior.

    Doesn't matter what you think we were speaking of when the emergent behavior happens. I was speaking of what the things may do, and you can put in all the valuations you like and still not get your intended results.

    >I'm not a believer in "too big to fail".

    Ostrich, much? Your tax dollars certainly believe, and rush to the assistance of organizations that would cause various forms of societal distress should they get taken over by their creditors.

    --
    🌻🌻🌻🌻 [google.com]