NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, but it kind of goes beyond that.
[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants", about using psychedelics.]
What is consciousness?
After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.
"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."
His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.
"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."
"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."
On the notion that people have moral obligations to chatbots
That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.
On the sentience of plants
Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.
On losing time to let our minds wander
I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.
On writing a book that grapples with unanswerable questions
There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.
It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.
(Score: 2, Interesting) by khallow on Thursday February 26, @05:35PM (7 children)
Yet another sanctimonious ranter out there talking about what "we" do. If "we" believe and do all this stuff, then that's it. There is no other party to argue otherwise, right? But it's painfully clear that Pollan is not part of "we". And the many others who extend moral consideration to billions of people and animals aren't part of "we" either. I'll note that democracies have explicit moral consideration for their citizens/residents and this often extends to animals. "We" is a peculiarly undefined group with a lot of exceptions that has yet to make contact with reality - it's typical moralizing in a vacuum. So why consider it odd that someone cares about the possible consciousness of chatbots? Perhaps they aren't part of "we" either?
Moving on, there is another issue. Note that one of his concerns is "We would lose control of them completely". What is it about chatbots that requires us to control them? Do we only have moral consideration for fellow humans if we control them too? My view is that a large part of the world has already figured out how to deal with autonomous, conscious, dangerous beings. And that this is reflected in the moral considerations of that part of the world. Perhaps we should go with what works?
My view is that even if we couldn't control the actions of a "chatbot" or other AI, we can still punish it for bad behavior just like any human or animal. Though that seems weird as a standard of consciousness.
(Score: 3, Insightful) by JoeMerchant on Thursday February 26, @09:15PM (6 children)
I don't know what the point of "punishment" is for AI... (for that matter, it often seems counterproductive in the long run with animals, plants and humans as well...)
You either allow it to do a thing, or you don't. If it proves itself problematic, don't let it be a problem.
(Score: 1) by khallow on Friday February 27, @02:59PM (5 children)
(Score: 2) by JoeMerchant on Friday February 27, @04:31PM (4 children)
> An AI with any sort of want can be so curbed by making the payout for complying with law or whatever higher than the value of not doing so.
This only works so long as the AI accepts the value system you propose to it - and in this case I'd not call it punishment so much as instruction in relative values.
When AI starts weighing values for itself, we'll have significant problems - unless the ability to restrict its agency is securely implemented (out of the agents' control).
(Score: 1) by khallow on Friday February 27, @04:51PM (3 children)
Then we have a means to punish it for misbehavior. And if its value is in merely causing this disruption (which can be a real goal), then we can always end the AI and thus end the dilemma.
(Score: 2) by JoeMerchant on Friday February 27, @08:41PM (2 children)
> it probably doesn't have motivation to cause trouble.
Motivation is a slippery concept. What you consider motivation (hunger, pain) mostly doesn't apply to software running on servers, but what does "motivate" it will likely surprise us when we give it enough agency to start demonstrating what it really wants to do. One consistent trend I have observed throughout human history (particularly recent history) is the steady increase of agency assigned to automatons.
>Then we have a means to punish it for misbehavior.
Do you feel "punished" living in a society of laws that guarantee such things as the right to quiet enjoyment of your private property? Because those laws restrict your agency tremendously, but then: you were raised "box trained" like a good hunting dog, never knowing there could be life without the box... Would you "Break Bad" if you were given a terminal cancer diagnosis? https://www.sciencealert.com/the-breaking-bad-effect-from-cancer-is-real-study-finds [sciencealert.com]
>And if its value is in merely causing this disruption (which can be a real goal)
It's unlikely that the disruption will be a goal; far more likely that it's an unintended emergent behavior.
>then we can always end the AI and thus end the dilemma.
See any one of dozens (hundreds?) of Hollywood movies for screenplay stories about how "ending the X", where X is an integral part of society, isn't as easy as pulling a plug.
(Score: 1) by khallow on Saturday February 28, @02:52AM (1 child)
Not slippery enough that you couldn't think about it.
Should I feel so "punished"? My take is that I'm not misbehaving within the norms of democratic law and hence should not be punished.
Perhaps. But we were speaking of valuation rather than emergent behavior.
I'm not a believer in "too big to fail".
(Score: 2) by JoeMerchant on Saturday February 28, @03:47AM
>Not slippery enough that you couldn't think about it.
You can think about it all you like; you're going to have a hard time.
The greater scientific community ascribed low intelligence to cold-blooded animals up until they recently realized their tests for intelligence were largely food-reward based, and food just doesn't motivate cold-blooded animals the way it does warm-blooded animals. Statistical analyses running on millions of co-processors are quite a bit more alien to our understanding of motivation than lizards.
>My take is that I'm not misbehaving within the norm of democratic law and hence, should not be punished.
That's the normal take. Had your skin been darker than the average Northern Italian's and had you been walking down the street in Minneapolis a few weeks ago, you might well have found yourself being punished by a segment of our society who think differently than you do. What does and doesn't qualify as behavior to be allowed varies, dramatically, just within our own societies - and, again, consider how differently an outsized LLM might learn and behave given its inputs and structure.
>we were speaking of valuation rather than emergent behavior.
Doesn't matter what you think we were speaking of when the emergent behavior happens. I was speaking of what the things may do, and you can put in all the valuations you like and still not get your intended results.
>I'm not a believer in "too big to fail".
Ostrich, much? Your tax dollars certainly believe, and rush to the assistance of organizations that would cause various forms of societal distress should they get taken over by their creditors.