
posted by janrinok on Thursday February 26, @11:48AM

NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges well beyond that.

[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]

What is consciousness?

After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.

"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."

His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.

"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."

"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."

On the notion that people have moral obligations to chatbots

That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.

On the sentience of plants

Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.

On losing time to let our mind wander

I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.

On writing a book that grapples with unanswerable questions

There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.

It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.


Original Submission

 
  • (Score: 2, Interesting) by Undefined (50365) on Friday February 27, @02:49PM (#1435113)

    Define think.

    We don't have to formally define thinking to identify and check for known fundamental elements of thinking, just as we don't have to formally define consciousness to identify and check for known fundamental elements of consciousness.

    With that in mind, here are some of the missing elements of thinking for current LLM systems:

    • The ability to revise one's internal knowledge corpus. LLMs have fixed memory weights outside of the current context window. Even when they store new information outside of their own internal memory, the usefulness of that storage is limited by the context window size. Even cats and dogs can revise their concepts of the world, and certainly humans do. Any LLM with errors in its training data is stuck with them; any phrase generation that touches on those errors will always be wrong unless the current context window provides a temporary revision (and even then the error can resurface in the responses). See the first sketch after this list.
    • The ability to learn new/any skills. As an example taken from interactions with the LLM GPT4All [nomic.ai]: when you tell the LLM to revise an attempt at a haiku, explaining what's wrong in terms of syllable count, fixing the problem requires both phrase generation and awareness of syllable identities. Without add-on code, GPT4All [nomic.ai] can't address the problem, because it cannot learn new skills — LLMs don't have skills, they have probabilistic word-association generation. The argument that we often also do this (just word-associate) is accurate, but it is clearly not all we can do to address a problem, in particular a new problem or one for which we have no experience base. Many existing LLMs do have various chunks of add-on code that provide a post-assembly winnowing / reasoning process; but again, these are fixed, and they do very poorly at fixing errors outside of the specific programmed competency. One example is when an LLM has a math-engine add-on: when you ask it to do math, it's not the LLM doing it, it's the add-on, and the add-on cannot help the LLM assemble the initial response because it is entirely downstream from the associative phrase assembly. Again using GPT4All [nomic.ai] as the example: it has some good add-on math skills, but it can't count syllables during response generation, because the assembly of the example haiku happens before the math engine is applied. So it can identify the problem as a post-process, but it can't fix it, any more than it can fix it if you describe the problem to it (e.g., the first line must have five syllables, not six). The second sketch after this list illustrates this post-process pattern.
    • Also related, the aspect of thinking we call self-awareness requires both knowledge of one's shortcomings (as internally perceived) and the ability to revise them. If you can think, you can both improve yourself and degrade your abilities, depending on the choices you make and incorporate. LLMs cannot do this beyond the context window, a direct consequence of the inability to change the existing core learned corpus. That, in turn, is a problem because it takes a great deal of compute to assemble that corpus — far too much to do it on the fly, as we (and cats and dogs, etc.) are able to do.
    • Invention is another area where LLMs fail (as distinct from specialized, domain-aware, iterative non-LLM machine learning). The problem is that unless the idea is already incorporated in the knowledge corpus, the only path to a new idea is randomized assembly of memorized concepts that are nearby in conceptual space. That lends itself to minor lucky hits if the input data has already come close to solving the issue, but as there is no new reasoning involved, there is no path to new ideas that have no previous referents. When humans think, this is not a problem: we can establish entire new structures of imaginary and/or theoretical referents and then reason inside those structures to achieve entirely new insights. An excellent and well-known example is Einstein's thought process that led to special relativity in 1905.
    • Inductive and deductive reasoning are complementary tools for analyzing information. Inductive reasoning forms broad generalizations from observed patterns, whereas deductive reasoning applies general laws to reach certain conclusions about specific cases. Both are essential for problem-solving, decision-making, and critical thinking. With an LLM, the existing patterns are incorporated into the learned corpus (correctly or not) as they exist in the relationships it was trained on. However, the re-assembly of those patterns as applied to the current prompt is not reasoned; it is probabilistic, which gives rise to misprediction (very inaccurately termed "hallucination"), such as reporting that author X is the source of work Y when that is not the case. The third sketch after this list shows the mechanism.
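
    To make the first point concrete, here is a toy Python sketch. Everything in it (the class, the stored "fact," the dictionary) is invented for illustration; real models encode associations in billions of numeric weights, not a lookup table. The only point is that inference reads the weights and never writes them:

        # Toy model: "weights" are fixed at training time and are
        # read-only at inference time.
        class FrozenLLM:
            def __init__(self):
                # Learned corpus, frozen once training ends. An error
                # baked in here persists across every conversation.
                self.weights = {"capital of Australia": "Sydney"}  # wrong on purpose

            def answer(self, question, context=""):
                # In-context information can override the weights, but
                # only for the duration of this context window.
                if "Canberra" in context:
                    return "Canberra"
                return self.weights["capital of Australia"]

        llm = FrozenLLM()
        print(llm.answer("Capital of Australia?"))  # Sydney: the baked-in error
        print(llm.answer("Capital of Australia?",
                         context="Correction: it's Canberra."))  # fixed, this turn only
        print(llm.answer("Capital of Australia?"))  # Sydney again; nothing was learned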
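
    The second point, the add-on/post-process pattern, can be sketched the same way. The vowel-run syllable counter below is a deliberately crude stand-in for whatever checker a real product might bolt on; the structure is what matters: the check runs after generation, so it can flag the output but not repair it:

        import re

        def count_syllables(word):
            # Crude heuristic: count runs of vowels. Not real phonetics,
            # but enough to illustrate the post-processing pattern.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def check_haiku(lines, pattern=(5, 7, 5)):
            counts = [sum(count_syllables(w) for w in re.findall(r"[a-z']+", line.lower()))
                      for line in lines]
            return counts, tuple(counts) == pattern

        good = ["I eat my lunch now",          # 5
                "the sun is high in the sky",  # 7
                "birds sing in the trees"]     # 5
        print(check_haiku(good))  # ([5, 7, 5], True)

        bad = ["I eat my lunch slowly",        # 6: one syllable too many
               "the sun is high in the sky",
               "birds sing in the trees"]
        print(check_haiku(bad))   # ([6, 7, 5], False) -- flagged, but this
        # downstream check has no way to regenerate a correct first line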
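
    And for the last point: mispredicted attributions fall straight out of weighted sampling. The distribution below is made up, but the mechanism is the real one, a weighted draw rather than reasoning:

        import random

        # Invented toy distribution over continuations of
        # "The author of Frankenstein is ..."
        next_token_probs = [("Mary Shelley", 0.7),   # correct, strongest association
                            ("Percy Shelley", 0.2),  # plausible wrong neighbor
                            ("Lord Byron", 0.1)]     # weaker, but still reachable

        def sample(probs):
            # Categorical sampling: a weighted draw, not reasoning.
            r, acc = random.random(), 0.0
            for token, p in probs:
                acc += p
                if r < acc:
                    return token
            return probs[-1][0]

        random.seed(3)
        print([sample(next_token_probs) for _ in range(10)])
        # Roughly a 7:2:1 mix; every "Percy Shelley" or "Lord Byron" draw
        # is a misprediction, produced by the same mechanism as the right answer.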
    --
    I use a dedicated preprocessor to elaborate abbreviations.
    Hover to reveal elaborations.