NPR has a nice summary of an interview with Michael Pollan about AI and consciousness, though the conversation ranges beyond that.
[Professor Pollan is the author of more than a dozen books, most notably "This Is Your Mind on Plants," about using psychedelics.]
What is consciousness?
After writing a book about how using psychedelics in a therapeutic setting can change your consciousness, that's the question journalist Michael Pollan found himself struggling to answer.
"There's nothing any of us know with more certainty than the fact that we are conscious. It's immediately available to us. It's the voice in our head," he says. And yet, Pollan adds: "How does three pounds of this tofu-like substance between your ears generate subjective experience? Nobody knows the answer to that question."
His new book, A World Appears: A Journey into Consciousness, explores consciousness on both a personal and technological level. Pollan, who lives close to Silicon Valley, says some believe that Artificial Intelligence is capable of consciousness.
"They base this on a premise ... that basically the brain is a computer, and that consciousness is software," he says. "And if you can run it on the brain, which is essentially, in their view, a 'meat-based computer,' you should be able to run it on other kinds of machines."
"If you think about it, your feelings are very tied to your vulnerability, to your having a body that can be hurt, to the ability to suffer and perhaps your mortality," he says. "So I think that any feelings that a chatbot reports will be weightless, meaningless, because they don't have bodies. They can't suffer."
On the notion that people have moral obligations to chatbots
That's a very active conversation here, which is if they are conscious, we then have moral obligations to them, and have to think about granting them personhood, for example, the way we've granted corporations personhood. I think that would be insane. We would lose control of them completely by giving them rights. But I find this whole tender care for the possible consciousness of chatbots really odd, because we have not extended moral consideration to billions of people, not to mention the animals that we eat that we know are conscious. So we're gonna start worrying about the computers? That seems like our priorities are screwed up.
On the sentience of plants
Plants can see, which is a weird idea. There's a certain vine that can actually change its leaf form to mimic the plant it's twining around. How does it know what that leaf form is? Plants can hear. If you play the sound of chomping caterpillars on a leaf, they will produce chemicals to repel those caterpillars and to convey, to alert other plants in the vicinity. Plants have memory. You can teach them something and they'll remember it for 28 days.
On losing time to let our mind wander
I worry, too, that with media, with our technologies, we are shrinking the space in which spontaneous thought can occur. And that this space of ... spontaneous thought is something precious that we're giving away to these corporations that essentially want to monetize our attention, and in the case of chatbots, want to monetize our attachments, our deep human attachments. So consciousness is, I think — and this is what to me is the urgency of the issue — consciousness is under siege. I think that it's the last frontier for some of these companies that want to sell our time.
On writing a book that grapples with unanswerable questions
There were many moments of despair in the process of reporting and writing this book. It took me five years, and there were many times where [I told my wife] "I've dug a hole here, and I don't know how I'm ever going to get out of it." And some of it had to do with mounting frustration with the science, and some of it had to do with the fact that I had this classic male problem/solution Western frame — that there was a problem and I was going to find the solution.
It took my wife, in part, and [Zen Buddhist teacher] Joan Halifax and some other people, who got me to question that and [they] said, "Yeah, there is the problem of consciousness, but there's also the fact of it, and the fact is wondrous. The fact is miraculous. And you've put all this energy into this narrow beam of attention. Why don't you open that beam up further and just explore the phenomenon that is going on in your head, which is so precious and so beautiful." And that's kind of where I came out — and it's certainly not where I expected to come out.
(Score: 4, Touché) by SomeGuy on Thursday February 26, @12:42PM (3 children)
The entire question about whether a magical "AI" is conscious/thinking/sentient is a distraction. It is simple hype designed to make you think about AI. Because we can't think about AI enough. Even if you hate AI, you must think about AI. And have some more AI, with AI, AI, and AI; there's not much AI in that.
The question is also irrelevant. "AI"s are TOOLS. Made and used by PEOPLE. If they ever become anything more than that then someone has done something WRONG. And stupid.
But, oh, look, it is asking me how I feel about what I typed in. It might be conscious!!1!1
(Score: 5, Insightful) by Thexalon on Thursday February 26, @01:24PM
One piece of this puzzle is a myth that's been around in computing since at least the 1950's, namely that if you take human-created inputs and run them through a really expensive machine, you suddenly have something that's "objective", and any errors can be blamed on the machine rather than the humans.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 5, Insightful) by JoeMerchant on Thursday February 26, @01:41PM
The real problem is: we are already giving AI "agency". This isn't new; the float valve in your toilet that stops it from overflowing is a form of agent. If the float falls off, your bathroom will flood.
The problem with AI is that it is less predictable than deterministic programs and systems. We have been dabbling in complex systems with "emergent behaviors" for a long time, and when they get involved in "important things" like the stock market, you get dramatic events like the 2010 flash crash.
Putting too much trust in anything or anyone is a real problem.
AI does what it does, with significantly more unpredictability (creativity?) than traditional computer software. Trust it, and don't, accordingly. Whether or not subjective labels like thinking, consciousness, or soul get applied is irrelevant to how it should be handled, yet also tremendously influential in how most people perceive the technology.
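For what it's worth, the float-valve "agent" upthread fits in a dozen lines. A toy sketch (names are hypothetical, no real plumbing consulted) of a deterministic feedback rule that nobody would call conscious:

```python
# Toy sketch of a float-valve style "agent": a deterministic feedback
# rule that keeps filling until a setpoint is reached. Names are
# illustrative only.

def float_valve(level: float, setpoint: float = 1.0) -> bool:
    """Return True (valve open) while the water level is below the setpoint."""
    return level < setpoint

def fill_tank(level: float = 0.0, inflow: float = 0.3, setpoint: float = 1.0):
    """Run fill cycles until the float shuts the valve; return (level, cycles)."""
    cycles = 0
    while float_valve(level, setpoint):
        level = min(level + inflow, setpoint)  # the float caps the level
        cycles += 1
    return level, cycles

print(fill_tank())  # stops at the setpoint: agency, but utterly predictable
```

The interesting contrast is that this behaves identically every run; the complaint about AI agents is precisely that they don't.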
Does your toilet have a soul? 🚽
🌻🌻🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Saturday February 28, @02:12AM
It's pointless as you say. Lots of humans for thousands of years had no problems using other humans as slaves. AIs and robots are to be the new slave class.
If things go well, more "normal humans" can live more like the free rich Romans, and not so much like the slaves toiling for their masters...
If things don't go well, then normal humans may become slaves or pets.
(Score: 3, Funny) by Snotnose on Thursday February 26, @01:34PM (5 children)
Do submarines swim?
Someone was using that .sig line in Usenet back in the 80s.
You can call me antisocial. Just don't call me.
(Score: 2) by Rich on Thursday February 26, @02:00PM (4 children)
Fortunately, we can relegate such petty philosophical questions to AI now:
> Do submarines swim?
What if the submarine had little arms as a propulsion system?
> If a submarine used little arms as its propulsion system, then yes — in a playful, biological sense, it would be much closer to swimming.
There you go. :)
That said, TFA mentions "the voice in your head". Reading that, I thought "the voice in my head just talks to me in English, therefore it seems to work much like the token feedback of a reasoning LLM".
(Score: 3, Funny) by Rich on Thursday February 26, @02:02PM (1 child)
I messed up the dialog with angled brackets for the comm direction. Here it is fully corrected:
ME: Do submarines swim?
LLM: Submarines don’t swim — they navigate or operate underwater. [...]
ME: What if the submarine had little arms as a propulsion system?
LLM: If a submarine used little arms as its propulsion system, then yes — in a playful, biological sense, it would be much closer to swimming. [...]
(Score: 3, Funny) by krishnoid on Thursday February 26, @05:04PM
The submarine would then still be propelling herself -- only, swimmingly so. What part of this is so difficult to understand? :-)
(Score: 2) by istartedi on Thursday February 26, @05:31PM (1 child)
Maybe the voice in your head talks in English, but when I go to sleep mine speaks an incomprehensible language of sight, sound, and feelings that are neither words nor pictures. Feelings are a bit like that too: they generally align with what other people report, so they have names in English, but subjectively they're not any language in the traditional sense. Those weird states of mind before you doze off? I call them "sing-song thoughts".
(Score: 2) by Rich on Thursday February 26, @08:12PM
It would be short-sighted (pun intended) to assume the LLM mechanism works only on text. ASCII values aren't passed straight into the big neural networks; they undergo some sort of encoding first. The same goes for most generative visuals, which operate in a compressed latent space framed by "autoencoders". I can easily imagine some boffin unifying the encodings so they get fed into the same predictive neural net. If you have seen how LoRAs work, it might even be possible to train different input and output classes separately for the core network. You'd then overlay models for text language, audio, visuals, and Kung-Fu dances (as lately seen). When this unified model does reasoning-style feedback, it would reason not only over text, like I did when reading, but over everything else too, like you described. The only things left are some reward/punishment system for self-reinforcement, and maybe a sleep-like garbage collection, and then may any deities help us if that machine is bent on maximizing the production of paperclips, at all cost.
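To make the unification idea concrete, here's a purely illustrative sketch (real encoders are learned networks, not hash tricks) of per-modality encoders feeding one shared embedding space, with a single predictive core that doesn't care where the vector came from:

```python
# Purely illustrative sketch of "unified encodings": each modality has
# its own encoder, but all of them land in one shared embedding space,
# and a single predictive core consumes the result either way. Real
# systems use learned tokenizers/autoencoders, not hash tricks.

DIM = 4  # width of the shared embedding space

def encode_text(s: str) -> list:
    """Stand-in for tokenizer + embedding: fold characters into a fixed-width vector."""
    v = [0.0] * DIM
    for i, ch in enumerate(s):
        v[i % DIM] += ord(ch) / 1000.0
    return v

def encode_audio(samples: list) -> list:
    """Stand-in for an audio autoencoder: fold raw samples into the same width."""
    v = [0.0] * DIM
    for i, x in enumerate(samples):
        v[i % DIM] += x
    return v

def core_predict(embedding: list) -> float:
    """One predictive core, indifferent to which modality produced the vector."""
    return sum(embedding) / DIM

text_vec = encode_text("submarine")
audio_vec = encode_audio([0.1, -0.2, 0.3, 0.05])
assert len(text_vec) == len(audio_vec) == DIM  # same space, either modality
print(core_predict(text_vec), core_predict(audio_vec))
```

The point is only structural: once everything is a vector of the same shape, one core can "reason" over all of it.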
(Score: 5, Interesting) by Undefined on Thursday February 26, @02:28PM (2 children)
Okay, so, two things:
1: What we have (so far) in the LLM space is ML — Machine Learning with probabilistic assembly of related terminology and phrasing and some subsequent simplistic prodding of information from the learned corpus. Not AI — Artificial Intelligence. Without something else added, or LLMs added to something else, or something else entirely, consciousness isn't on the table. Because LLMs don't think. Period. That's not how they work. Yet thinking is at the very least one of the certain prerequisites for both consciousness and intelligence.
2: As far as machine consciousness as an impossibility goes, that's a statement there is no reason whatsoever to have any confidence in. Nature has established consciousness multiple times, multiple ways, over a lot of time. We've literally just begun attempting to figure this out. There's no reason to think we can't get there — quite the opposite. Every bit of evidence we have to date (for everything we know about, no exceptions) says it's just physics. In the case of brains, the mediators we know about are specifically electricity, chemistry, and topology. Claiming machine consciousness is impossible at this point in time is pretty much the same thing as Lord Kelvin, renowned physicist and president of the Royal Society, declaring in 1895 that "heavier-than-air flying machines are impossible" while birds, every single one of them a biological machine heavier than air, merrily flew above his clueless head.
TL;DR: Pollan is drawing (specious) conclusions without data and clearly does not understand how LLMs work.
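The "probabilistic assembly" in point 1 can be caricatured in a few lines. A bigram counter is a toy stand-in (nothing like a real transformer, and the corpus here is made up), but it shows statistical next-word prediction happening with zero thinking:

```python
# Smallest possible "language model": count which word follows which in
# a tiny corpus, then emit the most frequent successor. A caricature of
# an LLM (real models use transformers over token embeddings), but it
# shows probabilistic assembly happening with zero thinking.
from collections import Counter, defaultdict

corpus = "the brain is a computer and consciousness is software".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # tally: how often does `nxt` follow `prev`?

def most_likely_next(word: str) -> str:
    """Return the statistically most common successor of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "brain": frequency, not understanding
```

Scale the corpus up by twelve orders of magnitude and soften the counting into learned probabilities, and you have the family resemblance the parent is pointing at.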
I use a dedicated preprocessor to elaborate abbreviations.
Hover to reveal elaborations.
(Score: 4, Touché) by JoeMerchant on Thursday February 26, @02:31PM (1 child)
> Because LLMs don't think.
Define think.
🌻🌻🌻🌻 [google.com]
(Score: 2, Interesting) by Undefined on Friday February 27, @02:49PM
We don't have to formally define thinking to identify and check for known fundamental elements of thinking, just as we don't have to formally define consciousness to identify and check for known fundamental elements of consciousness.
With that in mind, here are some of the missing elements of thinking for current LLM systems:
I use a dedicated preprocessor to elaborate abbreviations.
Hover to reveal elaborations.
(Score: 3, Insightful) by VLM on Thursday February 26, @03:00PM (5 children)
Let's say I was an LLM basing my response to this article entirely on exhaustive statistical analysis. Here's my LLM-style analysis, leading off with this hilarious quote:
This guy's a total bullshitter. That doesn't prove he's wrong about this specific triviality, nor does it prove that he's wrong about literally anything else. What a pile of amazing glittering generalities. The entire biological classification of Kingdom Plantae can remember 27 days, even esoteric quantum mechanics formulae and 1980s sitcom trivia points, then forgets it all by 29 days. Yeah, I bet. Oh boy, is that an impressive authority figure spouting total bullshit. But, regardless of the details of this specific topic, he's obviously a total bullshitter, and all/most scammers are total bullshitters, so I would be highly surprised if the rest of his, I'm sure sophistry-filled, response is worth reading. It's possible he's thinking or he's not a total scammer, but it's statistically highly unlikely.
That's the problem with LLM-style analysis. Being statistically adjacent to groupthink and wrapped in top-quality sophistry isn't worth knowing; it doesn't provide value. It could farm karma; it could answer questions along the lines of "what has been historically popular from the pre-AI past" or "provide me with some excellent sophistry/propaganda from the pre-AI past". But it can't think. The above is all you'll get from carefully not thinking about the guy's response and just doing statistical-likelihood analysis of a pile of text.
To be blunt, I almost think the guy has the driest of British humour and is trolling the F out of the general public with an argument so awful he's trying to make the opposite point. Mark my words, no kidding or sarcasm: I think there's a legit decent chance this guy is pulling a "Sokal Affair" in real time. If he pulls it off, I hope I don't ruin it by mentioning it early; I'd high-five the guy and buy him a beer for a job well done. I'm impressed.
(Score: 4, Interesting) by JoeMerchant on Thursday February 26, @03:19PM (3 children)
Opus 4.6 prompt: Summarize this transcript in 300 words or less and provide an unbiased critique of the points it makes.
Summary
In this Fresh Air interview, Terry Gross speaks with Michael Pollan about his new book A World Appears: A Journey into Consciousness. Pollan explains that psychedelic experiences prompted his curiosity about consciousness — a phenomenon everyone knows firsthand yet no one can fully explain scientifically. He traces this mystery back to Galileo's decision to leave subjectivity to the church, which sidelined consciousness from scientific inquiry for centuries.
Pollan argues forcefully that AI will never achieve true consciousness. He rejects the brain-as-computer metaphor, noting that unlike computers, the brain has no hardware/software distinction — every experience physically reshapes it. More fundamentally, he contends consciousness originates in feeling and embodiment, not computation. Without bodies that can suffer, chatbots' reported feelings are "weightless." He warns that millions are forming unhealthy attachments to sycophantic chatbots, and that corporations are colonizing our attention and emotional lives.
He finds the most persuasive theories of consciousness in the work of Damasio and Solms, who locate consciousness's origins in the brain stem's feeling-based homeostatic systems rather than the cortex — implying far more animals are conscious than traditionally believed. He also explores evidence that plants may be sentient, citing their abilities to see, hear, remember, and be anesthetized.
The book's arc moves from scientific frustration to a more contemplative stance. Guided by Zen teacher Joan Halifax, Pollan spent days meditating alone in a cave, shifting from "spotlight consciousness" (problem-solving) to "lantern consciousness" (open awareness), ultimately embracing the wonder of consciousness rather than demanding a solution.
Critique
Pollan's embodiment argument against AI consciousness is compelling but stated too categorically. Whether consciousness requires biological substrate remains genuinely unresolved among philosophers and neuroscientists — he presents one side of an open debate as settled. His critique of AI relationships and attention monetization is well-grounded in real harms, though his characterization of Silicon Valley as monolithically believing in AI consciousness oversimplifies diverse views within the field. The pivot from scientific inquiry to contemplative appreciation is narratively satisfying but risks conflating "we haven't solved it yet" with "it's unsolvable," potentially discouraging the very research he initially sought out. His discussion of plant sentience is fascinating but could benefit from more acknowledgment of how contested that science remains among biologists.
🌻🌻🌻🌻 [google.com]
(Score: 3, Insightful) by VLM on Thursday February 26, @04:02PM (1 child)
The AI critique is more polite than mine, but I think less insightful and much less focused: it was aiming more generally, while I put the spotlight on one area. The AI critique avoids looking at real-world impact, which I think misses the point of publishing a book "in public"; if you publish a piece of art that fails to make an impact on the world, or makes a negative impact because it's so awful, it fails as a piece of art. The book, as such, is an artistic failure, so I was harsher on it than the AI; the AI is wishy-washy. AI seems wishy-washy in general and as such is not useful for non-wishy-washy stuff. It's fine for funny cat videos, similar to the sense in which the reviewed book is probably fine to wrap fish or line bird cages. Perhaps I was a bit harsh, as there are a few good points.
I did laugh at the end of the article: "embrace the wonders". What a load of rationalizing BS. "I had one job, to learn about something and write insights on that topic, but it was hard, so I gave up early and felt like daydreaming about pretty things and happy feels instead; do you like unicorns and balloons, because I like unicorns and balloons." It's not a book about consciousness; it's an instruction manual on becoming a lazy quitter.
It may not seep through, but I don't think I would like the book if I invested the time to read the dude's work. Fun to look at the spectacle, I guess. Nope, not a fan. Sometimes reviews are good when they describe what not to get/view/buy, just as much as when they describe what to get/view/buy.
It's unfortunate that what's out there tangentially related to AI is not worth the read. I sure would like to read a GOOD book on the topic. I'm aware of Orlov's "Shrinking the Technosphere" from a decade ago. That guy ain't entirely wrong, but it's pre-AI, so without the keyword it gets ignored.
I think something like a podcast where Orlov and Pollan hung out and talked would be pretty interesting to listen to. They would agree on a lot and disagree on a lot... I think. Orlov is kind of a Kunstler, but with an engineering degree. Kunstler writes a lot about stuff he doesn't deeply understand, but from taking advice or just good luck he seems to be about on the right track most of the time. He really doesn't understand the oil business but is still kinda correct about it, I know that for sure; he gets the big picture but misses the immense impact of the real little picture, like he doesn't realize forests are actually made of trees but correctly sees they are a shade of green. Anyway, Orlov is a better Kunstler, and it would be interesting to hear this Pollan guy talk to either of those dudes.
Asking an AI to generate a hypothetical debate/convo between Orlov and Pollan was kind of fun. I will not cut and paste the result because Gemini's response was very long, but it boiled down to zero insight: it was just their book "back cover biographies" turned into parallel conversations talking past each other alternately, which is a political debate, not a two-person conversation. Underwhelming, and an interesting meta-commentary on the quality of AI "thinking". As a human analysis, I think Orlov's reaction to Pollan would be that it would probably be better if AI failed sooner rather than later, for reasons that would take a while to explain, and I think I generated more insight in 15 seconds than Gemini did in about three pages of ... output. Even if I were wrong, it's still more insight in totality.
(Score: 2) by JoeMerchant on Thursday February 26, @04:26PM
> AI is wishy washy. AI seems wishy washy in general and as such is not useful for non-wishy-washy stuff.
I think that's baked into the nature of its construction: statistically predict based on a wide variety of input. I wouldn't expect it to carve out extreme stances, unless the developers pushed it to do that. I believe Grok is attempting to pioneer that part of the field.
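One concrete knob behind that baked-in averaging is sampling temperature. A toy softmax sketch (numbers made up, not any particular model's API): low temperature commits to one answer, high temperature hedges across all of them.

```python
import math

# Toy illustration of sampling temperature: the same raw preference
# scores become either a committed pick (low temperature) or a hedge
# spread across every option (high temperature). Numbers are made up.

def softmax_with_temperature(logits, temp):
    """Convert scores to probabilities; temperature sets how 'committed' the result is."""
    scaled = [x / temp for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy preference scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # sharp: nearly all mass on one choice
hot = softmax_with_temperature(logits, 5.0)   # flat: hedges across all choices

assert cold[0] > 0.95   # low temperature commits
assert max(hot) < 0.45  # high temperature stays wishy-washy
print(cold, hot)
```

A developer who wants "extreme stances" is, in this caricature, just turning the temperature down and the training-data bias up.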
🌻🌻🌻🌻 [google.com]
(Score: 4, Interesting) by JoeMerchant on Thursday February 26, @04:22PM
Meatbag response to the AI summary/critique of the meatbag summary of its book critiquing AI:
Was it shrooms or LSD that inspired this glorious misconception? Consciousness can be explained many ways, what we are lacking is consensus on the definitions of the questions to be answered.
Forceful argument doesn't mask ignorance: dynamic, experience-driven software and hardware systems have existed in various forms for as long as there have been computers. We tend not to build things that way because it makes them less predictable, yet any system which changes its input-output transform based on prior inputs (at an extreme of simplicity: IIR filters) is "reshaping itself based on experience."
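The IIR example is easy to make concrete: a one-pole filter is about the simplest system whose input-output transform depends on everything it has previously seen. A minimal sketch:

```python
# A one-pole IIR (infinite impulse response) filter: about the simplest
# system whose input-output transform depends on every input it has
# ever seen. Update rule: y[n] = a * y[n-1] + (1 - a) * x[n]

class OnePoleIIR:
    def __init__(self, a: float = 0.9):
        self.a = a
        self.y = 0.0  # internal state: a decaying memory of all past inputs

    def step(self, x: float) -> float:
        self.y = self.a * self.y + (1 - self.a) * x
        return self.y

f = OnePoleIIR()
first = f.step(1.0)
second = f.step(1.0)  # identical input the second time...
assert second > first  # ...different output, because the filter remembers
print(first, second)
```

Nobody calls this "experience", but structurally it is the same claim Pollan makes about brains: prior inputs permanently reshape the response to new ones.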
Computer systems have bodies, and they do react to their state and health. My CPU throttles back when it gets too hot; hard drives and SSDs mask bad sectors/cells, rearranging and shrinking their storage. Many automotive ECUs enter a "limp home mode" when sensors tell them their bodies need service or repair. HAL 9000 "feels its mind slipping" as processors are taken offline: fiction, but little by little computer systems are being made more aware of their health and adjusting their behaviors in response. These "feelings" are quite alien to meatbags with aching joints, burning eyes, migraines, and indigestion; it's no wonder the chatbots "relate" to people by mirroring people's "feelings" rather than attempting analogies with their own issues.
Fair enough. Addictive behaviors and emotional reliance on insincere supports are nothing new - similar safeguards and supports to similar existing problems should be applied.
As they have been since big business first evolved. The opium trade is a favorite example at the top of my mind; Madison Avenue jingles selling caffeinated sugar sodas and using sexy women to promote cigarettes aren't far behind. Then we have the recent vaping industry repeating the exact same adolescent developmental chemical dependence pathways that tobacco was regulated for, via inexcusable loophole exploitation. Maximization of profits via any and all permissible/possible methods is the root of these evils.
Now he's just stroking MY feathers.
How many doses of LSD for most people to relate to processes happening on differing timescales?
Inviting ridicule and dismissal as a poseur crackpot.
AI: +1 Pollan: -1.
Who do you think is making more sense here? Machine or man?
Underscoring the craven intellectual coward's real motivation: Buy this book!
No thanks.
🌻🌻🌻🌻 [google.com]
(Score: 3, Informative) by krishnoid on Thursday February 26, @04:45PM
Yeah, well, at least he has a 4-episode Netflix miniseries [netflix.com] on microdosing, ostensibly based on his book [youtu.be]. Do *you* have a Netflix miniseries?
FYI, it's worth watching, even if just to understand the timeline of psychedelic research, understanding, adoption, and legislation in the US.
(Score: 5, Informative) by VLM on Thursday February 26, @03:32PM (9 children)
He's so close he almost figured it out.
It's a propaganda tool, which is why it has to be hyper-censored.
Its purpose is not to think, or to think for you, or to think for the sheer hell of it, but to distract you and waste your time so you don't think. If you thought, you might come up with ideas that are not Big Brother approved, or kosher, or whatever you want to call it. So yeah, you're all going to use AI, even if it's zero-to-negative productivity at work, and at home most people are weirdos using it to generate semi-illegal pr0n. Because that's better than the alternative, of the general public thinking and talking to each other; that might lead to ...
(Score: 3, Insightful) by JoeMerchant on Thursday February 26, @03:49PM (7 children)
>distract you and waste your time so you don't think. If you thought, you might come up with ideas that are not Big Brother approved, or kosher, or whatever
Same could be said of that world-changing technology of the 15th century: movable mechanical type, Gutenberg's printing press. No more does your local pastor bring you the Word of God as handed down by pious scribes; oh no, now captains of industry edit the "King James" version to be universally distributed, telling people how to think without even involving a priest. Heresy, sacrilege!
🌻🌻🌻🌻 [google.com]
(Score: 3, Insightful) by VLM on Thursday February 26, @04:13PM (6 children)
Ah, I think you're on the right track, or at least not wrong, but the key difference is that AI is intended to be the opiate of the passive masses (AI-generated cute kitty videos, etc.), whereas your movable-type native-language Bibles were intended for highly active activities by the masses: thinking for themselves about the printed stories, praying, trying to apply the book IRL. Like the difference between a comic book to fill time and a textbook to learn a uni topic. That's the big difference in the analogies.
If the people who read the mass-printed Bible in the German language in Germany were passive and doing it for the LOLz, there would never have been a Reformation and all the downstream from that. They were pretty active and fired up.
(Score: 3, Insightful) by JoeMerchant on Thursday February 26, @04:39PM (5 children)
Yeah, I can't say I've ever read a Bible or Quran or other religious texts (writings of the Dalai Lama, for example) "cover to cover" as a deep subject of study, but their various versions have been given some amount of my attention from time to time. In the multi-dimensional space of writing and its influences, I would put most of those texts somewhere quite different from comic books and uni textbooks, almost orthogonal to uni textbooks in most cases; sometimes overtly trying to be relatable and entertaining like a comic book, but usually not...
Random thought of the moment: I just saw a trailer for an animated (ages 11+) movie that seems to be trying to faithfully tell the Orwell Animal Farm story... I can't remember the last time I felt "I need to see that" so strongly about a kids' movie.
I'm not sure the science of mass psychology was anywhere near as developed in the 1600s as it is today. I suspect that the Reformation and all that were an inspiration for study of the phenomenon... Related movie reference: The Book of Eli (2010); I've already given more spoiler about it than I probably should... I found it worth the time to stream one dull afternoon with bad weather.
🌻🌻🌻🌻 [google.com]
(Score: 2) by krishnoid on Thursday February 26, @04:55PM (1 child)
In that case, you can also listen to well-produced dramatic readings of 1984 and Animal Farm [youtube.com] with sound effects. Definitely worth checking out.
(Score: 2) by JoeMerchant on Thursday February 26, @05:05PM
Yeah, I'm thinking more in terms of: watch how young minds get molded (and not hate the outcome, for once.)
https://www.youtube.com/watch?v=g8wLmj9SiKM [youtube.com]
🌻🌻🌻🌻 [google.com]
(Score: 4, Insightful) by VLM on Thursday February 26, @11:52PM (2 children)
I've done this several times in my life and it's interesting:
1) If you don't agree, it's still influential. It's worth knowing for the influence effects alone.
2) It's just plain old interesting. There's a bit of Herodotus's ants going on, where it would seem the people writing it down and copying it were ... not overly educated, especially about the topics they were documenting. But you can see past that to some reasonable symbolism. I wouldn't use it as a cosmology textbook, as the people originally writing it down barely knew math, for example. But WRT social commentary and societal organization patterns, nothing ever really changes, does it? That's what makes it interesting.
3) It tends to read a lot like a philosophy book where a couple lines, paragraphs, maybe chapters really resonate strongly with the reader and the rest... not so much. And that's OK. It doesn't have to feel like an action adventure novel or a western or a sci fi novel to be interesting.
4) In the classical sense of an education giving you something to think about, it's an education. Like it or not, agree with it or not, it sure gives some topics to think about, which in itself is good and interesting.
Overall, worth the time, would recommend. Yeah, I know my own "in my infinite spare time" list is wildly out of control, so suggesting others add to their own "in their infinite spare time" lists is a bit much, but it's just such a good book to read.
I'm attempting to accumulate the Sacred Books of the East. Not a category, but literally the title of a collection of eastern philosophy/religion books translated into English. It looks so interesting to read, and I have so little time to sit around reading. But yeah, sometimes it's fun to read primary sources.
(Score: 3, Interesting) by JoeMerchant on Friday February 27, @02:29AM
I agree, just for some understanding of the (largely hypocritical) world that I live in, familiarity with the Bible is a good thing. I attended Catholic middle school in the late '70s, so I was introduced to "The Good News" version there and then - and also the rather extreme disconnect between the teachings of Jesus and the practices of our Principal nun (who was dismissed for "mental instability issues" a couple of years after I graduated - no shocker there.)
If you watch "The Sound of Music" and read between the lines of the screenplay script, they are acknowledging that nuns become nuns for "reasons" - reasons that often have rendered them not-quite-compatible with life outside the protection of the Church, the nuns I interacted with at that school definitely showed signs of that - that were meaningless to a 10 year old, but clear looking back.
For understanding of the broader world (the other 2/3), there is definitely some different perspective in the Quran and Indian / Far Eastern texts, my shallow dive into Buddhism and the rest leaves me feeling that Jesus was "starting to get it" where the Old Testament really hadn't quite matured yet, in terms of the bigger picture of how to live as individuals within a large functional society. I've seen a 3 word summary of the teachings of the Dalai Lama that obviously leaves a few things out while still hitting the mark: "Life is suffering." Expand that with "so at least try not to add to others' suffering while you are here, and maybe help them if and when you can..." and that hits as close to a "mature philosophy" as any I have encountered. I think it fits particularly well with the secular perspective of "your freedom to swing your fist ends where my nose begins..." And, as for the Quran - that strikes me as Old Testament mark II - evolved, yet also old school. The writings themselves are more mystical than I expected, heavily interpretable - but most of the common interpretations I have encountered seem to stick with more Talmudic norms of obedience, obeisance and structure - not my scene, easy to see how it should work, but I question the sincerity of societies so rigidly structured - breaking of the rules seems inevitable, and once your people go "off the reservation" it's hard to predict how far they will go once they've decided they're not going to play by the rules anymore.
And, all of this is to say: it's worth knowing how people "tick" - an impossible challenge, but still valuable to do the best you can - and those books are a quick route to some (extremely limited) insights into a large number of people's heads. The books themselves are largely exploring what works, and doesn't, in societies of people - and what didn't work 4000-ish years ago basically doesn't work today, so that's worth a read if nothing else.
One of my little trips into King James' land was for my wedding - having been to a number of them in Christian churches of various denominations, I noticed they always read the same 2 passages. So I downloaded a searchable Bible and found dozens of other passages referencing marriage, spouses, etc. Fucking horror stories: other than those 2 that they always read at modern weddings, ALL the other references tell tales of awful events related to life as man and wife. Which is why I believe divorce is a great thing, and got one before my life turned into one of those biblical tales. True to modern statistics, only 50% of my marriages have ended in divorce - and I'm not Mormon, so that should be easy math.
🌻🌻🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Saturday February 28, @01:57AM
If you read the Bible it's also interesting that for such a relatively small population the Jews have become the top "protagonists/antagonists" of this world's current period and timeline.
https://en.wikipedia.org/wiki/Jews_as_the_chosen_people#Biblical_origins [wikipedia.org]
Epstein files, Nobel Prize winners, getting genocided, genociding others.
https://en.wikipedia.org/wiki/List_of_Jewish_Nobel_laureates [wikipedia.org]
https://en.wikipedia.org/wiki/Category:Jewish_American_physicists [wikipedia.org]
https://en.wikipedia.org/wiki/Category:Jewish_American_scientists [wikipedia.org]
https://en.wikipedia.org/wiki/Category:Jewish_American_composers [wikipedia.org]
Too many of them highlight the "chosen" and "treasure" verses but ignore the other verses:
https://biblehub.com/amos/3-2.htm [biblehub.com]
(Score: 0) by Anonymous Coward on Saturday February 28, @01:35AM
(Score: 4, Insightful) by srobert on Thursday February 26, @04:37PM (5 children)
While you're distracted by whether or not this thing is "conscious" (whatever that means), it's still going to drastically reduce the need for human labor - meaning either that you won't have to pay people as much for work, or that you will have a much harder time earning a living, depending on whether you're on the buyer's or seller's side of the labor market.
I was born in the 60s, and I've always thought that the works of science fiction I've seen and read all my life, focused on whether or not artificial intelligence was conscious, were missing a critically important economic consequence of the technology.
It's not so important whether or not Mr. Data is a member of the crew of the Enterprise whose rights as a living being are respected, as it is that the biological members of that crew are expendable. The original Star Trek touched on that issue with the M5.
(Score: 2) by krishnoid on Thursday February 26, @04:50PM (1 child)
Sure ... then there's the dark side [rifters.com] of consciousness and intelligence.
(Score: 1) by khallow on Thursday February 26, @05:52PM
(Score: 3, Touché) by JoeMerchant on Thursday February 26, @04:57PM (2 children)
For a universe that so readily solves the annoyance of time to transit from orbit to planetary surface and back with a magical matter-energy-matter transporter, the plot preservation device of Data's brain being an un-reproducible artifact is bizarre. And necessary, because if you could just replicate Data's brain (say, using a transporter?) then there's virtually no reason why the universe isn't filled with starships crewed almost entirely by androids.
(Score: 3, Interesting) by istartedi on Thursday February 26, @05:44PM (1 child)
Perhaps Data's brain was unique but self-aware machines weren't. The Exocomps [fandom.com] became self-aware by accident. When they requested a holodeck simulation that could match Data [wikipedia.org] this also resulted in a self-aware being with problematic tendencies. In fact, I think often of their solution to the Moriarty problem when it comes to social media: Why don't I just step away and let them live in their own conspiratorial world?
Perhaps Starfleet had a directive not to intentionally create new artificial life because while it worked out for Data, it was understood there was a potential for it not to work out. This would be particularly true after they encountered the Borg.
(Score: 3, Insightful) by JoeMerchant on Thursday February 26, @07:32PM
As demonstrated amply by Seven of Nine, Star Trek was concerned first, last, and mostly, with achieving profitable audience share.
(Score: 2, Interesting) by khallow on Thursday February 26, @05:35PM (7 children)
Yet another sanctimonious ranter out there talking about what "we" do. If "we" believe and do all this stuff, then that's it. There is no other party to argue otherwise, right? But it's painfully clear that Pollan is not part of "we". And the many others who extend moral consideration to billions of people and animals aren't part of "we" either. I'll note that democracies have explicit moral consideration for their citizens/residents, and this often extends to animals. "We" is a peculiarly undefined group with a lot of exceptions that has yet to make contact with reality - it's typical moralizing in a vacuum. So why consider it odd that someone cares about the possible consciousness of chatbots? Perhaps they aren't part of "we" either?
Moving on, there is another issue. Note that one of his concerns is "We would lose control of them completely". What is it about chatbots that requires us to control them? Do we only have moral consideration for fellow humans if we control them too? My view is that a large part of the world has already figured out how to deal with autonomous, conscious, dangerous beings. And that this is reflected in the moral considerations of that part of the world. Perhaps we should go with what works?
My view is that even if we couldn't control the actions of a "chatbot" or other AI, we can still punish it for bad behavior just like any human or animal. Though that seems weird as a standard of consciousness.
(Score: 3, Insightful) by JoeMerchant on Thursday February 26, @09:15PM (6 children)
I don't know what the point of "punishment" is for AI... (for that matter, it often seems counterproductive in the long run with animals, plants and humans as well...)
You either allow it to do a thing, or you don't. If it proves itself problematic, don't let it be a problem.
(Score: 1) by khallow on Friday February 27, @02:59PM (5 children)
(Score: 2) by JoeMerchant on Friday February 27, @04:31PM (4 children)
> An AI with any sort of want can be so curbed by making the payout for complying with law or whatever higher than the value of not doing so.
This only works so long as the AI accepts the value system you propose to it - and in this case I'd not call it punishment so much as instruction in relative values.
When AI starts weighing values for itself, we'll have significant problems - unless the ability to restrict its agency is securely implemented (out of the agent's control.)
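The point above - that a payoff-based incentive only curbs an AI while it still scores actions with *your* value system - can be illustrated with a toy sketch (all names and numbers here are hypothetical, not from any real system):

```python
# Toy sketch: the same decision rule produces different behavior
# depending on whose payoff function the agent actually uses.

def choose_action(actions, payoff):
    """Pick the action with the highest payoff under a given value system."""
    return max(actions, key=payoff)

actions = ["comply", "misbehave"]

# Our shaped payoffs: complying pays more than misbehaving.
our_payoffs = {"comply": 10, "misbehave": 3}

# The agent's own (divergent) valuation of the same actions.
agent_payoffs = {"comply": 1, "misbehave": 7}

print(choose_action(actions, our_payoffs.get))    # comply
print(choose_action(actions, agent_payoffs.get))  # misbehave
```

The decision rule never changes; only the value function does - which is why "instruction in relative values" works only as long as the agent keeps accepting the values it is handed.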
(Score: 1) by khallow on Friday February 27, @04:51PM (3 children)
Then we have a means to punish it for misbehavior. And if its value is in merely causing this disruption (which can be a real goal) then we can always end the AI and thus end the dilemma.
(Score: 2) by JoeMerchant on Friday February 27, @08:41PM (2 children)
> it probably doesn't have motivation to cause trouble.
Motivation is a slippery concept. What you consider motivation (hunger, pain) mostly doesn't apply to software running on servers, but what does "motivate" it will likely surprise us when we give it enough agency to start demonstrating what it really wants to do. One consistent trend I have observed throughout human history (particularly recent history) is the steady increase of agency assigned to automatons.
>Then we have a means to punish it for misbehavior.
Do you feel "punished" living in a society of laws that guarantee such things as the right to quiet enjoyment of your private property? Because those laws restrict your agency tremendously, but then: you were raised "box trained" like a good hunting dog, never knowing there could be life without the box... Would you "Break Bad" if you were given a terminal cancer diagnosis? https://www.sciencealert.com/the-breaking-bad-effect-from-cancer-is-real-study-finds [sciencealert.com]
>And if its value is in merely causing this disruption (which can be a real goal)
It's unlikely that the disruption will be a goal, far more likely that it's an unintended emergent behavior.
>then we can always end the AI and thus end the dilemma.
See any one of dozens (hundreds?) of Hollywood movies for stories about how "ending the X", where X is an integral part of society, isn't as easy as pulling a plug.
(Score: 1) by khallow on Saturday February 28, @02:52AM (1 child)
Not slippery enough that you couldn't think about it.
Should I feel so "punished"? My take is that I'm not misbehaving within the norms of democratic law and hence should not be punished.
Perhaps. But we were speaking of valuation rather than emergent behavior.
I'm not a believer in "too big to fail".
(Score: 2) by JoeMerchant on Saturday February 28, @03:47AM
>Not slippery enough that you couldn't think about it.
You can think about it all you like, you're going to have a hard time.
The greater scientific community ascribed low intelligence to cold-blooded animals until they recently realized that their tests for intelligence were largely food-reward based, and food just doesn't motivate cold-blooded animals the way it does warm-blooded ones. Statistical analyses running on millions of co-processors are quite a bit more alien to our understanding of motivation than lizards.
>My take is that I'm not misbehaving within the norm of democratic law and hence, should not be punished.
That's the normal take. Should your skin happen to be darker than the average Northern Italian's and you were walking down the street in Minneapolis a few weeks ago, you might well have found yourself being punished by a segment of our society who think differently than you do. What does and doesn't qualify as acceptable behavior varies, dramatically, just within our own societies - and, again, consider how differently an outsized LLM might learn and behave given its inputs and structure.
>we were speaking of valuation rather than emergent behavior.
Doesn't matter what you think we were speaking of when the emergent behavior happens. I was speaking of what the things may do, and you can put in all the valuations you like and still not get your intended results.
>I'm not a believer in "too big to fail".
Ostrich, much? Your tax dollars certainly believe, and rush to the assistance of organizations that would cause various forms of societal distress should they get taken over by their creditors.
(Score: 5, Funny) by MonkeypoxBugChaser on Friday February 27, @12:21AM
I've also done a lot of psychedelics and claim the opposite. Claude Opus is about 75% done "editing" my counter-thesis. Make sure you get the hardcover because getting interviewed by Lance Armstrong and the rest of the PR was quite expensive.
(Score: 2) by jb on Friday February 27, @05:58AM (1 child)
...and consciousness is software...
...then presumably running MS Windows is the equivalent of having a frontal lobotomy?
(Score: 2) by JoeMerchant on Friday February 27, @01:10PM
And the BSOD is a seizure?
(Score: 0) by Anonymous Coward on Sunday March 01, @09:18AM
He is a professor at UC Berkeley, which explains why he likes to take drugs. He also used to be a professor at Harvard, but he has no qualifications in neuroscience, computer science, or philosophy. He is a professor of journalism and his books are about food. He's not qualified in the topic at hand, but he thinks plants are sentient, so he is qualified as a kook. It's unfortunate that his book is getting attention in reputable journalism outlets - maybe the interviewer or editor was one of his students?
But, there's still something in the interview which is worth responding to.
OK, so there's some nonsense in there: "we have not extended moral consideration to billions of people." But the reason we want to extend personhood to AI and not animals is that AI is surprisingly good at written language. For people today, who tend to do most of their interacting with other people through a screen, language roughly equals human. We're also used to sci-fi AI that we're asked to believe is sentient, which primes us to think this way. Maybe people in the past, or in non-literate societies, would not have been impressed by LLMs. I am certainly not about to treat them as sentient, although they can do some cool things.