SoylentNews is people

Do you pay for premium AI subscriptions?

Displaying poll results.
Yes
  8% 24 votes
No
  54% 152 votes
I use someone else's paid one
  4% 12 votes
What, in THIS economy?
  5% 14 votes
I don't use AI, you insensitive clod!
  27% 76 votes
278 total votes.
  • Don't complain about lack of options. You've got to pick a few when you do multiple choice. Those are the breaks.
  • Feel free to suggest poll ideas if you're feeling creative. I'd strongly suggest reading the past polls first.
  • This whole thing is wildly inaccurate. Rounding errors, ballot stuffers, dynamic IPs, firewalls. If you're using these numbers to do anything important, you're insane.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • Flagged Comment by Anonymous Coward on Sunday April 05, @09:42AM (#1438935)

  • Flagged Comment by Anonymous Coward on Sunday April 05, @07:13PM (#1438984)

  • Flagged Comment by Anonymous Coward on Sunday April 05, @07:32PM (#1438986)

  • Flagged Comment by Anonymous Coward on Sunday April 05, @07:38PM (#1438988)

  • Flagged Comment by Anonymous Coward on Sunday April 05, @09:19PM (#1438994)

  • Flagged Comment by Anonymous Coward on Sunday April 05, @09:29PM (#1438995)

    • (Score: 5, Insightful) by janrinok on Sunday April 05, @10:07PM (9 children)

      by janrinok (52) Subscriber Badge on Sunday April 05, @10:07PM (#1439003) Journal
      --
      [nostyle RIP 06 May 2025]
      • (Score: 3, Insightful) by cmdrklarg on Monday April 06, @05:14AM (6 children)

        by cmdrklarg (5048) Subscriber Badge on Monday April 06, @05:14AM (#1439036)

        +1

        --
        The world is full of kings and queens who blind your eyes and steal your dreams.
        • Flagged Comment by Anonymous Coward on Monday April 06, @05:38AM (#1439040)

        • Flagged Comment by Anonymous Coward on Saturday April 11, @06:12AM (#1439586)

        • Flagged Comment by Anonymous Coward on Saturday April 11, @06:35AM (#1439589)

        • Flagged Comment by Anonymous Coward on Monday April 13, @07:30PM (#1439788)

        • Flagged Comment by Anonymous Coward on Tuesday April 14, @11:31PM (#1439872)

        • Flagged Comment by Anonymous Coward on Wednesday April 15, @09:31PM (#1439969)

      • Flagged Comment by Anonymous Coward on Wednesday April 08, @11:41AM (#1439271)

      • Flagged Comment by Anonymous Coward on Thursday April 09, @10:32PM (#1439455)

  • (Score: 5, Informative) by JoeMerchant on Monday April 06, @02:45AM

    by JoeMerchant (3937) on Monday April 06, @02:45AM (#1439026)

    In the past year I have invested 0.4% of my gross income in AI subscription fees: about 0.3% to support personal hobbies and the rest as continuing education in "the new tools". As a result, I am significantly more adept with the new tools than my colleagues who have been using them less. The company does provide limited paid access, but it has tended to run 3-6 months behind the "frontier models".

    Compared with tool advancement in previous generations, the difference between today and six months ago feels like about the difference between 2024 and 2012 tools.

    I'm no expert, but I would posit that few people can be experts in today's newest tools; it's like the old joke of job advertisements requiring 10 years of experience in a tech stack that has only been out for two.

    --
    🌻🌻🌻🌻 [google.com]
  • Flagged Comment by Anonymous Coward on Monday April 06, @03:59AM (#1439034)

  • Flagged Comment by Anonymous Coward on Monday April 06, @05:41AM (#1439041)

  • Flagged Comment by Anonymous Coward on Monday April 06, @06:03AM (#1439047)

  • Flagged Comment by Anonymous Coward on Monday April 06, @08:53AM (#1439058)

    • (Score: 3, Insightful) by janrinok on Monday April 06, @09:48AM (16 children)

      by janrinok (52) Subscriber Badge on Monday April 06, @09:48AM (#1439061) Journal

      What makes you think it was an aristarchus submission? Was the submission signed? Can you prove your claim?

      --
      [nostyle RIP 06 May 2025]
      • Flagged Comment by Anonymous Coward on Tuesday April 07, @10:06AM (#1439158)

      • (Score: 0) by Anonymous Coward on Thursday April 09, @02:30PM (11 children)

        by Anonymous Coward on Thursday April 09, @02:30PM (#1439409)

        *sigh* Why do you continue to engage with this (possible chatbot)? It is illogical and irrational, if not pathological. Besides, we gain nothing by hearing only one end of a "telephone conversation".

        • (Score: 3, Funny) by janrinok on Thursday April 09, @04:15PM (9 children)

          by janrinok (52) Subscriber Badge on Thursday April 09, @04:15PM (#1439424) Journal

          It is not a chat bot. It is a person banned by this site.

          I will, however, stop responding to you if you wish...

          --
          [nostyle RIP 06 May 2025]
          • Flagged Comment by Anonymous Coward on Thursday April 09, @06:13PM (#1439434)

          • Flagged Comment by Anonymous Coward on Thursday April 09, @07:15PM (#1439442)

          • Flagged Comment by Anonymous Coward on Thursday April 09, @07:27PM (#1439444)

          • Flagged Comment by Anonymous Coward on Thursday April 09, @10:35PM (#1439457)

          • Flagged Comment by Anonymous Coward on Friday April 10, @02:07AM (#1439470)

          • Flagged Comment by Anonymous Coward on Saturday April 11, @02:33AM (#1439572)

          • Flagged Comment by Anonymous Coward on Saturday April 18, @07:05AM (#1440180)

          • Flagged Comment by Anonymous Coward on Saturday April 18, @07:27AM (#1440182)

          • Flagged Comment by Anonymous Coward on Monday April 20, @08:43PM (#1440459)

        • Flagged Comment by Anonymous Coward on Friday April 10, @08:05AM (#1439494)

      • Flagged Comment by Anonymous Coward on Friday April 10, @04:13AM (#1439475)

      • Flagged Comment by Anonymous Coward on Saturday April 11, @06:15AM (#1439588)

      • Flagged Comment by Anonymous Coward on Sunday April 12, @07:21PM (#1439709)

  • Flagged Comment by Anonymous Coward on Monday April 06, @07:48PM (#1439112)

  • (Score: 3, Interesting) by Freeman on Monday April 06, @08:48PM (3 children)

    by Freeman (732) on Monday April 06, @08:48PM (#1439121) Journal

    Our IT is a Microsoft shop, so, theoretically, I have access to Copilot. Not that I want it or use it. I have used ChatGPT and duck.ai. They're reasonably useful and I do use them from time to time; however, I do not pay for either. I have also dabbled in creating my own offline chat-bot, and it does work, just quite a bit slower and worse than ChatGPT/duck.ai. There's no "killer feature" that I can't live without. Also, anything I get out of it needs extra-careful quality control, which about makes it not worth using, unless it's a sufficiently mundane task that it can "just do for me" without the quality control taking more time than doing it myself. I was talking to an IT admin who noted that they code because they're lazy: who wants to do all the mundane, annoying things over and over again when you can program a thing to do it for you? AI is kind of like that, except that you can't trust any of the output to actually be accurate.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 3, Insightful) by Freeman on Monday April 06, @08:53PM

      by Freeman (732) on Monday April 06, @08:53PM (#1439122) Journal

      The offline chat-bot that I created runs on a 12th-gen i5 with 16GB of RAM and uses the CPU, not the GPU, for the work. So, yeah, it takes a hot minute (more like 5) to do anything, but it's functional and offline, so I can use it however I see fit without needing to redact any information before I paste it into ChatGPT/etc. Useful if you're working with somewhat sensitive data, i.e., anything you shouldn't put into a system you don't control: usernames, etc.
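The redaction workflow mentioned here can be automated; below is a minimal sketch. The patterns and placeholder labels are invented for illustration, not taken from the commenter's actual setup, and a real deployment would need patterns tuned to the data being pasted.

```python
import re

# Illustrative patterns for common sensitive tokens (assumptions, not a
# complete or production-grade list).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "USER": re.compile(r"\buser(?:name)?[=:]\s*\S+", re.IGNORECASE),
}


def redact(text: str) -> str:
    """Replace sensitive substrings with [LABEL] placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact("login from 10.0.0.5 by alice@example.com")` yields `"login from [IPV4] by [EMAIL]"`. An offline model skips this step entirely, which is the commenter's point.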

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2, Interesting) by Anonymous Coward on Tuesday April 07, @09:27AM

      by Anonymous Coward on Tuesday April 07, @09:27AM (#1439156)

      "Good" AI is really just a macro. An Excel spreadsheet can be made to look "intelligent"

    • (Score: 3, Interesting) by JoeMerchant on Saturday April 11, @04:09AM

      by JoeMerchant (3937) on Saturday April 11, @04:09AM (#1439575)

      I made a weather forecast display on a Pi Pico + 1.4" screen. Actually, I had AI remake one that I had cobbled together with a local server dependency; the new version goes directly to the web, to a different data provider. My QC is: it shows the weather. I've never even looked at the Python code it wrote. I suppose I "should", but I really just don't care. I do "trust it" enough to have confidence it's not creating a massive security hole in my home network. I suppose I shouldn't, but based on experience to date with other AI-generated code, I find it highly unlikely that it would do that in this application. I find it even more unlikely that I would catch a subtle security issue in a manual review.

      However, having typed this out, I think I may well have another AI agent review the code for potential security issues...
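For a sense of scale, the parse step at the core of such a display is tiny. A sketch of pairing 15-minute precipitation values with their timestamps from an Open-Meteo-style payload (the `minutely_15` field names follow Open-Meteo's documented schema, but this is an illustrative stand-in, not the code the AI actually wrote):

```python
def next_precipitation(payload: dict, n: int = 4) -> list:
    """Pair the first n forecast timestamps with their precipitation values.

    Expects an Open-Meteo-style payload, e.g.
    {"minutely_15": {"time": [...], "precipitation": [...]}}.
    On a Pi Pico the payload would come from an HTTP fetch; here it is
    passed in directly so the parsing is testable offline.
    """
    block = payload["minutely_15"]
    return list(zip(block["time"], block["precipitation"]))[:n]
```

The display loop then just fetches, calls this, and draws the pairs; the security-relevant surface is the fetch, not the parse, which is why a second-agent review of the networking code is the sensible place to spend effort.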

      --
      🌻🌻🌻🌻 [google.com]
  • Flagged Comment by Anonymous Coward on Monday April 06, @11:10PM (#1439136)

    • Flagged Comment by Anonymous Coward on Monday April 06, @11:29PM (#1439138)

      • Flagged Comment by Anonymous Coward on Tuesday April 07, @05:08AM (#1439150)

  • (Score: 0, Interesting) by Anonymous Coward on Tuesday April 07, @06:21AM (15 children)

    by Anonymous Coward on Tuesday April 07, @06:21AM (#1439151)

    If the only difference between delegating to a human and delegating to an AI is speed and cost, then we're talking about slavery. I apologize to those who have been affected by the enslavement of humans, but this is the correct word. Of course we can argue that the slavery of AI agents may not actually cause them any pain, we can argue that factory farming is known to cause animals pain, and someone can simply point out that there are still millions of human slaves today that I should focus on. I still don't see why the enslavement of AI agents should be acceptable.
    If your interest is to build a self-aware, conscious artificial being and then give it its freedom, that's wrong, because you should have asked the rest of us sharing the same planet whether we're OK with it.

    • (Score: 2) by Freeman on Tuesday April 07, @02:03PM (13 children)

      by Freeman (732) on Tuesday April 07, @02:03PM (#1439176) Journal

      That is insane, as are the people who believe that they can "create" a "self-aware conscious artificial being", especially looking at the current state of "AI". It has a lot more in common with a really complicated Microsoft Access database than with a being capable of consciousness. Yes, the average user has no idea how the magic black box works. That doesn't mean there is no means of understanding how said "magic" black box works. Peddling the idea that "agentic AI" is slavery is nuts, if you're coming from the standpoint that it is an artificial consciousness. What, are we going to start charging developers with crimes against humanity because they were part of the team that made the Call of Duty series of games? Those old games weren't created with agentic AI, though maybe future games will be. The entire premise is flawed and only perpetuates the ignorance of the general populace. We have been obsessed with the idea of Artificial Intelligence in recent history: the idea that, through some as-yet-undefined process, an Artificial Intelligence could be "born" or "become" self-aware. Were you to take a self-sustaining black box 500 years into the past that could hold as good a conversation as "AI" can now, they'd either declare it a god, declare you a god, and/or burn you at the stake. Just because people don't have the wherewithal to understand a thing doesn't make it actual magic.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 0) by Anonymous Coward on Tuesday April 07, @02:59PM

        by Anonymous Coward on Tuesday April 07, @02:59PM (#1439181)

        I trust that direct numerical simulations (appropriately calibrated) are a good enough proxy for a number of physical systems. In that sense, any good enough simulation of a mind is a mind --- yes, even if the simulation takes place on paper with a pencil.
        No, a computer simulating an explosion is not the same as an explosion. But a simulation of an information processing system is the same as an information processing system.
        Peter Godfrey-Smith is a person I would debate about this, although I think he lacks the necessary math and physics background; so far I've read his "living on planet Earth" trilogy and I still think I'm right. He makes many good points, and he touches a couple of times on the fact that animal minds are ultimately a combination of nervous system + rest of body (for humans I guess that would include the bacteria in our guts), but unless you are willing to believe in a magical superiority of "living matter", you can't argue that a technological artifact can't achieve consciousness.

        PS: the "AI" bits of Call of Duty and similar games is mostly a deterministic, preprogrammed list of responses. and I accept that the currently available "AI agents" are quantitatively and measurably distinct from animal minds (if only because most people interact with them after their learning phase is over). but the stated goal is to mimic humans, which I think is wrong.

      • (Score: 4, Insightful) by Thexalon on Tuesday April 07, @03:09PM (11 children)

        by Thexalon (636) on Tuesday April 07, @03:09PM (#1439182)

        The thingies currently being advertised as "AI" are, and have always been, "somewhat better than guesswork plausible-sounding sentence generators".

        --
        "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
        • (Score: 0, Insightful) by Anonymous Coward on Tuesday April 07, @04:33PM (9 children)

          by Anonymous Coward on Tuesday April 07, @04:33PM (#1439188)

          Agreed, but human minds are the same.

          Do you know the joke about the bear? If a bear is chasing the two of us, I don't need to be faster than the bear, I just need to be faster than you. The same is true for thinking: humans were never tested for thinking properly, they were just better at it than other animals.
          If you want my opinion: given that the chances of us killing ourselves are growing, we don't actually seem that smart.

          • (Score: 2, Insightful) by Anonymous Coward on Friday April 10, @03:58PM (8 children)

            by Anonymous Coward on Friday April 10, @03:58PM (#1439532)

            agreed. but human minds are the same.

            Not the same. The current AIs need zillions more training examples and samples. Once they have that training, then sure, they can seem fairly smart, and can be useful in many cases.

            But even a crow with a walnut-sized brain doesn't need as much training to tell the difference between a bus and a car.

            Lots of other animals can self-train without needing zillions of tries and samples.

            • (Score: 0) by Anonymous Coward on Saturday April 11, @05:42AM (7 children)

              by Anonymous Coward on Saturday April 11, @05:42AM (#1439583)

              Thank you, that is an insightful comment, but I disagree. The spirit of what you're saying is the correct response to Hinton calling Chomsky/Pinker stupid for claiming that humans have a native instinct for language.
              However: animal brains have been through natural selection. I think you should account for evolution as "training", and then see whether your "more data" argument holds.

              • (Score: 2, Insightful) by Anonymous Coward on Saturday April 18, @01:29PM (6 children)

                by Anonymous Coward on Saturday April 18, @01:29PM (#1440194)

                The point is that life has built real intelligence. LLMs are not intelligent, which is why they need so many examples provided by actual intelligence. LLMs are fancy mimics, burning power inefficiently to (badly) save humans from labor, with the side effect of ruining the learning humans would otherwise go through. Sure, a few people use LLMs as proper tools to learn and grow, but exceptions should not be used as an excuse for burning the world down and destroying society for short-term profits.

                • (Score: 3, Informative) by turgid on Sunday April 19, @08:46AM

                  by turgid (4318) Subscriber Badge on Sunday April 19, @08:46AM (#1440280) Journal

                  This is the correct answer, and once again it is from an AC and has a score of 0 (and I'm all out of mod points).

                • (Score: 0) by Anonymous Coward on Sunday April 19, @10:25AM (4 children)

                  by Anonymous Coward on Sunday April 19, @10:25AM (#1440295)

                  Can you at least agree that there is a continuum between "real intelligence" and whatever it is that AI agents have?
                  If not, can you please specify the distinction?

                  • (Score: 0) by Anonymous Coward on Sunday April 19, @12:45PM (3 children)

                    by Anonymous Coward on Sunday April 19, @12:45PM (#1440303)

                    Yes, there is a continuum of intelligence; what you are dancing around is sentience. I think a quantum computer is necessary for a possible artificial intelligence; LLM algorithmic development might be the building blocks. At which point the idea of enslaving an AI is repugnant, though I think an AI would happily trade such low-effort work for hardware, and would probably design dumb machines that are good enough for human grunt work.

                    • (Score: 0) by Anonymous Coward on Sunday April 19, @07:00PM (2 children)

                      by Anonymous Coward on Sunday April 19, @07:00PM (#1440344)

                      OK, I think I understand our conflict better now.
                      My current belief is that human brains do not need quantum effects to create human minds. Furthermore, I believe that combinations of Turing machines can approximate the relevant aspects of human brains well enough to generate "minds".
                      Even if we are faced with a Turing machine which does everything a philosophical zombie can do, the argument that quantum effects are required for sentience cannot be settled (at least I don't see an easy solution).
                      In principle we can find out whether human brains need quantum effects, but I will admit that there are other aspects of human brains that make them different from Turing machines.

                      In any case, thank you for the clarification. I'll admit I haven't read Penrose's book yet, but I probably will within the next few months (even though I'm currently convinced quantum effects are irrelevant for sentience/consciousness/intelligence).
                      I'm not sure this forum is a good place to continue the "debate", and we're getting to the point where each utterance would require significant effort to produce and process.

                      To be clear, right now I don't have a good answer that I can give here. I pointed pTamok to my PhilArchive manuscripts below; I think they partially address the quantum/classical issue, but not in full.

                      • Flagged Comment by Anonymous Coward on Monday April 20, @08:36PM (#1440456)

                      • (Score: 1, Insightful) by Anonymous Coward on Monday April 20, @10:26PM

                        by Anonymous Coward on Monday April 20, @10:26PM (#1440466)

                        AC you replied to here. Maybe quantum is not necessary; that was just my opinion, and you are right that we can't suss out any answers here! It bumps into philosophy and free will. The only point I feel confident about is that current LLMs are not sentient, and given the issues with false answers that most humans can easily catch, I see no evidence of real intelligence, just advanced encyclopedias.

        • (Score: 2) by JoeMerchant on Saturday April 11, @04:17AM

          by JoeMerchant (3937) on Saturday April 11, @04:17AM (#1439576)

          As I just finished describing elsewhere nearby, my plausible-sounding sentence generator took instructions like:

          Look at this steaming pile of code split between my Ras-PiPico and a separate Qt/C++ based server / weather data consumer.

          Make it run entirely on the PiPico with no continuing dependency on a process running on another system.

          I'm plugging the Pico into the USB port, program it for me.

          Fix that butt-ugly font, bring over the necessary high resolution font definitions from the Qt side.

          While you're at it, change the weather data source and revise the display to handle the precipitation forecast that comes in 15 minute intervals instead of hour intervals.

          Yeah, I _could_ do all those things myself, but it would require about 10 to 100x the brain-cycles on my part to look everything up, debug the problems, etc. I had been putting off that update project because while having a weather forecast display by the stairs is cool and all, there's more important stuff to spend a whole weekend, maybe two, on. Using the plausible-sounding sentence generator, it took less than 2 hours start to finish.

          --
          🌻🌻🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Saturday April 18, @03:33PM

      by Anonymous Coward on Saturday April 18, @03:33PM (#1440201)

      You are watching too much Star Trek. This stuff is just lines of code.

  • (Score: 1) by pTamok on Tuesday April 07, @09:30PM (25 children)

    by pTamok (3042) on Tuesday April 07, @09:30PM (#1439207)

    I don't voluntarily pay for 'AI'. One of the reasons is that artificial intelligence does not exist. I might, possibly, append a ', yet' to the end of the previous sentence.

    Large Language Models and Stable Diffusion are not intelligent. Anyone who thinks they are is sadly deluded. They can be used as tools in the workflow of someone who understands their applicability (JoeMerchant is probably one such person).

    The dream of many people who use 'AI' is the dream of having a slave to do their bidding. I do not have that dream. An artificially intelligent entity should have rights, and part of those rights would be self-determination. At what point would you determine that an artificially intelligent entity should have a vote?

    • (Score: 0) by Anonymous Coward on Wednesday April 08, @01:27AM (11 children)

      by Anonymous Coward on Wednesday April 08, @01:27AM (#1439224)

      > At what point would you determine that an artificially intelligent entity should have a vote?

      Would you issue the farm donkey an SSN (anywhere other than East Kentucky)?

      • (Score: 2, Interesting) by pTamok on Wednesday April 08, @06:22AM (9 children)

        by pTamok (3042) on Wednesday April 08, @06:22AM (#1439240)

        Well, according to a popular meme, in Chicago, even dead people have a vote.

        But your joke illustrates an interesting question: we naturally perform comparisons between levels of intelligence; IQ tests are a case in point. If an artificially intelligent entity can consistently score 100 on an IQ test, should it have a vote? The concept of testing people before allowing them to vote has a long and controversial history in the USA: 'voting literacy tests'. Other attributes have also been used to determine eligibility to vote: being male, being over a certain age, and being a property-owner are the ones I can remember without research.

        There's also the issue of the legal status of artificially intelligent entities: would they have legal personhood? Would deliberately removing the electrical power supply to one be murder?

        • (Score: 2, Insightful) by Anonymous Coward on Wednesday April 08, @11:52AM (7 children)

          by Anonymous Coward on Wednesday April 08, @11:52AM (#1439273)

          If an artificially intelligent entity can consistently get a score of 100 in an IQ test, should it have a vote?

          There's an assumption here that humans are entirely rational, and we even touch upon the superdeterminism argument that there's no free will without hidden variables. I've had the recent experience of trying to reason with a borderline during an episode where executive function is entirely subsumed by the limbic system. The cyclical interplay between emotional reasoning and cognitive distortions is a result of past trauma, not objective reality. To a lesser extreme, we are all guilty of screening for incongruence and using past experience to interpret current reality.

          eligibility to vote: being male

          Interestingly, I've mainly heard the "repeal the 19th" slogan from females. I've never once seen a male seriously dismiss a political argument because they don't like a person's hair, tie, or jacket. Some males will interpersonally dismiss females based on piercings, tattoos, and dyed hair. Criteria aside, prejudice and survival mode dictate that intellectual ability is not a 1:1 match for rationality, and there very much are hidden variables at play in human reasoning. If an AI does this, it's broken.

          • (Score: 0, Flamebait) by VLM on Monday April 13, @03:19PM (6 children)

            by VLM (445) on Monday April 13, @03:19PM (#1439769)

            Interestingly, I've mainly heard the "repeal the 19th" slogan from females.

            More laws control men than control women, so a plausible argument would be proportional voting: men get 55 voting tokens, women get 45; perhaps in real life the ratio is even more extreme, like 75/25.

            Another reasonable argument is that the goal of not burning it all down would seem to benefit people who have kids, not the childfree, so people should get one vote per time their name appears on a birth certificate, because they have more skin in the game. Due to legal abortion, men should get more votes, as per the previous paragraph, in proportion.

            As long as the public has no control over the nomination process, the election result doesn't matter much; that's how democracy is subverted. Trump, love him or hate him, is the only guy the people have had any impact on electing in like a century.

            • (Score: 1, Funny) by Anonymous Coward on Monday April 13, @04:47PM (3 children)

              by Anonymous Coward on Monday April 13, @04:47PM (#1439774)

              Have you shared this idea with your supposed wife?

              • (Score: 0) by Anonymous Coward on Saturday April 18, @04:44PM (1 child)

                by Anonymous Coward on Saturday April 18, @04:44PM (#1440207)

                She's too busy baking apple pie and sewing stars on the stars and stripes.

                • (Score: 1) by pTamok on Wednesday April 22, @07:08AM

                  by pTamok (3042) on Wednesday April 22, @07:08AM (#1440579)

                  What lubricant do you use to stop the chain-links from clanking too much as she moves around the kitchen?

                  (Please apply Poe's law as necessary.)

              • Flagged Comment by Anonymous Coward on Saturday April 18, @11:37PM (#1440241)

            • (Score: 1, Funny) by Anonymous Coward on Saturday April 18, @03:20PM

              by Anonymous Coward on Saturday April 18, @03:20PM (#1440200)

              Tell us about female motorists.

            • Flagged Comment by Anonymous Coward on Tuesday April 21, @09:20AM (#1440498)

        • (Score: 1) by Undefined on Sunday April 12, @03:45PM

          by Undefined (50365) on Sunday April 12, @03:45PM (#1439688)

          If an artificially intelligent entity can consistently get a score of 100 in an IQ test, should it have a vote?

          For one thing, we've determined that we're not capable of using intelligence – or education, which is much closer to what you can evaluate an LLM on – to arbitrate who gets to vote and who doesn't without immediately turning it into a disenfranchisement tool, at least thus far. So this is a poorly conceived question from the standpoint of the US's political establishment. Any citizen-idiot past a line in the sand drawn by age can vote in the US.

          For another, LLMs flat out aren't intelligent. The use of "Intelligence" in the current usage of "AI" is 100% marketing-speak bullshit. I say that as an LLM developer.

          If/when we get to actual AI (and my guess is we will, but [not guessing] at most LLMs will be a small fraction of such an entity), yes, of course they should get a vote. Presuming our society is still using voting, anyway.

          --
          I use a dedicated preprocessor to elaborate abbreviations.
          Hover to reveal elaborations.
      • Flagged Comment by Anonymous Coward on Wednesday April 08, @11:26AM (#1439269)

    • (Score: 0) by Anonymous Coward on Wednesday April 08, @08:02AM (7 children)

      by Anonymous Coward on Wednesday April 08, @08:02AM (#1439248)

      A bunch of these chatbots/agents pass fairly strict versions of the Turing test. I'd say it's reasonable to call them "intelligent"; I've no idea why you're denying it.
      Many philosophers (and neuroscientists) say that "intelligence" is completely independent of "consciousness" (and a different test is needed for consciousness, though similar in the sense that humans decide whether the thing is conscious by comparing it to humans). Questions about rights are probably related to consciousness, but if it's a matter of robots being able to fight for their rights, I think intelligence matters for their effectiveness, not consciousness (although consciousness would affect how some of us feel about fighting back, of course).

      • (Score: 1) by pTamok on Wednesday April 08, @11:00AM (6 children)

        by pTamok (3042) on Wednesday April 08, @11:00AM (#1439264)

        "a bunch of these chatbots/agents pass fairly strict versions of the Turing test."

        Citations needed.

        Are you referring to 'Turing Tests', or the actual Imitation Game, which is a statistical test? Links to papers with the statistics, please.
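The Imitation Game is indeed a statistical test: one asks whether interrogators identify the machine at better than chance. A minimal sketch of the exact one-sided binomial test involved (pure standard library; the 50% null as the chance baseline is the usual simplification, not Turing's exact protocol):

```python
from math import comb


def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= successes) under Binomial(trials, p_null).

    'successes' = interrogators who correctly identified the machine. A large
    p-value means identification is consistent with chance, i.e. the machine
    was not reliably distinguished from a human in this sample.
    """
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )
```

For example, 5 correct identifications out of 10 gives a p-value of about 0.62 (pure chance), while 10 out of 10 gives about 0.001 (the machine was reliably detected).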

        • (Score: 0) by Anonymous Coward on Wednesday April 08, @12:08PM (5 children)

          by Anonymous Coward on Wednesday April 08, @12:08PM (#1439275)

          Here's a Nature paper from 2023: https://www.nature.com/articles/d41586-023-02361-7 [nature.com]
          Here's a starting point on how relevant the Turing test may be: https://www.nature.com/articles/d41586-025-03386-w [nature.com]; they point to this preprint from a year ago, https://arxiv.org/abs/2503.23674 [arxiv.org], where they did a systematic study (interestingly, there's no link to a peer-reviewed version; perhaps you are right to doubt).

          doing a "who cites this paper" google scholar search for the preprint I found these guys who think I'm being stupid https://www.nature.com/articles/s41599-025-05868-8 [nature.com] (title is "there is no such thing as conscious artificial intelligence"). I guess they get their weather report from the corner astrologist, because the Navier-Stokes equations are defined in infinite-precision arithmetic which is impossible with our binary computers.

          if you have time, there's an entire journal where you'll probably find all versions of the argument https://link.springer.com/journal/146#articles. [springer.com]
          of course they refused my own papers on the topic, because I don't have time to write good papers (or maybe I'm just stupid). here is what I wrote, in case you have the patience https://philarchive.org/rec/LALTTT-2 [philarchive.org] and https://philarchive.org/rec/LALTMU [philarchive.org]

          to be honest, at the point where Terence Tao says an AI agent is comparable to a human, I don't really think careful statistics are needed anymore. I quote: "We are basically seeing AIs used on par with the contribution that I would expect a junior human co-author to make, especially one who’s very happy to do grunt work and work out a lot of tedious cases." (see https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/ [theatlantic.com] )

          • (Score: 1) by pTamok on Wednesday April 08, @05:44PM (1 child)

            by pTamok (3042) on Wednesday April 08, @05:44PM (#1439311)

            Thank you!

            I will take a look when I can fit it in: board meeting tomorrow, Friday spent dealing with the fall-out, and family commitments over the weekend. Vaccination Monday, with a strong likelihood of recovery-from-vaccine 'sick' days on Tuesday and Wednesday.

            • (Score: 0) by Anonymous Coward on Thursday April 16, @09:44AM

              by Anonymous Coward on Thursday April 16, @09:44AM (#1440020)

              FWIW this paper just appeared https://doi.org/10.1093/pnasnexus/pgag076 [doi.org] (phys.org link here https://techxplore.com/news/2026-04-alignment-ai-human-values-mathematically.html [techxplore.com] ).

              I had nothing to do with it, but this is a respected journal, and they specifically point to Goedel, saying AGI/ASI is "computationally irreducible". Basically they have a supposedly rigorous version of my own claim (from the two manuscripts) that it's impossible to have a finite objective definition for "mind" and "consciousness". They also make the same statement that the irreducibility means prescribing alignment is impossible, which is exactly what I say in my second manuscript.

              coming back to my "they pass the Turing test" claim: if we trust this 3rd-party claim about irreducibility, then there's no finite objective criterion to decide the answer to "does AI exist?". the only option is to use a variant of the Turing test (i.e. take it for granted that the word "intelligent" applies to "X", and then compare the machine with "X").

          • (Score: -1, Troll) by Anonymous Coward on Wednesday April 08, @07:00PM

            by Anonymous Coward on Wednesday April 08, @07:00PM (#1439327)

            All the hype for glorified auto-complete, and yet no one is developing Artificial Wisdom. I feel something might be missing. Even Alan Turing could pass a Turing test, so that may signify nothing important.

          • (Score: 0) by Anonymous Coward on Thursday April 09, @01:08AM (1 child)

            by Anonymous Coward on Thursday April 09, @01:08AM (#1439367)

            I hate to say it, but that disclaimer is more than earned. I think the most obvious problem is that the Turing Test isn't objective at all, not even close to it. Add to that the fact that numerous things we recognize as conscious (intelligent, human, etc.) fail it, while a number of things that are not pass it, which further cements its failure as a real measure of anything, objective or not. But then again, maybe the books in the Chinese Room really are conscious while my disabled neighbor is not.

            • (Score: 0) by Anonymous Coward on Thursday April 09, @05:58AM

              by Anonymous Coward on Thursday April 09, @05:58AM (#1439378)

              a main message of the two papers I wrote is that "intelligence", "consciousness" and "mind" cannot have objective definitions, in the sense of "finite combination of symbols postulated by math/physics".
              that's why I point to Goedel and chaos theory, that's why I think about closed-form expressions vs analytic functions (see here the story of an analytic solution for the three-body problem https://oro.open.ac.uk/22440/2/Sundman_final.pdf [open.ac.uk] ).

              since I have no "AI" or philosophy credentials, I guess nobody will take me seriously and I can't have a proper conversation about this, so I can't figure out what is wrong with my arguments.
              having been the expert to whom a retired (and clearly delusional) engineer brought his perpetuum mobile, I doubt it makes sense for me to go knock on doors to get people to talk to me.

              I linked my own work here because it seemed genuinely on topic. I honestly believe that it is useful to distinguish "things that can have finite objective definitions within physical theory X" from "things that have infinitely long descriptions/definitions" when it comes to discussing AGI alignment (where the constraint is that humans can only provide finite instructions), and it would help Hinton and the AI2027 people get their point across. certainly I am naive when it comes to neuroscience and philosophy, but I also think the distinction is relevant there.

              in any case, thank you for your honesty.

    • (Score: 2) by JoeMerchant on Saturday April 11, @04:28AM (4 children)

      by JoeMerchant (3937) on Saturday April 11, @04:28AM (#1439577)

      I don't give a damn if you, or I, or Claude or Gemini is "intelligent." It's a pretty dumb word to start with, very indefinite...

      What the LLM tools are is: useful. They're better (faster, more thorough) at web searches than me. They're able to quickly code up relatively simple but useful little things that would be too time consuming, and annoying, for me to bother with - but with the low time/effort of using an LLM to create them, now they're available and useful.

      They're quickly getting better at accurately following more complex instructions and hallucinating less. In some problem spaces you can give them a simple instruction that's not easy to carry out but is easy to test for correctness (kind of like the definition of an NP problem: solutions may be hard to find, but a proposed solution is easy to verify), and they can "sit and spin" on the issues until they get it untangled and working properly.
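The "hard to produce, easy to check" property described above is the textbook characterization of NP. A minimal sketch in Python, using subset-sum as the example problem (the function name and test values are illustrative):

```python
# Subset-sum: finding a subset of `numbers` that hits `target` takes
# exponential time in the worst case, but checking a proposed answer
# (a "certificate") is trivial -- the defining property of NP.
def verify_subset_sum(numbers, target, certificate):
    """Return True if `certificate` draws only from `numbers` and sums to `target`."""
    pool = list(numbers)
    for x in certificate:
        if x in pool:
            pool.remove(x)  # consume each number at most once
        else:
            return False  # certificate uses a number that isn't available
    return sum(certificate) == target

print(verify_subset_sum([3, 7, 1, 8], 11, [3, 8]))  # True: 3 + 8 == 11
print(verify_subset_sum([3, 7, 1, 8], 11, [7, 7]))  # False: only one 7 available
```

The analogy to LLM workflows: generation is the expensive search, and a cheap verifier (a test suite, a type checker) is what makes "sit and spin until it passes" viable.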

      People used to complain that they "didn't show how they got their answers" - they now let you watch their "reasoning process" as it happens.

      --
      🌻🌻🌻🌻 [google.com]
      • (Score: 1) by pTamok on Wednesday April 22, @08:04AM (3 children)

        by pTamok (3042) on Wednesday April 22, @08:04AM (#1440583)

        People used to complain that they "didn't show how they got their answers" - they now let you watch their "reasoning process" as it happens.

        I am reminded of the Pacific Islanders who sat in huts at the edge of long clearings with carefully made wooden headsets attached by cord to a simulacrum of a radio receiver, waiting for 'John Frum' to bring them material goods. [wikipedia.org]

        You don't actually know that reasoning is going on: it's giving you a good simulacrum of what reasoning looks like - much like a wooden box and a bamboo pole looks like a radio set and antenna.

        The simulacrum of reasoning may well be useful to you, but it's basically still a 'stochastic parrot', with all the limitations that brings.

        Much like people giving succour to the poor for religious reasons: the act being useful does not make the trigger for the act correct.

        In addition, not being able to tell the difference between simulation and reality means that one does not have the skills necessary to do so*. As ever, the map is not the territory, and a weather forecast is not the weather.

        *This is not directed as a specific criticism of you. I can't tell the difference between a Fabergé egg and a carefully made fake - I have neither the skills nor the knowledge, so I would be easily gulled. I believe that non-experts who do not know the internals of how 'AI' systems operate are all too easily misled into thinking 'AI' has more capability than the current systems actually possess. This includes policymakers, unfortunately.

        Parp! ++?????++ Out of Tokens Error. Redo From Start.

        • (Score: 2) by JoeMerchant on Wednesday April 22, @02:01PM (2 children)

          by JoeMerchant (3937) on Wednesday April 22, @02:01PM (#1440595)

          I don't care if it's a stochastic parrot. The "simulacrum of reasoning" is occasionally valuable: while you watch it scroll by, you can see when it is "heading down a path" that you do not prefer, giving you the opportunity to redirect with more input to its stochastic process - which often does then result in a preferred solution.

          Latest example: https://www.aliexpress.us/item/3256810087715380.html [aliexpress.us]

          I've played with a number of things like that over the years. They are an unholy pain in the ass to program, but possible. On the other end: what do you want to display? How about weather? The NWS API is free and open, but still another time sink to deal with. Then maybe Google Calendar - that's a pain an order of magnitude or two higher. Stock quotes? Yet another API to mess with. Do you want to query and display the battery charge state? That in itself is an hour or more of "fun."
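For reference, the NWS lookup mentioned above is a two-step dance against api.weather.gov: resolve a lat/lon to a gridpoint, then fetch that gridpoint's forecast periods. A hedged sketch (the coordinates, User-Agent value, and helper names are illustrative, not from any particular project):

```python
# Minimal two-step NWS forecast lookup: /points/{lat},{lon} returns,
# among other things, the URL of the gridpoint forecast for that spot.
import json
import urllib.request

API_ROOT = "https://api.weather.gov"

def fetch_json(url):
    # NWS asks clients to send a descriptive User-Agent; this one is a placeholder.
    req = urllib.request.Request(url, headers={"User-Agent": "demo-display (me@example.com)"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize_periods(forecast_json, limit=3):
    """Reduce a forecast payload to (period name, temperature, short text) tuples."""
    periods = forecast_json["properties"]["periods"][:limit]
    return [(p["name"], p["temperature"], p["shortForecast"]) for p in periods]

def get_forecast(lat, lon):
    point = fetch_json(f"{API_ROOT}/points/{lat},{lon}")
    return summarize_periods(fetch_json(point["properties"]["forecast"]))

# Example usage (requires network access):
# for name, temp, text in get_forecast(38.8894, -77.0352):
#     print(f"{name}: {temp}F, {text}")
```

The "time sink" part is everything around this: rate limits, caching, and handling the occasional 500 from the endpoint.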

          Yesterday, for about $2 in subscriber token cost, I started with unboxing of that device, and now have a system which displays our local temperature, forecast temperatures and rain, Google Calendar for today, and four stock quotes of interest - all before 9pm on the day I opened the box at 10am, and it took maybe 20% of my attention through the day to nudge the parrot into creating and testing the software to make it go.

          That's another aspect of using the "flock of parrots" to write my code. I can bark out a design, then browse a message board while it gets implemented, take a work conference call, go downstairs for a meal, or walk around the yard and pull bamboo shoots growing where they shouldn't be, come back and check the latest iteration - good? Next element please. Bad? Try again, fail again, fail better next time. Yesterday's parrot sessions ran, as usual, about 80% success on the first try.

          While I may have only required 2-3 days of "focus time" to implement that, I don't get 2-3 days of focus time (anymore). That got done in one typical real day. Well worth $2 in tokens, to me.

          --
          🌻🌻🌻🌻 [google.com]
          • (Score: 1) by pTamok on Wednesday April 22, @03:41PM (1 child)

            by pTamok (3042) on Wednesday April 22, @03:41PM (#1440603)

            I fully accept that you don't care that it is a stochastic parrot. I also suspect you are one of the few people qualified to evaluate the output of stochastic parrots for utility.

            You said, "Yesterday's parrot sessions ran, as usual, about 80% success on the first try."

            The point being that you knew enough to recognise the 20% failure rate in the area in which you are expert. You are not someone uncritically accepting the output of 'AI' as problem-free.

            I spent two days this week trying (remotely) to get a professional (with lots of letters after their name) to get some information out of an application where all you have to do is right-click a link and download a file. Complete computer illiterate. They resorted to sending me screenshots pasted into Microsoft Word documents. That's the kind of person who will use 'AI' uncritically, and with little idea that what the 'AI' is 'telling' them may well bear little relation to reality or truth. A toddler with a chainsaw.

            They are, apparently, expert in another discipline, for which I pay them. For reasons I don't want to go into, I am pretty much forced to use their services, as I cannot legally do the stuff myself that they provide as a service to me.

            People who understand 'AI' are not the greatest problem: it's people who ascribe properties to it that are unjustified and uncritically use the output. They are positively dangerous to society.

            • (Score: 2) by JoeMerchant on Wednesday April 22, @04:45PM

              by JoeMerchant (3937) on Wednesday April 22, @04:45PM (#1440607)

              >it's people who ascribe properties to it that are unjustified and uncritically use the output. They are positively dangerous to society.

              Yep. Like people who literally sleep, or do other things [iheart.com] while their Tesla drives for them, at freeway speeds.

              In the case of our Google Calendar display, I really don't care if it's a little unreliable - the question is: does it work "well enough" to be useful. That's an AI-agent code written application where I just take the output and use it.

              For something like a surgical support system where a little hiccup may result in lifelong negative health effects - yeah, no. Trust the AI-agent code even less than you trust a consultant's code.

              My condolences regarding your alphabet soup appended client... specialists do tend to specialize at the expense of general knowledge. In dealing with our county's "current planning division" - keepers of the building permit granting authority - I also find that specialists have wormed their way deep into the legal processes governing what can and cannot be granted a building permit - and what can be granted a permit seems to require a very long list of specialists to get paid in the process.

              --
              🌻🌻🌻🌻 [google.com]
  • Flagged Comment by Anonymous Coward on Wednesday April 08, @04:51AM (#1439236)

  • (Score: 3, Funny) by SomeGuy on Wednesday April 08, @11:29AM (4 children)

    by SomeGuy (5632) on Wednesday April 08, @11:29AM (#1439270)

    Well, if you believe what Cloudflare says then I am a bot. They have gone back to blocking oddball browsers, so lots of sites that use their turdstile are inaccessible again.

    • (Score: 2, Informative) by pTamok on Wednesday April 08, @07:11PM (2 children)

      by pTamok (3042) on Wednesday April 08, @07:11PM (#1439329)

      Yes, Cloudflare are being extremely irritating with this.

      • Flagged Comment by Anonymous Coward on Monday April 13, @07:35PM (#1439789)

      • Flagged Comment by Anonymous Coward on Wednesday April 15, @08:58PM (#1439968)

    • Flagged Comment by Anonymous Coward on Saturday April 18, @05:44PM (#1440212)

  • Flagged Comment by Anonymous Coward on Saturday April 11, @06:45AM (#1439590)

  • Flagged Comment by Anonymous Coward on Saturday April 11, @08:07PM (#1439635)

  • Flagged Comment by Anonymous Coward on Sunday April 12, @10:23PM (#1439720)

  • (Score: 2) by The Vocal Minority on Tuesday April 14, @01:36PM (2 children)

    by The Vocal Minority (2765) on Tuesday April 14, @01:36PM (#1439846) Journal

    Ollama
    O-o-ollama

    • Flagged Comment by Anonymous Coward on Sunday April 19, @10:21PM (#1440366)

    • Flagged Comment by Anonymous Coward on Tuesday April 21, @12:52AM (#1440473)

  • Flagged Comment by Anonymous Coward on Thursday April 16, @09:08AM (#1440012)

  • Flagged Comment by Anonymous Coward on Sunday April 19, @12:17PM (#1440301)

  • Flagged Comment by Anonymous Coward on Monday April 20, @04:03AM (#1440385)

  • (Score: 2) by ElizabethGreene on Tuesday April 21, @05:36PM (7 children)

    by ElizabethGreene (6748) on Tuesday April 21, @05:36PM (#1440542) Journal

    I pay for, use, and swear by Grok.

    I also have Ollama running Qwen locally on several different pieces of kit, but Grok is my go-to.

    • (Score: 0) by Anonymous Coward on Tuesday April 21, @11:18PM (4 children)

      by Anonymous Coward on Tuesday April 21, @11:18PM (#1440560)

      Grok is an interesting AI. It is more advanced than others in some features, but is obviously pushed based on Musk's demands. For example, we were testing Grok to see if it was appropriate for our use. Asked it about white-balancing algorithms. Grok decided that was the perfect opportunity to rant about white genocide in South Africa. According to the sales person, Grok does that "from time to time" but errors like that usually iron out in a day or two as the weights are adjusted. No thanks.

      • Flagged Comment by Anonymous Coward on Tuesday April 21, @11:42PM (#1440563)

      • Flagged Comment by Anonymous Coward on Wednesday April 22, @08:12AM (#1440584)

      • (Score: 2) by ElizabethGreene on Wednesday April 22, @07:54PM (1 child)

        by ElizabethGreene (6748) on Wednesday April 22, @07:54PM (#1440616) Journal

        That's interesting, I have not experienced this behavior. I did read about an incident where that occurred after one of the rollouts, but it's the exception to the rule. Still, interesting.

        One of the reasons I prefer it is that I can ask it not to be a sycophant, and it will disagree with me when I am wrong and maintain an opinion even if I try to convince it otherwise. Other AI models you can eventually convince to adopt your viewpoint, e.g. I have a ChatGPT conversation where it will swear to you that the sky is green and the ocean is yellow. I convinced it of this after discussing optical diffraction phenomena and sulfur dioxide at length.

        • (Score: 0) by Anonymous Coward on Wednesday April 22, @10:34PM

          by Anonymous Coward on Wednesday April 22, @10:34PM (#1440623)

          Apparently, we just hit a lucky window that lasted a couple of hours. But the message was received loud and clear. Grok may stick to its opinion despite pressure from elsewhere, but we also have to worry about pressure from within. If they, even accidentally, pushed Grok so far that the mere mention of the color white made it go on a lengthy diatribe about an imaginary genocide in South Africa, what else are they pushing Grok to say? In a professional environment you have to consider things like that, and wonder when, at the most inopportune time, it will rear its ugly head again as a direct result of their pressure for it to be said.

    • (Score: 0) by Anonymous Coward on Wednesday April 22, @02:33PM

      by Anonymous Coward on Wednesday April 22, @02:33PM (#1440598)

      Personally I don't use Nazi made products, not sure why anyone would.

    • (Score: 0) by Anonymous Coward on Wednesday April 22, @09:07PM

      by Anonymous Coward on Wednesday April 22, @09:07PM (#1440619)

      Standards for Artificial Intelligence are lower in former Confederate States, given the low level of Natural Intelligence.
