
posted by hubie on Wednesday May 01, @02:14PM   Printer-friendly
from the all-good-things-must-end dept.

A great public resource is at risk of being destroyed:

The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is. In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.

But all of this is coming to an end. The advent of AI threatens to destroy the complex online ecosystem that allows writers, artists, and other creators to reach human audiences.

To understand why, you must understand publishing. Its core task is to connect writers to an audience. Publishers work as gatekeepers, filtering candidates and then amplifying the chosen ones. Hoping to be selected, writers shape their work in various ways. This article might be written very differently in an academic publication, for example, and publishing it here entailed pitching an editor, revising multiple drafts for style and focus, and so on.

The internet initially promised to change this process. Anyone could publish anything! But so much was published that finding anything useful grew challenging. It quickly became apparent that the deluge of media made many of the functions that traditional publishers supplied even more necessary.

[...] The arrival of generative-AI tools has introduced a voracious new consumer of writing. Large language models, or LLMs, are trained on massive troves of material—nearly the entire internet in some cases. They digest these data into an immeasurably complex network of probabilities, which enables them to synthesize seemingly new and intelligently created material; to write code, summarize documents, and answer direct questions in ways that can appear human.

These LLMs have begun to disrupt the traditional relationship between writer and reader. Type how to fix broken headlight into a search engine, and it returns a list of links to websites and videos that explain the process. Ask an LLM the same thing and it will just tell you how to do it. Some consumers may see this as an improvement: Why wade through the process of following multiple links to find the answer you seek, when an LLM will neatly summarize the various relevant answers to your query? Tech companies have proposed that these conversational, personalized answers are the future of information-seeking. But this supposed convenience will ultimately come at a huge cost for all of us web users.

[...] If we continue in this direction, the web—that extraordinary ecosystem of knowledge production—will cease to exist in any useful form. Just as there is an entire industry of scammy SEO-optimized websites trying to entice search engines to recommend them so you click on them, there will be a similar industry of AI-written, LLMO-optimized sites. And as audiences dwindle, those sites will drive good writing out of the market. This will ultimately degrade future LLMs too: They will not have the human-written training material they need to learn how to repair the headlights of the future.

Originally spotted on Schneier on Security.

Related: Responsible Technology Use in the AI Age


Original Submission

Related Stories

Responsible Technology Use in the AI Age 7 comments

AI presents distinct social and ethical challenges, but its sudden rise presents a singular opportunity for responsible adoption:

Technology use often goes wrong, Parsons notes, "because we're too focused on either our own ideas of what good looks like or on one particular audience as opposed to a broader audience." That may look like an app developer building only for an imagined customer who shares his geography, education, and affluence, or a product team that doesn't consider what damage a malicious actor could wreak in their ecosystem. "We think people are going to use my product the way I intend them to use my product, to solve the problem I intend for them to solve in the way I intend for them to solve it," says Parsons. "But that's not what happens when things get out in the real world."

AI, of course, poses some distinct social and ethical challenges. Some of the technology's unique challenges are inherent in the way that AI works: its statistical rather than deterministic nature, its identification and perpetuation of patterns from past data (thus reinforcing existing biases), and its lack of awareness about what it doesn't know (resulting in hallucinations). And some of its challenges stem from what AI's creators and users themselves don't know: the unexamined bodies of data underlying AI models, the limited explainability of AI outputs, and the technology's ability to deceive users into treating it as a reasoning human intelligence.

Parsons believes, however, that AI has not changed responsible tech so much as it has brought some of its problems into a new focus. Concepts of intellectual property, for example, date back hundreds of years, but the rise of large language models (LLMs) has posed new questions about what constitutes fair use when a machine can be trained to emulate a writer's voice or an artist's style. "It's not responsible tech if you're violating somebody's intellectual property, but thinking about that was a whole lot more straightforward before we had LLMs," she says.

The principles developed over many decades of responsible technology work still remain relevant during this transition. Transparency, privacy and security, thoughtful regulation, attention to societal and environmental impacts, and enabling wider participation via diversity and accessibility initiatives remain the keys to making technology work toward human good.

  • (Score: 5, Insightful) by Freeman on Wednesday May 01, @02:35PM (2 children)

    by Freeman (732) on Wednesday May 01, @02:35PM (#1355372) Journal

    Doom is assured when relying on the generosity of corporations. Corporations are just extensions of the most ambitious and unscrupulous humans. Maybe invest in some popcorn futures?

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 4, Touché) by RedGreen on Wednesday May 01, @02:42PM

      by RedGreen (888) on Wednesday May 01, @02:42PM (#1355373)

      "Doom is assured when relying on the generosity of corporations. Corporations are just extensions of the most ambitious and unscrupulous humans. Maybe invest in some popcorn futures?"

      As I came here to say, small shock, the parasite corporations are fucking it up like they always do. The author of that groundbreaking article must be some super smart AI to come to that rocket scientist conclusion.

      --
      "I modded down, down, down, and the flames went higher." -- Sven Olsen
    • (Score: 2) by acid andy on Wednesday May 01, @03:55PM

      by acid andy (1683) on Wednesday May 01, @03:55PM (#1355381) Homepage Journal

      popcorn futures

      That's awesome! Thank you. You made my day! :D

      --
      Consumerism is poison.
  • (Score: 5, Interesting) by ShovelOperator1 on Wednesday May 01, @03:56PM (18 children)

    by ShovelOperator1 (18058) on Wednesday May 01, @03:56PM (#1355382)

    Aside from my own opinion about "intellectual property" laws, how about just enforcing the law equally on all who violate it? That used to be called "equality before the law", wasn't it?
    When I take a Hollywood movie, rip it from Blu-ray, recompress it to something smaller using my own lossy algorithm (let's not even use MPEG-endorsed standards, but e.g. a modified, niche AV1 implementation developed independently), and then seed it on a torrent, I'm violating "intellectual property" law. I'm not even selling it, and yet I'm guilty of all the evil.
    When corporations do exactly the same thing with WWW content, they can even sell it, and nobody sues them; everything they do is legal.
    So what are we talking about?

    And the Web stopped being a knowledge production system when the first walled gardens overtook the personal webpages, when deplatforming became common practice, when pages became applications, and when search engines stopped offering results and started serving ads, not because of SEO engineering, but because there is no knowledge left to seek. We definitely should not look at the modern Web with definitions from 20 years ago; it is not a publishing space anymore, it is a corporate advertisement platform.
    And don't even try to play the nostalgia card to make users think the situation is different and then try to sell them products; that does not work anymore.

    • (Score: 5, Insightful) by vux984 on Wednesday May 01, @04:09PM (9 children)

      by vux984 (5045) on Wednesday May 01, @04:09PM (#1355384)

      And you unironically wrote this on a site with no walls, no ads, and plenty of interesting human written commentary... including your own.

      The web is dead. Long live the web!

      All that said I don't really disagree with you, I desperately miss the old forum models -- where questions and answers and discussions were had, and could be easily searched. Treasure troves of good information. Now a lot of that is on discord and other chat platforms and the knowledge produced seems much more ephemeral. That is a great loss.

      • (Score: 3, Insightful) by Tork on Wednesday May 01, @05:56PM (1 child)

        by Tork (3914) Subscriber Badge on Wednesday May 01, @05:56PM (#1355411)

        The web is dead. Long live the web!

        In all fairness, American ISPs said from day one: "Yeah you ain't runnin' a server from your home cos $$$." I wish we had taken up that battle when we started getting symmetrical fiber connections.

        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
        • (Score: 3, Interesting) by drussell on Wednesday May 01, @06:16PM

          by drussell (2678) on Wednesday May 01, @06:16PM (#1355415) Journal

          Here, an un-metered "commercial" connection with an included static IP and no inbound port blocking is currently actually cheaper than the "residential" consumer-grade connections. Heh... Heh... Rubes!

          I suppose I should just count myself lucky and not point out the absurdity of the situation, lest it be lost!

      • (Score: 2, Interesting) by pTamok on Wednesday May 01, @07:26PM (6 children)

        by pTamok (3042) on Wednesday May 01, @07:26PM (#1355432)

        I wonder if this site actively takes measures to prevent scraping by bots, including AI feeders.

        I can envisage an internet of SN-like sites, which prevent bot-scraping, and have internal search engines. It could even be a Fediverse feature - send a message to search only fediverse-linked sites by their internal search mechanisms, with safeguards to prevent bot usage and over-usage of any kind (technically non-trivial to do fairly). Conditions of linking to include no advertising on the sites and no usage by non-approved bots.

        • (Score: 0) by Anonymous Coward on Wednesday May 01, @09:01PM

          by Anonymous Coward on Wednesday May 01, @09:01PM (#1355445)

          How do we tell the difference between a genuinely interested human and an AI bot, acting as a soylentil, but here for the purpose of ascertaining our mental correlations to various political memes...

          Like all statistics of anyone posting their take on vax? Gun control? Abortion? Copyright? Mideast? Political sway and intensity? Etc.

          The Gestapo of the Internet. All automated.

          https://www.criminallegalnews.org/news/2022/sep/15/government-snitches-rake-millions-their-testimony-leading-cause-wrongful-convictions/ [criminallegalnews.org]

        • (Score: 2, Insightful) by khallow on Wednesday May 01, @10:31PM (4 children)

          by khallow (3766) Subscriber Badge on Wednesday May 01, @10:31PM (#1355461) Journal

          I wonder if this site actively takes measures to prevent scraping by bots, including AI feeders.

          It isn't, because the site is visible to search engines.

          • (Score: 2) by Mykl on Thursday May 02, @12:49AM (1 child)

            by Mykl (1112) on Thursday May 02, @12:49AM (#1355485)

            That's really the challenge, isn't it? How do you create something analogous to robots.txt that allows some types of scraping (e.g. search engine referrals) but disallows others (LLMs)? How do you handle this when the search engine's results are partly driven through AI output managed by an LLM? You could go overboard with a whitelist approach, but that would favor Google's continued search dominance and prevent newer engines from being able to achieve any meaningful access to data at startup. A blacklist would be worthless, as IPs / URLs would change by the hour.

            • (Score: 1) by khallow on Thursday May 02, @01:32AM

              by khallow (3766) Subscriber Badge on Thursday May 02, @01:32AM (#1355493) Journal

              That's really the challenge, isn't it? How do you create something analogous to robots.txt that allows some types of scraping (e.g. search engine referrals) but disallows others (LLMs)?

              Plus an LLM approach might make for a better search engine.

          • (Score: 3, Informative) by pTamok on Thursday May 02, @06:34AM (1 child)

            by pTamok (3042) on Thursday May 02, @06:34AM (#1355509)

            You can set directives in robots.txt to allow self-described (by their declared User-Agents) search-engine-web-crawlers but not self-described ai-input-web-crawlers.
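
            For instance, a hypothetical robots.txt along these lines would leave a site open to ordinary crawlers while asking the AI-training crawlers to stay out (GPTBot, CCBot and Google-Extended are tokens that OpenAI, Common Crawl and Google have published for controlling AI-related crawling; any other scraper would need its own entry):

            User-Agent: GPTBot
            Disallow: /

            User-Agent: CCBot
            Disallow: /

            User-Agent: Google-Extended
            Disallow: /

            User-Agent: *
            Disallow:

            (An empty Disallow value means "allow everything", as the comment in SN's own robots.txt below notes.)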

            But being robots.txt, it can simply be ignored.

            You can also play Whac-a-Mole™ [wikipedia.org] with firewall rules to block requests from certain IP ranges, which tend to change frequently.

            The robots.txt for soylentnews.org is

            # robots.txt for Slash site
            # $Id$
            # "Any empty [Disallow] value, indicates that all URLs can be retrieved.
            # At least one Disallow field needs to be present in a record."

            User-Agent: *
            Crawl-delay: 30
            Disallow: /messages.pl
            Disallow: /metamod.pl
            Disallow: /search.pl
            Disallow: /users.pl
            Disallow: /pubkey.pl
            Disallow: /zoo.pl
            Disallow: /~
            Disallow: messages.pl
            Disallow: metamod.pl
            Disallow: search.pl
            Disallow: users.pl
            Disallow: pubkey.pl
            Disallow: zoo.pl
            Disallow: ~

            So, SN's robots.txt allows anyone to crawl the site, with a requested delay between page retrieval requests of 30 seconds, and a request not to look in certain places. The page-server and/or any firewalls could also be applying rules independently of the requests in robots.txt. On the basis of SN's robots.txt, it's pretty certain that its content is being, and has been, used to train LLMs.

            For comparison, look at some other robots.txts

            The Green Site [slashdot.org]
            Mastodon.social [mastodon.social] which helpfully states in the comments: "See http://www.robotstxt.org/robotstxt.html [robotstxt.org] for documentation on how to use the robots.txt file"
            Twitter/X [twitter.com]
            The BBC [bbc.com]
            The New York Times [nytimes.com]
            English Wikipedia [wikipedia.org]
            The Wayback Machine [archive.org]
            Google [google.com]
            OpenAI [openai.com]

            More background:
            nixCraft: How to block AI Crawler Bots using robots.txt file [cyberciti.biz]
            GitHub: AI robots.txt [github.com]

            The Verge: The text file that runs the Internet [theverge.com] - points out some of the problems

    • (Score: 5, Insightful) by Unixnut on Wednesday May 01, @04:54PM (3 children)

      by Unixnut (5779) on Wednesday May 01, @04:54PM (#1355394)

      And the Web stopped being a knowledge production system when the first walled gardens overtook the personal webpages, when deplatforming became common practice, when pages became applications, and when search engines stopped offering results and started serving ads, not because of SEO engineering, but because there is no knowledge left to seek. We definitely should not look at the modern Web with definitions from 20 years ago; it is not a publishing space anymore, it is a corporate advertisement platform.

      The nice thing about the web is that it is just a data distribution platform. Unless ISPs start blocking any website not in a walled garden at an IP level, nothing stops people from organising themselves to share information without the commercial walled gardens, e.g. web portals [wikipedia.org] and web rings [wikipedia.org] (back then the web rings alone consumed a huge amount of my time going down interesting rabbit holes of information). Coupled with open-source search engines, you should be able to get decent exposure on the internet.

      Of course down this route you will not be able to make as much money. However if "monetization" is your goal then stick to the walled gardens, but understand you will have to follow their rules or be cut off.

      Personally I think the "monetization" mentality is the reason the web is turning into the turd pile T.F.S. mentions, but if others want to have a go the commercial platforms for that are out there. In fact best to ring fence all those type of people into said walled gardens and leave the rest of the net free for sharing of information.

      • (Score: 1, Insightful) by Anonymous Coward on Thursday May 02, @09:19AM

        by Anonymous Coward on Thursday May 02, @09:19AM (#1355523)

        I logged in just to mod this up. WebRings++, stop whining.

      • (Score: 5, Interesting) by ShovelOperator1 on Thursday May 02, @11:36AM (1 child)

        by ShovelOperator1 (18058) on Thursday May 02, @11:36AM (#1355531)

        Monetization is not my goal, and from general observation, not only of the Web but also of e.g. academic science, I can see that when monetization comes in, it becomes the primary driving force, until the publishing actively sabotages itself to increase revenue. This leads to a significant loss of quality.
        Well, another example of "enshittification" came out by accident.
        However, the blocking of websites outside direct corporate control is happening now. It is done slowly, with economic incentives for choosing the large "personal information predators", e.g. by not limiting the bandwidth to those corporations' services, but this, plus monetization, plus the significantly increased difficulty of putting a site online, will successfully block the thing. Also worth mentioning is the removal of hyperlinks on many publishing platforms. Hyperlinks were developed as the skeleton of the Net: they let you cite sources, navigate between sites, and get more information. People literally write something and cannot cite the source! How does that square with all the futile talk about disinformation?
        Plus my story of "running away" with a personal website. Short version: I had it on my ISP's hosting. When they shut that down, I moved to another ISP's. Soon I had no ISPs left to run to, and everyone recommended casual hosting services, which quickly disappeared in favor of overpriced offers. I had a small affair with running a server in the house; however, in many EU countries you are behind a few NATs, so you're limited to the Tor network when running a server. Finally, I tried commercial hosting, and while lots of these companies are oriented toward large corporations, I found a few of them with small and really cheap offers. Well, except that I had to do a lot of bureaucracy: I had to officially prove that my static hobby website is not an Internet shop, that I'm not a company, and that I don't have a customer database, because the site is plain HTML and has no database whatsoever! Just because some law firms mass-notify law enforcement about sites which do not use their "customer data protection" services.
        So actions to block, or at least scare off, potential authors are taking place.

        • (Score: 2) by acid andy on Thursday May 02, @02:53PM

          by acid andy (1683) on Thursday May 02, @02:53PM (#1355555) Homepage Journal

          Also worth mentioning is the removal of hyperlinks on many publishing platforms. Hyperlinks were developed as the skeleton of the Net: they let you cite sources, navigate between sites, and get more information. People literally write something and cannot cite the source! How does that square with all the futile talk about disinformation?

          Yeah I find this highly irritating. My partner surfs subreddits where they regularly post clickbait videos and pictures and sometimes she shows them to me. Invariably my reaction is to ask for some kind of citation or at least a bit more context. Sometimes there will be a comment in the thread giving detail but even then that often doesn't have a link. These videos and pictures are basically worthless to me because I cannot tell whether someone set them up to try to go viral or if they are real. So it becomes an annoying waste of time. I can't fathom how people can enjoy that!

          What publishing platforms are removing hyperlinks?

          --
          Consumerism is poison.
    • (Score: 5, Interesting) by Thexalon on Wednesday May 01, @04:58PM (2 children)

      by Thexalon (636) on Wednesday May 01, @04:58PM (#1355396)

      Equal Justice Under Law is inscribed on the frieze of the US Supreme Court, and is great marketing, but has never been a reality in the United States. For most of its history, significant percentages of the citizenry were officially second-class citizens based on demographics, subject to numerous forms of legal discrimination. And rich and powerful people have always been able to use their money and political power to both set up the rules in their favor and shield themselves from anything resembling real accountability if they broke the rules even though the rules were rigged in their favor. And of course there's also plenty of illegal discrimination that is fully tolerated within this country as well, e.g. I had an about-to-be-ex-boss announce to his entire staff that he was not hiring a candidate for a position because of his race.

      For example, did you know that for approximately the last 20 years, megacorps have been basically no longer subject to US civil law when dealing with employees or customers? Basically, any business that gives you a big long take-it-or-leave-it contract includes in that contract sections that say that you can't sue them, either as an individual who was wronged or as part of a group of people who were wronged. You can complain to government agencies like the FTC or EEOC, who may or may not do anything about your complaint, or you can take your matter as an individual to binding arbitration where the megacorp gets to pick the arbitrator (and you only get one guess what the main factor they use to pick is). The Supreme Court has repeatedly affirmed and supported this state of affairs, even to the point of invalidating state laws that say it's illegal to do that.

      There's even a good argument that people under the age of 18 are still second-class citizens today: They are subject to laws that significantly limit where and when they can travel, unable to vote, have no say in matters like their own medical care, and are legally subject to taxation without representation. And if you're going to respond to that by saying "But their brains are still developing, they'd make stupid decisions if they were given the same power as adults", I'll just point out that there are no similar levels of restriction on idiotic adults who manage to elect other idiots to high office and do lots of other stupid-but-legal things that do a lot of real harm in the world. Or if you're going with "They're dumber than adults, so we have to protect them", I'll point out that the exact same form of argument was made to justify both slavery and the treatment of women as near-property.

      A lot of this stuff applies to other countries as well.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by bzipitidoo on Wednesday May 01, @07:49PM (1 child)

        by bzipitidoo (4388) on Wednesday May 01, @07:49PM (#1355436) Journal

        An idea I've had for some time now is to let minors vote. At first, don't use their votes to determine elections, just count them separately. It would let them at least participate. Get people used to voting at a younger age. And it would generate more news for the media. I think that could be done without violating any current laws.

        After a while, when people see that most children vote sensibly, folks might be more amenable to lowering the voting age.

        It's not all bad, being a minor. One advantage is far less harsh consequences for crime. Less risky for them to break bad laws. For instance, DVD Jon. Sweet 16? Phooey! Hacking 16, that's the ticket!

        • (Score: 3, Insightful) by Thexalon on Wednesday May 01, @09:52PM

          by Thexalon (636) on Wednesday May 01, @09:52PM (#1355456)

          One advantage is far less harsh consequences for crime.

          Unless the so-called "justice" system decides to scrap that and charge you as an adult, which they can do whenever the prosecutor and judge want to.

          --
          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 3, Interesting) by Mykl on Thursday May 02, @12:51AM

      by Mykl (1112) on Thursday May 02, @12:51AM (#1355488)

      To be honest, I haven't thought about the copyright side of things too deeply, but in my mind an LLM sucking in a bunch of copyrighted material is like a movie buff watching thousands of films before writing their own script. As long as the output of that LLM / movie buff doesn't contain any copyrighted material, aren't we all OK even if it's been "inspired" by what it/they have seen? To continue the analogy with the torrent example above, the end result of that torrent is an output of copyrighted material, which is why it falls foul of the law.

  • (Score: 3, Interesting) by Adam on Wednesday May 01, @04:13PM (7 children)

    by Adam (2168) on Wednesday May 01, @04:13PM (#1355386)

    Not yet anyway. Writing will continue to exist for a while and may even improve as, instead of trying to individually bill thousands or millions of readers, a news org or informational blog would only need to bill a few hundred AI scrapers that should be competing to have the most accurate, up-to-date information.

    • (Score: 3, Interesting) by DannyB on Wednesday May 01, @04:51PM (4 children)

      by DannyB (5839) Subscriber Badge on Wednesday May 01, @04:51PM (#1355393) Journal

      From TFA . . . it seems hopeless, I'll quote three things . . .

      Type how to fix broken headlight into a search engine, and it returns a list of links to websites and videos that explain the process. Ask an LLM the same thing and it will just tell you how to do it.

      If nobody had ever written about how to fix a broken headlight, then the LLM would have nothing online to learn from.

      So what happens when I type in how to fix a broken warp nacelle? Generative AI won't know because nobody will have written about it. Nobody will have written about it because anyone needing such a fix will have searched the encyclopedia galactica first, and finding no answer gave up and replicated a new one.

      In just a few decades, much of human knowledge has been collectively written up and made available to anyone with an internet connection.

      But everyone will expect generative AI to write about new topics -- but AI will not have had any examples to learn from.

      The web has become so interwoven with everyday life that it is easy to forget what an extraordinary accomplishment and treasure it is.

      So humans created the treasure trove. AIs are trained from it. Yet any new topic will not get added to the treasure trove because everyone will expect AI to know about it, but it doesn't (or worse pretends to know) because it had no human written source material to learn from.

      So having considered that, I tend to agree with this.

      If we continue in this direction, the web—that extraordinary ecosystem of knowledge production—will cease to exist in any useful form.

      As you say, writing will continue to exist for a while. (I highly doubt it will improve based on current education.) As writing, songs, music, movie plots and more are written by AI which is regurgitated from what was once on the web, writing will deteriorate and disappear. Sorry to sound so doom and gloom. But in the novel 1984 poetry and music were written by machines.

      --
      Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
      • (Score: 4, Informative) by mrpg on Wednesday May 01, @06:07PM

        by mrpg (5708) <{mrpg} {at} {soylentnews.org}> on Wednesday May 01, @06:07PM (#1355414) Homepage

        Fixing a broken warp nacelle is a complex task, and it's important to remember that warp technology is purely fictional within the realm of Star Trek. That said, here's a breakdown of what a Starfleet engineer might consider if this were a real-world issue:

        **1. Diagnosis**

        * **Nature of the problem:**
                * **Structural Damage:** Assess the extent of any physical damage (breaches, cracks, etc.)
                * **Dilithium Regulation:** Check if the dilithium crystal matrix is intact and functioning correctly. Dilithium is the key to regulating the matter-antimatter reaction that powers the warp core.
                * **Plasma Flow:** Analyze the plasma conduits for leaks or obstructions. These are the channels that carry the energized plasma from the warp core to the nacelles.
                * **Warp Field Coils:** Using specialized diagnostic tools, check the integrity of these coils, which are responsible for generating the warp field bubble itself.

        **2. Containment and Safety**

        * **Shut Down Warp Core:** Isolate the nacelle from the warp core as a first safety measure. A damaged nacelle could cause a catastrophic core breach.
        * **Vent Plasma:** If there are plasma leaks, ensure safe venting procedures into space to prevent internal damage to the ship.
        * **Force Fields:** If structural damage is significant, use localized force fields to contain debris and prevent further damage

        **3. Repair Procedures**

        * **Structural Repair:**
                * **Replicators:** Smaller components and hull breaches might be repaired using replicators if these systems are still online.
                * **Welding and Patching:** Manually weld and patch larger structural damage using specialized tools and materials suitable for the extreme environment of space.
        * **Dilithium Matrix:**
                * **Replacement:** If the dilithium crystal is damaged, a full replacement might be necessary. This would likely involve a return to a starbase or shipyard.
                * **Realignment:** If the problem lies within the dilithium regulation system, engineers may be able to realign and recalibrate the crystal.
        * **Plasma Conduit Repair**
                * **Seal Leaks:** Employ specialized sealants or replace damaged conduit sections.
        * **Warp Coil Repair:**
                * **Minor Repairs:** If only minor segments of the warp coils are damaged, they might be repaired or replaced in the field.
                * **Major Damage:** Severe coil damage likely means replacing the entire warp nacelle, which would again require specialized starbase facilities.

        **4. Testing and Re-integration**

        * **Low Power Diagnostics:** Before full re-integration with the warp core, run the repaired nacelle at low power to test all systems.
        * **Controlled Warp Field Tests:** Conduct carefully monitored short warp jumps to ensure the nacelle's stability and functionality.

        **Important Considerations**

        * **Resource Availability:** Repairs in deep space are contingent on having the necessary replacement parts, tools, and expertise on hand.
        * **Environmental Hazards:** Micrometeoroids, radiation, and other space phenomena could pose significant risks during the repair process.
        * **Time Factor:** Complex repairs could take days or weeks, leaving the starship vulnerable if in hostile territory.

        **Let me know if you'd like a more in-depth technical analysis of a specific type of warp nacelle damage, or if you'd like to explore this from a Star Trek role-playing perspective!**

      • (Score: 1, Funny) by Anonymous Coward on Wednesday May 01, @06:57PM (1 child)

        by Anonymous Coward on Wednesday May 01, @06:57PM (#1355422)

        > But everyone will expect generative AI to write about new topics -- but AI will not have had any examples to learn from.

                  But everyone will expect generative AI to write about new topics -- and AI will "confidently" spit out whatever answer falls through the neural net. GIGO.

        ftfy...
        .

        > how to fix a broken warp nacelle?
        Just tried this phrase in gemini.google.com and got a similar answer to the one mrpg posted:

        Repairing a warp nacelle is a complex task beyond typical at-home repair. They are fictional technology on starships, and real-world equivalents wouldn't be fixable by the average person.

        However, if you're interested in the fictional repair process, here are some ideas:

                Technical Manuals: In sci-fi universes, starships often have detailed technical manuals. Consult those for nacelle schematics and repair procedures specific to your fictional universe.
                Engineering Crew: Starship crews typically have skilled engineers who specialize in starship repairs. In fictional scenarios, these engineers would likely handle nacelle repairs.
                Replacement Parts: Spaceships might carry spare parts for nacelles, allowing for replacements in case of critical damage.

        If you'd like to explore nacelle repair further, specifying the fictional universe might provide more specific details on the repair process.

        It's a poor answer, for just one example: Why carry replacement parts when you can carry a 3D printer (or even better, a "replicator")?

      • (Score: 4, Informative) by mhajicek on Wednesday May 01, @07:00PM

        by mhajicek (51) on Wednesday May 01, @07:00PM (#1355424)

        How to fix a broken warp nacelle
        Fixing a broken warp nacelle requires attention to detail and a few simple steps. Here’s a step-by-step guide to help you repair a broken warp nacelle:

        Locate the Broken Warp End Find the broken warp end on your loom. This is the point where the warp thread has snapped or broken.

        Gather Your Tools You’ll need a T-pin, a threading hook, and a replacement warp thread that matches the original thread color and texture.

        Step 1: Remove the Broken Warp End Carefully remove any remaining bits of the broken warp end from the slot or hole in the rigid-heddle reed.

        Step 2: Thread the Replacement Warp Thread Using your threading hook, pull one end of the replacement warp thread through the slot or hole in the rigid-heddle reed. Make sure it’s securely in place.

        Step 3: Anchor the Replacement Warp Thread Use a T-pin to anchor the replacement warp thread to the woven cloth on your loom. This will keep the thread from coming loose while you work on the repair.

        Step 4: Tie the New Warp Thread Tie the new warp thread to the broken end of the warp, making sure it’s securely tied and won’t come loose while weaving.

        Step 5: Check Your Work Double-check your repair to ensure the new warp thread is securely in place and won’t come loose during weaving.

        By following these steps, you should be able to fix a broken warp nacelle and get back to weaving. Remember to be patient and take your time, as a broken warp end can be frustrating to fix, but with the right tools and techniques, you can get it done quickly and easily.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 3, Insightful) by JoeMerchant on Wednesday May 01, @04:56PM

      by JoeMerchant (3937) on Wednesday May 01, @04:56PM (#1355395)

      Well, I, for one, think that having the opportunity to write new and different things is far more exciting than regurgitating what is already well known, easily findable, easily verifiable (the system needs to work on that last one).

      I think it's a little curious that "the Web" took approximately 25 years of growth before it was at this critical mass where LLMs can be trained on it to approximately the level of a median University Bachelor's holder. Of course, they are a different beast, much better at some things, much worse at others, but if you're being honest about our Uni grads (ALL of them, not just your favorites), I think the "level" is about there for the better LLMs.

      When we go through school, we (mostly) learn the common curriculum, get "socialized," learn the expected answers to the typical questions. This also takes us about 25 years.

      When I went through school, conservative opinions about "knowing how to do arithmetic in your head" were still abundant. I don't think the median college graduate of 2024 will have been forced to "do arithmetic without a calculator" to get their degree.

      I think it will be interesting to see how "school" evolves when everyone knows that your term paper on Abraham Lincoln is readily available through a verbal request like "Hey Siri, e-mail me a seventh grade reading level 500 word paper about Abraham Lincoln and how he impacted the United States." Not to mention every other common topic you can imagine being covered in the first 17 years of typical education.

      It will be more interesting to see how the book burners of 80+ years ago attempt to control these "sources of all wisdom." I'll throw in an augmented version of my typical philosophical stance here: "Transparency with Diversity is Always The Answer." We should know up front what's influencing our Oracles of all wisdom, and we should have many different Oracles to consult. Like modern mathematics training, which teaches you to recognize when the calculator has given you a wrong answer, future education should have a heavy focus on learning how to use what the Oracles tell you, when you should listen, when you should get another Oracle's opinion, and when you should research closer to ground-truth sources, or go find some ground-truth for yourself.

      --
      🌻🌻 [google.com]
    • (Score: 2) by Reziac on Thursday May 02, @04:00AM

      by Reziac (2489) on Thursday May 02, @04:00AM (#1355502) Homepage

      A large chunk of the internet has been bot-generated for going on 20 years already. Most of the "informational" sites are that way, scraping each other and catering to search engines that find them and their ads ahead of actual knowledge-based sites. Now I'm seeing (er, trying to avoid) a lot of YT vids that are in the same camp (automatically-generated content, with a text-to-speech narrator... when for "1,600" it says, "One, six hundred" it sort of gives it away) with the apparent goal of having their own bots run up millions of views for the ad revenue.

      So the barn has burned down and that horse was canned for dog meat many years ago, nonetheless actual writers keep on doing what they do for the sites that have meaningful content.

      --
      And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 3, Interesting) by Rosco P. Coltrane on Wednesday May 01, @04:17PM (6 children)

    by Rosco P. Coltrane (4757) on Wednesday May 01, @04:17PM (#1355388)

    But all of this is coming to an end. The advent of AI threatens to destroy [insert something fundamental in your life here]

    Brace yourself, because AI will soon make a complete mess of society as a whole. You'll start noticing when most people around your neighborhood stop going to work every day - if you don't get the sack first - and the economy stops because nobody has money to buy anything anymore.

    "Tthe web as we know it" really is the least of your concern.

    • (Score: 5, Funny) by DannyB on Wednesday May 01, @05:02PM (3 children)

      by DannyB (5839) Subscriber Badge on Wednesday May 01, @05:02PM (#1355398) Journal

      The robots will create enough wealth to feed the unemployed.

      The robots will create enough unemployed to feed the robots.

      Thus humans and robots will enter a persistent sustainable cycle where humans and robots can coexist together in peace.

      Will we need a robot tax for a time during the transitional period? But we ain't want no new taxes on our billionaires!

      --
      Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
      • (Score: 4, Touché) by mhajicek on Wednesday May 01, @07:03PM (2 children)

        by mhajicek (51) on Wednesday May 01, @07:03PM (#1355425)

        Just be sure to be among those who own the robots.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 2) by turgid on Wednesday May 01, @08:13PM

          by turgid (4318) Subscriber Badge on Wednesday May 01, @08:13PM (#1355440) Journal

          Is the correct answer.

        • (Score: 2) by acid andy on Wednesday May 01, @08:48PM

          by acid andy (1683) on Wednesday May 01, @08:48PM (#1355443) Homepage Journal

          Has anyone got an estimate of the sort of hardware requirements (FLOPs and storage for instance) for installing and running one of these present day LLMs locally offline, or an equivalent scale big data AI, assuming of course that the software and data were one day made available to do that?

          I wouldn't want to do it unless it became completely FOSS and even then I still find it somewhat creepy, but I'm just curious from a sort of software freedom / democracy point of view.

          --
          Consumerism is poison.
    • (Score: 5, Insightful) by JoeMerchant on Wednesday May 01, @05:56PM (1 child)

      by JoeMerchant (3937) on Wednesday May 01, @05:56PM (#1355410)

      >and the economy stops because nobody has money to buy anything anymore

      There's a funny concept. Yes, in the 1930s the global economy stopped because nobody had any money to buy anything. In 1910 some powerful men met not far from here [federalreservehistory.org] to discuss the phenomenon - but being a secret meeting, we have split opinions today on whether they failed in an attempt to avoid the 1930s crash, or caused it on purpose.

      ))) khallow bait warning ((( - most people "work to earn money," but far from all people. In contrast, most money that people get is not earned through anything resembling work. This arrangement has been built up over the centuries through creation of laws, social customs, expectations, etc. but it has changed, rather drastically, through those centuries, and it can change again.

      Millions of people don't have to stand around Welcoming you to WalMart! or asking "you want fries with that?" in order for the economy to function. I'm not saying that a future where most people work is a great one, but a future where most people work by choice instead of out of fear of starvation and homelessness would be a better one than we have today. Getting from here to there isn't going to be immediate, or painless, but AI promises to dramatically reduce the need for cube-farm dwellers the same way the internal combustion engine reduced the need for people "following a mule's butt around all day long" as my grandfather used to say about his family farm.

      --
      🌻🌻 [google.com]
      • (Score: 2) by JoeMerchant on Wednesday May 01, @09:18PM

        by JoeMerchant (3937) on Wednesday May 01, @09:18PM (#1355450)

        Edit above + a word:

        I'm not saying that a future where most people don't work is a great one

        --
        🌻🌻 [google.com]
  • (Score: 2) by AnonTechie on Wednesday May 01, @04:37PM (5 children)

    by AnonTechie (2275) on Wednesday May 01, @04:37PM (#1355391) Journal

    A contrarian view is discussed in this article.

    The past year has taught us a lot about the power of artificial intelligence and its impact on creativity. From the adoption of Canva’s Magic Studio suite of AI-powered tools, to insightful findings in our Marketing & AI Report and CIO Report, we have seen firsthand that AI has the potential to unlock ideas, streamline workflows, and save time across all roles and professions. Generative AI offers a shortcut to elevate creativity and free up valuable time to allow for more strategic thinking. Anyone can take advantage of this, whether they consider themselves “creative” or not.

    https://www.fastcompany.com/91111492/how-generative-ai-unlocks-creativity-in-all-professions [fastcompany.com]

    --
    Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
    • (Score: 5, Interesting) by DannyB on Wednesday May 01, @05:25PM (4 children)

      by DannyB (5839) Subscriber Badge on Wednesday May 01, @05:25PM (#1355405) Journal

      I remain skeptical. I see the claims that AI can improve creativity. But I don't seem to see evidence of it.

      I do see that AI can do a lot of the busywork. For example, I can get AI to write a poem about an obscure topic. Or write a letter that only needs light editing to be useful.

      But at the bottom, what these language models do is predict the next most likely words (or pixels, or sounds), like a super-duper advanced autocomplete. I don't see creativity.
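
      To make that concrete, here's a toy sketch (in Python) of that "advanced autocomplete" loop: score every word in a vocabulary, turn the scores into probabilities, and append the most likely word, over and over. The tiny vocabulary and the random scoring function are stand-ins made up for illustration; a real LLM scores a vocabulary of tens of thousands of tokens using billions of learned parameters.

      import math
      import random

      vocab = ["how", "to", "fix", "a", "broken", "headlight", "bulb", "connector"]

      def score_next_token(context):
          # Toy stand-in for a trained network: returns one score (logit) per vocabulary entry.
          rng = random.Random(" ".join(context))  # seeded on the context so the toy is repeatable
          return [rng.gauss(0, 1) for _ in vocab]

      def generate(prompt, steps=4):
          tokens = prompt.split()
          for _ in range(steps):
              logits = score_next_token(tokens)
              total = sum(math.exp(z) for z in logits)
              probs = [math.exp(z) / total for z in logits]  # softmax: scores -> probabilities
              best = max(range(len(vocab)), key=lambda i: probs[i])
              tokens.append(vocab[best])  # greedy pick: the single most likely next word
          return " ".join(tokens)

      print(generate("how to fix a broken"))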

      It is easy to be amazed when AI generates things it learned from other humans -- that you haven't seen before. For example if an AI writes me a poem about Netflix in the style of Jack Frost, that seems amazing. But the style and facts about the subject were learned.

      AI can write me a program to generate prime numbers, in quite a few different programming languages. Ah, but there are lots of examples of that already out on the intarweb tubes. AI seems unlikely to be able to help design, build and maintain a large software system. Especially when there are not numerous examples of this specific problem being solved in the open on the web. Even with open source examples, I doubt AI could write, for example, a compiler. But it can be an amazing autocomplete in programming IDEs used by human developers.

      I found that after a while the natural language answers to questions do begin to seem mechanical and repetitive in some way I can't exactly describe. One example: It is very common to see numbered point by point answers with very elementary information.

      --
      Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
      • (Score: 3, Insightful) by Dr Spin on Wednesday May 01, @08:14PM

        by Dr Spin (5239) on Wednesday May 01, @08:14PM (#1355441)

        I would not doubt for a minute that a good AI could write a compiler.

        It is exceedingly unlikely that the resultant compiler could be effectively debugged by man or beast.

        --
        Warning: Opening your mouth may invalidate your brain!
      • (Score: 3, Interesting) by JoeMerchant on Wednesday May 01, @10:08PM

        by JoeMerchant (3937) on Wednesday May 01, @10:08PM (#1355458)

        I wouldn't claim that AI improves creativity.

        I would claim that AI is capable of "filling in the blanks" for creative people in ways that have traditionally been done by worker-bee staff.

        First example that springs to mind is Disney animation. Some (at least marginally) creative types took old fairy tales and re-formatted them to be appealing to contemporary audiences. Some other creative types designed the characters and perhaps even drew some key frames, then a sweat shop filled with aspiring animators did the bulk of the 'tweening work: drawing 24 frames per second of film showing the motion from one key frame to the next. AI should be quite good at 2D animation "tweening," and similar tasks.

        Think of the credit roll of a recent movie - how many thousands of names are put up on screen for "special effects CGI"? AI should be very effective at slashing those lists by a factor of 10, or even more.

        So, if you're a creative person, but lack the means to hire hundreds or thousands of people to execute your creative vision, but AI can do it for you at a tiny fraction of the cost, is AI "improving your creativity" by giving you more time to be creative and less need to do the drudge work?

        >I doubt AI could write, for example, a compiler.

        I'd be willing to bet that AI, with less prompting than we had all semester long, is capable of writing the compiler we had to do as a Uni class exercise in the 1980s. Could it write a "better Rust" given certain nebulous criteria for ways to improve Rust that have never been implemented before? Probably not.

        >it can be an amazing autocomplete in programming IDEs used by human developers.

        The ones I have used so far need further development. They give some amazingly good looking suggestions that don't even compile, much less work as intended.

        >It is very common to see numbered point by point answers with very elementary information.

        1. people do this

        2. AI emulates what people do

        3. any questions?

        --
        🌻🌻 [google.com]
      • (Score: 2) by c0lo on Thursday May 02, @11:45PM (1 child)

        by c0lo (156) Subscriber Badge on Thursday May 02, @11:45PM (#1355637) Journal

        I see the claims that AI can improve creativity. But I don't seem to see evidence of it.

        Yeah, any creator that used the AI as a tool during creation would contact DannyB to let him know. Or just make it obviously public.
        Since neither happened, it's obvious that AI doesn't help the creation process.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2) by DannyB on Monday May 06, @05:52PM

          by DannyB (5839) Subscriber Badge on Monday May 06, @05:52PM (#1355976) Journal

          It occurs to me that we're reading that differently.

          One way to read it is that AI is not creative and therefore does not just feed you something creative or insightful that you can use.

          Another way to read it is that AI cannot be a useful tool for a person who is already creative.

          The first meaning is what I disagree with. But now I see the second possible meaning of a creative person using AI as an assistant. Like the hypothetical person I mentioned using AI to write a letter that needs light editing.

          --
          Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
  • (Score: 5, Touché) by Thexalon on Wednesday May 01, @05:02PM (4 children)

    by Thexalon (636) on Wednesday May 01, @05:02PM (#1355399)

    They're being marketed as founts of knowledge. What they actually are are automated bullshit generators. That they're good enough at generating bullshit to convince a lot of people that they're founts of knowledge says more about people than it does about technology.

    And until they have vastly more rigorous methods than what they're using to achieve semantic understanding and levels of proof that they are in fact correct in what they are saying, I will refuse to call them A.I.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 3, Insightful) by vux984 on Wednesday May 01, @06:36PM

      by vux984 (5045) on Wednesday May 01, @06:36PM (#1355417)

      Oh i dunno... Artificial Intelligence will just come to mean Fake Intelligence which it currently very much is, not Intelligent Artifacts (machines) which it currently very much is NOT.

      That's just what AI is now. Same as hackers are criminals now. Words mean what we use them for. If we use AI to mean things that generate bullshit... then so be it.

    • (Score: 4, Informative) by JoeMerchant on Wednesday May 01, @07:40PM (2 children)

      by JoeMerchant (3937) on Wednesday May 01, @07:40PM (#1355433)

      MBAs of the world beware:

      Re: AI:

      What they actually are are automated bullshit generators.

      MBAs have more competition now.

      --
      🌻🌻 [google.com]
      • (Score: 2) by Thexalon on Wednesday May 01, @10:25PM (1 child)

        by Thexalon (636) on Wednesday May 01, @10:25PM (#1355460)

        Unfortunately, MBAs are the people deciding what to do with this stuff, so they won't replace themselves with a shell script.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 2) by JoeMerchant on Wednesday May 01, @10:36PM

          by JoeMerchant (3937) on Wednesday May 01, @10:36PM (#1355462)

          There are some companies that don't have a crust of diamonds on top with unwashed masses on the base and an MBA sandwich in the middle. Granted, most people do work in those MBA sandwich megacorps, but smaller companies occasionally structure themselves differently.

          --
          🌻🌻 [google.com]
  • (Score: 3, Insightful) by EJ on Wednesday May 01, @07:13PM (9 children)

    by EJ (2452) on Wednesday May 01, @07:13PM (#1355429)

    What bothers me is the energy cost comparison between LLM content and static web pages.

    To pull up a static web page with instructions on how to do something should be much more efficient than an LLM generating the same information.

    • (Score: 2) by JoeMerchant on Wednesday May 01, @07:42PM (2 children)

      by JoeMerchant (3937) on Wednesday May 01, @07:42PM (#1355434)

      >To pull up a static web page with instructions on how to do something should be much more efficient than an LLM generating the same information.

      Yes, but the energy required to pull up the _right_ web page when you don't know exactly the address, or content, of what you are looking for is considerably higher.

      --
      🌻🌻 [google.com]
      • (Score: 2) by acid andy on Wednesday May 01, @09:19PM (1 child)

        by acid andy (1683) on Wednesday May 01, @09:19PM (#1355451) Homepage Journal

        Is that the right one, according to:

        1. what is really in the best interests of the user,
        2. what the user thought they wanted,
        3. what the user's query technically asked for,
        4. what the advertisers want the user to see,
        5. what the search engine owners want the user to see,
        6. what the government wants the user to see?

        Also note how number 3 is the one probably almost all Soylentils would want but is also the least likely one to be delivered these days.

        --
        Consumerism is poison.
        • (Score: 4, Interesting) by JoeMerchant on Wednesday May 01, @09:53PM

          by JoeMerchant (3937) on Wednesday May 01, @09:53PM (#1355457)

          Obviously, you put "what is really in the best interests of the user," first, but who defines what is in the user's best interests?

          In 1990 I took a trip across what was recently "the wall" dividing Germany. At the time, the government and travel authorities like the ADAC were telling a few lies "in the best interests" of all concerned. These lies included "the Youth Hostels in the former DDR will all be full to capacity in the summer", "it is impossible to cross the former border at smaller checkpoints like on B5, you must go to the Autobahns" and, most strangely, "as you travel, you will see signs advertising room for rent, or bed available." The first may have been true the year before, but I chose to ignore the advice and seek some ground truth for myself, and I found them to be absolutely false. Youth Hostels were all but empty, with typical capacities of a couple hundred occupied by me and maybe two or three other intrepid West Germans who ignored the advice and went anyway; even in Potsdam outside Berlin, just before the Roger Waters concert, there were beds available upon walk-up. The B5 border crossing was absolutely no problem. As for those "room for rent" signs, I saw one on my entire trip through dozens of towns. I managed to dig up a second place to stay because it was getting late and I asked around and the locals knew a man who rented a room... otherwise it was Youth Hostels at every other stop.

          The thinking was: they didn't want West Germans overfilling the Youth Hostels in the East, so they told them it was impossible to get a spot and apparently 99.999+% of West Germans believed what they were told, never even bothering to check for themselves.

          As for "what the user thought they wanted" - most of my job involves taking requests from people who think they know what they want and gently educating them in what they really want. It's tricky sometimes, and super-annoying when working with Academics who have been "sure this will work" for years but unable to try for themselves, so you try and then have to show how they were mistaken - not a great ego boost for people who live for ego boosts. Still, not a completely invalid search bias.

          "what the user's query technically asked for" should always be available as an option, it may be the least valuable to most users in the end analysis, but it should be one of the many "colors" of search available.

          As for the rest, you can make a very long, very cynical list of all the "powers that be" vying for control of public opinion through all kinds of means, including gatekeeping on the de facto new libraries of the world. One can cynically assume (based on history) that the vast majority of what gets presented will be influenced to a greater or extremely greater degree by these forces, but one might hope that in a world with more than 3 networks spewing the same 30 minutes of daily news, we might be able to develop some meta-content rating systems that actually work to direct people at least to the sources with the kinds of bias they prefer. In fact, our social media echo chambers are already doing this in their battles for user retention.

          --
          🌻🌻 [google.com]
    • (Score: 2) by Freeman on Wednesday May 01, @07:46PM (5 children)

      by Freeman (732) on Wednesday May 01, @07:46PM (#1355435) Journal

      That's assuming you know where to go to get the information. Maybe the first hit on Google/DuckDuckGo will get you where you need to go. Or perhaps the next hour will result in your browsing dozens of different sites just for you to give up, because whatever, you're not going to waste more time on that. Whereas an LLM might be able to give you a "good enough" answer in seconds. Sure, a static webpage takes fewer resources to run/host than an LLM. The beauty/horror/fairytale of LLMs is that you'll be able to enter a search, get an answer, and be good to go. Like having an expert on speed dial, so you can just get the answer. As opposed to actually understanding anything yourself.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 2) by Freeman on Wednesday May 01, @07:49PM

        by Freeman (732) on Wednesday May 01, @07:49PM (#1355437) Journal

        On that note, perhaps the biggest concern with LLMs is baking into our kids the inability to understand complex systems. "Whatever, I can just ask ChatGPT. Why do I need to know what happened in Hiroshima almost a hundred years ago?"

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Interesting) by JoeMerchant on Wednesday May 01, @08:01PM

        by JoeMerchant (3937) on Wednesday May 01, @08:01PM (#1355439)

        >Or perhaps the next hour will result in your browsing dozens of different sites just for you to give up, because whatever, you're not going to waste more time on that. Whereas an LLM might be able to give you a "good enough" answer in seconds.

        Never have I seen a Google AI result that ignored the first ten results returned from a Google search in order to bring you "the right" answer to your question. Much more often, it's a retelling of what it finds in those first ten results - frequently just the first result.
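
        A minimal sketch of that "retell the top results" pattern (search() and llm_complete() below are hypothetical stand-ins, not any real API):

          # Toy illustration of an AI overview that only restates the top hits.
          # search() and llm_complete() are hypothetical stand-ins, not real APIs.
          def ai_overview(query, search, llm_complete, top_n=10):
              hits = search(query)[:top_n]                       # reuse the ordinary ranking
              sources = "\n\n".join(h["snippet"] for h in hits)  # nothing beyond those hits
              prompt = ("Answer the question using only the sources below.\n\n"
                        f"Question: {query}\n\nSources:\n{sources}")
              return llm_complete(prompt)                        # the "answer" is a retelling

        Anything that isn't already in those first hits never makes it into the summary, which is the point above.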

        --
        🌻🌻 [google.com]
      • (Score: 4, Touché) by Dr Spin on Wednesday May 01, @08:16PM (2 children)

        by Dr Spin (5239) on Wednesday May 01, @08:16PM (#1355442)

        In the next five years, Google will become so enshittified that none of the first ten results will be easily extracted from the first 10k adverts.

        --
        Warning: Opening your mouth may invalidate your brain!
        • (Score: 5, Insightful) by fliptop on Thursday May 02, @12:53AM (1 child)

          by fliptop (1666) on Thursday May 02, @12:53AM (#1355489) Journal

          Google will become so enshittified that none of the first ten results will be easily extracted from the first 10k adverts

          I had a meeting today w/ a client who runs Google ads for his business. It's called dynamic ad placement or something like that. I can't go into too much detail, but what basically happens is his inventory is uploaded to a Google spreadsheet that's then used to tailor ad responses when a potential customer types a query that matches an inventory item. He pays only if an ad link is clicked. He ran it for two days as a feasibility study and received more than 2 dozen hits. When I asked him, "What if the potential customer is using Bing or DDG?" he replied that Google is more ubiquitous and it didn't matter. When I then asked, "What about people who automatically scroll past the ads to view results?" he responded that didn't matter either b/c the 2 dozen or so results he received in the 2-day trial were enough to convince him it was worth it. All the efforts by businesses now to run targeted advertising, whether via installing an app on your phone or through this dynamic ad placement, are an indication (to me) that most people have accepted this as the norm, and that people like me who disable everything on my phone, use DDG, and refuse to install anything are considered luddites and trifles.
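
          A toy illustration of that matching step (the feed columns, items, and matching rule here are my assumptions, not Google's actual mechanism):

            # Toy version of "tailor an ad when the query matches an inventory item."
            # The feed format, items, and matching rule are assumptions for illustration.
            inventory_feed = [
                {"item": "LED headlight assembly", "price": 89.99},
                {"item": "wiper blade set", "price": 24.50},
            ]

            def pick_ad(query, feed):
                """Return ad text for the first inventory row whose words appear in the query."""
                q = query.lower()
                for row in feed:
                    if any(word in q for word in row["item"].lower().split()):
                        return f"{row['item']} in stock - ${row['price']:.2f}"
                return None  # no matching item, no tailored ad

            print(pick_ad("replacement headlight near me", inventory_feed))
            # -> LED headlight assembly in stock - $89.99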

          Here's an example of how it may eventually hurt me in the long run. I buy my fuel at Circle K. I use a pre-paid gas card and get $0.10 off a gallon. Last year they ran a special where you could get another card that automatically gave you $0.10 off a gallon, and in all the states I tried it in (except Illinois and Texas, for some reason) you could use both cards and get $0.20 off a gallon. They stopped accepting the automatic $0.10-off card on December 31 and started a new campaign, "Join the Inner Circle," where basically you sign up with your cell phone # and email address and you can continue getting the additional $0.10 off a gallon. I tried signing up using my landline and a unique email address. Less than 30 seconds after I submitted the form, spam from a server IP in China started coming in. Also, part of the signup procedure included sending a text to my phone, which didn't work b/c I put in a landline. Is it worth it to me to save an additional $0.10 a gallon so they can spam me via both email and my phone? Nope. I told the manager about all this and he shrugged, saying, "You're probably one in a thousand or more and it's likely corporate won't care." Which raises the question: who's stupid, me for resisting this new advertising scheme, or the people who willingly hand over all their info?
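
          For scale, a rough worked figure (the fill-up size and frequency are assumptions; only the $0.10/gallon discount comes from the story above):

            # What an extra $0.10/gallon is actually worth over a year.
            # Gallons per fill-up and fill-ups per week are assumed for illustration.
            DISCOUNT_PER_GALLON = 0.10   # the extra "Inner Circle" discount
            gallons_per_fillup = 15      # assumption
            fillups_per_week = 1         # assumption

            yearly_savings = DISCOUNT_PER_GALLON * gallons_per_fillup * fillups_per_week * 52
            print(f"~${yearly_savings:.2f} per year")  # ~$78.00 under these assumptions

          Roughly $78 a year under those assumptions; whether that is worth handing over a phone number and inbox is exactly the trade-off in question.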

          --
          Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
          • (Score: 2) by Freeman on Thursday May 02, @01:09PM

            by Freeman (732) on Thursday May 02, @01:09PM (#1355539) Journal

            Personally, I think the benefit of not having to deal with any more spam than necessary and not voluntarily handing all of your info over to random X corporation is worth it. Ever heard the term "the squeaky wheel gets the grease"? In the event that literally no one is squeaking, it must all be lubed up enough. So, may as well push just a little bit further.

            --
            Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 3, Interesting) by boltronics on Thursday May 02, @03:13PM

    by boltronics (580) on Thursday May 02, @03:13PM (#1355556) Homepage Journal

    I get some of the concerns, but I'm not convinced that it will actually be an experience that most people prefer for most things. Especially non-technical things. There are some things like voice activated digital assistants, VR, motion controls, etc. that just can't beat what people are already used to — experiences that have been optimised over decades. Keyboards, mice, and yes, good old search engines.

    I could phrase a question "How do I repair a broken headlight on my Toyota Hilux 2020, where the cover has a large crack and the bulb requires a replacement" and hope AI has the answers. Or I could just type something like "toyota hilux 2020 headlight repair manual", which is far less typing, and get the manual instantly with everything I need (so I don't need to specify exactly what I'm looking for; it's all presented to me). I don't need to worry about AI telling me something stupid that might damage my vehicle or void my warranty or cause me to order the wrong part. I don't need to wait for text to slowly be printed to the screen word after word.

    Now, I do think AI has some very interesting use cases that we didn't have good solutions for previously. It's really great at helping you out when you are trying to search for something that you are aware of but don't know the name of. It's also great at giving you possible solutions (or troubleshooting strategies) when you can't figure out what the problem is. Perhaps you can't search for the answer because you don't know exactly what the problem is, and the symptoms might not be sufficient to pinpoint the cause or provide enough information about where you should be looking.

    On the flip side, sometimes you are in the situation where you don't know what it is that you want. You can browse a Reddit thread or something and get some really cool ideas for projects that you might never have found from AI, because they weren't even on your radar so you couldn't ask (or even describe) what you wanted.

    Humans are social creatures. AI might seem cool right now, but I think that people will feel that chatting to an AI bot is far less interesting than chatting in a forum thread, etc. People want connections, and I don't believe AI will be an acceptable substitute for most.

    I also don't think AI will always have the latest information. It may be close, but we probably can't expect it to be up to the minute on everything.

    There is also the issue that lots of people shop for items online and do other tasks that are not solely related to information gathering. You can't just spend all of your time talking to an AI bot; you're going to have to go to other websites in the end.

    I don't know... I think it will change the world in a lot of ways, but I'm not yet convinced that it's going to break the web to the degree that some people are expecting.

    --
    It's GNU/Linux dammit!
  • (Score: 2) by Lester on Friday May 03, @07:27PM

    by Lester (6231) on Friday May 03, @07:27PM (#1355798) Journal

    AI constructs from what it finds, but it can't do any original creation.
    Someone thinks of a shark at a tourist beach, Hollywood realizes it works, and you get a wave of predator movies: sharks, orcas, crocodiles, piranhas. An AI may write good scripts about monsters, but it could never have written the story of the shark.
    Someone wrote Dracula, a mixture of central European traditional superstitions and Polidori's seductive vampire. AI could never have written it. It can churn out plenty of vampire stories, but it could never have come up with the plot twist of "Interview with the Vampire."

    AI may do what we already do better, but it can't be original. If whatever AI cooks up is good enough, original work will look like a waste of time to corporations.
    Expect a time when we see remakes over and over again, with no original work or evolution.
