posted by mrpg on Saturday February 04 2023, @11:41PM   Printer-friendly
from the we-are-committed-to-help-us dept.

After being acquired by Red Ventures, staff say editorial firewalls have been repeatedly breached:

Last October, CNET's parent company, Red Ventures, held a cross-department meeting to discuss the AI writing software it had been building for months. The tool had been in testing internally ahead of public use on CNET, and Red Ventures' early results revealed several potential issues.

[...] Red Ventures executives laid out all of these issues at the meeting and then made a fateful decision: CNET began publishing AI-generated stories anyway.

"They were well aware of the fact that the AI plagiarized and hallucinated," a person who attended the meeting recalls. (Artificial intelligence tools have a tendency to insert false information into responses, which are sometimes called "hallucinations.") "One of the things they were focused on when they developed the program was reducing plagiarism. I suppose that didn't work out so well."

[...] Multiple former employees told The Verge of instances where CNET staff felt pressured to change stories and reviews due to Red Ventures' business dealings with advertisers. The forceful pivot toward Red Ventures' affiliate marketing-driven business model — which generates revenue when readers click links to sign up for credit cards or buy products — began clearly influencing editorial strategy, with former employees saying that revenue objectives have begun creeping into editorial conversations.

Reporters, including on-camera video hosts, have been asked to create sponsored content, making staff uncomfortable with the increasingly blurry lines between editorial and sales. One person told The Verge that they were made aware of Red Ventures' business relationship with a company whose product they were covering and that they felt pressured to change a review to be more favorable.


Original Submission

  • (Score: 4, Insightful) by stormreaver on Sunday February 05 2023, @12:34AM (6 children)

    by stormreaver (5101) on Sunday February 05 2023, @12:34AM (#1290314)

    The word "hallucinations" is being used in a context where it doesn't belong. Computer programs don't hallucinate; that is a biological defect. Computer programs malfunction, or behave in ways that make them unsuitable for their task. Windows, for example, has severe fuckups that are not malfunctions; in those cases, it is working as designed.

    Using the word "hallucinations" is an attempt to draw a positive emotional response to mislead people into believing there is some type of intelligence at work with these programs. "AI" has always been a huge misnomer, as there is no intelligence behind it. It is an impressive achievement in data processing, but that's all it is. There is no more intelligence in it than there is in pistons in an engine. It's a mathematical machine.

    • (Score: 4, Insightful) by helel on Sunday February 05 2023, @03:50AM

      by helel (2949) on Sunday February 05 2023, @03:50AM (#1290330)

      Why can't a computer hallucinate? If a human sees something that isn't there, we call it a hallucination. If a computer does the same thing? Sure, it's a malfunction, but it's often useful to be able to describe what kind of malfunction you're dealing with. After all, the fix for a "smart doorbell" that has no video at all is probably very different from the fix for one that thinks it sees a person approaching every night at the witching hour.

    • (Score: 4, Insightful) by janrinok on Sunday February 05 2023, @11:08AM

      by janrinok (52) Subscriber Badge on Sunday February 05 2023, @11:08AM (#1290350) Journal

      Using the word "hallucinations" is an attempt to draw a positive emotional response to mislead people into believing there is some type of intelligence at work with these programs.

      I agree that your claim is probably accurate.

      However, I am sure that using the word 'hallucinate' with regard to AI is also just the language developing: humans pick an understandable, if not exactly accurate, word for what is happening. I understood clearly what the writer was trying to say. The AI may not be malfunctioning at all. It might have been designed to find words or phrases from its vocabulary which it 'thinks' are suitable to insert, in which case it is functioning exactly as designed.
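
      As a rough illustration of that "functioning as designed" point: a model of this kind picks each next word by sampling from a probability distribution over its vocabulary, so a fluent but false continuation is just a high-probability sample, not a fault. A toy Python sketch (the prompt, candidate words, and probabilities are all invented for illustration):

        import random

        # Toy next-word model: the weights reflect how often each word
        # follows the prompt in training text (plausibility), not
        # whether the resulting claim is true.
        next_words = ["Sydney", "Canberra", "Melbourne"]
        weights = [0.5, 0.3, 0.2]

        def sample_next():
            # random.choices draws proportionally to the weights
            return random.choices(next_words, weights=weights)[0]

        # Every possible output is fluent; only "Canberra" is correct.
        # When the sampler lands on "Sydney", nothing has malfunctioned --
        # the model is doing exactly what it was built to do.
        print("The capital of Australia is", sample_next())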

    • (Score: 2) by acid andy on Sunday February 05 2023, @02:44PM (3 children)

      by acid andy (1683) on Sunday February 05 2023, @02:44PM (#1290361) Homepage Journal

      There is no more intelligence in it than there is in pistons in an engine. It's a mathematical machine.

      Given that AI these days usually involves neural networks, I'm not so sure your comments are entirely fair. They're certainly not as complex as biological brains, but the way they process and represent information is at least analogous. Don't you remember those weird pictures they've generated from the internal state of neural networks? Hallucination is a good word for those, surely.
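
      Those pictures come from a technique sometimes called feature visualization: an input is nudged, by gradient ascent, until some internal unit fires as strongly as possible, and you then look at what the input has become. A toy Python sketch of the idea, reduced to a single linear "neuron" with invented weights (real versions do this on images through deep networks):

        # One artificial "neuron" with fixed, invented weights.
        w = [0.8, -0.3, 0.5]

        # Start from a blank "image" and repeatedly nudge it in the
        # direction that increases the neuron's activation. For a linear
        # unit the gradient of dot(w, x) with respect to x is just w.
        x = [0.0, 0.0, 0.0]
        step = 0.1
        for _ in range(100):
            x = [xi + step * wi for xi, wi in zip(x, w)]

        # x is now the input this neuron "most wants to see" --
        # loosely speaking, its hallucination.
        print(x)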

      --
      If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
      • (Score: 2) by stormreaver on Sunday February 05 2023, @05:04PM (2 children)

        by stormreaver (5101) on Sunday February 05 2023, @05:04PM (#1290371)

        The name "neural network" is a misnomer. Neural networks do not replicate the functionality of biological neurons, except in a very, very, very superficial sense. They are still mathematical constructs that are many orders of magnitude less sophisticated and capable than biological neurons. They will never, ever, ever come anywhere even remotely close to biological neural capabilities. Once the low-hanging computational fruit has been exhausted in the market, and the customers have been adequately fleeced, "AI" will once again be swept under the rug and relegated to what it does best: iterative math.
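
        To make the "iterative math" point concrete: an artificial "neuron" is nothing but a weighted sum pushed through a simple squashing function, and a "network" is that arithmetic repeated layer after layer. A minimal Python sketch with invented weights and inputs:

          import math

          def neuron(inputs, weights, bias):
              # A weighted sum squashed through a sigmoid: the entirety
              # of what one artificial "neuron" computes.
              z = sum(x * w for x, w in zip(inputs, weights)) + bias
              return 1.0 / (1.0 + math.exp(-z))

          # A "layer" is this arithmetic applied in parallel; a "network"
          # is layers applied in sequence. Weights here are invented.
          hidden = [neuron([0.5, -1.2], [0.8, 0.3], 0.1),
                    neuron([0.5, -1.2], [-0.5, 0.9], -0.2)]
          output = neuron(hidden, [1.0, -1.0], 0.0)
          print(output)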

        There was a placard my mom bought me when I was a teenager, and there have been several takes on the theme: "to err is human; to really fuck things up requires a computer." Replace "computer" with "artificial intelligence", and the point remains as valid as ever. Even in those cases where "AI" is useful, it usually spits out something one degree removed from garbage. In every single case (and this will never change), a competent human must review the results. It will always remain in the realm of autocorrect, where the program presents a person with several possible choices, and the person must choose the most correct one.

        • (Score: 2) by acid andy on Sunday February 05 2023, @10:26PM (1 child)

          by acid andy (1683) on Sunday February 05 2023, @10:26PM (#1290402) Homepage Journal

          So at the minimal end of the intelligence spectrum you have a rock, and right beside it your piston engine example; much nearer the far end is the human brain; and somewhere in the middle we have present-day artificial neural networks.

          I don't think the current neural network architectures are going to get anywhere close to strong AI. I've said before that I believe this is because biological brains have evolved specialized regions that perform particular functions. A lot more R&D is needed to mimic something of that complexity.

          As you said, the artificial neurons are greatly simplified compared to their biological counterparts. I don't personally believe that in itself rules out a network of such neurons exhibiting legitimately intelligent behavior, though.

          --
          If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
          • (Score: 3, Interesting) by stormreaver on Monday February 06 2023, @09:28PM

            by stormreaver (5101) on Monday February 06 2023, @09:28PM (#1290521)

            I don't personally believe that in itself rules out a network of such neurons exhibiting legitimately intelligent behavior though.

            Given our current technological level, I'm of the opinion that a neural network covering roughly the entire planet's surface would be necessary to accurately imitate a lower life form such as a mouse. The energy requirements per minute would probably be in line with the yield of a multi-megaton nuclear explosion.

            Granted, I'm not being completely serious, but I'm also not being entirely facetious.
