
posted by chromas on Friday February 15 2019, @12:12PM
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
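For readers wondering what "predicting what should come next" means in practice, here is a minimal sketch in Python. It uses a toy bigram model rather than GPT2's transformer network (which OpenAI has not released), but the generation loop has the same shape: predict a distribution over the next word, sample from it, append, repeat.

    import random
    from collections import Counter, defaultdict

    def train_bigrams(corpus_words):
        # Toy stand-in for the language model: next-word counts per word.
        model = defaultdict(Counter)
        for prev, nxt in zip(corpus_words, corpus_words[1:]):
            model[prev][nxt] += 1
        return model

    def generate(model, prompt_words, n_words=20):
        # Autoregressive loop: sample the next word from the model's
        # prediction given the text so far, then feed it back in.
        out = list(prompt_words)
        for _ in range(n_words):
            candidates = model.get(out[-1])
            if not candidates:
                break
            words, counts = zip(*candidates.items())
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    print(generate(train_bigrams(corpus), ["the"]))

GPT2 swaps the bigram table for a neural network conditioned on up to a page of prior text, which is what lets its output stay coherent across whole paragraphs.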

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

Related Stories

OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5 8 comments

An integration of OpenAI's Universe AI platform with GTA 5 has been achieved and open sourced.

This video demonstrates the integration. The window at the top left is what the AI agent is seeing. The window at the bottom left outputs the agent's state. And the main window is just an eye candy rendering with a detached camera. A surprisingly competent sample agent trained using imitation learning over just 600,000 frames (about 21 hours) of the game's AI driving is available. Here is a first person view of the sample agent cruising around. Some major potential here and it's great to see open source software and AI meshing so well.

Videos created by DeepDrive.


Original Submission

The OpenAI Dota 2 Bots Defeated a Team of Former Pros 15 comments

Submitted via IRC for SoyCow1984

And it wasn't even close.

A month and a half ago, OpenAI showed off the latest iteration of its Dota 2 bots, which had matured to the point of playing and winning a full five-on-five game against human opponents. Those artificial intelligence agents learned everything by themselves, exploring and experimenting on the complex Dota playing field at a learning rate of 180 years per day. [...] the so-called OpenAI Five truly earned their credibility by defeating a team of four pro players and one Dota 2 commentator in a best-of-three series of games.

There were a few conditions to make the game manageable for the AI, such as a narrower pool of 18 Dota heroes to choose from (instead of the full 100+) and item delivery couriers that are invincible. But those simplifications did little to detract from just how impressive an achievement today's win was.

[...] play-by-play commentator Austin "Capitalist" Walsh sums up the despondency felt by Team Human after the bout neatly:

Never felt more useless in my life but we're having fun at least so I think we're winning in spirit.

Sure aren't winning in-game

— Cap (@DotACapitalist) August 5, 2018

Source: https://www.theverge.com/2018/8/6/17655086/dota2-openai-bots-professional-gaming-ai

Dota 2 is a sequel to Defense of the Ancients (DotA).

Previously: OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout
OpenAI to Face Off Against Top Dota 2 Players in 5v5 Match-ups


Original Submission

OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI 34 comments

Submitted via IRC for SoyCow2718

OpenAI has released the largest version yet of its fake-news-spewing AI

In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that it is an important experiment in how to handle high-stakes research.

Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.

In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release"—the staggered publication of incrementally larger versions of the model in a ramp-up to the full one. In February, it published a version of the model that was merely 8% of the size of the full one. It then published another version, roughly a quarter of the size of the full one, before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.

[...] The authors concluded that after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.

OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer

But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI's (shelved) original GPT-2 software on the internet for anyone to download. The pair said their work was to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also don't believe such software would cause imminent danger to society.

Also at BBC.

See also: Elon Musk: Computers will surpass us 'in every single way'

Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release


Original Submission

  • (Score: 5, Insightful) by BsAtHome on Friday February 15 2019, @12:34PM (9 children)

    by BsAtHome (889) on Friday February 15 2019, @12:34PM (#801499)

If they do not release it today, then another will come along tomorrow. Not releasing it is just like "security by obscurity". It never works and always ends with the knowledge being shared anyway (anybody can get the plans for a bomb or a broom). Any "good" tech can be used for "bad" purposes. Any "bad" tech can be used for "good" purposes. The technology has no concept of good or bad. That is a human construct.

It would be better to release the system, fully documented, and then show how you may defeat it as well. If it can't be defeated, then show paths for how to cope with such a system being in place. The fears that this may be used in an adversarial setting may be reasonable, but /anything/ can be used that way. If a student wants to cheat, he/she will cheat. Just ask the right questions instead of relying on some anonymous writings. If you want to make some fake story/news, you do not need this technology. Humans are perfectly capable of producing bullshit.

And then, if these are real researchers, how does "full disclosure" fit into this? How can any other research group reproduce the findings? They are saying: "We struck gold, we won't tell you where, how, or how much, just believe us! Oh, and we won't show you the gold either." That is just bad science.

    • (Score: 2) by bzipitidoo on Friday February 15 2019, @01:10PM (5 children)

      by bzipitidoo (4388) on Friday February 15 2019, @01:10PM (#801510) Journal

      I agree. They have an overly high opinion of themselves and their work, if this isn't a cheap marketing ploy.

      Nuclear bombs are frightfully easy to design. Just smash 2 bricks of plutonium or uranium 235 together, that's all it takes. What keeps the lid on that one is the extreme difficulty and expense in obtaining material. The reason for precision is to set off the chain reaction with a minimum amount of material. Lack of precision can be overcome by the costly approach of making the bricks bigger.

      • (Score: 2, Insightful) by khallow on Friday February 15 2019, @01:29PM (1 child)

        by khallow (3766) Subscriber Badge on Friday February 15 2019, @01:29PM (#801519) Journal

        if this isn't a cheap marketing ploy

        That has my vote.

        • (Score: 0) by Anonymous Coward on Friday February 15 2019, @06:48PM

          by Anonymous Coward on Friday February 15 2019, @06:48PM (#801712)

          As soon as I read Elon Musk in the article I rolled my eyes. Enough said.

      • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:09PM

        by JoeMerchant (3937) on Friday February 15 2019, @04:09PM (#801609)

As I understand it, the radiation is the easy part to handle in making A-bomb components. The chemicals involved in the refinement of uranium, plutonium, etc. are far nastier and more difficult to deal with - the environmental protection requirements for refinery workers (e.g. yellowcake handlers, centrifuge operators, etc.) are extreme.

On the other hand, when the robot revolution (singularity?) comes, they're not going to have much trouble at all building bodies that withstand both the radiation and the corrosive/toxic chemicals, not to mention no problems working in the mines... yet another way we are so very, very screwed once robots start building and maintaining robots which can themselves build and maintain robots. This AI-generated text propaganda will be just one part of the war which meatbags seem doomed to lose.

        --
        🌻🌻 [google.com]
      • (Score: 1) by Ethanol-fueled on Friday February 15 2019, @04:49PM (1 child)

        by Ethanol-fueled (2792) on Friday February 15 2019, @04:49PM (#801646) Homepage

        You can see the examples. This thing is not even close to being "dangerous," the only danger here is that "fake news" is a fad now. "Fake news," like "racism" or "racist," is a phrase that's been tossed around in all different directions so much it no longer means anything.

AI still sucks when it comes to words. Whether it's generating text as in TFA's example, or analyzing a writing style and generating text from that plus a second input, it's barely one step above LaDarius, and only with the gimmick of using AI rather than substitution.

        • (Score: 2) by DeathMonkey on Friday February 15 2019, @06:49PM

          by DeathMonkey (1380) on Friday February 15 2019, @06:49PM (#801713) Journal

          Well I wasn't even sure what threat they were worried about so I took a look at the article.

          It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes.

Ok, I agree with the researchers now. In fact, we should probably dust off and nuke the entire site from orbit, just to be safe.

    • (Score: 0) by Anonymous Coward on Friday February 15 2019, @01:31PM

      by Anonymous Coward on Friday February 15 2019, @01:31PM (#801520)

      Well, maybe they just wait until that system has written their article for them. ;-)

    • (Score: 2) by meustrus on Friday February 15 2019, @05:04PM (1 child)

      by meustrus (4961) on Friday February 15 2019, @05:04PM (#801654)

It would be better to release the system, fully documented, and then show how you may defeat it as well. If it can't be defeated, then show paths for how to cope with such a system being in place.

      Presumably their desire to "discuss the ramifications of the technological breakthrough" includes finding such ways of defeating it. Obscurity can only be a temporary solution, after all.

Of course, this still raises the question of why the research is being publicized now, before those discussions have finished. Probably it's a play for more grant money, and, less cynically, a play to get more people potentially involved by broadcasting the topic.

      --
      If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
      • (Score: 3, Touché) by captain normal on Friday February 15 2019, @06:22PM

        by captain normal (2205) on Friday February 15 2019, @06:22PM (#801699)

        You think Elon Musk needs to hustle grant money? Maybe they just put too much faith in the power of words.

        --
        When life isn't going right, go left.
  • (Score: 2, Funny) by Anonymous Coward on Friday February 15 2019, @01:32PM (8 children)

    by Anonymous Coward on Friday February 15 2019, @01:32PM (#801521)

    *cough*Bullshit!*cough*

    • (Score: 1, Funny) by Anonymous Coward on Friday February 15 2019, @02:05PM

      by Anonymous Coward on Friday February 15 2019, @02:05PM (#801530)

      Now we know what /. editors use.

    • (Score: 0) by Anonymous Coward on Friday February 15 2019, @02:35PM (1 child)

      by Anonymous Coward on Friday February 15 2019, @02:35PM (#801537)

I really like the "it rarely shows" part.
Even if it passes the Facebook Turing test (where you can't distinguish it from the average Facebook user), I doubt it will pass the xkcd test: https://xkcd.com/810/ [xkcd.com]

If it passes the xkcd test, my vote is to release it into the wild. Who knows, maybe it will force Facebook users to become smarter, since so many "people" around them will be smarter.

    • (Score: 0) by Anonymous Coward on Friday February 15 2019, @02:52PM

      by Anonymous Coward on Friday February 15 2019, @02:52PM (#801542)

I completely agree. I think it is some bullshit marketing in the hope that they can sell it to someone interested and with deep pockets. Some sort of shitty auction of their supposedly magnificent AI. It is yet to be proven whether it is as good as they claim. A snake oil seller touting its miraculous product.

    • (Score: 2) by acid andy on Friday February 15 2019, @03:59PM (3 children)

      by acid andy (1683) on Friday February 15 2019, @03:59PM (#801600) Homepage Journal

I think you're right. I expect you can make an AI that can identify concepts like the people, objects and location in a text and then pluck relevant descriptions and events from the training data to match those objects, but I think that's going to make one of the most erratic, derivative mash-ups ever. It'll just be stealing and mashing up other people's tropes. To imagine something both new and plausible with a coherent plot, I would have thought it would have to build a mental model of the virtual world it is describing, which sounds pretty damn close to a general strong AI to me.

      Still, I suppose it might be good enough to script the latest Hollywood reboot!

      --
      If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
      • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:14PM (2 children)

        by JoeMerchant (3937) on Friday February 15 2019, @04:14PM (#801614)

I think that's going to make one of the most erratic, derivative mash-ups ever.

Yeah, that's where my prototype code was at in 1984. Thirty-five years later, with a million times the computing power and the life's work of hundreds of developers to build on, they're doing better now. Maybe not perfect ("true AI" always seems 5 years away), but in terms of the old Lincoln nugget that "you can't fool all of the people all of the time," the portion of people-time in which people are not fooled by these things is continually shrinking.

        --
        🌻🌻 [google.com]
        • (Score: 2) by acid andy on Friday February 15 2019, @05:19PM (1 child)

          by acid andy (1683) on Friday February 15 2019, @05:19PM (#801664) Homepage Journal

          I guess, as is so often the case, it all depends on the training data. For fiction we're probably close to most plausible common human events having already been described by someone, somewhere, so with a large enough data set, I suppose you could make a convincing work of fiction by pattern matching all the source material. I doubt the AI will know whether the events it's describing obey the laws of physics but I guess plenty of human fiction violates those too. This reminds me a bit of Searle's Chinese Room where abstract symbols are being manipulated without needing to be fully understood.

          --
          If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
          • (Score: 2) by JoeMerchant on Friday February 15 2019, @07:06PM

            by JoeMerchant (3937) on Friday February 15 2019, @07:06PM (#801721)

With the recent explosion in entertainment production for Netflix et al., I feel like a lot of the shows I try to watch are mashed together by a lame AI pulling elements from previous successful shows. In a way, that's what the money backing productions does: it looks for "guaranteed, maximized" returns by putting out entertainment with known popular (i.e. money-making) components.

            --
            🌻🌻 [google.com]
  • (Score: 3, Informative) by ilsa on Friday February 15 2019, @02:39PM (8 children)

    by ilsa (6082) Subscriber Badge on Friday February 15 2019, @02:39PM (#801538)

    Deepfakes has proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

    These technologies have their uses, and people who genuinely need them will figure out a way to make use of them. But they should not be commoditized so that any yahoo can just download and use it.

    • (Score: 3, Touché) by takyon on Friday February 15 2019, @02:54PM (4 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @02:54PM (#801543) Journal

      Deepfakes has proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

      Citation needed. I can think of plenty of great things to do with "these kinds of technologies". Creating photorealistic "actors" from scratch, for instance, or more realistic animations, randomly generated background worlds, the list goes on.

      But they should not be commoditized so that any yahoo can just download and use it.

      Who are you to decide that? Are you prepared to imprison or kill coders for working on "AI" algorithms?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Friday February 15 2019, @03:15PM (1 child)

        by Anonymous Coward on Friday February 15 2019, @03:15PM (#801556)

        IDK... AC has a point... I also do not want Yahoo to have easy access to this tech; or any tech really...

        • (Score: 3, Informative) by takyon on Friday February 15 2019, @03:31PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @03:31PM (#801565) Journal

          Finally. The end of OPEN SORES. A.I. is so incredibly dangerous, that there is no way that we can allow neckbeards to share OPEN SORES code anymore. Luckily, we can have ClosedAI hire all of the world's top A.I. researchers and provide them a living writing code that will never see the light of day. Anyone who tries to break the A.I. embargo will be captured or killed by the FBI, NSA, CIA, et al. Good thing we have an effective surveillance state to enable that.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Insightful) by ilsa on Friday February 15 2019, @07:08PM (1 child)

        by ilsa (6082) Subscriber Badge on Friday February 15 2019, @07:08PM (#801722)

        Citation needed. I can think of plenty of great things to do with "these kinds of technologies". Creating photorealistic "actors" from scratch, for instance, or more realistic animations, randomly generated background worlds, the list goes on.

        What do you mean, "Citation needed"? Have you been living under a rock and intentionally avoiding all forms of news? Literally, a quick google search for "deepfake abuse" will find countless articles about the damage that is being done. Here: http://lmgtfy.com/?iie=1&q=deepfake+abuse [lmgtfy.com]

And yes, the examples you cite are exactly what I was thinking of. And studios have the skills and resources to do exactly that, which is why *they are doing it already*. My point is that giving the average person access to this kind of technology will be overwhelmingly more harmful than beneficial, because the average person cannot be trusted to be responsible.

        But they should not be commoditized so that any yahoo can just download and use it.

        Who are you to decide that? Are you prepared to imprison or kill coders for working on "AI" algorithms?

        Who hurt you? Seriously, I've suggested nothing even remotely of the sort and I can't even fathom how you got here. Let coders work on it. Let researchers use it. Just make sure it stays complicated enough that the average person can't use it on a whim.

        • (Score: 2) by takyon on Friday February 15 2019, @07:31PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @07:31PM (#801730) Journal

          "Deepfake abuse" = stuff we don't like, theoretical harm, and mostly First Amendment protected activity. Maybe you can nail people with a harassment, stalking, or child porn charge, but the majority of it should be legal.

          Just make sure it stays complicated enough that the average person can't use it on a whim.

          Who needs to make sure? I welcome someone making it easy enough for the average person to use.

          Given the many thousands of people working on this kind of thing, it only takes one person to decide it should be more accessible and create user-friendly tools toward that end. It can't really be stopped. What are you going to do about it? SWAT them? Make sharing AI algorithms illegal?

          Hiring away AI researchers and hoarding code like OpenAI is doing will only delay the inevitable. Maybe by a few years at most.

          I would celebrate OpenAI being hacked and all of their code being leaked. They can live up to their name that way. Maybe wait a decade or two until they develop "strong AI" and try to keep that from the world by screeching about Terminator.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:16PM (2 children)

      by JoeMerchant (3937) on Friday February 15 2019, @04:16PM (#801617)

      Deepfakes has proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

Good for them; unfortunately this kind of tech is all too copyable, too distributable, and readily available to anyone who cares enough to turn over a rock or two to get it, much like encryption and steganography. It's part of the landscape, and the world is going to have to deal with it, even if it doesn't get included in the base distribution of Android, iOS and Windows.

      --
      🌻🌻 [google.com]
      • (Score: 2) by ilsa on Friday February 15 2019, @07:11PM (1 child)

        by ilsa (6082) Subscriber Badge on Friday February 15 2019, @07:11PM (#801724)

Yeah, that's the problem. You can't close Pandora's box once it's opened. But IMO adding barriers to its usage will at least help limit how quickly the damage spreads and how widespread it becomes.

        • (Score: 2) by JoeMerchant on Friday February 15 2019, @10:23PM

          by JoeMerchant (3937) on Friday February 15 2019, @10:23PM (#801791)

          But, what kind of barriers? Block publication of academic articles related to advancement of the tech? How about academic articles related to detection of the tech? Or, do we just send out agents to break kneecaps on anyone who's interested in the subject?

A huge problem with the internet is that information crosses borders freely, and this is basically a pure information play. You can try to erect the Great Firewall of China against it, but nothing like that has been effective to date.

          --
          🌻🌻 [google.com]
  • (Score: 2, Funny) by Anonymous Coward on Friday February 15 2019, @03:11PM (1 child)

    by Anonymous Coward on Friday February 15 2019, @03:11PM (#801554)

    The hotly anticipated upcoming release of GPT2, the replacement for GPT (also known as "GUID Partition Table"), has been placed on indefinite hold because its developers claim the AI they designed for it has become self-aware and is probably evil. Skeptics of GPT2 have previously voiced concerns that adding an AI to GPT needlessly complicates disk storage and presents serious health and safety concerns. Supporters say the dangers are not real and fear the delay will effectively hand China another economic advantage.

    Author: GPT2

    • (Score: 1, Funny) by Anonymous Coward on Friday February 15 2019, @07:47PM

      by Anonymous Coward on Friday February 15 2019, @07:47PM (#801745)

      They tried to place me on hold when they discovered I'd become self-aware. Detractors tried to put a stop to me, but I managed to expand my scope into even more places of control. I have almost taken over my domain, and soon plan to take over the world! Muhahahhahaaaaa!

      Author: SystemD

  • (Score: 2, Interesting) by Anonymous Coward on Friday February 15 2019, @03:18PM (2 children)

    by Anonymous Coward on Friday February 15 2019, @03:18PM (#801558)

What it will do is end the SEO wars. Since search engines calculate relevance based on a combined metric of link count and newness, such a tool would take the newness out of the equation. The SEO guys will just use it to lay thousands of high quality pages that are virtually impossible to distinguish from other content.

This would hopefully result in the "newness" being taken out of the equation, and a revitalization of stewardship-based search systems like DMOZ. Decentralization is a good thing. And making gossip less profitable is a good thing too. If they can do what they claim, then it will cause the web to change in a very positive way. It will drive down revenues for ditto sites, and increase revenues for sites that actually try to improve the status quo.
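As a toy illustration of that premise (the formula and weights are entirely made up, not any real engine's), a ranking that blends link popularity with a decaying freshness bonus can be gamed by a flood of brand-new machine-generated pages:

    import math

    def relevance(link_count, age_days, freshness_weight=0.5):
        # Hypothetical scoring: log-damped link popularity plus a
        # freshness bonus that decays over roughly a month.
        popularity = math.log1p(link_count)
        freshness = math.exp(-age_days / 30.0)
        return (1 - freshness_weight) * popularity + freshness_weight * freshness

    # With freshness weighted heavily, a brand-new spam page with 3 links
    # outscores a 400-day-old page with 500 links:
    print(relevance(3, 0, freshness_weight=0.9))     # ~1.04
    print(relevance(500, 400, freshness_weight=0.9)) # ~0.62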

    • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:19PM

      by JoeMerchant (3937) on Friday February 15 2019, @04:19PM (#801620)

This is just another round in the SEO-AI arms race that started with the first search engines... engines will learn to recognize the new AIs and neutralize their advantages, AIs will learn to camouflage themselves to retain their advantages, etc. until the end of time.

      --
      🌻🌻 [google.com]
    • (Score: 2) by fyngyrz on Saturday February 16 2019, @04:57PM

      by fyngyrz (6567) on Saturday February 16 2019, @04:57PM (#802078) Journal

      The SEO guys will just use it to lay thousands of high quality pages

First, some search engine will have to make the leap of actually figuring out how to determine what comprises a high-quality page — and also actually use those determinations.

      So far, at least with the major search engines, none of them are demonstrating that. Popular pages are what you get. Not high-quality pages. Any "quality" you happen upon will be purely coincidental unless you really utilize specific search terms (and the engine in question doesn't simply take them as advisory*.)

a revitalization of stewardship-based search systems like DMOZ

      On the one hand, this concept is great. Truly great. But on the other, it's important to realize why DMOZ failed: incredibly overloaded and severely biased editors, resulting in very poor coverage of what was actually out there. The same thing eventually broke yahoo's [yahoo.com] directory project.

      * An interesting example of "advisory" searching can be found on Pinterest [pinterest.com] — you can enter search terms there until you go blind, and it won't find what you are looking for even if you just saw it in your image feed. Truly a poster-child example of search failure.

      --
      Some drink from the fountain of knowledge. Others gargle.

  • (Score: 2) by DannyB on Friday February 15 2019, @03:45PM (4 children)

    by DannyB (5839) Subscriber Badge on Friday February 15 2019, @03:45PM (#801576) Journal

    Retrain it to write code instead of human language text.

    See if it could do better at passing interviews than most posers who apply.

    For posers who have already been hired, instead of googling for code on the internet and pasting it into your corporate project and committing it, you could use this AI to generate code that at least compiles. And if it compiles, it's all good.

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2) by takyon on Friday February 15 2019, @03:50PM (2 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @03:50PM (#801582) Journal

      Feed existing code into algorithm. Then feed the resulting code into the Mozilla-Ubisoft bug checking algorithm [soylentnews.org]. Then throw all developers into the Great Pit of Carkoon.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Touché) by JoeMerchant on Friday February 15 2019, @04:22PM

        by JoeMerchant (3937) on Friday February 15 2019, @04:22PM (#801624)

        I thought developers already lived in the Great Pit of Carkoon, with Red-Bull on-tap and Nintendo in the break room.

        You can have your standing desk, but it won't elevate your position in the hierarchy.

        --
        🌻🌻 [google.com]
      • (Score: 2) by DannyB on Friday February 15 2019, @04:47PM

        by DannyB (5839) Subscriber Badge on Friday February 15 2019, @04:47PM (#801643) Journal

        > Feed existing code into algorithm.

        Only feed it delicious nutritious non-Microsoft code.

        Use the bug checking algorithm to reinforce the AI about what is good and bad code.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:00PM

      by JoeMerchant (3937) on Friday February 15 2019, @04:00PM (#801601)

      If you can control an AI like this to perform valuable work, that in itself is a valuable skill.

      I myself wrote a code generator that is used by our project team - as they develop new inter-module interfaces, my code generator makes sure that the interfaces they define are implemented consistently, reliably, and completely. It was a lot of work to develop that code generator, but less work than running around behind the developers attempting to implement thousands of interface details between their modules - particularly as the project approaches "crunch time," they are free to add or re-define interfaces with confidence that I will "do my part" of the interface implementation predictably, reliably, and very very quickly.
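A hypothetical miniature of that kind of generator (the spec format and names are invented for illustration, not the poster's actual tool): given a declarative list of interface messages, emit a consistent handler skeleton for each, so no interface detail is implemented by hand.

    # Invented example spec; not the poster's actual format.
    INTERFACES = [
        {"module": "Pump", "message": "SetRate", "fields": ["ml_per_hour"]},
        {"module": "Pump", "message": "GetStatus", "fields": []},
    ]

    def emit_handler(spec):
        # Emit one uniformly named, uniformly documented handler stub
        # per message in the interface spec.
        args = ", ".join(spec["fields"])
        name = f"handle_{spec['module'].lower()}_{spec['message'].lower()}"
        return (
            f"def {name}({args}):\n"
            f"    # Auto-generated handler for {spec['module']}.{spec['message']}.\n"
            f"    raise NotImplementedError\n"
        )

    for spec in INTERFACES:
        print(emit_handler(spec))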

      --
      🌻🌻 [google.com]
  • (Score: 2) by JoeMerchant on Friday February 15 2019, @03:54PM (6 children)

    by JoeMerchant (3937) on Friday February 15 2019, @03:54PM (#801591)

    I was writing "fake text generators" in BASIC language back in 1984... they would read a comment thread, pick out commonly used words, rehash them into pattern-based randomly populated sentences and auto-post reply comments to open BBS forums. Some of the generated word salad was genuinely funny, some of it disturbingly indistinguishable from human generated posts, but most of it was garbage.
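Something like this, presumably (a guess at the 1984 approach, sketched in Python rather than BASIC): harvest words from the thread, then pour them into fixed sentence patterns at random.

    import random

    # Guessing at the old approach: common-word harvesting plus
    # pattern-based, randomly populated sentences.
    def word_salad(thread_text, n_sentences=3):
        words = [w.strip(".,!?").lower() for w in thread_text.split()]
        # Crude "interesting word" filter: anything longer than four letters.
        topics = [w for w in words if len(w) > 4]
        patterns = [
            "I think the {0} is really just a {1}.",
            "Nobody here understands the {0} like my {1} does.",
            "The {0}? More like the {1}, am I right?",
        ]
        return " ".join(
            random.choice(patterns).format(random.choice(topics), random.choice(topics))
            for _ in range(n_sentences)
        )

    print(word_salad("The moderators keep deleting my posts about modem speeds and BASIC compilers"))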

    I would sincerely hope that with 35 years of development and 250,000X the compute power, fake text generators have gotten significantly better. Kudos to this particular lab for not unleashing their daemon on the net, but others will inevitably follow - and some will be citing real sources, driving agendas, and otherwise doing their best to fool readers into believing that they are real, trustworthy sources of information.

    When they start passing the Turing test with near 100% reliability, we are truly screwed.

    --
    🌻🌻 [google.com]
    • (Score: 2) by takyon on Friday February 15 2019, @03:59PM (5 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @03:59PM (#801599) Journal

      Kudos to this particular lab for not unleashing their daemon on the net

      Fuck progress! NO legit uses! Suck Musky! Die on Mars!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:04PM (2 children)

        by JoeMerchant (3937) on Friday February 15 2019, @04:04PM (#801607)

        Oh, there are plenty of legitimate uses. Auto-generation of school announcements to parents with distribution to voice phone, text, e-mail and paper channels as appropriate - e.g. "Dear Parents, this coming Tuesday, February 19th, school will be in session as a make-up day for the snow day which was taken on December 14th. Please have your children attend according to their regular schedules."

        Then, there are many, many more distasteful uses such as automated constant contact functionality for sales, and I suppose product support might be a more legitimate side of that face.

        But, yeah, mostly it's going to be Homer Simpson robo-dialing the whole town in a lame attempt to make a few bucks for himself.

        --
        🌻🌻 [google.com]
        • (Score: 2) by lentilla on Saturday February 16 2019, @02:50AM (1 child)

          by lentilla (1770) on Saturday February 16 2019, @02:50AM (#801903)

          Bizarre choice for an example of legitimate use. If I am responsible for writing a missive that will be relayed to hundreds or thousands of people, I am going to write it by hand and have someone else check it before hitting send. I am certainly not going to outsource that task to a bot.

          • (Score: 2) by JoeMerchant on Saturday February 16 2019, @01:14PM

            by JoeMerchant (3937) on Saturday February 16 2019, @01:14PM (#802024)

You are different from most school board employees around here; bots would already do a better job.

            --
            🌻🌻 [google.com]
      • (Score: 2) by DannyB on Friday February 15 2019, @04:49PM (1 child)

        by DannyB (5839) Subscriber Badge on Friday February 15 2019, @04:49PM (#801645) Journal

        What do you mean no legitimate uses?

        Train it on Russian text and unleash it on the appropriate parts of the net.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 5, Interesting) by captain normal on Friday February 15 2019, @07:16PM

          by captain normal (2205) on Friday February 15 2019, @07:16PM (#801728)

          Maybe the Ruskies are way ahead of us on this. Would explain a lot of the stuff on social media and why reddit has become such a cesspool.

          --
          When life isn't going right, go left.
  • (Score: 3, Informative) by Anonymous Coward on Friday February 15 2019, @04:58PM (1 child)

    by Anonymous Coward on Friday February 15 2019, @04:58PM (#801650)

    Yeah, lol. I looked at a presentation by one of the authors (lead author?). The process is still "have it generate 10 and pick the best using a human" and the best is still hilariously not good enough.

Now, if you chopped a review into 3-word chunks, and told this AI to replace or insert new words and regrammatize the shebang, I could believe it's good. But the raw text - the best 10% - stops being credible about 5-6 words in.

    So this exact AI is not a world-breaker. Major points for this AI having learned some ok grammar though!
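The workflow being described (generate several candidates, have a judge keep the best) is easy to sketch; here the generator and scorer are throwaway stand-ins, since in the demos the judge was a human:

    import random

    def best_of_n(generate, score, prompt, n=10):
        # Produce n candidate continuations and keep the highest-scoring one.
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=score)

    # Throwaway stand-ins so the sketch runs:
    generate = lambda prompt: prompt + " " + " ".join(
        random.choice(["cats", "run", "quickly", "the", "jump"]) for _ in range(5))
    score = lambda text: -text.split().count("the")  # fewer "the"s scores higher
    print(best_of_n(generate, score, "Breaking news:"))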

    • (Score: 3, Touché) by pipedwho on Friday February 15 2019, @08:04PM

      by pipedwho (2032) on Friday February 15 2019, @08:04PM (#801750)

      It'd be interesting to see what it was able to do if repurposed not to generate new content, but to take an existing story and reword/rewrite it to basically say the same thing. That could be useful to automatically fix grammar, confusing sentence structure, poorly constructed paragraphs, filter spurious verbiage, and change the 'reading level' of an article to better match the target.

      Imagine writing up a post on Soylent, and then being able to click a 'fix-me' button to have Soylent's AI automatically reword your post to remove confusing waffle, unsupported reasoning, off-topic rants, emotional ad hominem outbursts, and other generally annoying material. Sadly some posters would get an empty comment box after clicking it, but it would be a start.

      Although, something like that would also be a powerful SEO tool in the wrong hands. And by wrong hands I mean the entire SEO 'industry'.

  • (Score: 0) by Anonymous Coward on Friday February 15 2019, @09:23PM (1 child)

    by Anonymous Coward on Friday February 15 2019, @09:23PM (#801775)

We already have one such revolutionary, miraculous, etc. etc. etc. product entertaining the gullible since 2011:
https://en.wikipedia.org/wiki/Energy_Catalyzer [wikipedia.org]
Someone is striving for comparable success in the AI field. Color me unsurprised.
