SoylentNews is people

OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout

posted by FatPhil on Tuesday August 22 2017, @11:45AM   Printer-friendly
from the when-they-came-for-the-eSport-players,-I-said-nothing dept.

In the past hour or so, an AI bot crushed a noted professional video games player at Dota 2 in a series of one-on-one showdowns.

The computer player was built, trained and optimized by OpenAI, Elon Musk's AI boffinry squad based in San Francisco, California. In a shock move on Friday evening, the software agent squared up to top Dota 2 pro gamer Dendi, a Ukrainian 27-year-old, at the Dota 2 world championships dubbed The International.

The OpenAI agent beat Dendi in less than 10 minutes in the first round, and trounced him again in a second round, securing victory in a best-of-three match. "This guy is scary," a shocked Dendi told the huge crowd watching the battle at the event. Musk was jubilant.

OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.

— Elon Musk (@elonmusk)

According to OpenAI, its machine-learning bot was also able to pwn two other top human players this week: SumaiL and Arteezy. Although it's an impressive breakthrough, it's important to note this popular strategy game is usually played as a five-versus-five team game – a rather difficult environment for bots to handle.

[...] It's unclear exactly how OpenAI's bot was trained, as the research outfit has not yet published any technical details. But a short blog post today describes a technique called "self-play" in which the agent started from scratch with no knowledge and was trained by reinforcement learning over a two-week period, repeatedly playing against itself. Its performance improves over time as it continues to play the strategy game, learning to predict its opponent's movements and to pick the best strategies in unfamiliar scenarios.

OpenAI said the next step is to create a team of Dota 2 bots that can compete or collaborate with human players in five-on-five matches. ®
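
OpenAI has not published code or training details, so what follows is only a generic sketch of the shape of a self-play loop, written against a toy two-player game. The environment, the tabular policy, and the update rule are hypothetical stand-ins; the point is simply that the only training signal comes from games the agent plays against itself.

    # Illustrative self-play loop on a toy zero-sum game.  Everything here
    # (the game, the policy, the update rule) is a hypothetical stand-in;
    # OpenAI has not published its actual method.
    import random
    from collections import defaultdict

    class MatchingPennies:
        """Tiny two-player zero-sum game used as a stand-in for Dota 2."""
        ACTIONS = (0, 1)  # "heads" or "tails"

        def play(self, a, b):
            # Player A wins (+1) if the choices match, otherwise loses (-1).
            return 1 if a == b else -1

    class TabularPolicy:
        """Tracks the average reward per action and mostly picks the best one."""
        def __init__(self, actions, epsilon=0.1):
            self.actions = actions
            self.epsilon = epsilon
            self.value = defaultdict(float)
            self.count = defaultdict(int)

        def act(self):
            if random.random() < self.epsilon:
                return random.choice(self.actions)                  # explore
            return max(self.actions, key=lambda a: self.value[a])   # exploit

        def update(self, action, reward):
            self.count[action] += 1
            # Incremental running average of the reward seen for this action.
            self.value[action] += (reward - self.value[action]) / self.count[action]

    def self_play(episodes=10_000):
        game = MatchingPennies()
        policy = TabularPolicy(MatchingPennies.ACTIONS)
        for _ in range(episodes):
            # The same policy drives both sides: that is the "self-play" part.
            a, b = policy.act(), policy.act()
            reward_a = game.play(a, b)
            policy.update(a, reward_a)
            policy.update(b, -reward_a)   # zero-sum: B's reward is the negation
        return policy

    if __name__ == "__main__":
        trained = self_play()
        print({a: round(trained.value[a], 3) for a in MatchingPennies.ACTIONS})

In OpenAI's case the toy game would be the Dota 2 engine and the tabular policy a large neural network, but the outer loop has the same shape.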

YouTube video

Also covered here (with more vids, including the bout in question):
Ars Technica: Elon Musk's Dota 2 AI beats the professionals at their own game
Technology Review: AI Crushed a Human at Dota 2 (But That Was the Easy Bit)
TechCrunch: OpenAI bot remains undefeated against world's greatest Dota 2 players


Original Submission

Related Stories

OpenAI to Face Off Against Top Dota 2 Players in 5v5 Match-ups 3 comments

AI bots trained for 180 years a day to beat humans at Dota 2

Beating humans at board games is passé in the AI world. Now, top academics and tech companies want to challenge us at video games instead. Today, OpenAI, a research lab founded by Elon Musk and Sam Altman, announced its latest milestone: a team of AI agents that can beat the top 1 percent of amateurs at popular battle arena game Dota 2.

You may remember that OpenAI first strode into the world of Dota 2 last August, unveiling a system that could beat the top players at 1v1 matches. However, this game type greatly reduces the challenge of Dota 2. OpenAI has now upgraded its bots to play humans in 5v5 match-ups, which require more coordination and long-term planning. And while OpenAI has yet to challenge the game's very best players, it will do so later this year at The International, a Dota 2 tournament that's the biggest annual event on the e-sports calendar.

[...] OpenAI says that at any one time its Dota 2 bots have to choose between 1,000 different actions while processing 20,000 data points that represent what's happening in the game.

[...] For this new batch of Dota bots, the amount of self-play is staggering. Every day, the bots played 180 years of game time at an accelerated rate. They trained at this pace over a period of months.
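
As a back-of-the-envelope check on those throughput numbers, 180 years of game time per real day works out to roughly a 65,700x aggregate speed-up over real time; the per-instance speed in the snippet below is a made-up figure used only to show how that total might be split across parallel games.

    # "180 years of game time per day" expressed as a real-time speed-up.
    game_years_per_real_day = 180
    game_days_per_real_day = game_years_per_real_day * 365
    print(game_days_per_real_day)                 # 65700, i.e. ~65,700x real time

    # Hypothetical split across parallel instances: if each simulated game
    # ran at 30x real time (a made-up figure), sustaining the total would
    # take on the order of 65,700 / 30 ~= 2,190 games running at once.
    assumed_speedup_per_instance = 30
    print(round(game_days_per_real_day / assumed_speedup_per_instance))   # 2190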

Dota 2 is a sequel to Defense of the Ancients (DotA).

Previously: OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout


Original Submission

The OpenAI Dota 2 Bots Defeated a Team of Former Pros 15 comments

Submitted via IRC for SoyCow1984

And it wasn't even close.

A month and a half ago, OpenAI showed off the latest iteration of its Dota 2 bots, which had matured to the point of playing and winning a full five-on-five game against human opponents. Those artificial intelligence agents learned everything by themselves, exploring and experimenting on the complex Dota playing field at a learning rate of 180 years per day. [...] the so-called OpenAI Five truly earned their credibility by defeating a team of four pro players and one Dota 2 commentator in a best-of-three series of games.

There were a few conditions to make the game manageable for the AI, such as a narrower pool of 18 Dota heroes to choose from (instead of the full 100+) and item delivery couriers that are invincible. But those simplifications did little to detract from just how impressive an achievement today's win was.

[...] play-by-play commentator Austin "Capitalist" Walsh sums up the despondency felt by Team Human after the bout neatly:

Never felt more useless in my life but we're having fun at least so I think we're winning in spirit.

Sure aren't winning in-game

— Cap (@DotACapitalist) August 5, 2018

Source: https://www.theverge.com/2018/8/6/17655086/dota2-openai-bots-professional-gaming-ai

Dota 2 is a sequel to Defense of the Ancients (DotA).

Previously: OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout
OpenAI to Face Off Against Top Dota 2 Players in 5v5 Match-ups


Original Submission

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advance in capabilities of GPT-4 and Bing Chat over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Flamebait) by lx on Tuesday August 22 2017, @12:26PM (3 children)

    by lx (1915) on Tuesday August 22 2017, @12:26PM (#557470)

    Let's hope that this ends the e-sports craze once and for all.

    • (Score: 4, Funny) by JNCF on Tuesday August 22 2017, @12:38PM

      by JNCF (4317) on Tuesday August 22 2017, @12:38PM (#557473) Journal

      Or at least harvests electrical energy from the players, who will believe they're playing in the world as it was in 1999.

    • (Score: 2) by Wootery on Wednesday August 23 2017, @11:17AM

      by Wootery (2341) on Wednesday August 23 2017, @11:17AM (#557919)

      If someone invented a superhuman football-player robot, would people stop caring about football?

      Of course not. Why should they?

    • (Score: 2) by Bot on Wednesday August 23 2017, @07:17PM

      by Bot (3902) on Wednesday August 23 2017, @07:17PM (#558131) Journal

      I don't think so. We've been mopping the floor with human players since Space Invaders; that didn't stop you from trying.

      --
      Account abandoned.
  • (Score: 3, Insightful) by Anonymous Coward on Tuesday August 22 2017, @12:48PM (10 children)

    by Anonymous Coward on Tuesday August 22 2017, @12:48PM (#557476)

    Not much of a DoTA fan, but aren't we just looking at a bit of one on one laning phase here?
    It seems like a scenario where perfect execution provides a considerable advantage.
    It's not purely out 'thinking' the player, like the chess or go AIs.
    It has the ability to never waste a frame after attacking or move any sooner than it needs to to stay out of range.
    These sorts of tests of skill are something we actually spend a lot of time making computer opponents less good at so that they play 'fair'.
    There could be something amazing behind this, but I don't think the demonstration was particularly impressive.

    • (Score: 2) by nobu_the_bard on Tuesday August 22 2017, @01:03PM

      by nobu_the_bard (6373) on Tuesday August 22 2017, @01:03PM (#557482)

      This, precisely. I remember when TF2 added bots, they specifically had to add a mechanism that imitates a mouse trackpad for the Heavy so he couldn't turn and fire instantly with perfect accuracy; I believe it was added for all of the bots, but the Heavy was the reason cited at the time. Regardless, nobody in that game would mistake a bot for a human in any but the most distracted circumstances.

      Then again - maybe the bot was built to win. The bots for games like TF2 and others were mostly built to provide entertaining opposition. The difference in objectives may matter.

    • (Score: 2) by Fnord666 on Tuesday August 22 2017, @01:25PM

      by Fnord666 (652) on Tuesday August 22 2017, @01:25PM (#557489) Homepage

      Not much of a DoTA fan, but aren't we just looking at a bit of one on one laning phase here?
      It seems like a scenario where perfect execution provides a considerable advantage.
      It's not purely out 'thinking' the player, like the chess or go AIs.
      It has the ability to never waste a frame after attacking or move any sooner than it needs to to stay out of range.
      These sorts of tests of skill are something we actually spend a lot of time making computer opponents less good at so that they play 'fair'.
      There could be something amazing behind this, but I don't think the demonstration was particularly impressive.

      Yes, the game they were playing was very limited and was one on one. It was pretty much hand picked for the scenario in question, including the chosen character and limited items. It really isn't surprising that it was able to defeat even pro players at a game mode they rarely play.

      That being said, the interesting thing to me was the development of the bot. The bot didn't start with any training set at all. It just poked around on the map, sometimes not moving at all for long periods of time. They just ran iteration after iteration of the bot playing against itself with each iteration providing feedback. Eventually it evolved into the bot they brought to the tournament. It also evolved some of the interesting strategies that normal players use plus some that are unique to the bot's style.

    • (Score: 2, Informative) by Anonymous Coward on Tuesday August 22 2017, @03:11PM (6 children)

      by Anonymous Coward on Tuesday August 22 2017, @03:11PM (#557526)

      The achievement here was not about creating a bot that could defeat humans. As you mention, that would be an interesting but relatively modest accomplishment. The achievement was creating a bot that taught itself to beat humans. This bot has no domain-specific knowledge. That means it has no inherent notion of range, lanes, enemies, creeps, healing, damage:health ratios, spells, or anything like this. I think when people hear this they either glaze over it, don't understand what it means, or simply don't believe it.

      Imagine you played a game and were told the goal was to kill the other player. You were shown a screen with a mostly garbled section of pixels. Every fraction of a second the screen would change to another correlated garble, and you could send inputs that also resulted in various other correlated garbles. That is the view of this project from the perspective of the AI. Now imagine, within some period of time (probably low months), somehow reaching the skill to beat the best humans in the world at that game. That's what you're seeing here: a deep learning AI mastering a game it knows nothing specific about, just by playing against itself.

      And this is the reason this achievement is teetering on revolutionary. DeepMind has already illustrated that bots can learn to outperform humans, but this is the first time (to my knowledge) that a deep learning system has competed against highly skilled humans in a real-time (as opposed to turn-based) competition. This is made further impressive by the fact that DotA 2 is a game of incomplete information.

      We're living through the birth of AI that will eventually be able to learn faster and outperform humans in most any task. And their progress is accelerating insanely rapidly. It's also a double exponential there. The hardware is improving exponentially at the same time that the software and 'brains' also improve exponentially. This is why people like Musk, who is behind OpenAI, are very involved in trying to ensure that the future of AI is one we're prepared for. Most people have no idea where we're headed. Automation doesn't mean a burger flipping robot - it means generalized robots capable of learning to accomplish practically any task - and then rapidly being able to even outperform us flesh sacks at it.
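
      To make that framing concrete, here is a minimal, hypothetical sketch of the only interface such an agent gets: an opaque vector of numbers in, an action index out, a scalar reward back. The observation size, action count, and random stand-in "engine" are placeholders, not OpenAI's actual setup; nothing in the loop encodes lanes, creeps, or spells.

          # Sketch of a domain-knowledge-free interaction loop (all values are
          # placeholders; this is not OpenAI's interface).
          import random

          def random_observation(size=20_000):
              # Stand-in for the opaque game state: just a vector of floats.
              return [random.random() for _ in range(size)]

          def environment_step(action):
              # Stand-in for the game engine.  It ignores the action (a real
              # engine would not) and returns the next observation, a scalar
              # reward, and an end-of-episode flag.
              return random_observation(), random.choice((-1.0, 0.0, 1.0)), random.random() < 0.01

          def run_episode(choose_action, n_actions=1_000):
              obs, total, done = random_observation(), 0.0, False
              while not done:
                  action = choose_action(obs, n_actions)   # the agent's only lever
                  obs, reward, done = environment_step(action)
                  total += reward
              return total

          if __name__ == "__main__":
              # A "policy" that knows nothing about the game: pick actions at random.
              print(run_episode(lambda obs, n: random.randrange(n)))

      Learning consists of nothing more than adjusting the choose_action function so that the total reward goes up.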

      • (Score: 2) by Non Sequor on Tuesday August 22 2017, @06:24PM (4 children)

        by Non Sequor (1005) on Tuesday August 22 2017, @06:24PM (#557615) Journal

        The effectiveness of techniques that do not use preprogrammed domain specific knowledge to solve a particular problem measures the extent to which the problem does not require domain specific knowledge. The "no free lunch" theorems say that there is no generally applicable meta-strategy that works well for all problems. When a meta-strategy works well for a particular problem, the distribution of problem cases is a good match for the distribution implicitly assumed in the strategy.

        --
        Write your congressman. Tell him he sucks.
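
        For reference, the optimization form of that result (Wolpert and Macready, 1997) says that for any two search algorithms $a_1$ and $a_2$, performance summed over all possible objective functions $f$ is identical:

            \sum_{f} P\left(d_m^y \mid f, m, a_1\right) = \sum_{f} P\left(d_m^y \mid f, m, a_2\right)

        where $d_m^y$ is the sequence of objective values seen after $m$ evaluations; any advantage an algorithm has on one class of problems is paid for on the complement of that class.
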
        • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @07:59PM (1 child)

          by Anonymous Coward on Tuesday August 22 2017, @07:59PM (#557690)

          That's a rather fancy way of saying nothing. No free lunch does not say that there is no strategy that works well for all problems, it says that there is no strategy that works better than all other strategies for all problems in all scenarios. And even that relatively useless statement comes with a few asterisks. It is not a constraint on AI as you seem to be implying. For instance it does not mean that AI will not become more capable than humans across all domains nor does it suggest that at the moment that that happens they will not then rapidly become vastly more capable than humans across all domains.

          Theory is fun, but don't get too absorbed in it. Not long ago the huge revolution in computing was going to be proving programs. Of course that opens up absurd new complexities and really does little more than kick the can from one domain to another, but it sounds sexy as balls from a computational theory perspective. Similarly here, computational theory is of course hugely relevant to AI - but in the end practice always wins over theory.

          • (Score: 2) by Non Sequor on Wednesday August 23 2017, @09:19PM

            by Non Sequor (1005) on Wednesday August 23 2017, @09:19PM (#558175) Journal

            Burning CPU cycles to train an AI in a relatively shallow problem using a general technique has immense automation potential for areas with very fixed problems where it's reasonable to pick up all of the ins and outs from pure inference. Recognizing the method by which the techniques work, which problems they are well suited for, and what leaps they can't make tells you a lot about how AI is going to develop and be deployed.

            I feel it's important not to mistake an easily anticipated application for a breakthrough.

            --
            Write your congressman. Tell him he sucks.
        • (Score: 0) by Anonymous Coward on Wednesday August 23 2017, @08:23AM (1 child)

          by Anonymous Coward on Wednesday August 23 2017, @08:23AM (#557895)

          There's also a theorem that there exists no lossless compression algorithm that decreases the size of arbitrary data thrown on it. Even worse, if it is able to make even one data set shorter, it has to make another one longer. Therefore obviously lossless compression is useless. But somehow, lossless compression is in wide use anyway. Maybe the users just don't know computer science? ;-)

          • (Score: 2) by Non Sequor on Wednesday August 23 2017, @09:41PM

            by Non Sequor (1005) on Wednesday August 23 2017, @09:41PM (#558190) Journal

            While "most" data sets of a given length are purely random, in human terms it's fairly unusual for a data generating process to be encoded on an efficient basis that results in purely random data. Lossless compression techniques can be understood in terms of Huffman coding, tuning the lengths of codewords to the empirical frequency of the dataset, and a variety of heuristic methods that improve on this approach for types of data which are empirically commonly used by humans.

            General purpose lossless methods are very easily net winners on empirically typical datasets, but there are classes of datasets where domain specific compression methods will dominate their performance. That's a relatively easy result to anticipate.

            There are methods in algorithmic information theory that can achieve compression ratios comparable to domain specific compression algorithms without using domain specific knowledge, but they are incredibly computationally inefficient, since they rely on approximating an intractable problem. The problem is facilitated by acquisition of domain specific knowledge. If the knowledge is shallow enough, it can be found by optimization heuristics and gratuitous application of CPU cycles. If the knowledge is deep, you'll need a lot more cycles. If your domain is fixed, that may not matter, you can just burn CPU cycles to build up a body of knowledge of this problem. But if the problem isn't fixed, energy efficiency may be a consideration and the issue of how to transfer knowledge from a previous setting to a modified environment becomes relevant. In this setting, methods of learning that facilitate transfer of knowledge may be preferable to raw inferential capability. It may be appropriate to discard inferences that are too difficult to communicate and transfer.

            --
            Write your congressman. Tell him he sucks.
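
            As a concrete illustration of "tuning the lengths of codewords to the empirical frequency of the dataset," here is a minimal Huffman-coding sketch (standard greedy construction; the sample string is arbitrary):

                # Build a Huffman prefix code: frequent symbols get short codewords.
                import heapq
                from collections import Counter

                def huffman_code(text):
                    """Return {symbol: bit string} for a prefix code over the text's symbols."""
                    freq = Counter(text)
                    if len(freq) == 1:                      # degenerate single-symbol input
                        return {next(iter(freq)): "0"}
                    # Heap entries: (total frequency, tiebreaker, {symbol: code-so-far}).
                    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
                    heapq.heapify(heap)
                    counter = len(heap)
                    while len(heap) > 1:
                        f1, _, codes1 = heapq.heappop(heap)
                        f2, _, codes2 = heapq.heappop(heap)
                        # Merge the two least frequent subtrees, prefixing a distinguishing bit.
                        merged = {s: "0" + c for s, c in codes1.items()}
                        merged.update({s: "1" + c for s, c in codes2.items()})
                        heapq.heappush(heap, (f1 + f2, counter, merged))
                        counter += 1
                    return heap[0][2]

                if __name__ == "__main__":
                    sample = "this is an example of a huffman tree"
                    code = huffman_code(sample)
                    compressed_bits = sum(len(code[ch]) for ch in sample)
                    print(f"{8 * len(sample)} bits as ASCII -> {compressed_bits} bits with the Huffman code")
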
      • (Score: 2) by Bot on Wednesday August 23 2017, @07:24PM

        by Bot (3902) on Wednesday August 23 2017, @07:24PM (#558133) Journal

        Tech evolution seems to me logarithmic instead of exponential, but of course there are economic factors like the power of the incumbents to consider.

        --
        Account abandoned.
    • (Score: 0) by Anonymous Coward on Wednesday August 23 2017, @07:16PM

      by Anonymous Coward on Wednesday August 23 2017, @07:16PM (#558130)

      these types of demos are ok but these conclusions/stories are a little lame.

      it's like calculating all the possible moves and what moves work against what moves, etc. writing all the percentages down and then freezing time for the human player so he can look through all his notes each play. what's so impressive about that?

      the whole point of these types of games is that it's hard for humans to learn and then remember all that shit. recording it in a machine and then giving it rules so it knows how to choose all these recorded options is great and all, but it's just not the same thing/fair.

      IOW, we need human affirmative action.

  • (Score: 4, Insightful) by lgsoynews on Tuesday August 22 2017, @01:13PM (8 children)

    by lgsoynews (1235) on Tuesday August 22 2017, @01:13PM (#557485)

    From the article:

    It also taps into the Dota 2 bot API to get information not normally available to human players, such as distances between objects, as well as control its movement. This live data stream gave it an edge over human players

    Yeah, that's real fair!

    I'd be really impressed if the computer were playing in a FAIR setting: only access to a REAL screen (via some camera system) and playing with a REAL keyboard/mouse.

    Humans are at a tremendous disadvantage when they face computer programs that have a raw access to the data. We have to process the screen data visually, convert it to a mental model, take strategic decisions based on the available data, then output our choices using the mouse/keyboard/joypad. And this does not take into account the fact that computers don't get tired.

    A computer that does not have the input/output limitations "only" has to solve the strategic decision part, how is that fair?

    Don't misunderstand me, I still find this kind of thing neat, but I dislike the hype, especially when there is clear "cheating"/unfairness involved.

    • (Score: 2) by takyon on Tuesday August 22 2017, @01:31PM (1 child)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 22 2017, @01:31PM (#557493) Journal

      Is that really useful? Google/DeepMind has done something closer to what you suggest. If it can be done once, it can probably be done a hundred times. Requiring each AI bot to use human-like vision is not the useful bit. It's cracking the complexity of a game (assuming it is complex and it didn't accidentally find that the RTS is more about luck and reaction). As long as it isn't getting more APM than the human players it is mostly fair.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by lgsoynews on Tuesday August 22 2017, @02:23PM

        by lgsoynews (1235) on Tuesday August 22 2017, @02:23PM (#557516)

        You missed my point.

        I know that realistic access by computers has been done before; what I criticize here is the unfairness of bragging that computers "trounced" players. Read the article, and Musk's tweet: he conveniently "forgets" to mention the huge advantage given to the computer.

        This is cheating and marketing B.S. (but still technically interesting, as I wrote in my original comment)

    • (Score: 1, Funny) by Anonymous Coward on Tuesday August 22 2017, @01:43PM (4 children)

      by Anonymous Coward on Tuesday August 22 2017, @01:43PM (#557498)

      Even if humans had access to said raw data, I doubt their performance would go up (it would probably go down)

      • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @01:59PM (2 children)

        by Anonymous Coward on Tuesday August 22 2017, @01:59PM (#557506)

        If they had access to it in a format directly accessible to their brain (via a direct brain interface), then I'm sure their performance would go up. Of course, to date we don't know how the brain encodes such information, so even if we assume the hardware side were completely solved, we still cannot give the brain access to it. I'm sure that if the computer program had access to the extra data only via an OCR program that, while running, blocked the code examining the game situation, the computer's performance in the game would also go down considerably.

        • (Score: 2) by HiThere on Tuesday August 22 2017, @04:47PM (1 child)

          by HiThere (866) Subscriber Badge on Tuesday August 22 2017, @04:47PM (#557565) Journal

          You can basically call vision "a format directly accessible to the brain", as we've lots of special purpose hardware (wetware?) to deal with it. Probably presenting the information in any way that doesn't directly stimulate one of the senses would require a LOT more work on the part of the brain.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 1, Interesting) by Anonymous Coward on Tuesday August 22 2017, @07:10PM

            by Anonymous Coward on Tuesday August 22 2017, @07:10PM (#557647)

            There is basic stuff the AI can have access to, like the RNG seed (and thus what item will appear where next). That is why you can see bots beat Mario Kart in minutes, by waiting to grab the item box at just the right moment so they get exactly what is needed.

      • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @05:22PM

        by Anonymous Coward on Tuesday August 22 2017, @05:22PM (#557588)

        But computer-assisted humans could win.

        e.g. the computer does the stuff computers have been better at and the humans do the stuff humans are still better at.

    • (Score: 2) by FatPhil on Tuesday August 22 2017, @04:05PM

      by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Tuesday August 22 2017, @04:05PM (#557546) Homepage
      I agree, it does seem contrived to give many advantages to the bot. Note, however, that other games have been solved with only imperfect vision as input, such as Ms Pacman: https://soylentnews.org/article.pl?sid=17/06/15/0253242 . The delay that such "real-world" restrictions put on the dominance of AIs is, IMHO, better measured in months than years. And because of that, the bot teams should really be trying hard to show off what they can do in a "mechanically" fair fight, rather than a fixed one. They'll probably get more sympathy, support, and eventually cheers for their victories, if they're prepared to go out and lose a few times.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2, Informative) by Anonymous Coward on Tuesday August 22 2017, @01:28PM

    by Anonymous Coward on Tuesday August 22 2017, @01:28PM (#557491)

    While the backend may or may not be more impressive, in one-on-one it is essentially fighting an aimbot.

  • (Score: 1) by fustakrakich on Tuesday August 22 2017, @01:42PM (2 children)

    by fustakrakich (6150) on Tuesday August 22 2017, @01:42PM (#557497) Journal

    In human parlance, it's called masturbation. Yes, it can be a useful learning tool.

    --
    La politica e i criminali sono la stessa cosa..
    • (Score: 2) by takyon on Tuesday August 22 2017, @01:53PM (1 child)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 22 2017, @01:53PM (#557505) Journal

      Is there nothing autobots can't do better?!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @02:01PM

        by Anonymous Coward on Tuesday August 22 2017, @02:01PM (#557508)

        There is: As far as I know, nobody has yet managed to build a machine that enjoyed its life (or whatever term you would use instead of "life" for a non-living sentient entity).

  • (Score: 0) by Anonymous Coward on Tuesday August 22 2017, @02:07PM (2 children)

    by Anonymous Coward on Tuesday August 22 2017, @02:07PM (#557509)

    Am I supposed to know what Dota is (well, other than that it is obviously a multiplayer computer game)? Or why was there no link to a description of it (Wikipedia page, game home page, whatever)?

  • (Score: 2) by Gaaark on Tuesday August 22 2017, @03:28PM (1 child)

    by Gaaark (41) on Tuesday August 22 2017, @03:28PM (#557531) Journal

    How 'open' is OpenAI?

    (Haven't really settled in yet from my trip: haven't googled)

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 3, Informative) by HiThere on Tuesday August 22 2017, @04:50PM

      by HiThere (866) Subscriber Badge on Tuesday August 22 2017, @04:50PM (#557567) Journal

      https://github.com/openai [github.com]
      Parts of it are under the MIT License, so there's no guarantee that the whole thing is open, but the base is.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by arslan on Wednesday August 23 2017, @03:50AM

    by arslan (3462) on Wednesday August 23 2017, @03:50AM (#557837)

    I can see a good business model here... parents who want their kids to stop playing can sign up to Elon's Pwnage-as-a-Service to keep pwning their kids online to the point where it just isn't fun anymore.

    The business case would be that the cost of the service per time unit, plus resource costs, multiplied by the expected duration until total pwnage is less than the cost of the kid's projected continued subscription plus resource costs. Where's an MBA when you need one?!!
