Next-Gen NVIDIA Teslas Due This Summer; To Be Used In Big Red 200 Supercomputer

posted by Fnord666 on Sunday February 02 2020, @10:06AM
from the rendering-engines dept.

Thanks to Indiana University and The Next Platform, we have a hint of NVIDIA's future GPU plans, with strong signs that the company will have a new Tesla accelerator (and underlying GPU) ready for use by this summer.

In an article outlining the installation of Indiana University's Big Red 200 supercomputer – which also happens to be the first Cray Shasta supercomputer to be installed – The Next Platform reports that the university has opted to split the deployment of the supercomputer into two phases. In particular, the supercomputer was meant to be delivered with Tesla V100s; however, the university has opted to hold off on delivery of its accelerators so that it can receive NVIDIA's next-generation accelerators instead, which would make it among the first institutions to get the new parts.

The revelation is notable, as NVIDIA has yet to announce any new Tesla accelerators or matching GPUs. The company's current Tesla V100s, based on the GV100 GPU, were first announced back at GTC 2017, so NVIDIA's compute accelerators are due for a major refresh. However, it's a bit surprising to see anyone other than NVIDIA reveal details about the new parts, given how buttoned-down the company normally is about unannounced products.


Original Submission

 
  • (Score: 2) by JoeMerchant on Sunday February 02 2020, @02:50PM (9 children)

    by JoeMerchant (3937) on Sunday February 02 2020, @02:50PM (#952711)

    I just (basically yesterday) installed a GeForce RTX 2060 (plus the required power supply upgrade) - impressive specs: 1920 processors, 6GB of high bandwidth RAM, $350. In the process of installing the software stack (Keras/TensorFlow/Docker/Ubuntu) before I can crank it up and see how much it warms up my office.
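
    First order of business once the stack is up - a minimal sanity check (this sketch assumes TensorFlow 2.x with GPU support installed):

        import tensorflow as tf

        # List the GPUs TensorFlow can see; the RTX 2060 should appear if CUDA/cuDNN are set up
        gpus = tf.config.list_physical_devices('GPU')
        print("GPUs visible to TensorFlow:", gpus)

        # Run a small matrix multiply on the GPU to confirm work actually lands there
        if gpus:
            with tf.device('/GPU:0'):
                a = tf.random.normal([1024, 1024])
                b = tf.random.normal([1024, 1024])
                print(tf.reduce_sum(tf.matmul(a, b)).numpy())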

    --
    🌻🌻 [google.com]
  • (Score: 2) by takyon on Sunday February 02 2020, @04:15PM (7 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday February 02 2020, @04:15PM (#952733) Journal

    What's your use case for machine learning?

    Also, they just cut the price to $300 [anandtech.com].

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Interesting) by JoeMerchant on Sunday February 02 2020, @04:50PM (6 children)

      by JoeMerchant (3937) on Sunday February 02 2020, @04:50PM (#952748)

      They're always cutting the price...

      The main case for machine learning is "everybody's doing it, how can we do it too?" (I think the corporate speak is something to the effect of: leverage emerging technologies to deliver best-in-class, state-of-the-art performance in our products.)

      To that end, we hired "experts" to take some of our 30-year-mature 1D time-series feature-extraction algorithms and "do it better" - well, they got out the big hammer with a 9-layer convolutional network and got results more or less the same as what our 30 years of developed cleverness came up with, but... they are using about 10,000x the CPU to get those answers. That was the first round. Before we launch them on a second round (at double the cost of the first) to optimize these algorithms for better performance and higher efficiency, I figured it was worth spending some time with in-house people to see what we can do with the tech ourselves.

      Plus, there's the ever-hungry data maw that always claims to be the excuse for poor performance - "we could do better if we only had more data to train on" is the cry... and to be fair, that will apply to both in-house and consultant efforts in the future, so we really need to get more serious about real-world data gathering.
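
      For the curious, a 9-layer 1D CNN of the sort described would look something like this Keras sketch - the window length, filter counts, and output head are hypothetical stand-ins, not the consultants' actual design:

          import tensorflow as tf
          from tensorflow import keras

          # Hypothetical 9-layer 1D convolutional classifier for one-channel time-series windows.
          # All shapes and layer widths are illustrative only.
          model = keras.Sequential([
              keras.layers.Input(shape=(4096, 1)),           # 4096-sample window, single channel
              keras.layers.Conv1D(16, 9, padding='same', activation='relu'),
              keras.layers.MaxPooling1D(4),
              keras.layers.Conv1D(32, 9, padding='same', activation='relu'),
              keras.layers.MaxPooling1D(4),
              keras.layers.Conv1D(64, 9, padding='same', activation='relu'),
              keras.layers.MaxPooling1D(4),
              keras.layers.Conv1D(128, 9, padding='same', activation='relu'),
              keras.layers.GlobalAveragePooling1D(),
              keras.layers.Dense(1, activation='sigmoid'),   # e.g. "feature present" probability
          ])
          model.compile(optimizer='adam', loss='binary_crossentropy')
          model.summary()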

      That ^^^ is more the politics. What we're looking for is really basic stuff: response humps in noisy data, and identification of anomalous high-energy, high-frequency noise in what is normally mostly low-frequency bumps plus a little hiss... The algorithms are never going to be perfect, because the "experts" can never agree 100% on what's a hump and what's noise, but... maybe this data-collection exercise will lead us to know our experts better and align better with the majority of them in how the algorithms classify stuff - whether using neural nets, simple machine-learning tuning of filters and thresholds, or just hand-tweaking the old algorithms.
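
      For reference, the classical filter-and-threshold baseline for the high-frequency-burst case looks roughly like this - the cutoff, window, and threshold values are made-up placeholders:

          import numpy as np
          from scipy import signal

          def high_freq_burst_mask(x, fs, cutoff_hz=100.0, win=256, k=5.0):
              """Flag samples whose high-frequency energy exceeds k times the median level.
              cutoff_hz, win, and k are illustrative tuning knobs, not production values."""
              # High-pass filter to isolate the hiss/burst band above the low-frequency bumps
              sos = signal.butter(4, cutoff_hz, btype='highpass', fs=fs, output='sos')
              hf = signal.sosfiltfilt(sos, x)
              # Short-time energy via a moving average of the squared signal
              energy = np.convolve(hf ** 2, np.ones(win) / win, mode='same')
              return energy > k * np.median(energy)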

      --
      🌻🌻 [google.com]
      • (Score: 2) by takyon on Sunday February 02 2020, @05:34PM (2 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday February 02 2020, @05:34PM (#952757) Journal

        > They're always cutting the price...

        Not true. The GeForce (RTX) 20-series was overpriced at debut on the basis of a hyped real-time ray-tracing feature and minimal effective competition from AMD. The RTX 2060 launched at $350 in January 2019. They didn't lower the price when they launched the "Super" refreshes in July. They only cut the price last month to compete with AMD's Radeon RX 5600 XT. It looks like the situation will improve later this year, with significant gains for Nvidia's first "7nm" GPUs and "Big Navi" from AMD (it's possible that a top-end AMD card will match or outperform the RTX 2080 Ti [archive.org]). And hopefully AMD GPUs will become more useful [towardsdatascience.com] for machine learning in the future.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by JoeMerchant on Sunday February 02 2020, @09:02PM (1 child)

          by JoeMerchant (3937) on Sunday February 02 2020, @09:02PM (#952849)

          > They only cut the price last month to compete with AMD

          That may be true in this case, but I've been buying "tech stuff" since the Atari 800 in 1982 - hence: they're ALWAYS cutting the price.

          You may or may not know: are the AMD cards as widely supported by TensorFlow & friends? I've only done a superficial dive, and it would seem that CUDA/NVIDIA are far better supported in that space.

          --
          🌻🌻 [google.com]
      • (Score: 0) by Anonymous Coward on Monday February 03 2020, @03:36AM (1 child)

        by Anonymous Coward on Monday February 03 2020, @03:36AM (#953000)

        > they are using about 10,000x the CPU to get those answers.

        Make sure to run the calculation that compares the 10,000x CPU cost against your salary (and perhaps those of a few of your key co-workers). Assuming you aren't the boss, that may well be the question your company's boss is really asking.
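
        Back-of-envelope version - every number below is a made-up placeholder:

            # Hypothetical cost comparison: extra compute burned by the 10,000x model
            # versus the loaded cost of in-house engineering time to do better.
            units_per_year = 1000        # devices shipped annually (placeholder)
            extra_cost_per_unit = 50.0   # added CPU/cooling cost per device, USD (placeholder)
            engineer_year = 150_000.0    # loaded annual cost of one engineer, USD (placeholder)

            extra_hw = units_per_year * extra_cost_per_unit
            print(f"Extra hardware cost per year: ${extra_hw:,.0f}")
            print(f"Cheaper to optimize in-house: {extra_hw > engineer_year}")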

        Smart of you to make an end run around the consultants; I'm guessing that with your domain expertise you can do the job better and cheaper than the outsiders.

        • (Score: 2) by JoeMerchant on Monday February 03 2020, @02:05PM

          by JoeMerchant (3937) on Monday February 03 2020, @02:05PM (#953111)

          This is a field-deployed device - we sell about 1000 units a year - and the problem with 10,000x the CPU is that it makes the Core i7 in the device run hot, which makes the fan blow. The device is much better received by users when it is quiet.

          --
          🌻🌻 [google.com]
      • (Score: 2) by jasassin on Monday February 03 2020, @07:19AM

        by jasassin (3566) <jasassin@gmail.com> on Monday February 03 2020, @07:19AM (#953053) Homepage Journal

        What?

        --
        jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
  • (Score: 0) by Anonymous Coward on Sunday February 02 2020, @07:19PM

    by Anonymous Coward on Sunday February 02 2020, @07:19PM (#952799)

    what a skank...