
posted by Fnord666 on Wednesday October 28 2020, @11:45PM   Printer-friendly
from the graphic-news! dept.

AMD announced its first RDNA 2 (Radeon RX 6000 series) gaming GPUs during a live stream (24m42s) on October 28.

AMD originally planned for RDNA 2 to have 50% more performance per Watt than GPUs using the RDNA 1 microarchitecture. Now, AMD is claiming 54% more performance per Watt for the RX 6800 XT and RX 6800, and 65% more performance per Watt for the RX 6900 XT. Part of the efficiency gain is due to the use of "Infinity Cache", similar to the L3 cache found in Ryzen CPUs. This allowed AMD to use a 256-bit memory bus with 2.17x the effective memory bandwidth of a 384-bit bus, while using slightly less power.
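AMD has not published the exact accounting behind the 2.17x figure, but a simple miss-traffic model shows how a large on-die cache amplifies effective bandwidth. The 16 Gbps memory speed and the ~58% hit rate below are illustrative assumptions, not AMD's numbers:

```python
# Rough model: with a cache hit rate h, only (1 - h) of memory traffic
# reaches GDDR6, so effective bandwidth ~= raw DRAM bandwidth / (1 - h).
def effective_bandwidth(gbps_per_pin, bus_bits, hit_rate):
    raw = gbps_per_pin * bus_bits / 8  # GB/s
    return raw / (1.0 - hit_rate)

raw_256 = 16 * 256 / 8  # 512 GB/s: 256-bit bus at an assumed 16 Gbps
raw_384 = 16 * 384 / 8  # 768 GB/s: 384-bit bus at the same speed

# Assumed Infinity Cache hit rate at 4K (illustrative only)
eff = effective_bandwidth(16, 256, 0.58)
print(f"effective: {eff:.0f} GB/s = {eff / raw_384:.2f}x a 384-bit bus")
```

With these assumptions the 256-bit bus lands at roughly 1.6x a 384-bit bus; a higher hit rate (or a faster cache term in the model) would be needed to reproduce AMD's 2.17x claim.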

The RX 6900 XT ($1000) has performance comparable to Nvidia's RTX 3090, with a total board power (TBP) of 300 Watts. The RX 6800 XT ($650) is comparable to Nvidia's RTX 3080, also with a 300 Watt TBP. The RX 6800 ($580) is around 18% faster than Nvidia's RTX 2080 Ti, with a 250 Watt TBP. All three of the GPUs have 16 GB of GDDR6 VRAM and 128 MB of "Infinity Cache".

The 6800 XT and 6800 will be available starting on November 18, while the 6900 XT will be available on December 8.

Also at Tom's Hardware, Phoronix, Ars Technica, and Guru3D.

Previously: Nvidia Announces RTX 30-Series "Ampere" GPUs
AMD Announces Zen 3 CPUs


Original Submission

Related Stories

Nvidia Announces RTX 30-Series "Ampere" GPUs 14 comments

Nvidia has announced its latest generation of gaming-oriented GPUs, based on the "Ampere" microarchitecture on a customized Samsung "8nm" process node.

The GeForce RTX 3080 ($700) has 10 GB of GDDR6X VRAM and will be released on September 17. TDP is up significantly, at 320 Watts compared to 215 Watts for the RTX 2080. The GeForce RTX 3070 ($500) has 8 GB of GDDR6 and a TDP of 220 Watts. The GeForce RTX 3090 ($1500) is the top card so far with a whopping 24 GB of GDDR6X VRAM. The GPU is physically much larger than the other two models and it has a 350 Watt TDP.

Nvidia's performance benchmarks should be treated with caution, since the company is often using ray-tracing and/or DLSS upscaling in its comparisons. But the RTX 3070 will outperform the RTX 2080 Ti at less than half the launch price, as it has 35% more CUDA cores at higher clock speeds.

Nvidia also announced some new features such as Nvidia Reflex (4m53s video), Broadcast, Omniverse Machinima, and RTX IO. Nvidia Broadcast includes AI-derived tools intended for live streamers. RTX Voice can filter out background noises, greenscreen effects can be applied without the need for a real greenscreen, and an autoframing feature can keep the streamer centered in frame while they are moving. Nvidia RTX IO appears to be Nvidia's response to the next-generation consoles' use of fast SSDs and dedicated data decompression.

NVIDIA GeForce RTX 30 Series | Official Launch Event (39m29s video)

Previously: Micron Accidentally Confirms GDDR6X Memory, and Nvidia's RTX 3090 GPU


Original Submission

AMD Announces Zen 3 CPUs 28 comments

AMD announced its first Zen 3 (Ryzen 5000 series) desktop CPUs on October 8.

Compared to Zen 2 (Ryzen 3000 series) CPUs, the Zen 3 microarchitecture has higher boost clocks and around 19% higher instructions per clock. A unified core complex die (CCD) allows 8 cores to access up to 32 MB of L3 cache, instead of two groups of 4 cores accessing 16 MB each, leading to lower latency and more cache available for any particular core. TDPs are the same as the previous generation, leading to a 24% increase in performance per Watt.
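The 24% figure falls out of simple arithmetic at equal TDP: roughly 19% more IPC multiplied by the boost-clock uplift. The clock pairing below (5950X at 4.9 GHz against the 3950X at 4.7 GHz) is an illustrative assumption, not AMD's published derivation:

```python
# Performance scales as IPC x clock; at the same TDP,
# perf/Watt scales by the same factor.
ipc_gain = 1.19         # ~19% higher IPC (AMD's claim)
clock_gain = 4.9 / 4.7  # assumed boost-clock pairing: 5950X vs 3950X
perf_per_watt = ipc_gain * clock_gain
print(f"perf/Watt uplift: {(perf_per_watt - 1) * 100:.0f}%")  # ~24%
```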

AMD estimates a 26% average increase in gaming performance at 1080p resolution, with the Zen 3 CPUs beating or tying Intel's best CPUs in most games.

Ryzen 9 5950X, 16 cores, 32 threads, boosts up to 4.9 GHz, 105W TDP, $800.
Ryzen 9 5900X, 12 cores, 24 threads, boosts up to 4.8 GHz, 105W TDP, $550.
Ryzen 7 5800X, 8 cores, 16 threads, boosts up to 4.7 GHz, 105W TDP, $450.
Ryzen 5 5600X, 6 cores, 12 threads, boosts up to 4.6 GHz, 65W TDP, $300.

You may have noticed that these prices are exactly $50 more than the launch prices for the Ryzen 3000 equivalents released in 2019. The 5600X is the only model that will ship with a bundled cooler.

The CPUs will all be available starting on November 5. AMD will stream an announcement for its RX 6000 series of high-end GPUs on October 28.

See also: AMD Zen 3 Announcement by Lisa Su: A Live Blog at Noon ET (16:00 UTC)
AMD Teases Radeon RX 6000 Card Performance Numbers: Aiming For 3080?

Previously: AMD's Zen 3 CPUs Will Not be Compatible with X470, B450, and Older Motherboards
AMD Reverses BIOS Decision, Intends to Support Zen 3 on B450 and X470 Motherboards
AMD Launching 3900XT, 3800XT, and 3600XT Zen 2 Refresh CPUs: Milking Matisse
AMD Zen 3, Ryzen 4000 Release Date, Specifications, Performance, All We Know


Original Submission

  • (Score: 2, Interesting) by jasassin on Thursday October 29 2020, @12:32AM (23 children)

    by jasassin (3566) <jasassin@gmail.com> on Thursday October 29 2020, @12:32AM (#1070173) Homepage Journal

    I know the 2080 Ti was massively overpriced compared to this generation, but people should stop buying this shit so we can get a good video card for under $300. $300 sounds about right for a 3090. For $1000 I'd rather have a beater backup car.

    I really wish the days of $1000 cell phones and video cards would go the way of the buggy whip.

    Oh well, I guess I'm just a buster.

    --
    jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
    • (Score: 2, Insightful) by Anonymous Coward on Thursday October 29 2020, @01:23AM

      by Anonymous Coward on Thursday October 29 2020, @01:23AM (#1070177)

      I believe the memory alone on the 3090 costs more than $300, but that would be nice ;)

    • (Score: 3, Insightful) by Immerman on Thursday October 29 2020, @02:01AM (3 children)

      by Immerman (3985) on Thursday October 29 2020, @02:01AM (#1070191)

      I'm a bit out of touch, but when was the last time we could get a good video card for under $300? And what does that price translate to in today's dollars?

      I know far too often I find myself thinking "$X is way too much for Y, what happened?" And then I realize 10 to 20 years of inflation happened, and my money is worth only a fraction of what it used to be.

      • (Score: 3, Interesting) by jasassin on Thursday October 29 2020, @02:39AM (1 child)

        by jasassin (3566) <jasassin@gmail.com> on Thursday October 29 2020, @02:39AM (#1070204) Homepage Journal

        I'm a bit out of touch, but when was the last time we could get a good video card for under $300? And what does that price translate to in today's dollars?

        I'm totally with you on the out of touch thing. Back in the day it was a Voodoo 2 for $320. It blew the hell out of a Voodoo 1 and was the first, and only, card I spent over $300 on. I remember installing it and loading up Quake and almost pissing my pants.

        Anecdotally since then I've bought an Asus EAH Radeon 3450 (new at the time) for $45 and recently (maybe around April or May) after using an Nvidia 9800 GT a friend gave me about 8 years ago I bought an Nvidia GTX 1650 Super. It was $165, but the main reason I bought that was because I got $50 off for signing up for an Amazon store card and another $60 off for signing up for an Amazon Prime Visa card and free shipping for signing up for a free month of Amazon Prime. So I got it for a little over $60.

        I closed the store account about two weeks after purchase and canceled Amazon Prime two days before the end of the 30 day trial. I kept the Visa card to use with my friend's Costco gas card because the pumps don't take Discover. (Yes, I'm frugal.)

        --
        jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
        • (Score: 2) by Immerman on Thursday October 29 2020, @03:03AM

          by Immerman (3985) on Thursday October 29 2020, @03:03AM (#1070206)

          Oh, yeah, the good old Voodoo2 - that was the first "real" 3D card I bought, after getting burned by one of the competing "3D decelerators" in the first generation. Cost over $600 in today's dollars.

          I've had a few second-hand upgrades since then, but I'm not sure when the last time I bought a new video card was... well before Bitcoin, and GPU-based supercomputing hadn't yet caught on enough to affect prices. I don't think the Wii had even come out yet. And even then $300 only got you a solid mid-range card. I don't even remember what I bought anymore, but I think it cost around $200, and that was around the time I decided I was no longer interested in chasing high-performance PC gaming. I should probably buy something new soon - at the very least something that supports a 4K desktop natively rather than requiring a 30Hz driver hack with a decent chance of being reverted any time Windows does a major update.

      • (Score: 0) by Anonymous Coward on Thursday October 29 2020, @03:13AM

        by Anonymous Coward on Thursday October 29 2020, @03:13AM (#1070209)

        GPUs are cheap as hell, just not the newest and best. Navi 10 was less than $300 when it was brand new though, and that will still play any game you throw at it on max settings.

    • (Score: 2) by EJ on Thursday October 29 2020, @04:16AM (1 child)

      by EJ (2452) on Thursday October 29 2020, @04:16AM (#1070241)

      Wrong. If people stop buying this stuff, then they will stop making it.

      The only reason we're seeing such advancements is that people are willing to pay for them. These cards also help advance the power of consoles, because the research that goes into making PC cards carries over to console technology.

      If you don't want to pay for the top of the line, then don't. Buy the $300 tier cards.

      You just want to have a car that goes 331 MPH without paying the $1.9M it costs to have a Tuatara.

      • (Score: 2) by jasassin on Thursday October 29 2020, @04:42AM

        by jasassin (3566) <jasassin@gmail.com> on Thursday October 29 2020, @04:42AM (#1070246) Homepage Journal

        Wrong. If people stop buying this stuff, then they will stop making it.

        They might stop making $1000 cards, but it's not like they are going to throw their hands in the air and close up shop. They'd make more realistically priced consumer cards.

        I understand where you're coming from, really I do. I think the car analogy is interesting because owning a car that can go 330MPH with 75MPH speed limits is not quite the same as a video card that can run flight sim on max settings at 4K at 144FPS. At least you could push the card to the limit all the time with no legal or bodily worries... but still... a $1000 video card is, in my mind, an almost obscenely overpriced object.

        While we're at it, why don't they mass-produce Lamborghinis and sell them for the same profit margins as other cars? It's the Apple/name/status symbol. It's not really because people are driving 300MPH.

        If they sold gullwing cars that looked like a Lamborghini but had the engine of a Honda Civic, would they sell like hotcakes, or is it really the horsepower people care about?

        --
        jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
    • (Score: 2) by takyon on Thursday October 29 2020, @05:50AM (13 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday October 29 2020, @05:50AM (#1070260) Journal

      Inflation is a factor, and the BOM cost is pretty high for a 3090, probably at least $700 or so. One of the components making it expensive is the cooler. GDDR6X is also the newest and most expensive memory, and probably a strategic mistake given how power-hungry it is.

      https://www.mooreslawisdead.com/post/nvidia-s-ultimate-play [mooreslawisdead.com]

      Intel rejoining the market should provide needed competition, and they plan to start at the low/mid-range first.

      https://www.notebookcheck.net/More-Intel-Xe-HPG-gaming-GPU-specs-and-performance-info-leak-out.498703.0.html [notebookcheck.net]

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by fakefuck39 on Thursday October 29 2020, @06:51AM (10 children)

        by fakefuck39 (6620) on Thursday October 29 2020, @06:51AM (#1070272)

        lol. yeah, this is the summary I'd expect from someone approaching it from the point of view of "I'm sitting at my house."

        There are companies buying GPU farms for all kinds of data mining, predictive analysis, VDI, high frequency trading, etc. They don't care if this card costs 1k or 10k - they're spending 2mil on the farm. And there are many of them. These go into racks with a few hundred Xeon cores, a few hundred Nvidia cards, and a Mellanox backend that the all-flash SAN and NAS all plug into. Each rack costs 10-20mil, and people are buying them left and right. I know, because I design and sell it.

        These nvidia cards are a terrible choice for someone sitting at their house playing a game. And just like an iphone is a terrible choice and value for someone who needs a smartphone, people with extra cash will still buy them. wait till you find out the specs and the cost of a shitty underpowered mainframe.

        • (Score: 2) by EvilSS on Thursday October 29 2020, @08:24PM (9 children)

          by EvilSS (1456) Subscriber Badge on Thursday October 29 2020, @08:24PM (#1070513)
          Yea but a lot of those are using the higher-end NV GPUs designed for server farms. Hell, NV even locks out anything except some crypto style workloads from being used in a "datacenter" via their driver licensing. Yea, that's a thing, if you never worked with NV GPUs in a datacenter, you should check it out, it's a laugh. For instance you need to buy per-user licenses to use them for things like VDI!

          The 3090 card is crazy for a gamer unless you also do a lot of video rendering (and do it for money). The 3080 and 3070 are more aimed at gamers. The 3070 is actually a decent card when you compare its performance and price to the prior-gen 2080 Ti (which was a massive rip-off for anyone gaming). I'm sure the majority of gamers will end up with the 3060 when it is released (and has enough stock built up that people can actually buy it). The x060 Nvidia cards are usually the most popular in the Steam surveys.
          • (Score: 2) by fakefuck39 on Thursday October 29 2020, @09:29PM (8 children)

            by fakefuck39 (6620) on Thursday October 29 2020, @09:29PM (#1070531)

            "For instance you need to buy per-user licenses to use them for things like VDI"

            I have no idea what you're talking about. I literally sell this, daily. VDI solutions like Citrix need per-user licenses - to use Citrix, regardless of what hardware you're on. You absolutely don't need a per-user license, or any license, for the hardware.

            But it's cute you say if I never worked with DC GPU farms that I should check it out. I did. Over 10 years ago when I moved from implementation to architecture and started designing and selling the solutions.

            Although I admit, maybe if you buy something from Nvidia directly, they do have those weird licensing restrictions. I mostly sell these inside Dell servers, so the BOM is Dell's, and it includes no licensing for the GPU, because none is needed.

            • (Score: 2) by EvilSS on Friday October 30 2020, @12:31AM (7 children)

              by EvilSS (1456) Subscriber Badge on Friday October 30 2020, @12:31AM (#1070606)
              You should probably update that knowledge if you are architecting this stuff, because you are woefully out of date. After the original GRID cards (which did not need licenses) NV decided they needed to get in on the licensing train and ever since, yes, they require a per-user license for VDI using vGPUs. https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/solutions/resources/documents1/Virtual-GPU-Packaging-and-Licensing-Guide.pdf [nvidia.com]

              As for the datacenter licensing, again, you really need to take a sabbatical and update your industry knowledge: https://www.datacenterdynamics.com/en/news/nvidia-updates-geforce-eula-to-prohibit-data-center-use/ [datacenterdynamics.com]

              Oh, and for the record, I also literally sell this stuff daily, and have for over 20 years for multiple Citrix platinum partners. If you sold me a solution for VDI and vGPUs I'd be pissed when the cards won't work because you didn't tell me I needed to buy licenses for them.
              • (Score: 2) by fakefuck39 on Friday October 30 2020, @05:51AM (6 children)

                by fakefuck39 (6620) on Friday October 30 2020, @05:51AM (#1070750)

                I don't need to read any of that. Here's what I do:
                go to https://www.dellemc.com/partner/en-us/auth/sales/solution-configurator.htm [dellemc.com]
                log in to my VAR account.
                select servers or HCI
                launch configurator for the compute model
                add nvidia GPUs, pick ram, disk, cpu/etc.

                add per-GPU/vGPU licenses, GRID if needed. Each license is tied to a GPU. I can select annual or perpetual. Yes, I have the option to select per concurrent user as well. I have not had a single customer ask me to do that. Ever.

                then I add prosupport or prosupport plus
                click validate, generate bom, apply discounts the customer has
                click build quote

                I have no idea what that datacenter dynamics site is. Maybe you're talking about buying something from nvidia directly.

                Point is, you're saying it's not possible, when I literally just checked, right now, before making this post. What's annoying is that your way absolutely is an option. It's just not the only option, and you're completely full of crap, condescendingly telling the guy who builds these quotes to "read up." That's just funny. Whoever told you this (and you're clearly a customer, not a vendor), you should find a new VAR. As a customer, you have literally never logged in to build the quote, yet you're here telling people how the quotes work. That's just funny.

                • (Score: 2) by EvilSS on Friday October 30 2020, @11:29AM (5 children)

                  by EvilSS (1456) Subscriber Badge on Friday October 30 2020, @11:29AM (#1070790)
                  Oh, you really, REALLY, need to read those docs because you are absolutely wrong. I don't care what the configurator lets you order. The only per-GPU option from NVidia is their compute server license, and that is NOT entitled for use with VDI workloads, or even Windows workloads. It does not matter who you buy from either. Again, read the damn document from Nvidia, or call your Dell account rep and ask them to explain it to you slowly.

                  Honestly at this point I have to think you are just a troll or a liar about what you actually do. This has all been known by people who actually work in the industry for years now. And, again, I've been doing this for over 20 years, and never as a customer.
                  • (Score: 2) by fakefuck39 on Friday October 30 2020, @12:02PM (4 children)

                    by fakefuck39 (6620) on Friday October 30 2020, @12:02PM (#1070798)

                    I get it. You're talking about purchasing it from nvidia. I am not. I am talking about purchasing it from a server vendor. Do you understand that? Do you know what vmw horizon is? Heck, I might even be able to quote out a ucs from cisco w/o per-user, but that's a slow one to try and I'm not going to spend the time.

                    You're talking about vcomputeserver. you can run some vdi on that too if you want btw.

                    Now, maybe we have a misunderstanding here too. The per-GPU license package does cut out at 96vPCs or some high number of vApps. There is another in the dropdown that's up to 192, costing about a third more. If I only add 4 GPUs, I get the option for 64 that's cheap.

                    That does not make it a per-user license. That's the max for the package. Like when you sign up for a comcast account and you get a max of 6 static IPs. That doesn't mean their package is a cost for each IP.

                    You know how your phone plan is 10gig/month max usage? That doesn't make it a per-byte price plan.

                    I'm not a troll, buddy. I'm the guy competent customers buy things from. Now I'll admit, VDI is not my shtick. I'm mostly storage and compute. But I do build some for VDI every quarter. You say you've never been a customer and you sell this stuff, then in other places you say things only a customer would say:

                    "If you sold me a solution for VDI and vGPUs I'd be pissed when the cards won't work because you didn't tell me I needed to buy licenses for them."

                    that makes me think you lied so much you forgot what lies you told. sold you a solution? so you're buying them. strange, now you say you're selling them. why don't you make up your mind about what you do, then get back to us.

                    • (Score: 2) by EvilSS on Friday October 30 2020, @12:42PM (3 children)

                      by EvilSS (1456) Subscriber Badge on Friday October 30 2020, @12:42PM (#1070806)
                      Dude, IT DOES NOT MATTER WHO YOU BUY FROM. I build BOM for Cisco UCS and HP all day long. We never sell direct from NVidia except in special cases (at least not since waaay back when they first launched GRID). Max vCS is 8 VMs per host per license, Linux only. If you want to do Windows, and do VDI or published apps, you have to buy vApps, vDWS or vPC, and that is per CCU only. If your source of truth is the Dell configuration tool, it's either adding the CCU license for you and you just don't realize it, or it's letting you do something that isn't legal.

                      VIRTUAL GPU SOFTWARE EDITIONS

                      NVIDIA vGPU desktop and application virtualization solutions are designed to bring the power of virtualization to the users who need to be their most productive. vGPU technology ensures application compatibility, meaning any application that can run in a physical desktop can run in a virtual environment. Organizations can now expand their virtualization footprint without compromise.

                      NVIDIA vGPU software is available in four editions: NVIDIA GRID Virtual PC (GRID vPC), NVIDIA Quadro® Virtual Data Center Workstation (Quadro vDWS), GRID Virtual Applications(GRID vApps), and NVIDIA Virtual ComputeServer (NVIDIA vCS).

                      GRID vPC: This product is ideal for users who want a virtual desktop but need a great user experience leveraging PC Windows applications, browsers and high definition video. NVIDIA GRID Virtual PC delivers a native experience to users in a virtual environment, allowing them to run all of their PC applications at full performance.

                      Quadro vDWS: This edition is ideal for mainstream and high-end designers who use powerful 3D content creation applications like Dassault CATIA, SOLIDWORKS, and 3DExcite, Siemens NX, PTC Creo, Schlumberger Petrel, or Autodesk Maya. NVIDIA Quadro Virtual Data Center Workstation allows users to access their professional graphics applications with full features and performance, anywhere, on any device.

                      GRID vApps: For organizations deploying XenApp, RDSH or other app streaming or session-based solutions. Designed to deliver PC Windows applications at full performance. NVIDIA GRID Virtual Applications allows users to access any Windows application at full performance on any device, anywhere. Windows Server hosted RDSH desktops are also supported by GRID vApps.

                      NVIDIA vCS: For organizations running compute-intensive server workloads such as Artificial Intelligence (AI), Deep Learning (DL), and High Performance Computing (HPC). NVIDIA vComputeServer is software that enables the NVIDIA GPU to be virtualized to accelerate compute-intensive server workloads with features such as ECC, page retirement, peer-to-peer over NVLink, and Multi-vGPU.

                      VGPU SOFTWARE LICENSING AND PRICING

                      GRID vPC, Quadro vDWS and GRID vApps are available on a per Concurrent User (CCU) model. A CCU license is required for every user who is accessing or using the software at any given time, whether or not an active connection to the virtualized desktop or session is maintained.

                      NVIDIA vCS is available on a per-GPU model. A GPU license is required for every GPU that will host vCS-enabled VMs. A single vCS license enables a maximum of 8 concurrent VMs.

                      NVIDIA vGPU editions can be purchased by enterprises as either perpetual licenses with annual Support Updates and Maintenance Subscription (SUMS), or as an annual subscription. All NVIDIA vGPU software products with perpetual licenses must be purchased in conjunction with three, four, or five years of SUMS. One-year SUMS is available only for renewals. For annual licenses, SUMS is bundled into the annual license cost. Enterprise vGPU software pricing is listed in the tables below; find the full SKU list here. Pricing is suggested pricing only; contact your authorized NVIDIA partner for final pricing. If you are looking to run or host a service using NVIDIA vGPU software, you need to join the NPN CSP partner program.

                      There is no option to do VDI without a CCU license from NVidia. None. You really need to contact your Dell rep to have them go over this with you. And again, it does not matter where you buy the license from, for example:

                      https://docs.netapp.com/us-en/hci-solutions/hcvdivds_nvidia_licensing.html [netapp.com]

                      https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/desktop-virtualization-solutions-vmware-horizon-view/whitepaper_c11-739654.pdf [cisco.com]

                      https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=a00059765enw [hpe.com]

                      I'm not a troll buddy. I'm the guy competent customers buy things from. Now I'll admit, VDI is not my shtik.

                      That's apparent

                      I'm mostly storage and compute. But I do build some for vdi every quarter. When you say you've never been a customer and you sell this stuff, then other places you say things only a customer would say: "If you sold me a solution for VDI and vGPUs I'd be pissed when the cards won't work because you didn't tell me I needed to buy licenses for them." that makes me think you lied so much you forgot what lies you told. sold you a solution? so you're buying them. strange, now you say you're selling them. why don't you make up your mind about what you do, then get back to us.

                      That was a hypothetical. I have never, in my career, been employed by a end-user company. I've been implementing, selling, and architecting Citrix and VMWare VDI solutions since the Winframe days.

                      • (Score: 2) by fakefuck39 on Friday October 30 2020, @10:48PM (2 children)

                        by fakefuck39 (6620) on Friday October 30 2020, @10:48PM (#1071085)

                        Yes, and the CCU license comes in a base package that sets a max. usually 96 or 192 CCUs. That does not make it a per user license. Just like your 10Gb/month plan does not make it a per-byte plan. What is it you're not getting here?

                        When you buy a 5-seater car, that is the max you can fit in the car. It's not a car that charges you per seat. I'm really boggled by the mind gymnastics you're doing here. Yes, you can buy GRID licenses per user. But it is not limited to that - you buy a package with a max CCU included. That is not a per-user license. A per-user license is about $350, per user. That is not the only option, nor a common option.

                        Seriously, what is it you do not understand here?

                        • (Score: 2) by EvilSS on Saturday October 31 2020, @03:46AM (1 child)

                          by EvilSS (1456) Subscriber Badge on Saturday October 31 2020, @03:46AM (#1071169)
                          Nice try dipshit
                          • (Score: 2) by fakefuck39 on Saturday October 31 2020, @10:47AM

                            by fakefuck39 (6620) on Saturday October 31 2020, @10:47AM (#1071225)

                            and now you lose. backed into a corner, instead of addressing the actual points, you have nothing.

                            btw, I don't believe you for a second from the way you talk that you build or sell anything. you're a customer, and someone from a VAR showed you a configurator once, then you read some pdfs. you say you quote out UCS. that's a great test, since google won't give you the answer. to build a UCS config and quote it out, what is the url of the cisco site?

                            nice try, linux admin at midsize commercial account.

      • (Score: 0) by Anonymous Coward on Thursday October 29 2020, @08:45PM (1 child)

        by Anonymous Coward on Thursday October 29 2020, @08:45PM (#1070518)

        *sigh* ... if only they wouldn't use sooo much power. sheesh, >300 Watts?
        maybe wait for the die-shrink? else i am gonna go spend the money on 2 square meters of silicon that MAKE energy instead of consuming it? or so sayZ alyx in half life 3 ^_^

        • (Score: 2) by takyon on Friday October 30 2020, @01:04AM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday October 30 2020, @01:04AM (#1070628) Journal

          Technically, they did die shrink. But they allegedly tried to play hardball with TSMC and got shafted onto Samsung's shitty "8nm" ("10nm" derivative) node instead. Then they put fast but hot GDDR6X on top of that.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Thursday October 29 2020, @12:33PM

      by Anonymous Coward on Thursday October 29 2020, @12:33PM (#1070330)

      The tech market, like the appliance or car market, always has halo products at absurd prices for rich stupid people. Then in a few years that level of features is mainstream. So you and I won't buy these new. But in ten years the equivalent performance will cost $250 (well, not counting inflation). This is the same reason I love the Tesla Model S - I can't afford one and I dislike Elon Musk. But the existence of the Model S means it's exceedingly likely that $20k long range electric family vehicles will exist eventually.

    • (Score: 3, Interesting) by sjames on Friday October 30 2020, @02:10AM

      by sjames (2882) on Friday October 30 2020, @02:10AM (#1070674) Journal

      It seems backwards to me. We used to have $1500 PCs with $50 video cards...

  • (Score: 2) by fadrian on Thursday October 29 2020, @01:41AM (2 children)

    by fadrian (3194) on Thursday October 29 2020, @01:41AM (#1070183) Homepage

    The 6800 XT and 6800 will be available starting on November 18, while the 6900 XT will be available on December 8.

    Given AMD's normal lack of capacity, when will people be able to get them without paying scalpers' prices? My Magic Eight Ball says "Better not tell you now."

    --
    That is all.
    • (Score: 3, Interesting) by EvilSS on Thursday October 29 2020, @02:21AM

      by EvilSS (1456) Subscriber Badge on Thursday October 29 2020, @02:21AM (#1070198)
      This being AMD, gamers will be competing against the scalpers and GPU miners wanting to sample the cards. And yes, there is plenty of GPU mining still going on.
    • (Score: 2) by takyon on Thursday October 29 2020, @05:52AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday October 29 2020, @05:52AM (#1070261) Journal

      We'll see, but AMD is said to have a lot more capacity on TSMC "7nm" than Nvidia has on Samsung "8nm". They could still end up selling every card they make though.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Interesting) by Captival on Thursday October 29 2020, @03:24AM (2 children)

    by Captival (6866) on Thursday October 29 2020, @03:24AM (#1070217)

    Has total board power (TBP) always been a euphemism? The fact that there's a term for it instead of just saying "this card is 300 watts" makes me suspicious it means something else.

    • (Score: 5, Informative) by EJ on Thursday October 29 2020, @04:18AM

      by EJ (2452) on Thursday October 29 2020, @04:18AM (#1070242)

      It's because of the ambiguity of the term GPU. When talking about the GPU, are we talking about just the chip or the entire card? Total Board Power is unambiguous. It says that the entire thing, memory, etc. consumes 300W.

    • (Score: 4, Interesting) by takyon on Thursday October 29 2020, @05:54AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday October 29 2020, @05:54AM (#1070263) Journal

      I think it's a shot at Nvidia, which has been exceeding TDP on Ampere. Some of the AIB Ampere cards are being pushed to 400 Watts.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]