
posted by martyb on Tuesday August 08 2017, @03:19PM
from the core-wars dept.

Intel's Skylake-X line-up has been finalized, ranging from the i7-7800X for $389 to the i9-7980XE for $1,999. 18 cores for over $2,000 (after tax)? Someone will buy it:

Intel's high brass made a decidedly un-Intel move last August. During a routine business meeting at the company's Santa Clara headquarters, they decided to upend their desktop CPU roadmap for 2017 to prepare something new: the beastly 18-core i9-7980XE X-series. It's the company's most powerful consumer processor ever, and it marks the first time Intel hsd[sic] been able to cram that many cores into a desktop CPU. At $2,000, it's the sort of thing hardware fanatics will salivate over, and regular consumers can only dream about.

The chip's very existence came down to a surprising revelation at that meeting last year: Intel's 10-core Broadwell-E CPU, which was only on the market for a few months and cost a hefty $1,723, was selling incredibly well. And for Intel, that was a sign that there was even more opportunity in the high-end computing world.

"The 10-core part was absolutely breaking all of our sales expectations," Intel's Anand Srivatsa, general manager of its Desktop Platform Group, told Engadget in an interview. "We thought we'd wait six months or so to figure out whether this was actually going to be successful. But within the first couple months, it was absolutely clear that our community wanted as much technology as we could deliver to them."

[...] If you've been feeling nostalgic for an old-school computing hardware war, we're about to get one. AMD also announced its Threadripper CPUs for high-end desktops a few months ago, and, as usual, they're significantly cheaper than Intel's offerings. The 16-core AMD 1950X will cost $999, with speeds between 3.4GHz and 4GHz. That's the same price as Intel's 10-core i9 X-series processor, while the 16-core model will run you $1,699.

Obligatory Intel Management Engine / AMD Secure Processor comment.

Also at Intel Newsroom.


Original Submission

  • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @03:31PM (2 children)

    by Anonymous Coward on Tuesday August 08 2017, @03:31PM (#550623)

    If you thought Intel Skylake-X systems were expensive, go reserve yourself a TALOS II Workstation today.

    It has less than half the cores at twice the price! :)

    That said: an open-source BMC, with the ability to customize the signing key yourself. But oh yeah, they misheard that we liked backdoors, so they put two Broadcom Ethernet controllers in to backdoor our backdoorless system :)

    Having said all that: DDR4, PCIe 4.0, basically everything you need to replace an x86 system (sans binary compatibility), with a higher burden on an attacker to get to you.

    • (Score: 2) by takyon on Tuesday August 08 2017, @03:40PM (1 child)

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday August 08 2017, @03:40PM (#550627) Journal

      A sufficiently motivated attacker will always get you, if you are connected to the Internet. Sometimes even when you aren't (Stuxnet). Intel and AMD just make it a little easier (maybe).

      Don't wait on me to submit something. [soylentnews.org]

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by bob_super on Tuesday August 08 2017, @04:14PM

        by bob_super (1357) on Tuesday August 08 2017, @04:14PM (#550648)

        True, but reducing your attack profile by going through the extreme pain of not being compatible with just about anyone will spare you from any threat short of people dedicated to getting you specifically.
        Attackers do ROI too. All the time spent going after weird systems is time spent falling behind on the regular targets.

  • (Score: 2) by TheRaven on Tuesday August 08 2017, @03:38PM (6 children)

    by TheRaven (270) on Tuesday August 08 2017, @03:38PM (#550625) Journal
    10 years ago, most of the parallel workloads that we had were disk-I/O bound. Now we have SSDs everywhere and they're compute-bound again. A half-decent build system will expose more than 18-way parallelism for most large projects, and these days the linker is multithreaded, so that isn't a bottleneck either. Stick one of these under a developer's desk and it probably won't need upgrading for 2-3 years. Amortised over that lifetime, $2K for the CPU is pretty cheap.
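    As a rough sketch of the kind of fan-out a build does, here's a minimal Python example; the source list and the bare cc -O2 -c invocation are hypothetical stand-ins, and a real build system (make -j, ninja) also tracks dependencies between files:

        # Minimal sketch: compile independent translation units in parallel.
        # Assumes a C compiler named "cc" is on the PATH and the listed sources exist.
        import os
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        sources = ["src/file%d.c" % i for i in range(40)]  # hypothetical translation units

        def compile_one(src):
            # Each job is independent, so an 18-core part can run 18 of them at once.
            obj = src.replace(".c", ".o")
            return subprocess.run(["cc", "-O2", "-c", src, "-o", obj]).returncode

        if __name__ == "__main__":
            with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
                failures = sum(1 for rc in pool.map(compile_one, sources) if rc != 0)
            print("failed jobs:", failures)

    With enough independent translation units, a pool like this keeps every core busy; dependency ordering is the only thing that limits it.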
    --
    sudo mod me up
    • (Score: 2) by LoRdTAW on Tuesday August 08 2017, @04:30PM

      by LoRdTAW (3755) on Tuesday August 08 2017, @04:30PM (#550654) Journal

      Last night, I was backing up an embedded XP install from an industrial PC over the network using Linux. A single-core 1.6GHz Atom, so compression and ssh were out of the question on that dinky CPU. So I used netcat to send the ntfsclone image to my desktop with a Core i7 and ran it through pbzip2 (parallel bzip2). All four cores were loaded right up to 100%, and the target disk was spinning rust. You want to tax a CPU? Run pbzip2. The little Atom PC sat at about 20-30% CPU.
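      For the curious, here's a rough Python sketch of the trick pbzip2 is pulling, purely illustrative: chop the stream into blocks and compress each block on its own core. The 8 MiB block size is made up (pbzip2's real blocks are around 900 KB), but the concatenated bz2 streams still decompress with plain bunzip2.

          # Rough sketch of pbzip2-style parallel compression: split stdin into
          # blocks, compress each block on its own core, concatenate the streams.
          import bz2
          import sys
          from multiprocessing import Pool

          BLOCK = 8 * 1024 * 1024  # 8 MiB per work unit (illustrative)

          def read_blocks(stream):
              while True:
                  chunk = stream.read(BLOCK)
                  if not chunk:
                      return
                  yield chunk

          if __name__ == "__main__":
              # e.g. the ntfsclone image piped in on stdin, compressed output on stdout
              with Pool() as pool:
                  for compressed in pool.imap(bz2.compress, read_blocks(sys.stdin.buffer)):
                      sys.stdout.buffer.write(compressed)

      On the desktop it would slot in roughly where pbzip2 did, e.g. nc -l -p 9000 | python3 parcomp.py > backup.img.bz2, with parcomp.py being this sketch (a hypothetical filename) and netcat flags varying by flavor.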

      Stick one of these under a developer's desk and it probably won't need upgrading for 2-3 years.

      Given that I haven't needed to upgrade my Core i7 from 2012, I think that could realistically last for 10+ years depending on the use case. I have yet to sit in front of that system and say "gee, I need a bigger CPU." Then again, I'm not rendering special-effects scenes for a major film or compiling the Microsoft Office suite.

    • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @05:09PM (2 children)

      by Anonymous Coward on Tuesday August 08 2017, @05:09PM (#550670)

      ... then maybe your project is very badly organized.

      • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @06:59PM

        by Anonymous Coward on Tuesday August 08 2017, @06:59PM (#550706)

        Not OP here, but you have no idea how right you are... I was overridden by our private-sector partner into using one blobbish fat WAR with over 2000 JPA-mapped entities, instead of a proper design where the lower layers do not depend on the upper layers.

        Public-private partnership for software is the worst of both worlds...

      • (Score: 2) by TheRaven on Thursday August 10 2017, @08:04AM

        by TheRaven (270) on Thursday August 10 2017, @08:04AM (#551514) Journal
        Or it's written in C++, where leaky interfaces are pretty much a given. Or it's doing LTO, where you end up doing a lot of recompilation even for small changes, because a small function in one file might end up being inlined into a large number of others (and, before you say 'you don't need to do fully optimised builds for incremental changes' - we do if we want to rerun performance measurements and check that the change actually did make a difference, and we do when the change is to the compiler that's going to burn about 5 CPU-days of time rebuilding various configurations of the rest of the system).
        --
        sudo mod me up
    • (Score: 0) by Anonymous Coward on Wednesday August 09 2017, @05:45AM

      by Anonymous Coward on Wednesday August 09 2017, @05:45AM (#550959)

      That's silly: my dev workflow has no use for that! I use Visual Studio, so waiting for the UI to respond takes longer than the builds, so 4 cores is easily enough. I have 12 cores, so I can happily have 3 VS instances hung while doing a build in a 4th (and I won't run out of RAM, since they are 32-bit and hit an out-of-memory error before consuming a significant portion of my 32 GB of RAM). The PCIe SSD is nice, though.

      I'd be much better off with a highly clocked i5 than my current 12-core, or this massive 18-core. I need the (at most) two threads Visual Studio manages to use to finish as fast as possible.

      Now away from my work (at Microsoft, hence the Visual Studio), yes, those cores would be quite fun to play with.

    • (Score: 2) by FatPhil on Thursday August 10 2017, @06:24AM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Thursday August 10 2017, @06:24AM (#551490) Homepage
      > compute bound again ... won't need upgrading for 2-3 years

      ?!?!? If it's compute bound now - the CPU needs upgrading now.

      (However, I disagree with the premise - we're still mostly IO bound for non-synthetic workloads.)
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 3, Insightful) by Anonymous Coward on Tuesday August 08 2017, @03:52PM (9 children)

    by Anonymous Coward on Tuesday August 08 2017, @03:52PM (#550635)

    The 10-core exceeded expectations, so they went to 18 cores?

    No. The 10-core was all that was required to beat AMD. Why waste silicon when you don't need to? As soon as AMD does 16 cores, Intel must deal with the threat.

    If AMD does 120 cores, Intel will immediately respond with something like 125 cores.

    Without AMD, we'd be lucky to have anything better than a Pentium II now, never mind that it is 2017.

    • (Score: 2) by bob_super on Tuesday August 08 2017, @04:09PM (8 children)

      by bob_super (1357) on Tuesday August 08 2017, @04:09PM (#550643)

      And well over 90% of the market no longer cares whether they run Intel or AMD, Windows or Linux, PC or Mac, because their text/spreadsheet/browser workloads can be handled by any machine on the market.
      AMD should be as ubiquitous as ARM, but Intel is better at marketing.

      • (Score: 2) by mhajicek on Tuesday August 08 2017, @05:13PM (1 child)

        by mhajicek (51) on Tuesday August 08 2017, @05:13PM (#550672)

        Indeed. You'd think Microsoft would be fighting the cloud tooth and nail, since if everything is in the cloud all you need is a browser which could be running on anything.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 3, Informative) by rcamera on Tuesday August 08 2017, @05:49PM

          by rcamera (2360) on Tuesday August 08 2017, @05:49PM (#550680) Homepage Journal

          Microsoft is one of the big cloud providers; like top 2-3 globally. And it's one of their biggest businesses by revenue and income at this point. They're not fighting the cloud - they've embraced and monetized it.

          --
          /* no comment */
      • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @05:14PM (4 children)

        by Anonymous Coward on Tuesday August 08 2017, @05:14PM (#550673)

        Those consumers of which you speak aren't choosing CPUs or chipsets.

        • (Score: 2) by bob_super on Tuesday August 08 2017, @05:16PM

          by bob_super (1357) on Tuesday August 08 2017, @05:16PM (#550675)

          They're often choosing based on cost.

        • (Score: 2) by tibman on Tuesday August 08 2017, @05:59PM (2 children)

          by tibman (134) Subscriber Badge on Tuesday August 08 2017, @05:59PM (#550683)

          Have you forgotten the "Intel Inside" stickers?

          --
          SN won't survive on lurkers alone. Write comments.
          • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @06:17PM (1 child)

            by Anonymous Coward on Tuesday August 08 2017, @06:17PM (#550688)

            We're talking about consumers here.

      • (Score: -1, Troll) by fakefuck39 on Wednesday August 09 2017, @06:20AM

        by fakefuck39 (6620) on Wednesday August 09 2017, @06:20AM (#550973)

        Ah, the guy who has never had a job speaks. Those 18 core machines aren't meant for your desktop moron. They're made to run on the huge server farms that run the world where you are a really dumb consumer, and most of those servers not only don't have a browser installed - they don't have a GUI installed.

        Get a job retard, then comment on industry hardware.

  • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @05:31PM (8 children)

    by Anonymous Coward on Tuesday August 08 2017, @05:31PM (#550676)

    Why is there so little open source hardware? There are thousands of FOSS devs releasing their work. Why are there so few hardware engineers releasing their work under Free licenses? Are hardware engineers just a bunch of whores (no offense to actual prostitutes)?

    • (Score: 2, Informative) by Anonymous Coward on Tuesday August 08 2017, @05:36PM (2 children)

      by Anonymous Coward on Tuesday August 08 2017, @05:36PM (#550677)

      Because if you don't own a fab, there isn't much you can do with open source hardware. How much open source software would there be if a compiler cost $100 million and filled a large warehouse?

      • (Score: 2, Informative) by Anonymous Coward on Tuesday August 08 2017, @06:22PM (1 child)

        by Anonymous Coward on Tuesday August 08 2017, @06:22PM (#550691)

        Because manufacturing hardware is so expensive, firms have done everything that they can to protect their margins; the natural consequence is that hardware (even at the most fundamental levels) is trapped in a giant web of legal obligations (patents, NDAs, etc.). In such a litigious environment, it's not possible to cultivate a culture that respects the idea of free and open source hardware.

        • (Score: 2) by lentilla on Wednesday August 09 2017, @04:39AM

          by lentilla (1770) on Wednesday August 09 2017, @04:39AM (#550948) Journal

          In such a litigious environment, it's not possible to cultivate a culture that respects the idea of free and open source hardware.

          This might be a good time to reflect on the timely success of the free software movement. By the time average businesses started to think about monetising software (and by monetising, I mean locking it up so others can't have any), the free software movement was already well under way.

          Serious business investment in hardware started at the end of the Second World War; it wasn't until the 1970s that software became something considered in its own right. So perhaps that explains the timing.

          So I, for one, am certainly grateful that a number of prescient individuals took up the fight to allow the sharing of software. Today's landscape might have been considerably more hostile without their efforts. I don't like litigious environments.

    • (Score: 2) by LoRdTAW on Tuesday August 08 2017, @07:17PM

      by LoRdTAW (3755) on Tuesday August 08 2017, @07:17PM (#550719) Journal

      why is there so little open source hardware?

      As in silicon? Obviously, you can't build and test hardware the way you can software. Software needs only a compiler/interpreter, an editor, and maybe some libraries. Build, test, fix bugs, repeat until it works. Pretty simple.

      Hardware isn't as easy, because there are real-world physical design issues beyond the HDL and the logic. A design could work in simulation, but a real piece of silicon could flat-out fail or exhibit strange behavior due to impedance or other physical design issues. Lots of testing and careful design are needed, and those skills are not as common. Then there is the whole fabrication problem.

    • (Score: 1) by crafoo on Tuesday August 08 2017, @08:45PM (3 children)

      by crafoo (6639) on Tuesday August 08 2017, @08:45PM (#550754)

      Hardware and digital circuit design actually requires engineering skills, unlike software development.

      • (Score: 0) by Anonymous Coward on Tuesday August 08 2017, @08:51PM (2 children)

        by Anonymous Coward on Tuesday August 08 2017, @08:51PM (#550762)

        Even if that were true, it's utterly irrelevant to the question.

        • (Score: 2) by arcz on Wednesday August 09 2017, @03:34AM (1 child)

          by arcz (4501) on Wednesday August 09 2017, @03:34AM (#550923) Journal

          Hardware development is something that takes enough effort that you aren't going to be able to do it in your spare time with a small uncoordinated team. It's also not something that can be done for free.

          Prototyping hardware designs requires money.
          Prototyping software designs only requires time.

          People with spare time are often willing to give some for free software, but given the extreme expenses of hardware design and manufacturing, it's unlikely altruism will be a sufficient source of money.

          • (Score: 3, Informative) by TheRaven on Thursday August 10 2017, @08:23AM

            by TheRaven (270) on Thursday August 10 2017, @08:23AM (#551517) Journal
            Not really true. My day job involves an experimental MIPS-compatible processor and I'm also involved with various bits of RISC-V (you'll find my name in the acknowledgements section of the spec, and I'm chairing one of the extension working groups).

            Our research processor began as an R4K equivalent that was developed by a single PhD student. Not exactly a competitor for a modern system, but fast enough to boot FreeBSD and be used for experimentation. We're using some quite expensive FPGAs (though they're now 6-7 years old, so are probably much less expensive), but you can also simulate in software. The simulation isn't quite fast enough for interactive use, but it is fast enough for running our test suite (it's also useful in the test suite to be able to dump the register contents with a single magic nop at certain points).

            We use a high-level HDL called BlueSpec. It's very powerful, but unfortunately it's insanely expensive for non-academic users. It's sufficiently simple to learn that I was able to add an improved branch predictor to our processor with no prior hardware experience after attending a couple of hours of lectures and have subsequently made some other quite significant changes (added a bunch of instructions, changed a data representation and so on).

            In contrast, the most popular RISC-V implementations use a high-level HDL developed at Berkeley called CHISEL. CHISEL is a Scala DSL and is pretty easy to pick up. The most complex RISC-V implementation that I'm aware of is the Berkeley Out of Order Machine (BOOM), which is an out-of-order superscalar design that has most of the features of a modern chip. Again, most of the development is done using the simulator. This is true even for a lot of commercial microprocessors: you're 99% done by the time you fab the first prototype.

            The expensive part comes when you want to either use an FPGA (comparatively cheap - a few hundred to a few thousand dollars) or fab a chip. The economics there are quite odd. It's actually pretty cheap to get a little bit of space on the edge of someone else's wafer for very low volume runs, as long as you don't have any time constraints. If you're a new company or project then you can get some very good deals, because the fabs want to tie you into using their cell libraries so that when you ramp up production they can charge you more. That said, these costs are easily amortised if you have enough people that want the chip. This is what a few of my colleagues are hoping for with the lowRISC effort: they're producing an entirely open source SoC and will aim to ship a few million of them on a RPi-like board (most of them were also involved in the RPi effort).

            --
            sudo mod me up
  • (Score: 2) by tibman on Tuesday August 08 2017, @08:39PM (4 children)

    by tibman (134) Subscriber Badge on Tuesday August 08 2017, @08:39PM (#550751)

    The 10-core i9 was already consuming 50-100 watts more than the 8-core Ryzen. I haven't checked Threadripper's TDP (actual tested, not spec) and temperatures. But if both scale the same, then an 18-core i9 is going to be sooo hot. It will probably have to be water-cooled if you want to do anything with it.

    --
    SN won't survive on lurkers alone. Write comments.