
posted by Fnord666 on Wednesday February 07 2018, @06:13PM   Printer-friendly
from the do-they-have-barbeque-flavor? dept.

Submitted via IRC for TheMightyBuzzard

Ampere, a new chip company run by former Intel president Renee James, came out of stealth today with a brand-new, highly efficient Arm-based server chip targeted at hyperscale data centers.

The company's first chip is a custom-core Armv8-A 64-bit server processor running at up to 3.3GHz, with support for 1TB of memory, in a 125-watt power envelope. Although James was not ready to share pricing, she promised that the chip would offer price/performance exceeding any high-performance computing chip out there.

The company has a couple of other products in the works as well, which it will unveil in the future.

Source: TechCrunch


Original Submission

 
  • (Score: 2) by frojack (1554) on Thursday February 08 2018, @09:20PM (#635210)

    It might be easier to stop speculative execution by simply not building it into the processor in the first place.

    I'd like to see what percentage of typical job time is saved by speculative execution.

    If it were all that great, why not build that functionality into the compilers, spend an extra two minutes on optimization at compile time, and avoid the risk?

    If it's not significant, just figure out how much faster the clock speed needs to be to make up for it.

    --
    No, you are mistaken. I've always had this sig.
  • (Score: 2) by TheRaven (270) on Monday February 12 2018, @12:35PM (#636661)

    I'd like to see what percentage of typical job time is saved by speculative execution.

    On a modern Intel chip, you have up to around 180 instructions in flight at a time, and the typical heuristic is a branch every 7 instructions on average. Every instruction issued after a branch is speculative until the instruction that produces the branch condition reaches writeback. With a branch every 7 instructions, only the handful of instructions ahead of the oldest unresolved branch are known-good, so roughly 173 of the 180 in-flight instructions, around 96%, are speculatively executed.
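    As a quick back-of-the-envelope in C (the window size and branch frequency are just the round numbers above, not measurements):

```c
/* Rough estimate of the speculative fraction of in-flight
 * instructions. The 180-entry window and branch-every-7-instructions
 * heuristic are the assumed round numbers from the text. */
#include <stdio.h>

int main(void) {
    const double window = 180.0;    /* instructions in flight      */
    const double per_branch = 7.0;  /* avg instructions per branch */

    /* Only instructions older than the oldest unresolved branch are
     * known-good; everything younger is speculative. */
    double speculative = (window - per_branch) / window;
    printf("speculative fraction: %.1f%%\n", speculative * 100.0); /* ~96.1% */
    return 0;
}
```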

    On simpler pipelines, the number is a lot lower. A simple 7-stage in-order pipeline is only speculatively executing around 50% of its instructions. So, if you disable speculative execution entirely, you'll take a 50% performance hit on simple (read: slow) pipelines, or around a 96% hit on high-end pipelines, in the worst case. It isn't quite that bad in the average case, because (as these vulnerabilities showed) speculative execution isn't perfect, so you see no difference between not speculating at all and the cases where you would have speculated incorrectly. I'd expect only around a 30% performance hit on a simple in-order core, and around an 80% hit on a high-end Intel core.

    That said, we only do speculative execution because most code is written in languages like C that don't provide enough high-level parallelism to keep a CPU busy. If you were to design a CPU to run a language with an abstract machine like Erlang's, then you could get away without speculative execution by running instructions from another thread instead.
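    As a toy sketch of that idea (the thread count and stall latency are invented for illustration), a barrel-processor-style core never guesses a branch outcome; it just issues the next instruction from whichever thread is ready:

```c
/* Toy barrel-processor model: each cycle, issue one instruction from
 * some thread that isn't stalled, instead of speculating past a
 * stall. NTHREADS and the 2-cycle branch latency are invented for
 * illustration. */
#include <stdio.h>

#define NTHREADS 4

int main(void) {
    int pc[NTHREADS] = {0};     /* per-thread program counters       */
    int stall[NTHREADS] = {0};  /* cycles until each thread is ready */

    for (int cycle = 0; cycle < 16; cycle++) {
        for (int t = 0; t < NTHREADS; t++)  /* pending branches resolve */
            if (stall[t] > 0)
                stall[t]--;

        int issued = 0;
        for (int i = 0; i < NTHREADS; i++) {
            int t = (cycle + i) % NTHREADS; /* rotate the starting thread */
            if (stall[t] == 0) {
                printf("cycle %2d: thread %d issues insn %d\n",
                       cycle, t, pc[t]);
                pc[t]++;
                if (pc[t] % 3 == 0)  /* pretend every 3rd insn is a branch */
                    stall[t] = 2;    /* that takes 2 cycles to resolve     */
                issued = 1;
                break;
            }
        }
        if (!issued)
            printf("cycle %2d: bubble (all threads stalled)\n", cycle);
    }
    return 0;
}
```

    With enough ready threads there is almost always something to issue, so the pipeline stays busy without any speculation at all.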

    If it were all that great, why not build that functionality into the compilers, spend an extra two minutes on optimization at compile time, and avoid the risk?

    If the compiler could statically determine branch targets, then it wouldn't bother inserting branches. You can take the classical GPU approach of executing both branches and then discarding the results you don't want, but then performance drops by 50% for each conditional branch.
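    A concrete (made-up) illustration of that compute-both-and-discard approach in C: a branchless max that evaluates both "arms" unconditionally and uses a mask to keep one, so there is nothing for the CPU to predict.

```c
/* Branchless select: both sides are computed every time and a mask
 * discards one, mirroring the GPU predication approach described
 * above. This example is illustrative, not from the comment. */
#include <stdint.h>
#include <stdio.h>

/* Branchy version: the CPU has to predict the comparison. */
static int32_t max_branchy(int32_t a, int32_t b) {
    if (a > b)
        return a;
    return b;
}

/* Predicated version: mask is all-ones when a > b, all-zeros
 * otherwise, so exactly one operand survives the select. */
static int32_t max_predicated(int32_t a, int32_t b) {
    int32_t mask = -(int32_t)(a > b);
    return (a & mask) | (b & ~mask);
}

int main(void) {
    printf("%d %d\n", max_branchy(3, 7), max_predicated(3, 7)); /* 7 7 */
    printf("%d %d\n", max_branchy(9, 2), max_predicated(9, 2)); /* 9 9 */
    return 0;
}
```

    The cost is exactly the one described above: both arms are paid for on every call, whether their results are kept or not.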

    If it's not significant, just figure out how much faster the clock speed needs to be to make up for it.

    Faster than you can build, and a lot faster than you can cool. Dynamic power scales roughly with the cube of clock speed: P ≈ CV²f, and supply voltage has to rise roughly in step with frequency, which is why clock speeds have been stuck at a few GHz. For a modern Intel chip to reach the same performance without speculative execution, you'd need to run at around 10-20GHz, which no one has come close to being able to build (at least, not in anything that didn't run briefly with liquid nitrogen poured on it before burning out).
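    To put rough (illustrative, not measured) numbers on the cooling problem, here's the cubic scaling applied to an assumed 100-watt part at 4GHz:

```c
/* Back-of-the-envelope power scaling: dynamic power is roughly
 * P = C * V^2 * f, and V has to rise roughly with f, so P grows
 * with about the cube of the clock. Baseline numbers are assumed
 * for illustration only. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double f0 = 4.0;   /* baseline clock in GHz (assumed)   */
    const double p0 = 100.0; /* baseline power in watts (assumed) */

    for (double f = 4.0; f <= 20.0; f += 4.0) {
        double p = p0 * pow(f / f0, 3.0); /* cubic scaling */
        printf("%5.1f GHz -> ~%6.0f W\n", f, p);
    }
    return 0;
}
```

    At 20GHz the same design would be dissipating on the order of 12kW, which is why such speeds only ever happen briefly under liquid nitrogen.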

    --
    sudo mod me up