posted by Fnord666 on Tuesday January 23 2018, @10:16PM
from the open-to-the-possibility dept.

Is it time for open processors? Jonathan Corbet over at lwn.net seems to think so. He lists several ongoing initiatives such as OpenPOWER, OpenSPARC, and OpenRISC, but feels that most of the momentum is in the RISC-V architecture right now.

Given the complexity of modern CPUs and the fierceness of the market in which they are sold, it might be surprising to think that they could be developed in an open manner. But there are serious initiatives working in this area; the idea of an open CPU design is not pure fantasy.

[...] Much of the momentum these days, instead, appears to be associated with the RISC-V architecture. This project is primarily focused on the instruction-set architecture (ISA), rather than on specific implementations, but free hardware designs do exist. Western Digital recently announced that it will be using RISC-V processors in its storage products, a decision that could lead to the shipment of RISC-V by the billion. There is a development kit available for those who would like to play with this processor and a number of designs for cores are available.

Unlike OpenRISC, RISC-V is intended to be applicable to a wide range of use cases. The simple RISC architecture should be relatively easy to make fast, it is hoped. Meanwhile, for low-end applications, there is a compressed instruction-stream format intended to reduce both memory and energy needs. The ISA is designed with the ability for specific implementations to add extensions, making experimentation easier and facilitating the addition of hardware acceleration techniques.

[...] RISC-V seems to have quite a bit of commercial support behind it — the RISC-V Foundation has a long list of members. It seems likely that this architecture will continue to progress for some time.


Original Submission

 
  • (Score: 2) by DannyB (5839) Subscriber Badge on Wednesday January 24 2018, @03:07PM (#627156) Journal

    I could use the same argument to say: just write all your apps in any of the languages that compile to the JVM (Java Virtual Machine). You don't have to use Java source code or the Java compiler; there are other language compilers that generate JVM bytecode.

    Then your JVM bytecode is compiled on the fly (and, soon, also ahead of time) into native code.
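    (If you've never looked at bytecode: you can disassemble any compiled class with javap, which ships with the JDK. The class and method names below are just for illustration.)

        // Add.java
        public class Add {
            static int add(int a, int b) {
                return a + b;
            }
        }

        // Compile, then disassemble:
        //   javac Add.java
        //   javap -c Add
        //
        // javap prints roughly this for add():
        //   static int add(int, int);
        //     Code:
        //       0: iload_0    // push the first int argument
        //       1: iload_1    // push the second int argument
        //       2: iadd       // add the two operands on the stack
        //       3: ireturn    // return the int on top of the stack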

    The JVM runtime is industrial strength. It dynamically profiles all code and compiles the hottest code to native code using two compilers. First comes C1, which quickly generates unoptimized native code. Then your function is scheduled to be compiled soon by C2, which spends more time producing highly optimized native code. The C2 compiler aggressively inlines.
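    You can watch this happen with the standard -XX:+PrintCompilation flag. A rough sketch (HelloHot is a made-up name):

        // HelloHot.java -- run with: java -XX:+PrintCompilation HelloHot
        // The log shows methods compiled first by the quick tiers (C1) and
        // then, once they stay hot, at the top tier (C2).
        public class HelloHot {
            static long sum(int n) {
                long total = 0;
                for (int i = 0; i < n; i++) {
                    total += i;          // hot loop, so a compilation candidate
                }
                return total;
            }

            public static void main(String[] args) {
                long acc = 0;
                for (int i = 0; i < 100_000; i++) {
                    acc += sum(10_000);  // enough calls to trigger C1, then C2
                }
                System.out.println(acc);
            }
        }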

    Here is an example of the sophistication. Suppose your function F1 calls my function F2. When C2 compiles F1, it may inline code from F2. Now suppose my function F2 gets dynamically reloaded with a newer version: F1 now contains stale inlined code, so its native code is discarded and it is instantly switched back to the slower bytecode interpreter. If F1 is still hot, it will very soon be recompiled by C1, and shortly after by C2.
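    You can see that speculation, and the fallback, with the diagnostic flags -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining. A sketch (class names are made up); the reload case works the same way, this one just uses a second implementation appearing at a call site:

        // Deopt.java -- run with:
        //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining Deopt
        interface Shape { double area(); }

        class Square implements Shape {
            public double area() { return 4.0; }
        }

        class Circle implements Shape {
            public double area() { return 3.14; }
        }

        public class Deopt {
            static double total(Shape s, int n) {
                double t = 0;
                for (int i = 0; i < n; i++) {
                    t += s.area();       // while only Square has been seen, C2 can inline this call
                }
                return t;
            }

            public static void main(String[] args) {
                Shape sq = new Square();
                for (int i = 0; i < 50_000; i++) {
                    total(sq, 1_000);    // warm up with a single receiver type
                }
                Shape c = new Circle();  // a second receiver type shows up, so the
                                         // speculative inlining is invalidated and
                                         // total() gets deoptimized and recompiled
                System.out.println(total(c, 1_000));
            }
        }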

    One disadvantage of the JVM is its slow start-up time. That is because everything is optimized for long-running processes. Because of the C1/C2 process I described, programs seem to "warm up" and then become very fast. So while other garbage-collected languages (e.g., Python) may start up quickly, they don't have the kind of industrial-strength runtime that the JVM does.
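    A crude way to see the warm-up (a real benchmark would use something like JMH, but this shows the idea):

        // Warmup.java -- early rounds run interpreted/C1 code,
        // later rounds run C2-compiled code and are noticeably faster.
        public class Warmup {
            static double work() {
                double x = 0;
                for (int i = 1; i < 1_000_000; i++) {
                    x += Math.sqrt(i);
                }
                return x;
            }

            public static void main(String[] args) {
                for (int round = 0; round < 10; round++) {
                    long start = System.nanoTime();
                    work();
                    long micros = (System.nanoTime() - start) / 1_000;
                    System.out.println("round " + round + ": " + micros + " us");
                }
            }
        }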

    The JVM can manage heaps of dozens of gigabytes with pause times in the milliseconds. The JVM offers a choice of garbage-collector algorithms, and each GC is tunable with plenty of knobs. One commercial JVM vendor (Azul Systems, with its Zing product) touts heaps of hundreds of gigabytes with 10 ms GC pause times. Several research efforts, including one by Red Hat, are working toward a new GC that can support terabytes of memory with GC pause times in the low milliseconds.
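    Choosing and tuning a collector is just command-line flags; for example (heap sizes made up, MyApp is a placeholder):

        // Use G1 with a 32 GB heap, a 50 ms pause-time target, and GC logging
        // (the -Xlog:gc form is JDK 9+; older JDKs use -verbose:gc):
        //   java -XX:+UseG1GC -Xms32g -Xmx32g -XX:MaxGCPauseMillis=50 -Xlog:gc MyApp
        //
        // Or pick a different collector entirely, e.g. the parallel throughput GC:
        //   java -XX:+UseParallelGC -Xmx32g MyApp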

    The JVM supports source-level debugging.
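    (Attaching a debugger to a running JVM uses the standard JDWP agent, something like this, with MyApp as a placeholder:

        java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 MyApp

    then point your IDE's remote debugger at port 5005.)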

    If you can live with the warts of the JVM, and there are some, it is an excellent runtime platform.

    (Another wart, currently being addressed with modularity, is that the JVM runtime has a disk footprint of a hundred megabytes even for a hello-world program. Then again, typical programs that run on the JVM are large themselves.)
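    (The modularity work means you can already build a trimmed runtime with jlink on JDK 9. A rough sketch, assuming a plain HelloWorld class that only needs the java.base module:

        jlink --add-modules java.base --output hello-runtime
        hello-runtime/bin/java -cp . HelloWorld

    The resulting image is considerably smaller than a full JDK install.)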

    Coming soon: compilers shipped as part of OpenJDK that generate AOT (ahead-of-time) native code.
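    (JDK 9 already ships an experimental version of this, jaotc from JEP 295, restricted to Linux x86-64 for now. Usage is roughly:

        jaotc --output libHelloWorld.so HelloWorld.class
        java -XX:AOTLibrary=./libHelloWorld.so HelloWorld

    where HelloWorld is whatever class you want precompiled.)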

    --
    People today are educated enough to repeat what they are taught but not to question what they are taught.