SoylentNews is people

posted by martyb on Thursday October 05 2017, @12:12PM   Printer-friendly
from the please-let-the-vapors-condense dept.

From the lowRISC project's blog:

A high quality, upstream RISC-V backend for LLVM is perhaps the most frequently requested missing piece of the RISC-V software ecosystem… As always, you can track status here and find the code here.

RV32

100% of the GCC torture suite passes for RV32I at -O0, -O1, -O2, -O3, and -Os (after masking gcc-only tests). MC-layer (assembler) support for RV32IMAFD has now been implemented, as well as code generation for RV32IM.

RV64

This is the biggest change versus my last update. LLVM recently gained support for parameterising backends by register size, which allows code duplication to be massively reduced for architectures like RISC-V. As planned, I've gone ahead and implemented RV64I MC-layer and code generation support making use of this feature. I'm happy to report that 100% of the GCC torture suite passes for RV64I at -O1, -O2, -O3, and -Os (and there's a single compilation failure at -O0). I'm very grateful for Krzysztof Parzyszek's (QUIC) work on variable-sized register classes, which has made it possible to parameterise the backend on XLEN in this way. That LLVM feature was actually motivated by requirements of the Hexagon architecture - I think this is a great example of how we can all benefit by contributing upstream to projects, even across different ISAs.

[...] Community members Luís Marques and David Craven have been experimenting with D and Rust support respectively.

[...] Approach and philosophy

As enthusiastic supporters of RISC-V, I think we all want to see a huge range of RISC-V core implementations, making different trade-offs or targeting different classes of applications. But we don't want to see that variety in the RISC-V ecosystem result in dozens of different vendor-specific compiler toolchains and a fractured software ecosystem. Unfortunately, most work on LLVM for RISC-V has been invested in private/proprietary code bases or short-term prototypes. The work described in this post has been performed out in the open from the start, with a strong focus on code quality, testing, and on moving development upstream as quickly as possible - i.e. a solution for the long term.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by Anonymous Coward on Thursday October 05 2017, @03:40PM (6 children)

    by Anonymous Coward on Thursday October 05 2017, @03:40PM (#577476)

    128-bit addressing would also be useful for IPv6, since IPv6 already uses 128-bit address fields, and a 128-bit RISC-V chip might make a good alternative for routers and other devices that need to work with such values in as few cycles as possible.
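    The single-register claim is easy to check in software. A minimal Python sketch of the idea (the address and prefix below are illustrative documentation values, not anything from the comment):

    ```python
    import ipaddress

    # An IPv6 address is exactly 128 bits, so it would occupy a single
    # register on a 128-bit machine.
    addr = int(ipaddress.IPv6Address("2001:db8::1"))
    assert addr.bit_length() <= 128

    # A routing-style prefix check then reduces to a mask-and-compare on
    # that one value -- on 128-bit hardware, a single AND and a compare.
    net = ipaddress.IPv6Network("2001:db8::/48")
    prefix = int(net.network_address)
    mask = int(net.netmask)
    in_network = (addr & mask) == prefix
    ```

    On RV64 the same check would need the address split across two registers, with the mask and compare done in two halves.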

  • (Score: 2) by Azuma Hazuki on Thursday October 05 2017, @07:22PM

    by Azuma Hazuki (5086) on Thursday October 05 2017, @07:22PM (#577585) Journal

    I was just about to ask what the use of a 128-bit ISA would be, and then saw this. Thanks :) That is absolutely brilliant, as it means a single IPv6 address can fit in, if I understand this right, one register of said CPU.

    --
    I am "that girl" your mother warned you about...
  • (Score: 0) by Anonymous Coward on Friday October 06 2017, @07:00AM (3 children)

    by Anonymous Coward on Friday October 06 2017, @07:00AM (#577851)

    Wouldn't it be better to use vector instructions for this?

    • (Score: 2) by maxwell demon on Friday October 06 2017, @08:10AM (2 children)

      by maxwell demon (1608) on Friday October 06 2017, @08:10AM (#577879) Journal

      Is there a fundamental reason why it should not be possible to interpret one and the same register as a single 128 bit value for normal instructions, and as eight 16-bit values for vector instructions? Note that for some instructions (like bitwise operations) the difference is nonexistent, and for others (addition/subtraction) the only difference is that a few carry/borrow lines need to be disabled. And I guess even for multiplication, a lot of the circuitry could be shared between normal and vector instructions.

      --
      The Tao of math: The numbers you can count are not the real numbers.
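      The carry-line intuition in the comment above can be modelled in software with the SWAR ("SIMD within a register") trick. This Python sketch is purely an illustration of the idea, not anything from the RISC-V backend:

      ```python
      LANES, WIDTH = 8, 16
      MASK128 = (1 << 128) - 1
      H = sum(1 << (i * WIDTH + WIDTH - 1) for i in range(LANES))  # MSB of each lane
      NH = MASK128 ^ H                                             # all other bits

      def add128(x, y):
          """One 128-bit add: carries ripple across the whole register."""
          return (x + y) & MASK128

      def vadd16x8(x, y):
          """Eight independent 16-bit adds in the same 128-bit register.
          The low 15 bits of each lane are added normally; the lane MSBs
          are folded back in with XOR, so no carry ever crosses a lane
          boundary -- the 'disabled carry lines' from the comment."""
          low = (x & NH) + (y & NH)
          return (low ^ ((x ^ y) & H)) & MASK128

      # Bitwise operations need no lane awareness at all: x & y, x | y and
      # x ^ y give the same bits whether the register holds one 128-bit
      # value or eight 16-bit lanes.
      ```

      For example, `vadd16x8(0xFFFF, 1)` wraps lane 0 to zero without touching lane 1, while `add128(0xFFFF, 1)` carries into bit 16.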
      • (Score: 3, Informative) by TheRaven on Friday October 06 2017, @11:49AM (1 child)

        by TheRaven (270) on Friday October 06 2017, @11:49AM (#577951) Journal

        You might want to look at Sun's MAJC architecture, which had a single register file that could be used for different types (integer, floating point, vector) depending on the instructions. More practically, architectural registers are a fiction on modern CPUs. The hardware has a lot more physical registers than it has architectural registers. These are mapped to physical registers on demand by the register rename unit (which is one of the most complex parts of a modern CPU). Typically, you have different banks of fixed-size registers, because it complicates rename logic to split them and you have redundant data in wires (i.e. heat) if you use larger rename registers than required, but there's nothing stopping you from using the same 128-bit rename registers for vectors and pointers (except that 128 bits is pretty small for a vector register these days).

        To the other part of your post, sharing ALUs between 16-bit vectors and 128-bit integers: there's a huge difference in circuitry between eight 16-bit adders and one 128-bit adder. If you don't have a carry in, then your entire structure is different. You could build a 128-bit adder and put an extra AND gate on seven of the carry lines, connected to an is-this-a-128-bit-integer control signal (with the register becoming an 8-element 16-bit vector when the bit is not set), but it would be a staggeringly inefficient vector adder. You'd also be optimising for the wrong thing. Since the end of Dennard scaling (about a decade ago), transistors are cheap; transistors that you are actually using are expensive. Having two separate pipelines, one for adding integers and one for adding vectors, and only using one at a time, is only marginally more expensive than having just one, and a combined unit that does both, less efficiently than either, is more expensive than having both.

        --
        sudo mod me up
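        The gated-carry construction described above can be modelled directly. This bit-serial Python sketch exists only to illustrate the control signal; no real adder is built as a 128-bit ripple chain:

        ```python
        LANES, WIDTH = 8, 16

        def mode_adder(x, y, wide):
            """Ripple-carry add over 128 bits. At each 16-bit lane boundary
            the carry passes through an AND gate driven by the `wide`
            control signal: wide=1 gives one 128-bit add, wide=0 gives
            eight independent 16-bit adds."""
            out, carry = 0, 0
            for i in range(LANES * WIDTH):
                a, b = (x >> i) & 1, (y >> i) & 1
                out |= (a ^ b ^ carry) << i
                carry = (a & b) | (a & carry) | (b & carry)
                if (i + 1) % WIDTH == 0:
                    carry &= wide  # the gated carry line at the lane boundary
            return out
        ```

        As the comment argues, gating seven carry lines is cheap; the real inefficiency is that a vector unit would never want a 128-bit ripple path in the first place.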
        • (Score: 2) by maxwell demon on Friday October 06 2017, @03:35PM

          by maxwell demon (1608) on Friday October 06 2017, @03:35PM (#578067) Journal

          Thanks; learned something new today.

          --
          The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by TheRaven on Friday October 06 2017, @11:08AM

    by TheRaven (270) on Friday October 06 2017, @11:08AM (#577936) Journal
    Several things are wrong with this argument:
    • Unless you want to memory-map the Internet, your address size doesn't really matter for this kind of computation.
    • Routing decisions don't care about the size of the address; they care about the size of the network part of the address, which in IPv6 is usually 48 or 64 bits (the low 64 bits are the interface identifier, not part of the routing prefix).
    • Routers that care about performance have large TCAMs in hardware and don't do most of the address mapping in software anyway.
    --
    sudo mod me up