
posted by cmn32480 on Monday January 30 2017, @01:16PM
from the less-risc-y dept.

OnChip and SiFive, two groups aiming to develop and release RISC-V platforms, have announced they will collaborate. From OnChip's crowdfunding campaign:

Ever since SiFive's HiFive1 campaign was launched just a week after we launched Open-V back in November, we've both been getting a lot of questions about how we might collaborate. It's taken a while, as these things do, but we finally have a concrete answer we think will benefit everyone, not least the RISC-V community. Here's how we're collaborating:
...
Open-V Will Use the SiFive E31 CPU Coreplex
...
All Open-V Peripherals Will Be Compatible with SiFive Chips
...
SiFive Will Donate Wafer Space in a May 2017 Tapeout
...
OnChip Will Contribute to the Free Chips Project

Sounds like good news for those hoping for RISC-V and open hardware designs to become tangible objects.
Note that the SiFive HiFive1 campaign was successful and has already shipped to some backers, while the OnChip Open-V campaign looks like it will not reach its goal.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Monday January 30 2017, @08:36PM (#460776)

    Is that too much to ask?

    I just want a damn RISC-V with provisions for either an SDRAM+ memory bus, or a frontside bus that can be glued to a dedicated memory controller, allowing the bootstrapping of real modern operating systems.
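
    To make "glued to a dedicated memory controller" a bit more concrete, here's a rough C sketch of what the firmware side of bringing up such a controller could look like; the register names, addresses, and magic values are all made up for illustration, not taken from any real RISC-V part.

        #include <stdint.h>

        /* Hypothetical memory-mapped SDRAM controller registers; a real part
         * would document its own base address and register layout. */
        #define SDRC_BASE          0x40001000u
        #define SDRC_TIMING        (*(volatile uint32_t *)(SDRC_BASE + 0x00))
        #define SDRC_CONFIG        (*(volatile uint32_t *)(SDRC_BASE + 0x04))
        #define SDRC_STATUS        (*(volatile uint32_t *)(SDRC_BASE + 0x08))
        #define SDRC_STATUS_READY  (1u << 0)

        /* Bring external DRAM online so a boot loader can copy a real OS
         * kernel into it. The values written here are illustrative only. */
        static void sdram_init(void)
        {
            SDRC_TIMING = 0x00030402u;   /* CAS latency, tRCD, tRP (invented) */
            SDRC_CONFIG = 0x00000001u;   /* enable controller, start init sequence */

            while (!(SDRC_STATUS & SDRC_STATUS_READY))
                ;                        /* spin until the init sequence completes */
        }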

    The next step from there is a southbridge/I/O bus link to provide access to a wide range of peripherals and configurable bus topologies.
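
    As a sketch of what firmware on top of such an I/O link might do, here's a hedged example of walking a hypothetical slot-based bus and probing each slot's ID register; the bus layout, addresses, and "empty slot" convention are assumptions, not any real interconnect's spec.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical southbridge/IO bus: each slot exposes a 4 KiB config
         * window whose first 32-bit word is a vendor/device ID, reading as
         * 0xFFFFFFFF when the slot is empty. */
        #define IOBUS_CFG_BASE   0x50000000u
        #define IOBUS_SLOT_SIZE  0x1000u
        #define IOBUS_NUM_SLOTS  8u

        static uint32_t iobus_read_id(unsigned slot)
        {
            volatile uint32_t *cfg =
                (volatile uint32_t *)(IOBUS_CFG_BASE + slot * IOBUS_SLOT_SIZE);
            return cfg[0];
        }

        /* Walk the slots and report what is present; real firmware would hand
         * the results to drivers or describe them in a device tree. */
        static void iobus_enumerate(void)
        {
            for (unsigned slot = 0; slot < IOBUS_NUM_SLOTS; slot++) {
                uint32_t id = iobus_read_id(slot);
                if (id != 0xFFFFFFFFu)
                    printf("slot %u: device id 0x%08x\n", slot, (unsigned)id);
            }
        }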

    Once we have these, we can start working on support for Option ROM/UEFI ROM compatible devices and figure out our next steps towards opening the whole desktop/notebook industry back up to power users and to freedom- and security-conscious ordinary users.
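
    For a taste of what "Option ROM compatible" means at the firmware level: legacy PCI expansion ROM images begin with a 0x55 0xAA signature and store their length in 512-byte units at offset 2. The sketch below just validates that header on a ROM image the firmware has already mapped somewhere; it's a minimal illustration, not a full option ROM loader.

        #include <stddef.h>
        #include <stdint.h>

        /* Legacy expansion ROM header: 0x55 0xAA signature, then the image
         * size in 512-byte blocks, then (by x86 convention) a jump to the
         * ROM's init entry point. */
        struct option_rom_header {
            uint8_t signature[2];   /* must be 0x55, 0xAA */
            uint8_t size_512;       /* image size in 512-byte blocks */
            uint8_t entry[3];       /* init entry point (x86 convention) */
        };

        /* Return the image size in bytes, or 0 if the signature is missing. */
        static size_t option_rom_size(const void *rom)
        {
            const struct option_rom_header *hdr = rom;

            if (hdr->signature[0] != 0x55 || hdr->signature[1] != 0xAA)
                return 0;
            return (size_t)hdr->size_512 * 512;
        }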

  • (Score: 2) by LoRdTAW (3755) on Monday January 30 2017, @11:33PM (#460856) Journal

    The MMU is out of scope for this offering, which is more akin to a low-power microcontroller. I'd love to see the same, but we are a long way off and would need a lot of talent to make such a system practical for day-to-day computing running Linux. Just imagine the complexity of a modern "desktop" oriented SoC covering most modern use cases (a hypothetical memory map for such a chip is sketched after the list):

    Embed 2-4 64-bit cores, 1-3 GHz, w/caches (64-bit because you want more than 4 GB, right? And just say no to PAE)
    Virtualization extensions
    MMU, IOMMU, DMA, interrupt controller, etc.
    Dual-channel DDR4 memory controller w/ECC (because why not)
    PCIe endpoints
    Hardware gigabit MACs
    QSPI SD interface
    SPI, I2C, UARTs
    HD Audio interface
    USB 2/3/C
    SATA, or enough PCIe lanes and endpoints for an M.2 interface or two
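
    To make that parts list a bit more tangible, here's one way firmware for such an SoC might describe its own peripherals; every name, address, size, and interrupt number below is invented purely for illustration.

        #include <stdint.h>

        /* A hypothetical flat memory map for the SoC sketched above; the real
         * addresses and IRQ assignments would come from the silicon design. */
        struct soc_peripheral {
            const char *name;
            uint64_t    base;
            uint64_t    size;
            int         irq;    /* -1 if the block does not raise interrupts */
        };

        static const struct soc_peripheral soc_map[] = {
            { "ddr4-controller", 0x0010000000ull, 0x10000,  10 },
            { "pcie-root",       0x0020000000ull, 0x100000, 11 },
            { "gige-mac0",       0x0030000000ull, 0x10000,  12 },
            { "qspi-sd",         0x0030010000ull, 0x1000,   13 },
            { "i2c0",            0x0030020000ull, 0x1000,   14 },
            { "uart0",           0x0030030000ull, 0x1000,   15 },
            { "hd-audio",        0x0030040000ull, 0x4000,   16 },
            { "usb3-host",       0x0030050000ull, 0x10000,  17 },
            { "sata0",           0x0030060000ull, 0x10000,  18 },
        };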

    Such an SoC would cover laptop, desktop, and even small-server use. You could then bootstrap it from an I2C or SPI flash using Libreboot, and from there it's off to booting Linux, BSD, whatever.
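
    The boot path that implies is short enough to sketch: copy a second-stage payload (e.g. a Libreboot image) out of memory-mapped SPI flash into DRAM and jump to it. The addresses and payload size below are placeholders, and a real boot ROM would also verify what it loaded.

        #include <stdint.h>
        #include <string.h>

        /* Hypothetical addresses: SPI flash mapped read-only at FLASH_BASE,
         * DRAM usable at DRAM_BASE once the memory controller is up. */
        #define FLASH_BASE   0x20000000u
        #define DRAM_BASE    0x80000000u
        #define STAGE2_SIZE  (64u * 1024u)

        typedef void (*stage2_entry_t)(void);

        /* Copy the next boot stage out of SPI flash and run it from DRAM. */
        static void boot_from_spi_flash(void)
        {
            memcpy((void *)(uintptr_t)DRAM_BASE,
                   (const void *)(uintptr_t)FLASH_BASE, STAGE2_SIZE);

            stage2_entry_t entry = (stage2_entry_t)(uintptr_t)DRAM_BASE;
            entry();    /* does not return */
        }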

    Another issue would be graphics, as we don't yet have a libre GPU for 3D. In the meantime, though, I'm sure it would be simple to build a 2D frame buffer and let the driver handle the 2D acceleration of things like scaling, font rendering, video, overlay, lines, curves, bitmaps, etc. (basically 2D primitives and bitmap manipulation). It would be a CPU hog, but it would simplify the hardware until we can develop a GPU. It could also benefit server use, where a GPU is unnecessary.
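
    A software-only 2D path really is that simple in principle: the "driver" is just loops over a linear framebuffer. Here's a minimal sketch of the most basic primitive, a rectangle fill, with the pixel format and structure names made up for the example (clipping and bounds checks omitted).

        #include <stddef.h>
        #include <stdint.h>

        /* A dumb linear framebuffer: 32-bit XRGB pixels, 'stride' pixels per
         * row; all 2D primitives would be implemented in software on top. */
        struct framebuffer {
            uint32_t *pixels;
            size_t    width, height, stride;
        };

        /* Fill an axis-aligned rectangle with a solid color. */
        static void fb_fill_rect(struct framebuffer *fb,
                                 size_t x, size_t y, size_t w, size_t h,
                                 uint32_t color)
        {
            for (size_t row = y; row < y + h; row++) {
                uint32_t *line = fb->pixels + row * fb->stride + x;
                for (size_t col = 0; col < w; col++)
                    line[col] = color;
            }
        }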

    That leads me to my next thought: it would be interesting if we could build a Larrabee [wikipedia.org]-like GPU using many small, fast RISC-V cores optimized for SIMD, with a fat crossbar and memory controller. That would give us a hybrid CPU-GPU system that can be programmed as one sees fit, so things like 3D, 2D, video encoding/decoding, SDR, ray tracing, scientific computing, and other GPGPU work could be done in easy-to-load/modify software modules. Fixing GPU bugs or expanding functionality would be as simple as downloading a new module build or hacking the code yourself. It might not be as blazing fast as today's GPUs, but it would be completely programmable. Bonus points if the compiler is the same for CPU and GPU with different optimization paths. Start small with 4-8 cores and work up from there.
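
    The programming model for that kind of many-core part would look less like a GPU driver and more like ordinary data-parallel C: each core walks its own slice of the pixels while a vector unit chews through the inner loop. The sketch below is only meant to show that shape; the function names and the toy "shader" are invented for the example.

        #include <stddef.h>
        #include <stdint.h>

        /* Each core shades an interleaved slice of the output; 'shade' plays
         * the role of a fragment shader compiled as plain C. */
        static void shade_span(uint32_t *pixels, size_t count,
                               unsigned core_id, unsigned num_cores,
                               uint32_t (*shade)(size_t index))
        {
            for (size_t i = core_id; i < count; i += num_cores)
                pixels[i] = shade(i);
        }

        /* Toy "shader": a grey horizontal gradient in XRGB. */
        static uint32_t gradient(size_t index)
        {
            uint8_t v = (uint8_t)(index & 0xFFu);
            return (uint32_t)v << 16 | (uint32_t)v << 8 | v;
        }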

    You would need a lot of engineering talent and tons of cash to get it working, something I don't think we will ever see unless some billion-dollar company, nation, or individual invests in such an endeavor. Though we could go as far as the OpenCores project did and build a much simpler SoC. But it would be a tough sell if it can't do simple things like play a YouTube video or smoothly render a webpage, something you need a 1+ GHz CPU for nowadays.