
posted by Fnord666 on Monday September 25 2017, @07:26PM
from the Apple-impact dept.

The company that failed to acquire Lattice Semiconductor will acquire Imagination Technologies instead:

https://www.bloomberg.com/news/articles/2017-09-22/imagination-technologies-agrees-to-takeover-by-canyon-bridge

Imagination Technologies Group Plc agreed to be acquired by China-backed private equity firm Canyon Bridge Capital Partners.

Canyon Bridge said it will pay 182 pence a share in cash, or more than 500 million pounds ($675 million), for the U.K. designer of graphics chips. That's 42 percent more than Imagination's closing share price on Friday.

As part of the deal, Imagination will sell its U.S.-based embedded processor unit MIPS to Tallwood MIPS, a company indirectly owned by California-based investment firm Tallwood Venture Capital, Canyon Bridge said.

Canyon Bridge was keen to structure a bid to avoid scrutiny from U.S. regulators, Bloomberg reported earlier this month.

Earlier in September President Donald Trump rejected a takeover by Canyon Bridge of U.S. chipmaker Lattice Semiconductor Corp., just the fourth time in a quarter century that a U.S. president has ordered a foreign sale of an American firm stopped for security reasons.

Also at The Verge, AnandTech, and Financial Times.

Original Submission

 
  • (Score: 0) by Anonymous Coward on Tuesday September 26 2017, @01:24PM (#573086) (3 children)

    Yes, it's amazing in this day and age how many business people still don't understand Imaginary Property. It's pretty clear that unless you're one of the major x86 players or ARM, you're going to have a very hard time competing unless you have something very special to offer.

    The other thing is that Free/Open Source hasn't caught on with many of the hardware people yet in the same way it has with software. I'm a software enthusiast and know very little about hardware (wish I had the time and the brains) but if I did, I'd play with the stuff on opencores.org.

    I have a feeling that ARM is going to slowly be overtaken by RISC-V in the way that traditional Unix was by Linux and friends.

  • (Score: 2) by TheRaven (270) on Tuesday September 26 2017, @01:57PM (#573104) (2 children)

    I have a feeling that ARM is going to slowly be overtaken by RISC-V in the way that traditional Unix was by Linux and friends.

    This might happen, but RISC-V has a delicate balance to strike. The thing that killed MIPS (before ImagTec bought it) was ecosystem fragmentation. ARM has been very careful to reduce the number of incompatible versions: you now basically can't make an incompatible ARM core, and even with an architecture license you have to differentiate via the other cores on your SoC and via different pipeline designs. In contrast, MIPS let anyone add specialised instructions in the coprocessor space (ARM did early on, but stopped quite a while ago). This meant that every MIPS vendor would fork GCC to add support for their custom instructions, and no one upstreamed their forks because they broke everyone else's. Everyone hacked up Linux or FreeBSD to handle their extra register sets and other processor state, often not upstreaming their changes for similar reasons. Running software built for vendor X's MIPS on vendor Y's MIPS involved a significant porting effort.

    RISC-V risks a similar fate: everyone is implementing the base specification plus a subset of the standard extensions, and soon they're going to start adding non-standard extensions too. This is great for some of the vendors who want something very custom, but it's bad for the ecosystem as a whole. It's also difficult to police in an open source model, because the strength of open source is precisely the lack of a central authority preventing people from doing what they want with the project.
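
    To make the fragmentation point concrete, here's a rough sketch of what application code starts to look like once every core has its own extension set. The vendor macro and intrinsic below are made up for illustration, not real toolchain names; the point is the shape of the code, not the spelling.

        #include <stdint.h>

        /* Hypothetical population-count helper.  The vendor macro and
         * intrinsic are placeholders invented for this sketch; the
         * portable branches are what actually compiles today. */
        static inline unsigned popcount64(uint64_t x)
        {
        #if defined(__riscv_vendor_popc)              /* made-up vendor macro */
            return __vendor_cpop(x);                  /* made-up intrinsic    */
        #elif defined(__GNUC__)
            return (unsigned)__builtin_popcountll(x); /* portable builtin     */
        #else
            unsigned n = 0;
            while (x) { x &= x - 1; n++; }            /* plain C fallback     */
            return n;
        #endif
        }

    Multiply that by every vendor's pet extension and you get exactly the GCC-fork mess that MIPS ended up with.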

    --
    sudo mod me up
    • (Score: 0) by Anonymous Coward on Tuesday September 26 2017, @02:14PM (#573112) (1 child)

      Would it be possible to implement the base specification plus have some run-time configurable (microcode, FPGA...?) parts to be able to emulate any extensions as required? What about the Transmeta approach? But then you might as well choose a more common instruction set to translate.

      • (Score: 2) by TheRaven (270) on Wednesday September 27 2017, @10:03AM (#573719)

        Would it be possible to implement the base specification plus have some run-time configurable (microcode, FPGA...?) parts to be able to emulate any extensions as required?

        Obviously it's possible (per the Church-Turing thesis), but if compilers are emitting an instruction because it's fast, and on your implementation it goes from taking a single cycle to taking 200, that's not so great. FPGAs don't use the same fabrication techniques as ASICs and don't run at the same clock speeds, so an FPGA-hosted extension would be much slower than a hard-wired one. You can always trap-and-emulate instructions, though this gets messy if you have two shared libraries that each want different extensions. The hardest things to emulate are the A extension (emulating atomicity is insanely hard) and things like lowRISC's tagged memory extensions, which add extra semantics to every load and store instruction.
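
        To give a flavour of what trap-and-emulate involves, here's a minimal decode sketch. The custom opcode, the register-array layout and the instruction's semantics are all invented for illustration: the illegal-instruction handler pulls the faulting word apart, patches the saved register state, and bumps the PC. The OS-specific plumbing that hands you the saved registers is left out.

            #include <stdint.h>
            #include <stdbool.h>

            /* Illustrative only: emulate one invented R-type instruction in
             * the RISC-V custom-0 opcode space (major opcode 0x0b).  Assumes
             * the trap handler has already captured the faulting PC and the
             * 32 integer registers; how you obtain those is OS-specific. */
            static bool emulate_insn(uint32_t insn, uint64_t regs[32], uint64_t *pc)
            {
                uint32_t opcode = insn & 0x7fu;
                uint32_t rd     = (insn >> 7)  & 0x1fu;
                uint32_t funct3 = (insn >> 12) & 0x07u;
                uint32_t rs1    = (insn >> 15) & 0x1fu;
                uint32_t rs2    = (insn >> 20) & 0x1fu;

                if (opcode != 0x0bu || funct3 != 0)
                    return false;             /* not ours: deliver the fault as usual */

                /* Invented semantics: rd = rs1 + rs2 -- a one-cycle instruction
                 * becomes hundreds of cycles of trap, decode and return. */
                if (rd != 0)                  /* x0 is hard-wired to zero */
                    regs[rd] = regs[rs1] + regs[rs2];

                *pc += 4;                     /* step over the emulated word */
                return true;
            }

        The A extension and tagged memory are exactly where this scheme breaks down: you can't recover atomicity or per-word metadata semantics just by patching registers and bumping the PC.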

        --
        sudo mod me up