
posted by hubie on Friday June 03 2022, @04:37PM   Printer-friendly
from the riscy-business dept.

From Tom's Hardware:

Intel and the Barcelona Supercomputing Centre (BSC) said they would invest €400 million (around $426 million) in a laboratory that will develop RISC-V-based processors that could be used to build zettascale supercomputers. However, the lab will not focus solely on CPUs for next-generation supercomputers but also on processor uses for artificial intelligence applications and autonomous vehicles.

The research laboratory will presumably be set up in Barcelona, Spain, and will receive €400 million from Intel and the Spanish Government over 10 years. The fundamental purpose of the joint research laboratory is to develop chips based on the open-source RISC-V instruction set architecture (ISA) that could be used for a wide range of applications, including AI accelerators, autonomous vehicles, and high-performance computing.

The creation of the joint laboratory does not automatically mean that Intel will use RISC-V-based CPUs developed in the lab for its first-generation zettascale supercomputing platform but rather indicates that the company is willing to make additional investments in RISC-V. After all, last year, Intel tried to buy SiFive, a leading developer of RISC-V CPUs and is among the top sponsors of RISC-V International, a non-profit organization supporting the ISA.

[....] throughout its history, Intel invested hundreds of millions in non-x86 architectures (including RISC-based i960/i860 designs in the 1980s, Arm in the 2000s, and VLIW-based IA64/Itanium in the 1990s and the 2000s). Eventually, those architectures were dropped, but technologies developed for them found their way into x86 offerings.

I would observe that a simple, well-designed instruction set could require less silicon. That could mean more cores per chip on the same fabrication technology, or more speculative-execution and branch-prediction hardware using up some of that silicon. I would mention compiler back ends, but that is a subject best not discussed in public.


Original Submission

Related Stories

Getting to Zettascale Without Needing Multiple Nuclear Power Plants 7 comments

Getting To Zettascale Without Needing Multiple Nuclear Power Plants:

There's no resting on your laurels in the HPC world, no time to sit back and bask in a hard-won accomplishment that was years in the making. The ticker tape has only now been swept up in the wake of the long-awaited celebration last year of finally reaching the exascale computing level, with the Frontier supercomputer housed at the Oak Ridge National Labs breaking that barrier.

With that in the rear-view mirror, attention is turning to the next challenge: Zettascale computing, some 1,000 times faster than what Frontier is running. In the heady months after his heralded 2021 return to Intel as CEO, Pat Gelsinger made headlines by saying the giant chip maker was looking at 2027 to reach zettascale.

Lisa Su, the chief executive officer who has led the remarkable turnaround at Intel's chief rival AMD, took the stage at ISSCC 2023 to talk about zettascale computing, laying out a much more conservative – some would say reasonable – timeline.

Looking at supercomputer performance trends over the past two-plus decades and the ongoing innovation in computing – think advanced package technologies, CPUs and GPUs, chiplet architectures, the pace of AI adoption, among others – Su calculated that the industry could reach zettascale within the next 10 years or so.

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: -1, Offtopic) by Anonymous Coward on Friday June 03 2022, @05:47PM (1 child)

    by Anonymous Coward on Friday June 03 2022, @05:47PM (#1250302)

    Greetings, Starfighter. You have been recruited by the Star League to defend the frontier against Xur and the Ko-Dan armada.

    • (Score: 1, Offtopic) by DannyB on Friday June 03 2022, @07:21PM

      by DannyB (5839) Subscriber Badge on Friday June 03 2022, @07:21PM (#1250322) Journal

      It takes more than a scepter to rule, Xur. Even on Rylos.

      --
      People today are educated enough to repeat what they are taught but not to question what they are taught.

  • (Score: 3, Interesting) by Snotnose on Friday June 03 2022, @08:13PM (13 children)

    by Snotnose (1623) on Friday June 03 2022, @08:13PM (#1250336)

    One would think Intel has already both A) looked into the technologies RISC-V uses and said "yeah, howzabout no, been there done that"; and B) looked at RISC-V and internally decided "yeah, we can do better".

    The only thing I can think of is Intel can't physically make the chips their engineers design unless they can get TSMC or somesuch to make them. Which would really suck for Intel.

    --
Why shouldn't we judge a book by its cover? It's got the author, title, and a summary of what the book's about.
    • (Score: 4, Insightful) by bzipitidoo on Friday June 03 2022, @11:42PM (5 children)

      by bzipitidoo (4388) on Friday June 03 2022, @11:42PM (#1250382) Journal

      It's a fact that the x86 architecture is bad. It's never been good at anything, just adequate. All the additions over the years have made it better, to the point that it's not a turd any more, it's a highly polished turd.

Another problem is that there have never been any subtractions. x86 has accumulated a lot of cruft that would best be dropped.

      • (Score: 2) by DannyB on Saturday June 04 2022, @04:50PM (3 children)

        by DannyB (5839) Subscriber Badge on Saturday June 04 2022, @04:50PM (#1250511) Journal

        It has decades of accumulated cruft.

        The only real value that Intel's legacy architecture has is that it runs all the legacy software.

        It does have state of the art chips, but the technologies behind that could be applied to other ISAs not hobbled with decades of excess baggage.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
        • (Score: 3, Insightful) by bzipitidoo on Saturday June 04 2022, @10:59PM (2 children)

          by bzipitidoo (4388) on Saturday June 04 2022, @10:59PM (#1250555) Journal

          And that real value shouldn't be a real value. Porting to a different CPU architecture is, as ports go, pretty trivial nowadays. Even stuff such as the SSE4 instructions aren't a big deal, not with libraries such as Vulkan. The onus is on the hardware vendor to make sure their hardware can run Vulkan, OpenGL, and anything else widely used. Compiler backends are very good at targeting most any arch desired. Where more esoteric hardware is involved, stuff such as the old SoundBlaster audio card, we have excellent emulators.

          Whenever there's a new RISC-V hardware platform to target, building a free UNIX for it will likely be done within a day.

          • (Score: 3, Insightful) by DannyB on Sunday June 05 2022, @03:11AM (1 child)

            by DannyB (5839) Subscriber Badge on Sunday June 05 2022, @03:11AM (#1250609) Journal

A lot of the legacy software on Windows is deeply wedded to the Intel instruction set in various ways. All of the Windows stuff that is easy to port from Intel to ARM has been ported already. The legacy stuff that hasn't been ported to ARM probably never will be. A lot of the legacy stuff may now be unmaintained.

            Dementia is preventable. Don't code in Intel's instruction set. At least insulate yourself by using a compiler.

            --
            People today are educated enough to repeat what they are taught but not to question what they are taught.
            • (Score: 3, Insightful) by bzipitidoo on Monday June 06 2022, @05:55AM

              by bzipitidoo (4388) on Monday June 06 2022, @05:55AM (#1250866) Journal

If you had said MS-DOS, including Windows 3.1, and meant the hardware of the late 1980s rather than just the x86 instruction set, you'd be correct. You could even say that of Windows 95 and Windows 98, since they are still fundamentally DOS under the hood. But the x86 instruction set alone is not a big barrier, and 16-bit vs. 32-bit vs. 64-bit is not much of one either.

It's the techniques of the times that are the biggest barrier. In particular, there was all kinds of 3D graphics done in software, with terrible hacks and limitations that are neither necessary nor desirable today. The first Doom games are examples of this. The worst you're going to find in plain x86 code is stuff such as software implementations of floating-point math, for those x86 machines that didn't have a math coprocessor, and awkward workarounds for missing functionality, such as the lack, in the 386 and earlier, of an atomic instruction for testing and setting a flag, which is needed to easily implement multitasking.

              In principle, it's always possible to reimplement functionality from scratch. Any software that's valuable enough will be reimplemented, if there is no other easier or more practical way. Usually an emulator is available.

      • (Score: 2) by Freeman on Monday June 06 2022, @02:34PM

        by Freeman (732) on Monday June 06 2022, @02:34PM (#1250969) Journal

The reason x86, x86-64, and Windows specifically have thrived is that they were more open to use. Apple enjoyed some early adoption but quickly fell behind Microsoft, because Apple wanted to control everything while Microsoft let you do whatever. That's led to serious security nightmares and troubles, but Windows is the dominant desktop operating system. The more difficult and expensive you make something, the less adoption you will get.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 1, Insightful) by Anonymous Coward on Friday June 03 2022, @11:59PM (1 child)

      by Anonymous Coward on Friday June 03 2022, @11:59PM (#1250389)

      On the contrary, Intel are doubling down on fabs in response to geopolitical tensions over Taiwanese sovereignty.

      For many use cases, one doesn't need the bleeding edge process that powers your cell phone.

      In creating excess fab capacity for their own products, they are positioning themselves to manufacture designs for other parties. In this case, investing in RISC-V explores use cases that don't require decades of x86 compatibility.

      Adapt or perish. In this case they're ahead of the game in that while the dominant OSes from Apple, Microsoft and Google have embraced ARM64, none have made the leap to RISC-V - at least in anything publicly released.

      • (Score: 2) by DannyB on Saturday June 04 2022, @04:56PM

        by DannyB (5839) Subscriber Badge on Saturday June 04 2022, @04:56PM (#1250512) Journal

        dominant OSes from Apple, Microsoft and Google have embraced ARM64, none have made the leap to RISC-V - at least in anything publicly released.

        There is still a lack of high end RISC V chips. I mean powerful ones that compete with Intel's most powerful.

        I have no doubt it will come. It just takes time to develop. Others are having to duplicate work Intel has done. Even when those chips arrive they will probably be proprietary. The advantage of RISC V is that anybody can get into the game without having to license the instruction set. While you can license ARM, it is expensive. Good luck trying to get a license to make chips for Intel's legacy instruction set.

        Linux will probably lead the way on this. Other OSes will follow. Now that both Apple and Microsoft have already played the game of porting their OS across ISAs, it will be far easier to do it again for RISC V.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
    • (Score: 3, Insightful) by maxwell demon on Saturday June 04 2022, @04:39AM (4 children)

      by maxwell demon (1608) on Saturday June 04 2022, @04:39AM (#1250433) Journal

      One would think Intel has already both A) looked into the technologies RISC-V uses and said "yeah, howzabout no, been there done that"; and B) looked at RISC-V and internally decided "yeah, we can do better".

      Intel already tried to do better than existing RISC chips. The result was called Itanium. Didn't work out too well.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by DannyB on Saturday June 04 2022, @04:57PM

        by DannyB (5839) Subscriber Badge on Saturday June 04 2022, @04:57PM (#1250513) Journal

Itanic had an ultra-wide instruction word, as I seem to recall: multiple instructions per word.

        Not what most people think of as RISC.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
      • (Score: 2) by stormwyrm on Saturday June 04 2022, @07:11PM (2 children)

        by stormwyrm (717) on Saturday June 04 2022, @07:11PM (#1250527) Journal

They also had the i860, and from what I remember it was a pretty sweet architecture. Too bad it never really made any serious mainstream inroads.
        --
        Numquam ponenda est pluralitas sine necessitate.
        • (Score: 2) by turgid on Sunday June 05 2022, @08:48AM (1 child)

          by turgid (4318) Subscriber Badge on Sunday June 05 2022, @08:48AM (#1250651) Journal

          I believe the problem with the i860 [cpushack.com] was that it was too slow at context switching (it took too long to store and restore all the registers) so it wasn't much good as a general purpose CPU. It was very good for graphics, though, and had hardware support for some 3D stuff. It was on some graphics cards. There was an aborted attempt to port Windows NT to the i860.

          The i960 was also RISC but a very different architecture. It was used for embedded stuff. Several laser printers had i960s in them.

Those were exciting days when all these new designs were coming out. There was quite a competitive market with RISC CPUs such as SPARC, MIPS, i860, M88k, POWER, PowerPC and all sorts of others. The mainstream was using the 386 and Motorola 680x0 series, which were much slower. When the DEC Alpha came out, it blew everything else clear out of the water.

          • (Score: 0) by Anonymous Coward on Friday June 10 2022, @05:35AM

            by Anonymous Coward on Friday June 10 2022, @05:35AM (#1252090)

The i860 was actually the architectural starting point for Windows NT, before NT was ported to other architectures as planned, the design having made no assumptions based on any one of them. You can find a reference to this in one of those Microsoft blog posts (Raymond Chen, or somebody else?) about an interview with one of the early NT developers and the cobbled-together i860 system that didn't even have graphics when work started (it was text-only over a serial console!). The i860 work was finished once they got all the major kernel components done and the HAL stuff finalized, at which point the focus shifted to the intended architectures (PPC and x86 if I remember correctly) before eventually gaining Alpha and MIPS, then contracting back to only x86 before eventually picking up ARM.

  • (Score: 0) by Anonymous Coward on Friday June 03 2022, @10:27PM (2 children)

    by Anonymous Coward on Friday June 03 2022, @10:27PM (#1250362)

    > I would mention compiler back ends, but that is a subject best not discussed in public.

    ooh, la la!

    • (Score: 3, Interesting) by DannyB on Saturday June 04 2022, @05:03PM (1 child)

      by DannyB (5839) Subscriber Badge on Saturday June 04 2022, @05:03PM (#1250516) Journal

      LLVM gets me excited.

      A standard portable compiler back end.

I believe that LLVM's abstraction and typed values make it a much easier compiler target for newly written compilers. LLVM provides the optimizations when translating its abstract instruction set down to a real target instruction set.

On modern hardware and OSes, it is likely possible to run LLVM in a separate process and pipe in LLVM's binary (or source text) instruction format from the primary compiler -- which itself might be multiple phases or passes running as separate piped processes. A compiler that exploits having multiple cores available.

      --
      People today are educated enough to repeat what they are taught but not to question what they are taught.
      • (Score: 1, Funny) by Anonymous Coward on Saturday June 04 2022, @09:37PM

        by Anonymous Coward on Saturday June 04 2022, @09:37PM (#1250545)

        All the technology in the world won't improve on Punch The Monkey. Burn those cycles UP, brah.
