ESA taps RISC-V for AI and HPC chip:
The Occamy processor, a chiplet-based design that packs 432 RISC-V cores plus AI accelerators and comes with 32GB of HBM2E memory, has taped out. The chip is backed by the European Space Agency and developed by engineers from ETH Zürich and the University of Bologna, reports HPC Wire.
The ESA-backed Occamy processor uses two chiplets, each with 216 32-bit RISC-V cores and an unknown number of 64-bit FPUs for matrix calculations, and carries two 16GB HBM2E memory packages from Micron. The chiplets are interconnected using a silicon interposer, and the dual-tile CPU can deliver 0.75 FP64 TFLOPS of performance and 6 FP8 TFLOPS of compute capability.
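Taking the article's figures at face value, a quick back-of-envelope calculation (the per-core number and the ratio are derived here, not quoted from the source) puts the FP64 throughput per core at under 2 GFLOPS:

```python
# Back-of-envelope check of the published Occamy figures.
# Inputs are the article's numbers; the derived values are our own arithmetic.
cores = 432          # total RISC-V cores across both chiplets
fp64_tflops = 0.75   # claimed FP64 throughput for the dual-tile package
fp8_tflops = 6.0     # claimed FP8 throughput

# Average FP64 throughput per core, in GFLOPS
fp64_gflops_per_core = fp64_tflops * 1e12 / cores / 1e9

# How much faster low-precision FP8 math runs than FP64
fp8_to_fp64_ratio = fp8_tflops / fp64_tflops

print(f"FP64 per core: ~{fp64_gflops_per_core:.2f} GFLOPS")  # ~1.74 GFLOPS
print(f"FP8 vs FP64:   {fp8_to_fp64_ratio:.0f}x")            # 8x
```

The 8x FP8-to-FP64 ratio is consistent with the cores trading precision for throughput, as is common in AI-oriented designs.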
Neither ESA nor its development partners have disclosed the Occamy CPU's power consumption, but it is said that the chip can be passively cooled, which suggests it is a low-power processor.
Each Occamy chiplet has 216 RISC-V cores and matrix FPUs, totaling around a billion transistors spread over 73 mm^2 of silicon. The tiles are made by GlobalFoundries using its 14LPP fabrication process.
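From the transistor count and die size above, the implied transistor density is easy to derive (this figure is our arithmetic, not a number from the article):

```python
# Transistor density implied by the article's per-chiplet figures.
transistors = 1e9      # "around a billion transistors" per chiplet
die_area_mm2 = 73.0    # per-chiplet die size in mm^2

# Density in millions of transistors per square millimeter (MTr/mm^2)
density_mtr_per_mm2 = transistors / die_area_mm2 / 1e6

print(f"~{density_mtr_per_mm2:.1f} MTr/mm^2")  # ~13.7 MTr/mm^2
```

Roughly 13.7 MTr/mm^2 is a plausible achieved density for a logic-heavy design on a 14nm-class process such as GlobalFoundries' 14LPP.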
The 73 mm^2 chiplet isn't a particularly large die. For example, Intel's Alder Lake (with six high-performance cores) has a die size of 163 mm^2. As far as performance is concerned, Nvidia's A30 GPU with 24GB of HBM2 memory delivers 5.2 FP64 TFLOPS (10.3 TFLOPS via FP64 Tensor Cores) as well as 330 INT8 TOPS (660 with sparsity).
Meanwhile, one of the advantages of chiplet designs is that ESA and its partners from ETH Zürich and the University of Bologna can add other chiplets to the package to accelerate certain workloads if needed.
The Occamy CPU is being developed as part of the EuPilot program, and it is one of many chips that the ESA is considering for spaceflight computing. However, there are no guarantees that the processor will indeed be used onboard spacecraft.
(Score: 2) by hendrikboom on Friday May 12, @12:47AM (1 child)
The 432 and the Itanium were completely different architectures. Can't say either is a repeat of the other.
432 was unwieldy because it was way too descriptor-heavy. Too much data had to be fetched before the processor could decide what an instruction meant.
Itanium was a reasonable kind of design, but blindsided by AMD's AMD64 processor, which could also execute the traditional PC instruction set, making it a better migration path for Windows users.
(Score: 3, Interesting) by turgid on Friday May 12, @06:30AM
Itanium was absurdly complex. That was the comparison I was making. Furthermore it relied on magic compilers to make the best use of that hardware. Linus wrote some interesting rants on the subject some years back.
I spoke to someone in the industry back when Itanium was being heavily hyped and other companies were cancelling their own RISC CPUs because they predicted that in the following decade they wouldn't be able to out-spend Intel. The Itanium idea had an interesting history, and it never really worked. Magic compilers that could predict the future were not forthcoming. They were always waiting for those compilers.
I went to trade shows and never saw the Itanium. It was hidden away and switched off. I knew someone who was a Debian developer who had one provided by Intel, and it ran very hot. He used it for drying his clothes.
The reason AMD64 did so well was partly because it could run x86 code at full speed, but also because the internal architecture was so good. It did in hardware much of what the Itanium delegated to the magic compiler that we're still waiting for.
Did you ever see any Itanium benchmarks? Did you ever see it benchmarked against an Opteron?
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].