Cache or cores? Biscuit or cake?
It's about three years since I built my Ryzen system. It's a Ryzen 5 3600 (Zen 2, Socket AM4) with 32GB RAM.
Since dual core became a thing, I have been meaning to take over the world with cunning multi-threaded code, but about as far as I've got is some shell scripts that do things in parallel.
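For the curious, a minimal sketch of the kind of parallel shell scripting I mean (the job function and timings here are made up for illustration):

```shell
#!/bin/sh
# Run several independent jobs in parallel, then wait for them all to finish.
do_work() {
    sleep "$1"              # stand-in for a real job
    echo "job slept $1s"
}

for n in 1 1 1; do
    do_work "$n" &          # & pushes each job into the background
done

wait                        # block until every background job has exited
echo "all jobs done"
```

Not exactly taking over the world, but the three jobs really do run concurrently: the whole script finishes in about one second instead of three.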
I figured I should upgrade the machine while AM4 CPUs are still available. I noted that AMD had some CPUs out with this newfangled 3D cache, and that they were pretty fast on certain workloads.
So my decision was biscuit or cake? Cache or cores?
It's taken me a few weeks and much deliberation, but today I decided to go for the cake. I think it will be more fun to have more cores to play with. I have ordered a Ryzen 9 5900X (12 core/24 thread Zen 3) and a cooler with two great big fans and fancy quiet bearings to go with it.
I'll need to revisit my old tests from three years ago and see what sort of a difference all those extra cores make. Obviously, there will be more contention for memory bandwidth. If I get around to it, I might post the results together with the results for the old CPU.
Meantime, I have been writing a little bit of C, finally getting around to something I've been meaning to do for 15 years. One day I'll write something about procrastination. I have an anecdote.
(Score: 1, Informative) by Anonymous Coward on Friday May 26, @04:57AM (2 children)
GCJ never had full support for Classpath, among other huge gaps in the Java specification, and its optimizer was very limited. Ironically, towards the end of its life in 2017(!) you could actually get better performance out of interpreted Java than out of GCJ. With the poor performance, numerous footguns, and general lack of demand for short-lived processes written in Java, it just ended up withering on the vine.
(Score: 2) by turgid on Friday May 26, @09:20PM (1 child)
I thought ahead-of-time compiled Java would be good for command line utilities and such like, since you don't really want to wait for a whole VM infrastructure to start up and the JIT to kick in. I suppose there just wasn't the demand for it.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 1, Insightful) by Anonymous Coward on Saturday May 27, @01:07AM
There was demand, but the problem was speed. GCJ always had trouble getting its speed where it should have been due to manpower issues, and the JVM and its bytecode interpreter didn't stand still in terms of speed improvements either. I think the real nail in the coffin from the JVM interpreter's specialization efforts was the inclusion of type-specific instructions. GCJ is slower from the start than G++ (for example) because G++ has better optimization than GCJ. However, GCJ and G++ share the same problem of representing certain operations in their various intermediate representations. Because of this, Java's interpreter is faster than compiled G++ code for certain operations, and those operations will usually be compiled in Java to bytecode with type-specific instructions. Add together three considerations: 1. GCJ is already slower than the equivalent G++. 2. Even G++ can be slower than the interpreter in the JVM (and the interpreted VMs of other languages) in the right circumstances. 3. It doesn't take long in wall time (and even less on multicore systems) before the JIT is able to work its magic. Frankly, while a good idea for its targeted usage, without more commercial support GCJ just ended up neglected as the JVM left it further behind.