The Fine print: The following are owned by whoever posted them. We are not responsible for them in any way.

Journal by turgid

Cache or cores? Biscuit or cake?

It's about three years since I built my Ryzen system. It's a Ryzen 5 3600 (Zen 2, Socket AM4) with 32GB RAM.

Since dual core became a thing I have been meaning to take over the world with cunning multi-threaded code but about as far as I've got is some shell scripts that do things in parallel.
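As an illustration of the sort of thing those parallel shell scripts do (this sketch is mine, not the journal author's; the class name and numbers are made up), here is a minimal Java example that fans independent chunks of work out across cores with an `ExecutorService`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sum 1..1_000_000 by splitting the range into 8 equal chunks,
// each summed on its own worker thread.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        final int n = 1_000_000;
        final int chunks = 8; // 8 divides n evenly, so no remainder handling
        ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        List<Future<Long>> parts = new ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            final int lo = c * (n / chunks) + 1;
            final int hi = (c + 1) * (n / chunks);
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int k = lo; k <= hi; k++) s += k;
                return s;
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // blocks until each chunk is done
        pool.shutdown();

        System.out.println(total); // 500000500000 = 1_000_000 * 1_000_001 / 2
    }
}
```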

I figured I should upgrade the machine while AM4 CPUs are still available. I noted that AMD had some CPUs out with this newfangled 3D cache, and that they were pretty fast on certain workloads.

So my decision was biscuit or cake? Cache or cores?

It's taken me a few weeks, and much deliberation but today I decided to go for the cake. I think it will be more fun to have more cores to play with. I have ordered a Ryzen 9 5900X (12 core/24 thread Zen 3) and a cooler with two great big fans and fancy quiet bearings to go with it.

I'll need to revisit my old tests from three years ago and see what sort of a difference all those extra cores make. Obviously, there will be more contention for memory bandwidth. If I get around to it, I might post the results together with the results for the old CPU.

Meantime, I have been writing a little bit of C, finally getting around to something I've been meaning to do for 15 years. One day I'll write something about procrastination. I have an anecdote.

  • (Score: 2) by DannyB (5839) Subscriber Badge on Thursday May 25, @05:34PM (#1308149) Journal (6 children)

    The drawback, of course, is that Java programs start up slowly and then, after a couple of minutes, seem to "warm up" and run fast. And they can run stably for a very long time.

    So don't use Java for, say, a replacement for the 'ls' command.

    Do use Java for large, complex programs where you want good performance and very long uptime without interruption.
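A toy way to see that warm-up (the method and iteration counts below are illustrative, and the actual timings will vary by JVM and machine): time the same small method before and after a burst of calls that pushes it past HotSpot's compilation threshold.

```java
public class WarmUp {
    // A small hot method the JIT will compile once it has been called enough times.
    static long work(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i * 31L % 7;
        return s;
    }

    // Time one call to work(), in microseconds.
    static long time() {
        long t0 = System.nanoTime();
        work(10_000);
        return (System.nanoTime() - t0) / 1_000;
    }

    public static void main(String[] args) {
        long first = time();                           // likely still interpreted
        for (int i = 0; i < 20_000; i++) work(10_000); // cross the JIT's invocation threshold
        long later = time();                           // likely JIT-compiled now
        System.out.printf("first=%dus later=%dus%n", first, later);
    }
}
```

Running with `-XX:+PrintCompilation` will show when HotSpot actually compiles `work`.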

    --
    Young people won't believe you if you say you used to get Netflix by US Postal Mail.
  • (Score: 2) by turgid (4318) Subscriber Badge on Thursday May 25, @07:32PM (#1308185) Journal (5 children)

    Some years ago I played with gcj, the GNU Java compiler (then part of gcc). It could compile both Java source and Java bytecode down to native machine code, so you could take pre-compiled Java class files and convert them to ELF binaries. It was a fun toy. Unfortunately, at the time I didn't know enough Java to really try it out.

    The project died quite some years ago. I seem to remember the Java class format changed to add inner classes or something, and gcj was not updated.

    I believe one of the advantages of a JIT is that it is able to do very specialised optimisations dynamically depending on the CPU state at a given time. Obviously, an ahead-of-time compiler can't have that information. CPUs try to do some of that in hardware, but there are limits of course.

    • (Score: 2) by DannyB (5839) Subscriber Badge on Thursday May 25, @08:37PM (#1308204) Journal (1 child)

      The JIT is able to do other optimizations that an ahead-of-time compiler cannot. The JIT has the ENTIRE program, unlike an ahead-of-time compiler. (But with ahead-of-time compilers, the LINKER does have the "entire" program, so in theory...)

      The JIT can take "liberties" with calling conventions. After all, it has the entire program. The JIT aggressively inlines code. This costs more memory but improves speed. You can always buy more memory, but you can't buy back time.

      If the JIT can prove that this method call could only ever call this one certain other method, then it can completely skip doing a vtable lookup in the machine code.
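A sketch of the kind of call site that benefits (the class names are invented for illustration): if `Square` is the only `Shape` implementation the JVM has ever loaded, HotSpot can devirtualize, and typically inline, the `s.area()` call; if a second implementation is loaded later, it deoptimizes and recompiles.

```java
interface Shape { double area(); }

final class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Devirt {
    static double total(Shape[] shapes) {
        double t = 0;
        // Monomorphic call site: only Square ever reaches it, so the JIT
        // can skip the vtable lookup and call (or inline) Square.area() directly.
        for (Shape s : shapes) t += s.area();
        return t;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[1000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Square(2.0);
        System.out.println(total(shapes)); // 4000.0
    }
}
```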

      The JVM JIT is the product of more than two decades of intensive research. IBM had first opened this up as open source for researchers a long time ago.

      • (Score: 1, Insightful) by Anonymous Coward on Saturday May 27, @05:47AM (#1308432)

        That is one reason why research interest in PGO (profile-guided optimization), IPO (interprocedural optimization), and related techniques is finally being seen outside of JITs. (The other is faster computers being able to do more work in the same amount of time.) The benefits can be quite dramatic, so it isn't surprising that people get excited when they land in a language implementation they use.

        As an aside, it never ceases to amuse me when someone describes something on the cutting edge of programming language development and looks surprised at my reply: "You should check out how the JVM (usually, but sometimes it is OTP or .NET) does that." It is probably because I've been on the other side of that conversation more times than I can count, or really want to admit.

    • (Score: 1, Informative) by Anonymous Coward on Friday May 26, @04:57AM (#1308258) (2 children)

      GCJ never had full support for the standard class library, among other huge gaps in the Java specification, and its optimizer was very limited. Ironically, towards the end of its life in 2017(!) you could actually get better performance out of interpreted Java than out of GCJ. With the poor performance, numerous footguns, and general lack of demand for short-lived processes written in Java, it just ended up withering on the vine.

      • (Score: 2) by turgid (4318) Subscriber Badge on Friday May 26, @09:20PM (#1308379) Journal (1 child)

        I thought ahead-of-time compiled Java would be good for command-line utilities and the like, since you don't really want to wait for a whole VM infrastructure to start up and the JIT to kick in. I suppose there just wasn't the demand for it.

        • (Score: 1, Insightful) by Anonymous Coward on Saturday May 27, @01:07AM (#1308407)

          There was demand, but the problem was speed. GCJ always had trouble getting its speed where it should be due to manpower issues, and the JVM and its bytecode interpreter didn't stand still in terms of speed improvements either. I think the real nail in the coffin from the JVM interpreter's specialization efforts was the inclusion of type-specific instructions. GCJ starts out slower than G++ (for example) because G++ has better optimization than GCJ. However, GCJ and G++ share the same problem of trying to represent certain operations in their various intermediate representations. Because of this, the Java interpreter is faster than compiled G++ for certain operations, and those operations are usually compiled to bytecode in Java with type-specific instructions. Add together three considerations:
          1. GCJ is already slower than the equivalent G++.
          2. Even G++ can be slower than the interpreter in the JVM (and the VMs of other interpreted languages) in the right circumstances.
          3. It doesn't take long in wall time (and even less on multicore systems) before the JIT is able to work its magic.
          Frankly, while GCJ was a good idea for its targeted usage, without more commercial support it just ended up neglected as the JVM left it further behind.