On YouTube I watched a Mac user who had bought an iMac last year. It was maxed out with 40 GB of RAM and had cost him about $4000. He watched in disbelief as his hyper-expensive iMac was demolished by his new M1 Mac Mini, which he had paid a measly $700 for.
In real-world test after test, the M1 Macs are not merely inching past top-of-the-line Intel Macs, they are destroying them. In disbelief, people have started asking how on earth this is possible.
If you are one of those people, you have come to the right place. Here I plan to break down into digestible pieces exactly what it is that Apple has done with the M1.
Related:
What Does RISC and CISC Mean in 2020?
(Score: 3, Insightful) by ikanreed on Wednesday December 02 2020, @05:01PM (6 children)
TFA had an incredibly stupid benchmark:
Number of pointers allocated per second in an Apple-proprietary language with an Apple-proprietary compiler.
If there's a single fucking application on the planet that is malloc throttled I'll not just eat my hat, I'll eat an entire haberdashery.
(Score: 5, Funny) by Anonymous Coward on Wednesday December 02 2020, @05:08PM (1 child)
I stopped using malloc the day I learned about static global variables. None of my code is slowed down by wasteful checking of NULL pointers.
(Score: 1, Insightful) by Anonymous Coward on Wednesday December 02 2020, @09:48PM
Checking for NULL isn't a malloc problem. You can eliminate all NULL checks due to malloc failure simply by making a NULL return from malloc a panic condition. NULL checks aren't even the performance hit you think, unless you're foolishly failing to factor them out of tight loops.

The overhead of malloc/free is the record keeping for what's being used and what isn't. If you have a simple application that can allocate all its memory at startup, great. Otherwise you're just going to reinvent all the record keeping. Sometimes the usage pattern can be tuned to your application (memory pools), and that's faster; but simply allocating all your memory to some global variable isn't a silver bullet. If it were, there wouldn't be any discussion. Aside from that, you may still have to interact with libraries that use NULL as a sentinel.
(Score: 2) by istartedi on Wednesday December 02 2020, @05:53PM
Maybe not malloc "throttled", but the Jai programming language (currently under development and in private beta) makes custom allocators and memory pools a built-in part of the language. It's designed for games, where performance is often critical and the creation/destruction cycle of a bazillion little objects can apparently be a drag on performance if you just use standard allocators.
Sorry I couldn't pull up a reference on that. Like a lot of information about Jai, it's probably in an hour+ long YouTube video somewhere...
Anyway, memory pools are a common optimization tactic for this very reason.
(Score: 3, Interesting) by TheRaven on Wednesday December 02 2020, @06:14PM (2 children)
From the SPEC CPU suite, the xalanc benchmark is very malloc dependent, with a 30% end-to-end speedup being common for decent malloc implementations relative to the awful implementation in glibc.
sudo mod me up
(Score: 2) by JoeMerchant on Wednesday December 02 2020, @07:40PM (1 child)
Benchmark - not an app.
I created a very malloc-intensive application: a life simulation in which each of thousands of living creatures had hundreds of "genes", each individually malloc'ed at birth and freed at death, with millions of births per minute. I was actually shocked at how well the memory manager handled that torture.