
posted by Fnord666 on Wednesday December 02 2020, @02:28PM
from the all-the-chips dept.

A Medium article:

On YouTube I watched a Mac user who had bought an iMac last year. It was maxed out with 40 GB of RAM, costing him about $4000. He watched in disbelief as his hyper-expensive iMac was being demolished by his new M1 Mac Mini, which he had paid a measly $700 for.

In real-world test after test, the M1 Macs are not merely inching past top-of-the-line Intel Macs, they are destroying them. In disbelief, people have started asking how on earth this is possible.

If you are one of those people, you have come to the right place. Here I plan to break down, into digestible pieces, exactly what it is that Apple has done with the M1.

Related:
What Does RISC and CISC Mean in 2020?


Original Submission

Related Stories

The Future of Computing? 46 comments

After giving a gentle introduction to how computers work at the hardware level, this article gives an interesting thought on the future of computing and how RISC-V fits into it.

By now it is pretty clear that Apple's M1 chip is a big deal. And the implications for the rest of the industry are gradually becoming clearer. In this story I want to talk about a connection to RISC-V microprocessors which may not be obvious to most readers.

Let me give you some background first: Why Is Apple's M1 Chip So Fast?

In that story I talked about two factors driving M1 performance. One was the use of a massive number of decoders and Out-of-Order Execution (OoOE). Don't worry if that sounds like technological gobbledegook to you.

This story will be all about the other part: heterogeneous computing. Apple has aggressively pursued a strategy of adding specialized hardware units, which I will refer to as coprocessors throughout this article.

Related:
Why is Apple's M1 Chip So Fast?


Original Submission

Apple Announces New M1 Pro and M1 Max SoCs for MacBook Pro 9 comments

Apple has announced two new Arm SoCs for its upcoming MacBook Pro laptops. Both share the same CPU, but differ in GPU and RAM size.

The Apple M1 SoC for Macs has 8 CPU cores: 4 performance cores and 4 efficiency cores. The newly announced M1 Pro and M1 Max have 10 cores: 8 performance cores and 2 efficiency cores. CPU performance (multi-threaded) is about 70% faster, at around a 30 Watt TDP (M1 Pro) instead of 15 Watts for the M1. The 16-core "neural engine" with 11 TOPS of machine learning performance is unchanged from the M1.

While the M1 has an (up to) 8-core GPU with 2.6 TFLOPS of FP32 performance, the M1 Pro doubles that to 16 cores and 5.2 TFLOPS, and the M1 Max doubles it again to 32 cores and 10.4 TFLOPS. The M1 Pro is comparable to an Nvidia RTX 3050 Ti discrete laptop GPU, while the M1 Max is comparable to an RTX 3080 laptop GPU. These levels of performance are achieved at around 30 Watts for the M1 Pro and 60 Watts for the M1 Max, compared to around 100-160 Watts for laptops with discrete graphics.

The M1 Pro has around 33.7 billion transistors fabbed on TSMC "5nm" in a 245 mm2 die space, while the M1 Max has 57 billion transistors at 432 mm2. The M1 Pro will include up to 32 GB of LPDDR5 RAM, and the M1 Max will include up to 64 GB.

Also at Wccftech.

See also: Apple Announces The M1 Pro / M1 Max, Asahi Linux Starts Eyeing Their Bring-Up

Previously: Apple Has Built its Own Mac Graphics Processors
Apple Claims that its M1 SoC for ARM-Based Macs Uses the World's Fastest CPU Core
Your New Apple Computer Isn't Yours
Why is Apple's M1 Chip So Fast?
ARM-Based Mac Pro Could Have 32+ Cores
Booting Linux and Sideloading Apps on M1 Macs


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: -1, Redundant) by Anonymous Coward on Wednesday December 02 2020, @02:50PM (1 child)

    by Anonymous Coward on Wednesday December 02 2020, @02:50PM (#1083220)

    Because they run Terry Davis' OS. Duh!

    • (Score: 0, Funny) by Anonymous Coward on Wednesday December 02 2020, @03:55PM

      by Anonymous Coward on Wednesday December 02 2020, @03:55PM (#1083272)

      Why is Apple’s M1 Chip So Fast?

      ...because it's not slow!... Duh!

  • (Score: -1, Troll) by Anonymous Coward on Wednesday December 02 2020, @03:19PM

    by Anonymous Coward on Wednesday December 02 2020, @03:19PM (#1083252)

    all you need is a 1" hole and that M$ cock squirms in. They're everywhere, even The Linux Foundation. If that isn't heresy I don't know what is.

    Maybe they just want to gobble all of that tiny M$ cock.

  • (Score: 4, Informative) by Anonymous Coward on Wednesday December 02 2020, @03:21PM (2 children)

    by Anonymous Coward on Wednesday December 02 2020, @03:21PM (#1083254)

    It's basically just a guy who really likes to hear himself talk. It's just a "War and Peace" length ramble about how CPUs work. Much of it isn't even right.

    • (Score: 5, Touché) by Anonymous Coward on Wednesday December 02 2020, @03:59PM

      by Anonymous Coward on Wednesday December 02 2020, @03:59PM (#1083275)

      It's Medium, so tell me something I didn't know. ;)

    • (Score: 2) by driverless on Friday December 04 2020, @08:02AM

      by driverless (4770) on Friday December 04 2020, @08:02AM (#1083968)

      So it's not even "a medium article", it's actually "a pretty poor article".

  • (Score: 3, Insightful) by SomeGuy on Wednesday December 02 2020, @03:32PM (40 children)

    by SomeGuy (5632) on Wednesday December 02 2020, @03:32PM (#1083261)

    costing him about $4000. He watched in disbelief as his hyper-expensive iMac was being demolished by his new M1 Mac Mini, which he had paid a measly $700 for.

    I can believe it. He paid too fucking much for a Crapple.

    Nice technical article about the hardware, but are we sure Apple isn't just crippling their x86 software now to make it look bad?

    Also, sit back and watch in disbelief as all existing software becomes a second class citizen or refuses to run... AGAIN! (And consumertards love this: they get to throw everything away and buy all new stuff again!)

    If RISC is so fucking great, then the 64-bit DEC Alpha should have taken off like wildfire in the 1990s.

    Remember, Apple DID have the PowerPC chips, but ditched them for x86. Very nice CPUs, but MacOS X was clunky even on the fastest PPC.

    • (Score: 0) by Anonymous Coward on Wednesday December 02 2020, @03:36PM (18 children)

      by Anonymous Coward on Wednesday December 02 2020, @03:36PM (#1083263)

      x86 CPUs have been RISC internally since the mid-1990s, with Intel last.

      • (Score: 2) by JoeMerchant on Wednesday December 02 2020, @04:04PM (17 children)

        by JoeMerchant (3937) on Wednesday December 02 2020, @04:04PM (#1083276)

        What is RISC? If the instructions in the pipeline are CISC, can the processor be called RISC if the pipeline instructions get decomposed to RISC?

        If RISC is really so great, shouldn't GCC be outputting RISC instructions?

        At the lowest level, all (commercial) processors evaluate 0s and 1s to do what they do, can't get much more reduced than that.

        --
        🌻🌻 [google.com]
        • (Score: 5, Informative) by Anonymous Coward on Wednesday December 02 2020, @05:04PM (1 child)

          by Anonymous Coward on Wednesday December 02 2020, @05:04PM (#1083296)

          RISC really means a load-store architecture with simple instructions that execute in a single clock cycle, with operands and even data in the same machine word. For example, 32-bit SPARC has to use 2 instructions to read a 32-bit value from memory. Where RISC wins is that there's more room on the chip for more registers, cache, branch prediction, speculation, additional pipelines and so on. And because of the simplicity they can run at higher clock speeds.
          Modern x86 CPUs since Cyrix brought out the 6x86 are internally RISC with a CISC decoder layer on top. This is also a win, because some CISC opcodes are very small (1 byte), saving RAM and bus bandwidth, but also because the decoder knows the state of the whole CPU at execution time and can make on-the-fly optimizations. Note that the Intel Itanic couldn't do this in hardware...
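
          To make the load-store point concrete, here is a tiny C function with the rough shape of the compiled output in the comments. The assembly is schematic and the register choices are assumptions; actual compiler output depends on flags and surrounding code.

          /* rmw.c: how one read-modify-write statement maps onto a CISC
             instruction set versus a load-store (RISC) one. Illustrative only. */
          void bump(int *counter, int delta)
          {
              /* x86-64 can fold the memory access into one instruction:
               *     add [rdi], esi
               * A load-store ISA such as AArch64 needs separate load, add, store:
               *     ldr w8, [x0]
               *     add w8, w8, w1
               *     str w8, [x0]
               */
              *counter += delta;
          }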

          • (Score: 1, Interesting) by Anonymous Coward on Wednesday December 02 2020, @10:59PM

            by Anonymous Coward on Wednesday December 02 2020, @10:59PM (#1083423)

            Thank you for straightening out poor Joe Merchant.

            X86 was an exciting way to work when you had 4000 transistors and 16K of RAM - sometimes a whole lot less than that. I programmed in MASM for a very long time, and always felt great for solving those intricate puzzles in as few clocks and bytes as possible. I'm sure some of my libraries are still in use today, though mostly transmogrified for wider CPU registers.

            I didn't think much of ARM when it first came out - too byte-wasteful for those times, and too wordy for assembly language programming. I was short-sighted. So many things are eternal, but it's time to move to a more rational instruction set, on a platform for these times. Besides, it'll take Redmond a decade to almost get it right, but Linux will be running first, because there are a lot of people rooting for the future.

        • (Score: 2) by mobydisk on Wednesday December 02 2020, @05:37PM (14 children)

          by mobydisk (5472) on Wednesday December 02 2020, @05:37PM (#1083312)

          Read the article. It explains what you need to know.

          If RISC is really so great, shouldn't GCC be outputting RISC instructions?

          It does, but only when compiling for a RISC processor. It cannot output ARM's RISC instructions and expect them to run on an x86 CPU.

          • (Score: 5, Informative) by JoeMerchant on Wednesday December 02 2020, @05:58PM (13 children)

            by JoeMerchant (3937) on Wednesday December 02 2020, @05:58PM (#1083329)

            I was in "Digital Design" graduate studies when the big RISC hype hit in the late 1980s. At that time, RISC was pretty clearly defined as a Reduced Instruction Set Computer: fewer (implicitly simpler) op codes, with the purported advantage of running at faster clock rates / instruction execution times. At that time, RISC architectures resulted in larger object code files to achieve the same ends as CISC competitors, and the faster clock rate was mostly a wash with the increased number of instructions that needed to execute, with all kinds of niches giving advantage to one or the other architecture school.

            Since then the definitions have evolved along with technology, it's not the clear R vs C distinction that it once was. To me, it feels like RISC has finally "won" by changing the definition of what RISC is until it has met that sweet spot between the two where the most advantage is.

            --
            🌻🌻 [google.com]
            • (Score: 3, Interesting) by anubi on Wednesday December 02 2020, @10:54PM (11 children)

              by anubi (2828) on Wednesday December 02 2020, @10:54PM (#1083421) Journal

              I understand the 6502 was among the first RISC chips out there.

              Its great appeal to me was that it was much simpler to determine how long it took for each piece of code to execute. One instruction, one clock.

              I was doing a lot of real-time programming. Stepper motor control and positioning systems where physical inertial dynamics was an integral part of the design.

              I still cling to my Arduinos for the exact same reason.

              I don't like multitasking on my robotics.

              I'd much rather design in another CPU for each real-time thingie I have to do, then use yet another for supervisory tasks. And make each one capable of standalone manual terminal input for checking them out on the bench so I can verify it's doing what it's supposed to do.

              I get too many things going on at once, and it soon becomes damn near impossible for me to track down timing gremlins.

              --
              "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
              • (Score: 3, Informative) by JoeMerchant on Thursday December 03 2020, @02:31AM (9 children)

                by JoeMerchant (3937) on Thursday December 03 2020, @02:31AM (#1083487)

                I developed a couple of "Deterministic Real Time" systems on 6811 processors, very similar to 6502. I programmed it in C, but would check the assembly code to ensure that no real-time path would take longer than the allotted time (usually 500uS). The system would have a single timed interrupt that ran various processes in deterministic real time, and then a main loop that would handle all the non real time stuff. Very simple, very reliable, no surprises ever and powerful enough to do basically anything I've ever needed an embedded system to do.

                Years before I did that, and continuing through to today, I see otherwise bright software engineers fall down the RTOS prioritized interrupt rabbit hole and get mauled time and time again down there where it's dark and they can't really see what's happening to them.
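
                A minimal sketch of that pattern in C: one timer interrupt does all of the deterministic work inside its fixed budget, and the main loop handles everything without a deadline. The function names, the tick counter and the 500 us figure are illustrative stand-ins; real 68HC11 code would hook the ISR with the toolchain's own vector syntax.

                #include <stdint.h>

                volatile uint16_t tick;                  /* bumped once per 500 us slice */

                static void read_sensors(void)    { /* cycle-counted real-time work */ }
                static void update_outputs(void)  { /* cycle-counted real-time work */ }
                static void service_console(void) { /* no deadline: logging, UI, comms */ }

                void timer_isr(void)   /* attach to the timer vector per your toolchain */
                {
                    read_sensors();      /* every path through here is summed against */
                    update_outputs();    /* the 500 us budget when checking the assembly */
                    tick++;
                }

                int main(void)
                {
                    uint16_t last = 0;
                    /* timer setup omitted: board-specific */
                    for (;;) {
                        if (tick != last) {   /* once per slice, do the slow stuff */
                            last = tick;
                            service_console();
                        }
                    }
                }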

                --
                🌻🌻 [google.com]
                • (Score: 1) by anubi on Thursday December 03 2020, @08:57AM (8 children)

                  by anubi (2828) on Thursday December 03 2020, @08:57AM (#1083553) Journal

                  I kinda cheated...

                  The Commodore 64 had made its debut... And they sold the assembler for it as well.

                  I used that for many years to develop code...using oscilloscopes to time my code.

                  Like you say...interrupt handling...once my time critical code launched, it had the machine's undivided attention.

                  I worked on one thingie, part of a high speed data acquisition system, where I phaselocked the processor clock with an analog phase locked loop 74HC7046, so I could sync my reception of data in lock step to the geophysical sensor that was streaming the data to me. To eliminate sampling artifacts.

                  Gee, I miss those old days.

                  --
                  "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
                  • (Score: 2) by JoeMerchant on Thursday December 03 2020, @12:33PM (7 children)

                    by JoeMerchant (3937) on Thursday December 03 2020, @12:33PM (#1083585)

                    Checking the assembly was pretty easy - the C compiler would show you the assembly code, complete with cycle time for each instruction. All I had to do was run the paths through the real time handlers and make sure nothing added up to (umm... 4MHz = 250nS, 500uS per slice) ~1900 or more. Avoid loops and crazy branching schemes and it all goes pretty easily. If I did it for much longer it wouldn't have been too hard to make a tool that auto-processed the assembly dumps to do the summations for me.
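
                    A toy version of that summing tool, assuming a hypothetical listing format in which each line ends with its cycle count in square brackets; real compiler listings differ, so the parsing would need adjusting.

                    /* cyclesum.c: toy cycle-budget checker. Assumes lines like
                           LDAA 0,X      [4]
                       with the cycle count in square brackets at the end. */
                    #include <stdio.h>
                    #include <stdlib.h>
                    #include <string.h>

                    int main(void)
                    {
                        char line[256];
                        long total = 0;
                        const long budget = 1900;   /* ~500 us at 250 ns/cycle, with margin */

                        while (fgets(line, sizeof line, stdin)) {
                            char *open = strrchr(line, '[');
                            if (open)
                                total += strtol(open + 1, NULL, 10);
                        }
                        printf("%ld cycles (budget %ld): %s\n",
                               total, budget, total <= budget ? "OK" : "TOO LONG");
                        return total <= budget ? 0 : 1;
                    }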

                    We were (primarily) reading an inductive plethysmograph, basically period counting a ~400KHz oscillator with a 96MHz crystal clock using a CPLD with counter/latches to get high precision readings of changes in inductance. The 6811 would pull the counts off the latches at 2KHz multiplexing through 2 to 5 different inductors, depending, then reconstruct that into 2 to 5 signals with a 50 to 200Hz sample rate, again depending on configuration. 2 years of graduate studies in digital design and that's the most complex digital project I ever did design for in 30 years in the general field. Seems like the career has been 0.2% hardware design, 24% software design, 2.8% software maintenance, 3% mechanical design, 5% research/lab work, 25% documentation and 40% meetings.

                    --
                    🌻🌻 [google.com]
                    • (Score: 1) by anubi on Thursday December 03 2020, @02:00PM (6 children)

                      by anubi (2828) on Thursday December 03 2020, @02:00PM (#1083610) Journal

                      Thanks. Quite interesting.

                      Inexpensive and simple microprocessors have made it possible to make some really unusual custom designs.

                      Talking about inductance, a radio amateur, Neil Hecht, came up with quite a nifty and simple technique for measuring inductance.

                      https://www.mtechnologies.com/aade/lcmeter.htm [mtechnologies.com]

                      Just for what it's worth, I found his research insightful. It's not particularly fast, but by taking advantage of how precisely a microprocessor can measure time intervals, I was able to make some inexpensive yet accurate position sensors that are quite resistant to environmental factors.

                      I am trying to build stuff for the small farm. Arduino based so the farmer can maintain the thing without being held hostage to terms, conditions, licensing, monthly fees, and DRM enforced ignorance.

                      --
                      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
                      • (Score: 1) by anubi on Thursday December 03 2020, @02:32PM

                        by anubi (2828) on Thursday December 03 2020, @02:32PM (#1083627) Journal
                        --
                        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
                      • (Score: 2) by JoeMerchant on Thursday December 03 2020, @03:26PM (4 children)

                        by JoeMerchant (3937) on Thursday December 03 2020, @03:26PM (#1083641)

                        Yeah, stuff like this: https://www.freightfarms.com/ [freightfarms.com] is very attractive, except for the lock-in style software subscriptions, etc. I think that the power of open source really needs to be applied to reproducible farming techniques like containers. $2 per quart locally sourced high sugar and flavor strawberries and similar goodies are very possible, if people can succeed in marketing them.

                        --
                        🌻🌻 [google.com]
                        • (Score: 1) by anubi on Friday December 04 2020, @03:05AM

                          by anubi (2828) on Friday December 04 2020, @03:05AM (#1083873) Journal

                          Here's another one

                          https://farmhack.org/tools [farmhack.org]

                          I intend to involve myself here when I get my act together.

                          --
                          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
                        • (Score: 1) by anubi on Friday December 04 2020, @03:12AM (2 children)

                          by anubi (2828) on Friday December 04 2020, @03:12AM (#1083875) Journal

                          Oh, Incidentally, the main reason for my Arduino bent is OPEN SOURCE. I hate "terms and conditions" to the end user with a purple plague.

                          This is all about sharing technology.

                          Take it, do what you will with it, and if you made it better, let the rest of us benefit too.

                          --
                          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
                          • (Score: 2) by JoeMerchant on Friday December 04 2020, @03:35AM

                            by JoeMerchant (3937) on Friday December 04 2020, @03:35AM (#1083887)

                            In my opinion, an open (source) container farming movement should hit critical mass much faster and bigger than any for-profit commercial operations are capable of, once the barriers to entry get low enough.

                            Container: $5K delivered

                            Insulation: $3K (spray on)

                            Interior framework: $1K - 2x4s should do most of the work quite well

                            Power: can be solar (up front investment with payback in a couple of years or less), grid or most likely hybrid.

                            Water: should be minimal / largely recycled as compared to open farming systems

                            LED lighting: figures I read ask for 40W/square foot, so a ~300 square foot container needs ~12KW of LEDs, which might run $1 to 2K

                            So, anybody with $15K, a very little land and the time to put in the work should be able to run a container farm, as opposed to $115K+ for the Freight Farm commercial solution. The real trick is getting effective information sharing going on among the builder/farmers. Once the open community gets 2-3x as many active contributors as a typical commercial operation has, I think you've got a sustainable revolution going on.

                            --
                            🌻🌻 [google.com]
                          • (Score: 2) by JoeMerchant on Friday December 04 2020, @03:37AM

                            by JoeMerchant (3937) on Friday December 04 2020, @03:37AM (#1083889)

                            By the way, I respect the Arduino choice, but find myself firmly planted in the Raspberry Pi camp. When you can get a Pi Zero with WiFi & Bluetooth for $15, why mess around with anything less powerful?

                            --
                            🌻🌻 [google.com]
              • (Score: 2) by dry on Monday December 21 2020, @03:18AM

                by dry (223) on Monday December 21 2020, @03:18AM (#1089776) Journal

                The 6502 wasn't RISC, though it was simple. RISC is supposed to do an instruction a cycle, consider some 6502 instructions.
                LDA $0008
                LDA $08
                LDA $08,Y
                all taking a different number of cycles, 3, 2 and 4 IIRC in the above example, and I'm probably not remembering correctly.
                It did run about 4 times faster than the 8008, so a 1 MHz 6502 was about equivalent to a 4 MHz 8008.

            • (Score: 0) by Anonymous Coward on Thursday December 03 2020, @02:17AM

              by Anonymous Coward on Thursday December 03 2020, @02:17AM (#1083480)

              Religion is like that. I think "Byte" articles of the time talked RISC and CISC, and found there was not one true savior. The right tool for the job. The huge instruction set for the M1 must be a nightmare to try to remember, and I'd fear they'd deprecate some and canonize others, between releases. Certainly couldn't write assembly without code completion and an incredible text editor.

              I figured I'd be down-modded for that dig at "JM." That was the point of doing it. I'm not by nature like that. It's too bad when somebody is over-modded when logged-in, while another who might have more to say, can be modded to oblivion for being an AC. Nothing personal.

    • (Score: 5, Insightful) by fustakrakich on Wednesday December 02 2020, @03:45PM (3 children)

      by fustakrakich (6150) on Wednesday December 02 2020, @03:45PM (#1083270) Journal

      If RISC is so fucking great, then the 64-bit DEC Alpha should have taken off like wildfire in the 1990s.

      Well, Beta was better than VHS, and look what happened. If the better product always prevailed, would there be a single Ford on the road today?

      --
      La politica e i criminali sono la stessa cosa..
      • (Score: 5, Funny) by Anonymous Coward on Wednesday December 02 2020, @05:39PM

        by Anonymous Coward on Wednesday December 02 2020, @05:39PM (#1083315)

        What? Beta was good? I thought: fuck beta!

      • (Score: 4, Informative) by Tork on Wednesday December 02 2020, @06:40PM

        by Tork (3914) Subscriber Badge on Wednesday December 02 2020, @06:40PM (#1083341)

        Well, Beta was better than VHS, and look what happened.

        No, it wasn't. One technical aspect of Beta was better than VHS but it did not meet the requirements people were willing to spend money on.

        One-hour-long-tapes for content that's often two-hours doesn't cut the mustard. That's the lesson you should have learned from VHS vs. Beta, not that the masses are brain-damaged.

        --
        🏳️‍🌈 Proud Ally 🏳️‍🌈
      • (Score: 2) by turgid on Wednesday December 02 2020, @07:35PM

        by turgid (4318) Subscriber Badge on Wednesday December 02 2020, @07:35PM (#1083357) Journal

        Video 2000 was even better and the tapes were double sided.

    • (Score: 5, Informative) by DannyB on Wednesday December 02 2020, @04:18PM (16 children)

      by DannyB (5839) Subscriber Badge on Wednesday December 02 2020, @04:18PM (#1083278) Journal

      If RISC is so fucking great, then the 64-bit DEC Alpha should have taken off like wildfire in the 1990s.

      Then Macintosh should have totally beaten the IBM PC.

      There were many more, but a few points I remember:
      * 24-bit flat address space (later 32 bit)
      * no AUTOEXEC.BAT and CONFIG.SYS
      * no dip switches anywhere, no interrupts to configure
      * NuBus (from TI) made expansion cards plug-and-play
      * Mouse was plug and play
      * SCSI drives were plug and play (well, first set a unique SCSI ID on the drive, but it's that simple)
      * CD-ROM just worked
      * QuickTime Video just worked, and I mean for grandma
      * Network just worked. Any idiot could effortlessly create a LAN with laser printers, file servers and workstations. Even AppleTalk routers to connect multiple LANs together were very simple.
      . . . and much more, but that much gets the point across

      Why didn't Apple take over the market? For the same reason the iPhone didn't take over the market. For the same reason Beta didn't take over instead of VHS. Apple refused to license their OS to other hardware vendors. If you could have gotten Mac OS on other vendor's hardware, I wonder what would have become of Windows.

      Back to the question I quoted. Maybe Alpha didn't run Windows software? As Steve Jobs said back in the Apple II days: the software tail that wags the hardware dog.

      --
      Every performance optimization is a grate wait lifted from my shoulders.
      • (Score: 1, Informative) by Anonymous Coward on Wednesday December 02 2020, @04:30PM (2 children)

        by Anonymous Coward on Wednesday December 02 2020, @04:30PM (#1083281)

        NT 4.0, and possibly older versions, had Alpha builds available.

        • (Score: 2) by turgid on Wednesday December 02 2020, @05:10PM

          by turgid (4318) Subscriber Badge on Wednesday December 02 2020, @05:10PM (#1083299) Journal

          Yes, NT3.51 was the really portable one where they had builds for all the popular 64-bit RISC CPUs (but in 32-bit mode). Rumour has it there was even a SPARC port. They did it because the Windows NT kernel had a really clean and portable design and they wanted to show it off, but also because Intel CPUs were way behind on performance.

        • (Score: 5, Informative) by TheRaven on Wednesday December 02 2020, @06:11PM

          by TheRaven (270) on Wednesday December 02 2020, @06:11PM (#1083335) Journal
          It did, but almost all software ran in their 32-bit x86 emulator. The Alpha was a good 50% faster than the most expensive Intel hardware at the time and pretty price/performance competitive if you were running native code. At the time, the overwhelming majority of business software was Win16 x86 or DOS code with a few things Win32 x86. This code was all significantly slower on Alpha than anything that didn't run in emulation. The Alpha needed to be at least twice as fast as Intel to break even with the emulation overhead.
          --
          sudo mod me up
      • (Score: 5, Informative) by RS3 on Wednesday December 02 2020, @05:49PM (3 children)

        by RS3 (6367) on Wednesday December 02 2020, @05:49PM (#1083322)

        Yes, everything you mentioned is true, but never underestimate the power of the almighty dollar. Macintoshes have always been more expensive than their PC counterparts. You could argue you're getting more for your money, but you may be getting things you don't need.

        I'll add that Macintoshes become "unsupported" / "obsolete" (I hate those words in computer context) quicker than PCs. As much as I dislike M$ for many reasons, one thing they do pretty well is backward compatibility. I'm worried whenever I "upgrade" to a newer Windows that my 15-year old apps won't run. Not only do they all run, but Windows lets you run in "compatibility mode" (whatever that really means I don't know). I used to support a few Macintoshes back in the day, and each OS update trashed all the major apps, so more time waiting to get the app updates, and dealing with the changes the new versions bring, and people having to redo their older work because the new document / picture / video doesn't display correctly with the new app version. Not that I'm against updates and upgrades, but some are unnecessary, and just bring headaches and cost.

        Yes, definitely the available software base, and the investment therein (stuff you already own) are major factors.

        Windows NT was ported to Alpha, and MIPS, and PowerPC, and most major CPUs of the day... which raises the question: "why?" https://en.wikipedia.org/wiki/Windows_NT [wikipedia.org]

        It's an interesting thought about the older MacOS, if it had run on PC hardware. They'd have had to put in a lot of code for memory management, HAL, gosh, a ton of changes to port it. So now we have the "Hackintosh", but I don't see it being used widely.

        • (Score: 5, Interesting) by DannyB on Wednesday December 02 2020, @09:38PM

          by DannyB (5839) Subscriber Badge on Wednesday December 02 2020, @09:38PM (#1083386) Journal

          Macintoshes have always been more expensive than their PC counterparts.

          That is true.

          I was still naive enough at that age to think that people wouldn't care when the mac was so obviously superior. But . . . A lot of people buy cheap economy cars instead of expensive high end cars.

          I was insulated from the reality of the cost because I lived (at that time) in an R&D playground of all sorts of wonderful cool toys.

          Macintoshes become "unsupported" / "obsolete" (I hate those words in computer context) quicker than PCs.

          That is where I parted ways with Apple.

          Apple was already having trouble, after two unsuccessful tries, rewriting its OS with a real kernel, memory protection, virtual memory, etc. When Apple bought NeXT and Jobs returned, and the new OS X was NOT going to run on all the current expensive PowerPC Macs, that is when I parted ways.

          It was the mid-late 1990s. I was reading about this "Linux" thing. I kept reading. After a couple years, in June 1999 I got my first Linux box. Never looked back.

          As OS X and a whole new generation of macs came along -- which I no longer had -- I was watching from the outside. With this new perspective from outside the walled garden, I could see the practical reasons why people did not do the loyal thing and just hand over their money to Apple for a superior experience.

          As I point out, if Apple had licensed their OS to other vendors, I expect the hardware cost would have dropped substantially. Apple, under some pressure, did actually try this at some point in about the mid 90's. I can't remember what the brand was, but some PC maker built a legitimate Mac clone that ran Mac OS software. Apple couldn't believe it, but, yes, it actually was possible that an outside party could build Macs way cheaper than Apple. Apple had insisted that nobody could underprice Apple on the Mac, but was proved wrong once someone had the chance.

          now we have the "Hackintosh", but I don't see it being used widely.

          As a once long time mac fanboy, I have been tempted to try it. But I never quite convince myself. I would then have to make sure that I could keep such an unauthorized unsupported beast running. I would not want to risk anything commercial on such a hackintosh suddenly one day not working because of something Apple changed.

          --
          Every performance optimization is a grate wait lifted from my shoulders.
        • (Score: 0) by Anonymous Coward on Sunday December 06 2020, @01:06PM (1 child)

          by Anonymous Coward on Sunday December 06 2020, @01:06PM (#1084531)

          one thing they do pretty well is backward compatibility

          I get the impression that Microsoft is moving away from that. There seem to be problems even between Windows 10 versions ;). And they're deprecating and removing support for lots of old stuff.

          I suspect a lot of the old Windows people with a clue left or moved to Microsoft Research and a bunch of younger punks from Linux/OSS background replaced them. So now they're practically using the public to test their latest crap.

          Still better than Desktop Linux though. Desktop Linux is so crap that even the Linux Foundation people don't use it ;).

      • (Score: 2) by theluggage on Wednesday December 02 2020, @07:57PM (8 children)

        by theluggage (1797) on Wednesday December 02 2020, @07:57PM (#1083366)

        Apple refused to license their OS to other hardware vendors. If you could have gotten Mac OS on other vendor's hardware, I wonder what would have become of Windows.

        It's the "other hardware vendors" bit that's your problem there - licensing went well for Microsoft because they weren't a hardware vendor. It didn't turn out so well for IBM - or for Apple in the brief period when they did license MacOS. Meanwhile, NextStep, BeOS, OS/2 were all "better" operating systems that got precisely nowhere against Wintel. (...except NextStep did succeed when it was reincarnated as MacOS X exclusively for Apple hardware...)

        For the same reason the iPhone didn't take over the market.

        ...except the iPhone carved out a substantial share of the mobile market and I don't think Apple are particularly disappointed by the billions they've made from it.

        • (Score: 3, Insightful) by DannyB on Thursday December 03 2020, @03:24PM (7 children)

          by DannyB (5839) Subscriber Badge on Thursday December 03 2020, @03:24PM (#1083638) Journal

          See this video: Most Popular Mobile OS 1999-2019 [youtube.com]

          In the end:
          * Android 85.23%
          * iOS 10.63%
          * KaiOS 4.13%
          * Windows Mobile 0.01%

          Apple's share isn't all that big. Android is more than 8 times as big. What's the difference? The EXACT same thing as the superior Macintosh to Windows. You could get Windows on everyone else's hardware.

          Android comes from every manufacturer, size, shape, style, color, feature/price combo.

          --
          Every performance optimization is a grate wait lifted from my shoulders.
          • (Score: 3, Insightful) by DannyB on Thursday December 03 2020, @03:26PM

            by DannyB (5839) Subscriber Badge on Thursday December 03 2020, @03:26PM (#1083642) Journal

            Additional things I wish I had said:

            I didn't want to choose Android. I simply was mature enough to recognize that Android was going to win. For the exact reasons that Macintosh did not win and Windows did. It wasn't that I was rooting for Android. I just recognized the market reality -- even though iOS was bigger than Android at the time. I already knew what was going to happen.

            --
            Every performance optimization is a grate wait lifted from my shoulders.
          • (Score: 2) by theluggage on Saturday December 05 2020, @02:32PM (3 children)

            by theluggage (1797) on Saturday December 05 2020, @02:32PM (#1084346)

            10% of such a huge market is pretty significant. Now look at estimates of the profit share: Apple 66%, Samsung 17%, Others: 13%... [forbes.com]

            Apple has made a strategic decision to focus on the high margin, premium end of the market, whereas a lot of the Android sales numbers come from cheap free-with-mobile-plan handsets.

            Same with the mass of the PC market - a lot of those sales that Apple are losing are those $500 deals where the only real margin comes from hard-selling the customer finance, an extended warranty and a $70 Monster HDMI cable (...or sealing a supply contract with a corporate outfit by offering them a bargain on the basic PC and then screwing them for extras and a support contract).

            OK, as a consumer you may not care about Apple's profits - but if you think that Mac/MacOS is better than Wintel, you might want to stop and consider how much of that is down to Apple having a shedload of cash to put into MacOS and Mac Apps and the advantages of vertical integration - only having to worry about supporting a dozen or so Mac models rather than having to maintain compatibility with a hoard of lowest-common-denominator clones built from commodity hardware. Certainly, a lot of the benefits of the M1 seem to be coming from the tight integration of hardware and software - the GPU is designed from the ground up for Apple's Metal graphics framework, for example, and there are also claims that the CPU has features specifically to accelerate code produced by the Rosetta2 x86 translator.

            Plus, there's no way Microsoft or Intel could mandate a change of CPU architecture for the PC platform the way that Apple has done 3 times now (68k to PPC, PPC to Intel, now Intel to ARM/Apple Silicon). By contrast, we have Windows on Alpha/Sparc/PPC etc. (failed), Itanium (failed), Windows on ARM take 1 (failed) and Window on ARM take 2 (not taking the world by storm, and ironically could be turned around by the demand for Windows on the M1...). Then look at how long it took MS to replace kludgey DOS-based Windows (3/9x/ME etc.) with Windows NT (NT 3.1 released in 1993, didn't replace 9x/ME as the default PC OS until Windows XP in 2001 - and the compatibility layer still hanging around in 32 bit Windows 10) c.f. Apple replacing "classic" MacOS with the completely different OS X (approx 2001-2005, with "classic" stone dead after the 2006 switch to Intel).

            It's a fallacy to think that - in an alternate universe where Apple had gone down the licensing route - the result would bear any resemblance to Mac/MacOS today. Most likely, Apple would have gone down the pan in the 90s when Windows acquired a half-decent GUI and decent graphics. Microsoft gained an unassailable advantage by standing on the shoulders of IBM whereby they basically got a tithe of every single PC sale (...including PCs sold with alternative operating systems wherever they could get away with it) and maybe even kept Apple in business by producing Mac versions of Office and IE just to prove that they weren't a monopoly.

            • (Score: 2) by DannyB on Monday December 07 2020, @02:39PM (2 children)

              by DannyB (5839) Subscriber Badge on Monday December 07 2020, @02:39PM (#1084891) Journal

              No argument about the profitability of Apple.

              That comes at higher prices of their products. And arguably, better products with better experience. (I have not used any Apple products since about 2001 as my last PowerMac got used less and less, and Linux box got used more and more.)

              But if you want to control the world (ala Microsoft) you've got to own the market share.

              --
              Every performance optimization is a grate wait lifted from my shoulders.
              • (Score: 2) by theluggage on Wednesday December 09 2020, @04:20PM (1 child)

                by theluggage (1797) on Wednesday December 09 2020, @04:20PM (#1085581)

                Maybe Apple are happy making a shedload of money and exerting a huge influence on the industry, but leaving the me-toos the hard work of selling the low-margin clones. Apple popularised (even if they left the actual inventing to others) the GUI, DTP, laser printers, local area networking, desktop video editing, the modern laptop layout, the personal “(not) mp3” player, the modern smartphone & “App Store”, the tablet, the “ultrabook” concept, better-than-full-HD displays... And now it is possible that the M1 could be the watershed moment in the move away from x86. That’s a pretty good score sheet without ever having a dominant market share, and I don’t see their shareholders complaining about the emoluments... If they had dominance, like MS, they’d probably never have taken those risks.

                Currently having fun working out how to rescue a bunch of old websites with Flash content (justified - The alternative at the time would have been RealPlayer or MS-centric Dynamic HTML). For the greater good, of course, but annoying. It may have taken Flash 10 years to die, it may belong dead, but the fatal wound was inflicted by the iPhone.

                • (Score: 2) by DannyB on Thursday December 10 2020, @04:26PM

                  by DannyB (5839) Subscriber Badge on Thursday December 10 2020, @04:26PM (#1085947) Journal

                  I had mixed feelings about Steve banning Flash on iPhone. I recognized the good long term effect. Something needed to kill Flash. And this was it. But in the short term it was going to cause a lot of problems.

                  Hopefully, if your old websites are simply using Flash as a "media player" then you can find much better modern solutions.

                  --
                  Every performance optimization is a grate wait lifted from my shoulders.
          • (Score: 2) by barbara hudson on Monday December 21 2020, @01:26AM (1 child)

            by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Monday December 21 2020, @01:26AM (#1089759) Journal
            And yet Apple makes more profit from sales of iPhones than all the other manufacturers combined. And Apple App Store developers make more profit from sales than developers do from Google Play sales.

            Everyone else is selling more units at a lower price but not making it up in volume.

            --
            SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
            • (Score: 2) by DannyB on Monday December 21 2020, @10:49PM

              by DannyB (5839) Subscriber Badge on Monday December 21 2020, @10:49PM (#1090064) Journal

              Similarly with the Macintosh, Apple was making boatloads of money, even while the IBM PC had most of the market.

              Apple spun out a software company, Claris, that was worth $10 Billion (in about 1990 ish, IIRC). That was bigger than many PC makers.

              --
              Every performance optimization is a grate wait lifted from my shoulders.
  • (Score: 2) by turgid on Wednesday December 02 2020, @03:40PM (2 children)

    by turgid (4318) Subscriber Badge on Wednesday December 02 2020, @03:40PM (#1083266) Journal

    I seem to remember seeing a roadmap presentation from AMD a few years ago where they had x86, GPUs and ARM cores on some kind of unified memory system. What became of it?

    • (Score: 3, Informative) by richtopia on Wednesday December 02 2020, @04:34PM (1 child)

      by richtopia (3160) on Wednesday December 02 2020, @04:34PM (#1083284) Homepage Journal

      I cannot find any recent official documentation from AMD regarding ARM. This screenshot is the best I could find:

      https://www.cnx-software.com/2020/06/01/amd-ryzen-c7-arm-cortex-x1-a78-a55-processor-mediatek-5g-modem-leak/ [cnx-software.com]

      There are plenty of players in the ARM space with motivation to make competitive chips for main-workload or server-workload applications. We've even seen some ARM server chips, and if I remember correctly they were competitive on price with x86. However, I believe Apple's vertical integration of silicon-package-OS is delivering some advantage over the competition; from the release they had a working OS and ecosystem on consumer hardware, even if you don't include Rosetta 2.

      • (Score: 1, Interesting) by Anonymous Coward on Wednesday December 02 2020, @06:10PM

        by Anonymous Coward on Wednesday December 02 2020, @06:10PM (#1083333)

        AMD PSP is an embedded ARM, isn't it?

  • (Score: 3, Insightful) by ledow on Wednesday December 02 2020, @04:46PM (3 children)

    by ledow (5567) on Wednesday December 02 2020, @04:46PM (#1083291) Homepage

    More RAM doesn't necessarily make a computer go faster.

    And Mac RAM, and units with large amounts pre-installed, are literally RIDICULOUSLY priced for what is a basic memory chip nowadays.

    The unit behind me has 128GB RAM and it didn't cost anywhere near $4000 brand-new.

    Also, the workload is always pathetically biased on M1 benchmarks. I'll believe it when I see it myself.

    • (Score: 0) by Anonymous Coward on Wednesday December 02 2020, @05:08PM

      by Anonymous Coward on Wednesday December 02 2020, @05:08PM (#1083298)

      "More RAM doesn't necessarily make a computer go faster."

      We look for things. Things to make us go.
      We are not strong.

    • (Score: 2) by Runaway1956 on Wednesday December 02 2020, @11:02PM (1 child)

      by Runaway1956 (2926) Subscriber Badge on Wednesday December 02 2020, @11:02PM (#1083424) Journal

      More RAM doesn't necessarily make a computer go faster.

      I think that should be rephrased for clarity. More RAM always translates into a faster computer, up to the point that all your OS and applications fit into memory. Beyond that point, additional RAM gives you no benefit.

      Or, rephrased again, if you have enough RAM that you never use swap/virtual memory, more RAM will do you no good at all.

      Solid State Drives change the equation some, but you still don't want your machine constantly writing to swap space.

      • (Score: 2) by darkfeline on Wednesday December 02 2020, @11:55PM

        by darkfeline (1030) on Wednesday December 02 2020, @11:55PM (#1083438) Homepage

        if you have enough RAM that you never use swap/virtual memory, more RAM will do you no good at all.

        Not true, due to the disk cache. Anyone who has built a DB machine would know this.

        $ free -h
                      total        used        free      shared  buff/cache   available
        Mem:           31Gi       3.6Gi        16Gi       1.1Gi        11Gi        26Gi
        Swap:            0B          0B          0B

        Look at that delicious 11 GiB cache. Disk IO would be significantly slower had I only 8 GiB of RAM, even if I'm technically only using 3 GiB.

        --
        Join the SDF Public Access UNIX System today!
  • (Score: 3, Insightful) by ikanreed on Wednesday December 02 2020, @05:01PM (6 children)

    by ikanreed (3164) Subscriber Badge on Wednesday December 02 2020, @05:01PM (#1083295) Journal

    The article had an incredibly stupid benchmark.

    Number of pointers allocated per second in an Apple proprietary language with an Apple proprietary compiler.

    If there's a single fucking application on the planet that is malloc throttled I'll not just eat my hat, I'll eat an entire haberdashery.

    • (Score: 5, Funny) by Anonymous Coward on Wednesday December 02 2020, @05:08PM (1 child)

      by Anonymous Coward on Wednesday December 02 2020, @05:08PM (#1083297)

      I stopped using malloc the day I learned about static global variables. None of my code is slowed down by wasteful checking of NULL pointers.

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday December 02 2020, @09:48PM

        by Anonymous Coward on Wednesday December 02 2020, @09:48PM (#1083393)

        Checking for NULL isn't a malloc problem. You can eliminate all NULL checks due to malloc failure simply by making a NULL return from malloc a panic condition. NULL checks aren't even the performance hit you think, unless you're foolishly failing to factor them out of tight loops. The overhead of malloc/free is the record keeping for what's being used and what isn't. If you have a simple application that can allocate all its memory at startup, great. Otherwise you're just going to reinvent all the record keeping. Sometimes the usage pattern can be tuned to your application, (memory pools) and that's faster; but simply allocating all your memory to some global variable isn't a silver bullet. If it were, there wouldn't be any discussion. Aside from that, you may still have to interact with libraries that use NULL as a sentinel.
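
        For illustration, the panic-on-NULL idea boils down to a wrapper along these lines; the name xmalloc is just the conventional one, nothing specific to any particular codebase.

        #include <stdio.h>
        #include <stdlib.h>

        /* xmalloc: allocate or abort. Callers never check for NULL, so the
           checks disappear from the hot paths entirely. */
        void *xmalloc(size_t n)
        {
            void *p = malloc(n);
            if (!p) {
                fputs("out of memory\n", stderr);
                abort();
            }
            return p;
        }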

    • (Score: 2) by istartedi on Wednesday December 02 2020, @05:53PM

      by istartedi (123) on Wednesday December 02 2020, @05:53PM (#1083326) Journal

      Maybe not malloc "throttled", but the Jai programming language (currently under development and in private beta) has a focus on making custom allocators and memory pools a built-in part of the language. It's designed for games, where performance is often critical and apparently the creation/destruction cycle of a bazillion little objects in games can be a drag on performance if you just use standard allocators.

      Sorry I couldn't pull up a reference on that. Like a lot of information about Jai, it's probably in an hour+ long YouTube video somewhere...

      Anyway, memory pools are a common optimization tactic for this very reason.
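
      A minimal C sketch of the pool idea, assuming fixed-size objects: free slots are threaded onto an intrusive free list, so allocation and release are each a pointer swap, with no malloc/free in the hot path. Sizes and names are made up for the example.

      #include <stddef.h>

      /* Toy fixed-size object pool: carve a static array into nodes once,
         then hand them out and take them back in O(1). */
      #define POOL_OBJECTS 1024
      #define OBJECT_SIZE  64

      typedef union node { union node *next; char bytes[OBJECT_SIZE]; } node;

      static node storage[POOL_OBJECTS];
      static node *free_list;

      void pool_init(void)
      {
          for (size_t i = 0; i < POOL_OBJECTS - 1; i++)
              storage[i].next = &storage[i + 1];
          storage[POOL_OBJECTS - 1].next = NULL;
          free_list = storage;
      }

      void *pool_alloc(void)
      {
          node *n = free_list;
          if (n) free_list = n->next;
          return n;                 /* NULL when the pool is exhausted */
      }

      void pool_free(void *p)
      {
          node *n = p;
          n->next = free_list;
          free_list = n;
      }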

      --
      Appended to the end of comments you post. Max: 120 chars.
    • (Score: 3, Interesting) by TheRaven on Wednesday December 02 2020, @06:14PM (2 children)

      by TheRaven (270) on Wednesday December 02 2020, @06:14PM (#1083336) Journal

      If there's a single fucking application on the planet that is malloc throttled I'll not just eat my hat, I'll eat an entire haberdashery.

      From the SPEC CPU suite, the xalanc benchmark is very malloc dependent, with a 30% end-to-end speedup being common for decent malloc implementations relative to the awful implementation in glibc.

      --
      sudo mod me up
      • (Score: 2) by JoeMerchant on Wednesday December 02 2020, @07:40PM (1 child)

        by JoeMerchant (3937) on Wednesday December 02 2020, @07:40PM (#1083360)

        Benchmark - not an app.

        I created a very malloc-intensive application: a life simulation in which each of thousands of living creatures had hundreds of "genes", each individually malloc'ed when born and freed when they died, with millions of births per minute. I was actually shocked at how well the memory manager handled that torture.
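
        A stripped-down sketch of that kind of allocator churn; the creature/gene counts are made up, just enough to hammer malloc/free in the same pattern.

        #include <stdlib.h>

        /* Allocate and free "creatures", each dragging along hundreds of
           individually malloc'ed "genes": a crude stand-in for the kind of
           allocation pattern described above. */
        enum { GENES = 200, BIRTHS = 1000000 };

        int main(void)
        {
            for (long b = 0; b < BIRTHS; b++) {
                double *genes[GENES];
                for (int g = 0; g < GENES; g++)
                    genes[g] = malloc(sizeof *genes[g]);   /* born */
                for (int g = 0; g < GENES; g++)
                    free(genes[g]);                        /* died */
            }
            return 0;
        }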

        --
        🌻🌻 [google.com]
        • (Score: 2) by TheRaven on Thursday December 03 2020, @11:29AM

          by TheRaven (270) on Thursday December 03 2020, @11:29AM (#1083570) Journal
          All of the SPEC benchmarks are existing off-the-shelf programs. The xalanc benchmark is an XML processing tool (the command-line tool from the C++ version of the Apache Xalan library).
          --
          sudo mod me up
  • (Score: 3, Informative) by bzipitidoo on Wednesday December 02 2020, @05:18PM (9 children)

    by bzipitidoo (4388) on Wednesday December 02 2020, @05:18PM (#1083302) Journal

    Why? In a word, architecture. Unified, not just shared, memory. Specialized hardware for specific tasks. Tasks such as audio and video encoding and decoding, facial recognition, speech recognition.

    Yeah, I found the article full of tedious explanations of things I already know quite well, thank you very much.

    Shared memory as was done back in the day, was for low end video, and meant that a portion of the system RAM was dedicated, fenced off, really, to the integrated GPU. And it was dog slow. Needed a lot of copying into and out of the area reserved for graphics. But everyone knew that already, right? This unified memory avoids that by allowing the specialized hardware to access any memory, with much more parallelism. How exactly that works isn't discussed.

    Another way to put it is something else we all may have heard decades ago: the x86 architecture sucks. It sucked even in the 1970s when it was first created. That's right, even for a CISC architecture way back in the day, it was inferior. One of the stupidest decisions was going with a stack architecture for the x87 floating point instructions. You'd think they'd at least stick with the load and execute model of the x86, just for consistency's sake if nothing else. x86 has been massively expanded and improved, but it still sucks today. A totally new architecture can ditch all the x86 baggage that now serves only as a big drag on performance.

    Sounds like this M1 is a good new beginning. One other thing that caught my attention was this NPU, the Neural Processing Unit. I am so used to the Von Neumann architecture and way of thinking that understanding how this very interesting device works and is integrated is a challenge.

    • (Score: 3, Insightful) by shrewdsheep on Wednesday December 02 2020, @05:57PM

      by shrewdsheep (5215) on Wednesday December 02 2020, @05:57PM (#1083327)

      I agree with the floating point part.

      A totally new architecture can ditch all the x86 baggage that now serves only as a big drag on performance.

      Here, I am a bit more sceptical. Sure, some silicon is wasted to translate CISC instructions into the internal RISC representation, a built-in Rosetta, if you will. Most bottlenecks have been worked around by the introduction of new instructions in the past. As a result I would not expect a fresh architecture to perform radically better, let's say in the ballpark of 10% maybe; at least, that would be my expectation. In this view, I find the apparent M1 performance amazing. As you say, RAM bandwidth is critical in many applications and probably much can be gained there. I am looking forward to a competent article disentangling RAM/IPC/IO performance for the M1.

    • (Score: 3, Interesting) by Rich on Wednesday December 02 2020, @06:02PM (1 child)

      by Rich (945) on Wednesday December 02 2020, @06:02PM (#1083330) Journal

      Unified Memory is just that: Shared memory. They won't have multi-port SRAM in Gigabyte quantities. But it could be tuned. For example, there is no reason for the framebuffers to pull a couple of megs each time the "beam" refreshes. It would be good enough if the core memory broadcasts updates in some fashion.

      Also, there's no reason the CPU has to do a memcpy itself. It could very well just schedule that to the memory controller, which can do that in the background and put up hazards for accesses to not-yet-copied areas, or even predict those and/or satisfy them with priority. This won't work on x86, because there are thousands of different libc-with-memcpy equivalents which all run some bizarre code contraption which was at some point in time considered ideal. The problem goes away if you're the sole player, supply your own libc which knows about the tricks, and make sure everyone else on your new platform does it that way, too.

      The "8-wide decoding" is a bit unclear to me. The interesting part is how many instructions can be issued at the end of the pipeline to the instruction units, and how good that width gets saturated. 4 seemed to be the point of diminishing returns so far. Look at the architecture diagram of a recent Intel CPU, and you'll see that there's much more going on where they do a lot of detection heuristics. But maybe it gets faster, if you don't do that stuff, because you don't have to (because performance critical code comes with hints about what is intended).

      Finally, most of the improvements from Intel throughout the Core-i model years have been due to an improved branch predictor (and also access prediction). There are branch predictor competitions in academia, and I assume Apple have done their homework here and are state of the art, too, or a little beyond.

      • (Score: 1) by anyanka on Thursday December 03 2020, @05:08AM

        by anyanka (1381) on Thursday December 03 2020, @05:08AM (#1083511)

        Also, there's no reason the CPU has to do a memcpy itself. It could very well just schedule that to the memory controller, which can do that in the background and put up hazards for accesses to not-yet-copied areas, or even predict those and/or satisfy them with priority. This won't work on x86, because there are thousands of different libc-with-memcpy equivalents which all run some bizarre code contraption which was at some point in time considered ideal. The problem goes away if you're the sole player, supply your own libc which knows about the tricks, and make sure everyone else on your new platform does it that way, too.

        Sure, even primitive 1970s hardware like the Z80's DMA controller lets you do a superfast memcpy without wasting CPU time on it. And it's not much of a problem that various obscure libc versions don't support it, as long as the one you're using does. The worse issue on x86 is probably that not all CPUs support all instructions (just like many Z80 systems didn't have a DMA controller back in the day), but the same goes for any other hardware acceleration – including on the Mac when Apple introduces the M2 with its Quantum Processing Unit. A cursory look at GNU libc seems to indicate it'll try to do bulk 'copying' by remapping memory with copy-on-write pages – though this seems to be implemented only on the Mach kernel. Plain copying is usually surprisingly fast anyway, and probably not worth worrying about until profiling tells you it's a bottleneck.
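
        (A minimal sketch of that copy-on-write idea using the Mach vm_copy() call, so it only builds on macOS or other Mach-based systems. It just illustrates the remapping trick; it is not a claim about what any particular libc does for a given memcpy call.)

        /* Sketch only (builds on macOS / Mach): bulk "copy" via copy-on-write
         * remapping with vm_copy(). Not a claim about what glibc or Apple's
         * libc actually do for any given memcpy call. */
        #include <mach/mach.h>
        #include <mach/vm_map.h>
        #include <stdio.h>

        int main(void)
        {
            vm_size_t len = 16 * 1024 * 1024;        /* 16 MiB, a page multiple */
            vm_address_t src = 0, dst = 0;

            /* vm_allocate hands back page-aligned memory, which vm_copy wants. */
            if (vm_allocate(mach_task_self(), &src, len, VM_FLAGS_ANYWHERE) != KERN_SUCCESS ||
                vm_allocate(mach_task_self(), &dst, len, VM_FLAGS_ANYWHERE) != KERN_SUCCESS) {
                fprintf(stderr, "vm_allocate failed\n");
                return 1;
            }

            ((char *)src)[0] = 42;                   /* touch the source */

            /* "Copy" by asking the kernel to remap the pages copy-on-write;
             * bytes only get moved later, if and when either side writes. */
            if (vm_copy(mach_task_self(), src, len, dst) != KERN_SUCCESS) {
                fprintf(stderr, "vm_copy failed\n");
                return 1;
            }

            printf("dst[0] = %d\n", ((char *)dst)[0]);   /* prints: dst[0] = 42 */
            return 0;
        }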

    • (Score: 1, Interesting) by Anonymous Coward on Wednesday December 02 2020, @07:39PM (1 child)

      by Anonymous Coward on Wednesday December 02 2020, @07:39PM (#1083358)

      so... it is analogous to the Amiga, if all its coprocessors were instead implemented on the SoC with the CPU.

      I wonder how a 5nm-scale MC680x0 would perform today...

      • (Score: 1, Funny) by Anonymous Coward on Wednesday December 02 2020, @09:00PM

        by Anonymous Coward on Wednesday December 02 2020, @09:00PM (#1083374)

        All progress lies through the Amiga. The Amiga is rising again, albeit in unexpected forms.

    • (Score: 2) by JoeMerchant on Wednesday December 02 2020, @07:43PM (2 children)

      by JoeMerchant (3937) on Wednesday December 02 2020, @07:43PM (#1083362)

      80286 memory addressing just about made me throw up when I was introduced to how it functions. The whole thing screams job security through obscurity.

      --
      🌻🌻 [google.com]
      • (Score: 2) by turgid on Wednesday December 02 2020, @09:05PM (1 child)

        by turgid (4318) Subscriber Badge on Wednesday December 02 2020, @09:05PM (#1083377) Journal

        An old-timer once told me that there was method in the madness. The 80286 "protected mode" was very complex (modelled after mainframes) in that segments could be of arbitrary length, so you could set a segment to be exactly the length of an array and thereby ensure that your program couldn't write outside the array (the hardware would fault on an out-of-bounds access). That differs from the way 32-bit CPUs and RISC CPUs worked, where you had fixed-size 4k pages. It was a long time ago and I think beer was involved, so take it with a pinch of salt.
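
        (A toy model of that limit check, just to make the idea concrete: each descriptor carries a base and a byte-granular limit, and every offset is checked against the limit before the base is added. Purely illustrative, not cycle-accurate.)

        /* Toy model of a 286-style segment limit check, for illustration only. */
        #include <stdint.h>
        #include <stdio.h>

        struct descriptor_286 {
            uint32_t base;    /* 24-bit physical base on a real 286 */
            uint16_t limit;   /* highest valid offset inside the segment */
        };

        /* Returns the physical address, or -1 to stand in for a protection fault. */
        static int32_t translate(struct descriptor_286 seg, uint16_t offset)
        {
            if (offset > seg.limit)
                return -1;                        /* access outside the segment */
            return (int32_t)(seg.base + offset);
        }

        int main(void)
        {
            /* A segment sized exactly to a 100-byte array, as described above. */
            struct descriptor_286 array_seg = { 0x20000, 99 };

            printf("%d\n", translate(array_seg, 50));    /* 131122 (0x20032): fine */
            printf("%d\n", translate(array_seg, 100));   /* -1: fault, not a silent overwrite */
            return 0;
        }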

        • (Score: 2) by JoeMerchant on Wednesday December 02 2020, @09:41PM

          by JoeMerchant (3937) on Wednesday December 02 2020, @09:41PM (#1083389)

          Oh, I see tons of ways the 286 architecture could be used (or abused) to various ends, but at the end of it all, a flat address space would have had all the same capabilities as the segment register scheme without requiring the maddening specificity of design to the quirks of the hardware.

          --
          🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Thursday December 03 2020, @11:18AM

      by Anonymous Coward on Thursday December 03 2020, @11:18AM (#1083567)

      The x87 stack architecture was the most efficient way to implement floating point as a coprocessor without gobs of additional glue logic. Today, it doesn't matter: the CPU detects the common pattern of push/pop and optimizes it away. None of those "memory" operations for x87 ever hit actual memory today under typical circumstances.
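
      (For the curious, a tiny GCC/Clang-on-x86 inline-asm sketch of that stack style: fld pushes a value onto the x87 register stack, faddp pops two values and pushes their sum, fstp pops the result back out.)

      /* x87 stack-machine style, shown via inline asm (GCC/Clang, x86 only).
       * The FP "registers" form a small stack: loads push, stores pop. */
      #include <stdio.h>

      static double x87_add(double a, double b)
      {
          double r;
          __asm__ ("fldl  %1\n\t"   /* push a onto st(0)               */
                   "fldl  %2\n\t"   /* push b; a moves down to st(1)   */
                   "faddp\n\t"      /* st(1) += st(0), then pop        */
                   "fstpl %0"       /* pop the sum back out to memory  */
                   : "=m"(r)
                   : "m"(a), "m"(b));
          return r;
      }

      int main(void)
      {
          printf("%g\n", x87_add(1.5, 2.25));   /* prints: 3.75 */
          return 0;
      }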

      "Unified" memory is just marketing blather for shared memory. It's true that some low-end CPUs have to copy data between CPU and GPU. The Raspberry Pi used to work like this, for example. It makes sense: The CPU used in the Pi was intended for TV set-top boxes where all the real work happened in the GPU and you just needed a small CPU on the side to do things like talk network protocols. You were probably streaming video over the network and the chip would copy the data directly from the network into the GPU part of memory, so it didn't matter what happened in the CPU's memory area. Now that it's used in a mini-PC, I think the latest drivers have eliminated or at least are working toward eliminating this problem. Architectures designed for serious graphics performance using shared memory - such as game consoles - certainly never had this sort of limitation.

      The real problem with shared memory that kills performance isn't copying textures from one part of RAM to another. It's that the GPU and CPU have conflicting requirements. The GPU needs RAM that's very fast and optimized for sequential access. The CPU needs RAM that's not quite as fast but much more random in its access. And they both need plenty of bandwidth and power. The PS5 and other serious shared-memory architectures use GDDR memory for the whole system, and the CPU just has to live with being second fiddle. That's fine because consoles do most of their work on the GPU, and the CPU has gobs of cache and just doesn't need the system memory all that often. The PS5 can stream data right off its SSD into graphics memory. What bottleneck?

      PCs that care about graphics performance use a discrete GPU with its own memory. Sure, that means copying data, because it's a separate bank of RAM, but it also means the CPU and GPU each have exclusive access to their own memory, optimized for their own uses, without having to share bandwidth except when they load textures (which doesn't happen too often). Discrete chips also have their own power connections and heat sinks. That part doesn't matter as much in a laptop, where you're limited by battery and by getting the heat out of the case, but desktop gamers, crypto miners, protein folders, and neural net scientists use discrete GPUs and blow all the integrated setups out of the water.

      The M1's GPU is a medium-grade mobile GPU, intended for rendering the desktop. Up against AMD or nVidia, it competes with laptop or low-end desktop graphics from two generations ago. A current-tech APU runs away from it, and it's an order of magnitude behind the PlayStation 5, which itself still lags the new discrete GPU cards released for the PC by both major players at pretty much the same time. The M1 will run FaceTime just fine, but it is not any kind of wonder chip, except in the power consumption area. That isn't useless, but it benefits battery life, not performance.

      It remains to be seen if this neural processing unit means anything. If it does, Intel and AMD will copy it. There's a reason they have recently purchased the two biggest FPGA makers, and it's to hedge against things like this.

  • (Score: -1, Offtopic) by Anonymous Coward on Wednesday December 02 2020, @05:37PM

    by Anonymous Coward on Wednesday December 02 2020, @05:37PM (#1083311)

    Because [ROTTEN] Apple [infested with HUGE MAGGOTS] says it is.

  • (Score: 5, Interesting) by dltaylor on Wednesday December 02 2020, @09:57PM (3 children)

    by dltaylor (4693) on Wednesday December 02 2020, @09:57PM (#1083399)

    "Back in the day ..." I had an A1000, based on a 68000, that ran x86 opcodes in simulation. The bundle included a 5.25" drive and a copy of MS-DOS. PCs of the day were utter crap: expensive and slow, with limited graphics (as in MDA and CGA), and a "beep". Later, there was a board to hold Mac EPROMs, because the Amiga could run Mac programs faster than a MAC, particularly when the "Slow Draw" EPROM code was run from RAM.

    Natively, the Amiga was even faster, because it had coprocessors in custom silicon. A video plane (for monochrome) or collection of planes (for color) could be anywhere and toggled between on a raster boundary. The 2-channel 14-bit or 4-channel 8-bit sound was DMA-driven.

    BTW, the OS was real-time multi-tasking, which PCs and Macs got around to somewhat later.

    Now, we're back, except that the coprocessors are in the same silicon as the CPU(s), thanks to 30+ years of fab technology improvements.

    • (Score: 2) by bzipitidoo on Thursday December 03 2020, @02:08AM (1 child)

      by bzipitidoo (4388) on Thursday December 03 2020, @02:08AM (#1083475) Journal

      Genuine IBM PCs were expensive, yes. But the PC clones weren't. Their low prices are why the PC was so dominant. And often, the clones were better, with such little extras as a reset button, a keypress to skip the Power On Self Test (POST), and a little more raw speed. IBM has long had lots of resources to throw around, so at IBM it wasn't uncommon to see a genuine IBM 286-based PC with 12 MB of RAM, which was a heck of a lot of RAM in those days (1 MB was fairly common for a 286 clone). It took 10 minutes to go through the POST, and you couldn't skip it.

      The Amiga was tragic. Great computer, but Commodore blew it on the marketing. Asked too much, and worst of all, kept everything locked down so tight you couldn't breathe. Linux started on PCs, not Amigas. Yeah, the 386 had a major deficiency in the awkward code required to implement a semaphore, but the price was right. And when the 486 came along, it addressed that deficiency. Commodore didn't take their knee off the Amiga's neck in time, and it was dead.

      • (Score: 2) by turgid on Friday December 04 2020, @03:25PM

        by turgid (4318) Subscriber Badge on Friday December 04 2020, @03:25PM (#1084031) Journal

        Genuine IBM PCs were expensive, yes. But the PC clones weren't. Their low prices are why the PC was so dominant.

        I seem to remember that back in the 1980s, here in the UK, you could buy an Amiga A500 (hardware-accelerated graphics, audio DSP, 512k RAM, floppy disk drive, multi-tasking GUI OS) for £399, whereas the cheapest PeeCee clone (monochrome text, green screen, 16-bit, 512k RAM, floppy disk) would cost nearly twice that. Amstrad brought out some cheap PeeCees, but they were very basic and still more expensive than the basic Amiga. The Amiga was faster, had a better architecture, a better OS and a really cool GUI, and it was cheaper.

        The reason the PeeCee won was because everyone wanted a machine that could run Lotus 1-2-3 and WordPerfect, like they had at work.

        The Atari ST was pretty good too (though without the multi-tasking and fancy graphics hardware initially), but again people bought PeeCees because that's what they had at work. The Acorn Archimedes was a 32-bit RISC machine that could knock the socks off a 286 PC, and I think it cost about £600, but again people bought PeeCees.

    • (Score: 0) by Anonymous Coward on Thursday December 03 2020, @05:29AM

      by Anonymous Coward on Thursday December 03 2020, @05:29AM (#1083516)

      And in the early '80s I got a bargain on a genuine HP 3.5-inch floppy drive... only $650. It was a Sony on the inside.

      Some threads should be stickier. I'd be more likely to join.

  • (Score: 2) by MIRV888 on Monday December 21 2020, @01:32AM

    by MIRV888 (11376) on Monday December 21 2020, @01:32AM (#1089760)

    'But fortunately for AMD and Intel, Apple doesn’t sell their chips on the market. So PC users will simply have to put up with whatever they are offering. PC users may jump ship, but that is a slow process. You don’t leave immediately a platform you are heavily invested in.
    But young professionals, with money to burn without too deep investments in any platform, may increasingly turn to Apple in the future, beefing up their hold on the premium market and consequently their share of the total profit in the PC market.'

    Apple just isn't for me.
