
posted by takyon on Wednesday January 23 2019, @09:22PM
from the class-excavation dept.

Core blimey... When is an AMD CPU core not a CPU core? It's now up to a jury of 12 to decide

A class-action lawsuit against AMD claiming false advertising over its "eight core" FX processors has been given the go-ahead by a California judge.

US district judge Haywood Gilliam last week rejected [PDF] as "not persuasive" AMD's claim that "a significant majority" of people understood the term "core" the same way it did.

What tech buyers imagine represents a core when it comes to processors would be a significant part of such a lawsuit, the judge noted, and so AMD's arguments were "premature."

The so-called "eight core" chips contain four Bulldozer modules, the lawsuit notes, and these "sub-processors" each contain a pair of instruction-executing CPU cores. So, four modules times two CPU cores equals, in AMD's mind, eight CPU cores.

And here's the sticking point: these two CPU cores, within a single Bulldozer module, share caches, frontend circuitry, and a single floating point unit (FPU). These shared resources cause bottlenecks that can slow the processor, it is claimed.
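A rough sketch of the kind of scaling test that bottleneck claim implies, in Python (the workload and worker counts are illustrative, not from the court filing; pure-Python loops carry enough interpreter overhead that a C benchmark would expose FPU contention far more cleanly):

#!/usr/bin/env python3
# Illustrative scaling test: time the same fixed pile of floating-point
# work with 4 worker processes and with 8. On a chip with 8 independent
# FPUs the 8-worker run should take roughly half as long; if pairs of
# cores share a single FPU, the improvement may be smaller.
import math
import time
from multiprocessing import Pool

def fp_work(n):
    s = 0.0
    for i in range(1, n):
        s += math.sin(i) * math.sqrt(i)   # floating-point heavy loop
    return s

def timed_run(workers, chunks=8, n=2_000_000):
    # Total work is always 8 chunks; only the number of workers changes.
    with Pool(workers) as pool:
        start = time.perf_counter()
        pool.map(fp_work, [n] * chunks)
        return time.perf_counter() - start

if __name__ == "__main__":
    print(f"4 workers: {timed_run(4):.2f}s")
    print(f"8 workers: {timed_run(8):.2f}s")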

The plaintiffs, who sued back in 2015, argue that they bought a chip they thought would have eight independent processor cores – the advertising said it was the "first native 8-core desktop processor" – and paid a premium for that.


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday January 23 2019, @09:54PM (17 children)

    by Anonymous Coward on Wednesday January 23 2019, @09:54PM (#790834)

    When I bought my FX-8350, one of the features that excited me (besides the superior cost/performance vs a comparable Intel processor) was that, where Intel's processors were pitched as quad-core with hyperthreading, my AMD FX processor didn't fake 8 with 4 cores + hyperthreading, but provided an actual 8 cores. Are they now admitting they lied?

    I stand by my selection and definitely received better value for my money than if I'd bought a comparable Core i7 processor at the time, but if AMD lied there should certainly be some repercussions.

  • (Score: 0) by Anonymous Coward on Wednesday January 23 2019, @10:00PM (1 child)

    by Anonymous Coward on Wednesday January 23 2019, @10:00PM (#790840)

    RTFA'd. It sounds like the description is consistent with what I remember when I bought my AMD FX processors. Eight cores, as advertised. Not necessarily optimally configured and sharing FPUs between pairs of cores, but definitely 8, not like Intel's hokey 4+hyperthreading.

    • (Score: 0) by Anonymous Coward on Thursday January 24 2019, @08:26PM

      by Anonymous Coward on Thursday January 24 2019, @08:26PM (#791417)

      I knew what the FX processors were when I bought mine. I agree that the description is consistent with what I knew back then. I'm very happy with mine, and I still use an 8 core and a 4 core FX today, along with a Ryzen.

  • (Score: 5, Informative) by Immerman on Wednesday January 23 2019, @10:08PM (13 children)

    by Immerman (3985) on Wednesday January 23 2019, @10:08PM (#790845)

    No, they absolutely have 8 physical cores - those cores just don't each have their own dedicated cache and FPU.

    For reference, most 386 CPUs had *no* on-board cache or FPU (a.k.a math coprocessor), those were implemented in separate chips that a motherboard manufacturer could choose to include (or not). The 486 included some on-chip cache, but still mostly had no FPU (as I recall that was one of the big distinctions between the SX and DX lines in both chips). And I don't think anyone is going to argue that they were "zero-core" processors.

    Every usage of "CPU core" I've encountered basically amounts to "smallest unit that (with access to external RAM) is capable of running a computer program." That usually means integer processing and conditional branching, and not much else. Though specific designs may make for much more sophisticated cores with caching, floating-point and/or vector processing capabilities, etc.
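
    As a quick sanity check on what the operating system itself counts as a core, here's a Linux-only sketch (my own illustration) that groups logical CPUs using the kernel's sysfs topology files. Whether the two integer cores of a Bulldozer module get distinct core_ids depends on the kernel version, so treat the output as the kernel's opinion rather than a definition:

#!/usr/bin/env python3
# Group logical CPUs by the physical core the kernel assigns them to,
# using /sys/devices/system/cpu/cpuN/topology/* (Linux only).
import glob
import os
from collections import defaultdict

cores = defaultdict(list)  # (package_id, core_id) -> [logical CPU numbers]

for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
    topo = os.path.join(cpu_dir, "topology")
    if not os.path.isdir(topo):
        continue  # offline CPUs may not expose topology
    with open(os.path.join(topo, "core_id")) as f:
        core_id = int(f.read())
    with open(os.path.join(topo, "physical_package_id")) as f:
        pkg_id = int(f.read())
    cpu_num = int(os.path.basename(cpu_dir)[3:])
    cores[(pkg_id, core_id)].append(cpu_num)

print(f"logical CPUs: {os.cpu_count()}")
print(f"distinct cores reported by the kernel: {len(cores)}")
for (pkg, core), cpus in sorted(cores.items()):
    print(f"  package {pkg}, core {core}: logical CPUs {sorted(cpus)}")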

    • (Score: 2) by FatPhil on Wednesday January 23 2019, @10:34PM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday January 23 2019, @10:34PM (#790868) Homepage
      In particular, the sharing of at least some layers of cache between cores is absolutely standard procedure, a complete non-event. There's a stronger argument for considering the FPU to be part of the CPU now, as they've been inseparable since Pentium times.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by RS3 on Wednesday January 23 2019, @11:14PM (3 children)

      by RS3 (6367) on Wednesday January 23 2019, @11:14PM (#790900)

      I mostly agree, and for sure many of technology's terms are not well defined, or fall into misuse. A minor (but annoying) example is hard disk sizes, where 1 GB ≠ 1024^3 bytes but rather 1,000,000,000 bytes. Sigh.

      Just to clarify, the 80486 DX has both cache and math coprocessor. The SX has cache but no math coprocessor. https://en.wikipedia.org/wiki/Intel_80486 [wikipedia.org]

      • (Score: 3, Disagree) by AthanasiusKircher on Thursday January 24 2019, @04:24AM (2 children)

        by AthanasiusKircher (5291) on Thursday January 24 2019, @04:24AM (#791036) Journal

        A minor (but annoying) example is hard disk sizes where 1 GB ≠ 1024^3 , but rather = 1,000,000,000. Sigh.

        The hard disk manufacturers are correct. Giga- is an SI prefix that strictly refers to 10^9. SI, IEEE, and IEC standards have been in existence for a while now that clarify this. 1 GB (gigabyte) is indeed 10^9 bytes. If you want to refer to 1024^3 bytes, that's 1 GiB (gibibyte).

        It's true that there was loose usage of the SI prefixes early on where the error didn't matter as much (e.g., 1024 vs. 1000). But that use has been deprecated by standards organizations for at least 15 years or so now.
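
        The arithmetic, spelled out (illustrative Python):

# Decimal vs. binary prefixes for a byte count.
GB  = 10**9      # gigabyte: SI decimal prefix
GiB = 1024**3    # gibibyte: IEC binary prefix, 1,073,741,824 bytes

print(GiB / GB)  # 1.073741824 -- a gibibyte is about 7.4% larger than a gigabyte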

        • (Score: 3, Insightful) by AthanasiusKircher on Thursday January 24 2019, @02:12PM (1 child)

          by AthanasiusKircher (5291) on Thursday January 24 2019, @02:12PM (#791219) Journal

          Since I've been modded as "disagree," I'll just note that almost all of my post is simply factual. The only revision I would make is that the first sentence should say "The hard disk manufacturers are correct NOW." It's true there were lawsuits and disagreement years ago, but the official standards organizations have weighed in and clarified the correct usage.

          Note that usually I'm happy to accept changes in language when usage changes decidedly. But we are not dealing with "language" in the normal sense here. The entire point of the SI system is to establish standards. The early use in the computing industry to call 1024 bytes a "kilobyte" because 1024 is close to 1000 was a deviation from the definition of the prefix. It took hold mostly when computing was still a niche industry.

          The SI system was put in place in part to avoid specifically these kinds of deviations in meaning regarding units. Before the metric system, you not only had regional differences in unit definitions (e.g., a "foot" could mean a somewhat different length in different countries, or even in different cities), but also different unit definitions by industry (e.g., an "ounce" of gold could be different from an "ounce" of water, and a "plumber's ounce" might be different from a "wine merchant's ounce").

          The computing industry appropriated the SI prefixes and attempted this very type of redefinition to fit their purposes. It's of course reasonable to have powers of 2 used in computing for units. It is, however, against the very rationale of the metric system to have a deviation from definitions in a particular industry. (Actually, not even a whole industry -- it's not like "giga-" in "gigahertz" means 1024^3. It was basically only a deviation for the unit of storage space and sometimes data transfer rate.)

          I'll be the first one to agree that terms like "gibibyte" sound silly. But it's very reasonable to make the distinction in these units. And I believe Mac OS and Linux now make the GB/GiB distinction clearly and following the standards. Windows may be the only outlier that actually still measures hard disk space using the (inaccurate) KB/MB/GB/TB definition of powers of 2 -- though I haven't used Windows regularly in a few years, so I don't know.

          I'm surprised when these disagreements arise on tech forums, because arguing in favor of the 1 GB = 1024^3 definition is like arguing in favor of some other weird parochial measurement unit, like the U.S. should stick to pounds and feet. Such an attitude is usually condemned in places like this. Actually, it's a bit worse, since the SI prefixes are being misappropriated. It would be kind of like the U.S. "adopting" the kilogram, but simply defining it to be 2 pounds "because it's close enough, and it suits us better to use a unit that's a whole multiple of a unit we're already using." Such an error would be within 10%, which is approximately the error present in the TB definition.

          Anyhow, we can disagree about what should be the standard measurement unit (GB or GiB), but equating them or pretending that GB means GiB flies in the face of reasonable standards for measurement usage.
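
          And for concreteness, the drift grows with each prefix (a quick illustrative calculation):

# How far the binary prefixes drift from the decimal ones.
prefixes = [("kB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")]
for power, (si, iec) in enumerate(prefixes, start=1):
    error = (1024**power / 1000**power - 1) * 100
    print(f"1 {iec} exceeds 1 {si} by {error:.1f}%")
# Roughly 2.4%, 4.9%, 7.4%, 10.0% -- hence the ~10% figure at TB scale.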

          • (Score: 2) by RS3 on Friday January 25 2019, @08:30AM

            by RS3 (6367) on Friday January 25 2019, @08:30AM (#791670)

            I surely did not downmod you, and rarely do to anyone (only the very obvious trolls). I could discuss my dislike of the mod systems, but that would be an off-topic tome.

            I have what are hopefully intelligent thoughts, but it's extremely late where I am, and anything I write now will look scattered to my rested brain tomorrow.

            But I will say that my immediate reaction to your post was going to be to mention the "K" and "M" in both hard disk and RAM sizing. As far as I know, in RAM sizing, "G" really does = 1024^3. I'll have to check in my IT museum and see how they sized hard drives long ago, but I'm pretty sure I'll find that "M" is 1024^2.

            For the record, I have great disdain for heated discussion, esp. this type. But I will point out: you mentioned the adoption of "K" meaning 1024, but you didn't disprove it, and it (K equaling 1024) kind of undermines your GiB argument.

            My thoughts / feeling: when "K" was adopted to mean 1024 in the computing world, and later "M", I and so many others thought "G" would naturally follow as 1024^3, _especially_ since the prefix "giga" is generally only used in technical / scientific parlance.

            I'm writing too much and I'm too tired. I have much more to say when rested. Bottom lines are: I and most people have been okay with "K", "M", "G", "T", "P", etc., all being powers of 2 when used in reference to computer / data storage. We all (well, most of us) felt cheated by big-business tycoons when they demoted hard disk "G" to mean 1,000,000,000. It just seemed a bit too convenient and certainly disingenuous. But most of us have accepted it and moved on with life, not sweating the details. :)
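
            For what it's worth, the arithmetic behind that feeling of being short-changed (illustrative):

# Why a "1 TB" drive looks smaller in a binary-prefix readout.
TB  = 10**12      # what the drive maker means by "1 TB"
GiB = 1024**3     # what a binary-prefix readout divides by

print(TB / GiB)   # ~931.3 -- the familiar "1 TB drive shows roughly 931 GB"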

    • (Score: 1, Informative) by Anonymous Coward on Wednesday January 23 2019, @11:40PM (2 children)

      by Anonymous Coward on Wednesday January 23 2019, @11:40PM (#790921)

      For reference, most 386 CPUs had *no* on-board cache or FPU (a.k.a math coprocessor), those were implemented in separate chips that a motherboard manufacturer could choose to include (or not). The 486 included some on-chip cache, but still mostly had no FPU (as I recall that was one of the big distinctions between the SX and DX lines in both chips). And I don't think anyone is going to argue that they were "zero-core" processors.

      It's quite a bit more complicated than that!

      Intel released the 80386 in 1985, with no FPU, and they released the (separate) 80387 FPU in 1987.

      In 1988 they released the 80386SX which had a 16-bit data bus to reduce board complexity. The normal 80387 is not compatible so they released the 80387SX to work with this variant of the processor. In an effort to reduce confusion between their product offerings, the original 80386 design (still with no FPU) is now called the 80386DX.

      Late 1989 Intel released the 80486DX. This was the first x86 offering from Intel with an integrated FPU.

      In 1991 AMD released the Am386DX which, like the Intel 386, has no integrated FPU. At the time FPU performance mattered only for pretty niche applications, and this processor ended up competing directly against Intel's early 486 offerings since (other than floating point) it still performed pretty well and was much cheaper.

      In response, and perhaps due to early production problems with the integrated FPU, Intel subsequently released the 80486SX. This is exactly the same die as the 80486DX with the FPU present, but disabled (later respins do remove the FPU completely to save die area). It was sold at a lower cost to compete with the AMD offerings.

      Then, in a bit of hilarity, Intel released the 80487SX, which is also exactly the same design as the 80486DX, but an extra pin is added to the package so it does not fit in the same socket. Motherboards that accept this version work by completely disabling the "486SX" and then the "487SX" does everything. It should also be possible to remove the extra pin to fit the 487SX into the regular socket.

      • (Score: 1) by Guppy on Thursday January 24 2019, @05:11AM

        by Guppy (3213) on Thursday January 24 2019, @05:11AM (#791062)

        Also the "486DLC" processors from Cyrix and IBM. Physically compatible with the 386DX socket (and usually drop-in compatible, but not always), it was a mix of 386 and 486 features, plus a small block of L1 cache that 386 processors lacked. And no math co-processor, but compatible with the 387DX. For a time, they offered really good price/performance value.

        There was also a 486SLC, physically compatible with the 386SX socket.

      • (Score: 0) by Anonymous Coward on Thursday January 24 2019, @07:25AM

        by Anonymous Coward on Thursday January 24 2019, @07:25AM (#791121)

        Had an Olivetti with the 486DX. Best machine I ever had. My mom threw it out... and she still denies having thrown out the Vic-20.

    • (Score: 2) by Runaway1956 on Thursday January 24 2019, @12:53AM (4 children)

      by Runaway1956 (2926) Subscriber Badge on Thursday January 24 2019, @12:53AM (#790948) Journal

      Thank you for the reminder. It helps to settle the issue in my own mind. That doesn't mean it will help to settle the issue in court though. I side with AMD here - a dual core is definitely much faster than a single core, a quad much faster than a dual, etc.

      There is some danger here that marketing may have used misleading language somewhere that has put AMD's teat in the wringer. Legal beagles can do strange things with language when they want to, and marketing is no better.

      • (Score: 2) by takyon on Thursday January 24 2019, @02:18AM (3 children)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday January 24 2019, @02:18AM (#790995) Journal

        a dual core is definitely much faster than a single core, a quad much faster than a dual, etc.

        Not necessarily.

        Here's a blast from the past (2011):

        https://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested/2 [anandtech.com]

        The "8-core" FX-8150 does not beat the 4-core i7-2600K most of the time, even in the multithreaded workloads that should favor it. In some cases, the 6-core AMD Phenom II X6 beat the Bulldozer chip or came close.

        It's hard to overstate just how bad the Bulldozer launch was. And it's the reason we are hearing about this class action in 2019.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 0) by Anonymous Coward on Friday January 25 2019, @05:38AM (2 children)

          by Anonymous Coward on Friday January 25 2019, @05:38AM (#791627)

          Now divide the retail price by the performance points in each benchmark. AMD's offerings have always been about more for your money rather than absolute peak performance. You'll also notice that single-threaded performance is not great relative to the 2600K but the gap becomes pretty narrow in multi-threaded tests. Are you going to whip out some quadruple-digit priced Xeons from the same era and go "but muh benchmarks show dat duh Xeons move faster dan duh AMD!" too?

          • (Score: 2) by takyon on Friday January 25 2019, @06:48AM

            by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday January 25 2019, @06:48AM (#791641) Journal

            I was just addressing Runaway's assertion about cores.

            At launch, the i7-2600K was $317 and the FX-8150 was $245. For about 30% more, you got much better single-threaded performance, with the Intel chip often 30%, and sometimes even 50-65%, faster in certain benchmarks and games. Throw in a cheaper Intel chip from the time and it would also do well against the "8-core".

            It was far from a slam dunk like Zen tends to be today, and anemic compared to AMD's previous generation chips. Bulldozer was bad, and the bad design choices were only fixed with the arrival of Zen over 5 years later.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by toddestan on Saturday January 26 2019, @12:02AM

            by toddestan (4982) on Saturday January 26 2019, @12:02AM (#792107)

            One of the problems really was peak performance. Back in those days, AMD really didn't have an answer to Intel's i7. Their very best chips competed with the i5, and it went down from there. Now, it may not be a problem to have no answer to Intel's ridiculous $1000 Core i7 Extreme chip, but these were the mainstream $250-$300 i7s that were very popular and bought in droves (I have one, typing on it right now). To make it more embarrassing, the top of the line chip from the previous generation was still nipping at its heels even though it was now a couple of years old. Those weren't good days for AMD.

  • (Score: 3, Informative) by sjames on Wednesday January 23 2019, @11:41PM

    by sjames (2882) on Wednesday January 23 2019, @11:41PM (#790923) Journal

    TL;DR: It's complicated.

    In a full core, you have the instruction decoder/dispatcher assigning work to a full set of integer and floating point execution units. In hyperthreading, you have two decoders dispatching different threads of execution to a single set of integer and floating point units.

    On that AMD processor, you have two dispatchers dispatching to two sets of integer units but just one set of floating point units. It is really halfway between the two extremes.

    How useful that is depends on the workload.
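
    To make "depends on the workload" concrete, here's a toy model of the three layouts (my own simplification, not AMD's actual pipeline): each execution unit retires one op per cycle, and two identical threads issue a mix of integer and floating-point ops.

#!/usr/bin/env python3
# Toy contention model: count cycles needed for two threads to drain a
# fixed mix of integer and floating-point ops, given how many int and fp
# execution units they share.

def cycles_to_finish(int_units, fp_units, ops_per_thread=100, fp_share=0.25):
    fp_ops = int(ops_per_thread * fp_share)
    int_ops = ops_per_thread - fp_ops
    threads = [{"int": int_ops, "fp": fp_ops} for _ in range(2)]
    cycles = 0
    while any(t["int"] or t["fp"] for t in threads):
        free_int, free_fp = int_units, fp_units
        for t in threads:
            # Greedy issue: each thread retires at most one op per cycle.
            if t["fp"] and free_fp:
                t["fp"] -= 1; free_fp -= 1
            elif t["int"] and free_int:
                t["int"] -= 1; free_int -= 1
        cycles += 1
    return cycles

for fp_share in (0.25, 0.75):
    print(f"workload with {int(fp_share * 100)}% floating-point ops:")
    print("  two full cores   (2 int, 2 fp units):", cycles_to_finish(2, 2, fp_share=fp_share))
    print("  hyperthreading   (1 int, 1 fp unit): ", cycles_to_finish(1, 1, fp_share=fp_share))
    print("  Bulldozer module (2 int, 1 fp unit): ", cycles_to_finish(2, 1, fp_share=fp_share))

    In this toy, a mostly-integer mix lets the shared-FPU module keep pace with two full cores, while an FP-heavy mix drags it down to hyperthreading-like behavior.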