
posted by martyb on Friday October 16 2015, @11:45PM
from the hanging-in-there dept.

Advanced Micro Devices (AMD) has posted another quarter of disappointing financial results:

The Computing and Graphics segment continues to struggle, although AMD did see stronger sequential growth here with the recent launch of Carrizo. Revenue increased 12% over last quarter, although it is still down 46% year-over-year. This segment had an operating loss of $181 million for the quarter, up from a loss of $147 million last quarter and a loss of $17 million a year ago. Sequentially, the larger loss is mostly attributed to a $65 million write-down which AMD is taking on older-generation products. Annually, the decline is due to lower overall sales. Unlike Intel's, AMD's processors saw a decrease in Average Selling Price (ASP) both sequentially and year-over-year, so there was no pricing gain to offset the lower sales volume. The GPU ASP was a different story, staying flat sequentially and increasing year-over-year. Recent launches of new AMD graphics cards have helped here.

Alongside the Q3 2015 earnings release, AMD has announced that it is selling an 85% stake in its back-end manufacturing operations. ATMP, short for "assembly, test, mark, and pack," is the stage of semiconductor manufacturing that takes a finished wafer and dices it into individual packaged chips ready for customer use. AMD retained these operations even after spinning off its chip fabrication business as GlobalFoundries in 2009. Nantong Fujitsu Microelectronics (NFME) will pay AMD $371 million ($320 million after taxes and expenses), and the two companies will operate a joint venture to produce chips:


As for the joint venture itself, this gives NFME the ability to further expand into the market for semiconductor assembly and test services (SATS). With AMD's lower product volumes no doubt making it harder to fully utilize their high-volume ATMP facilities, a joint venture with NFME can bring more work into those facilities by having them serve additional customers beyond AMD. Furthermore, NFME also gains the R&D experience that comes with AMD's ATMP operations, which for them is a competitive advantage over other third-party SATS providers.

The news comes just days after AMD "Corporate Fellow" Phil Rogers departed for competitor NVIDIA after working at ATI and AMD for 21 years:

As one of AMD's high-ranking technology & engineering corporate fellows, Rogers held an important position at AMD. For the last several years, Rogers has been responsible for helping to develop the software ecosystem behind AMD's heterogeneous computing products and the Heterogeneous System Architecture (HSA). As a result, Rogers has straddled the line as a public figure for AMD; in his position, Rogers was very active on the software development and evangelism side, frequently presenting the latest HSA tech and announcements for AMD at keynotes and conferences.

[...] Meanwhile, of equal interest is where Rogers has landed: AMD's arch-rival NVIDIA. According to his LinkedIn profile, Phil Rogers is now NVIDIA's "Chief Software Architect – Compute Server," a position that sounds very similar to what he was doing over at AMD. NVIDIA is not a member of the HSA Foundation, but it is currently gearing up for the launch of the Pascal GPU family, which has some features that overlap well with Rogers' expertise. Pascal's NVLink CPU & GPU interconnect would allow tightly coupled heterogeneous computing similar to what AMD has been working on, so for NVIDIA to bring over a heterogeneous compute specialist makes a great deal of sense. And similarly for Rogers, having left AMD, NVIDIA is the most logical place for him to go.


Original Submission

  • (Score: 4, Interesting) by martyb on Saturday October 17 2015, @01:37AM

    by martyb (76) Subscriber Badge on Saturday October 17 2015, @01:37AM (#250928) Journal

    It's sad. In my opinion, there is a need for a company to run toe-to-toe with Intel.

    AMD was the first in the marketplace with a 64-bit x86 architecture.

    Their Mantle programming tools seem to provide a means by which developers can get more performance out of their GPUs.

    Their merging the CPU and GPU onto one die seems like it has some real staying power.

    Unfortunately, as I recall it (please correct me if I am wrong), Intel provided funds for advertising to PC OEMs, but only so long as they were an all-Intel shop. That effectively blocked AMD from making any real inroads with the likes of Dell.

    That said, AMD made some mistakes of their own. Their video drivers had a reputation of being buggy. They built their own chip fab (at a huge cost) well ahead of the time they were able to drive enough business to make it a sustainable, going concern. That cut into their R&D budget, reducing their ability to keep competitive products coming down the pipe.

    I certainly hope they can pull through and keep the products coming. The more competition between them, the better off we will be.

    Disclaimer: I bought an AMD Athlon 64 based HP laptop back in '05 and it gave me 10 solid years of service; the main reason I am no longer using it is that the hard drive started reporting errors. That, and for the last 7 years it had a broken screen (so I used an external monitor) and the battery had long since stopped holding a charge (so I just kept it plugged in all the time).

    --
    Wit is intellect, dancing.
    • (Score: 2) by bart9h on Saturday October 17 2015, @01:57AM

      by bart9h (767) on Saturday October 17 2015, @01:57AM (#250930)

      It's sad. In my opinion, there is a need for a company to run toe-to-toe with Intel.

      And, there is a need for a company to run toe-to-toe with Nvidia.

    • (Score: 0) by Anonymous Coward on Saturday October 17 2015, @02:27AM

      by Anonymous Coward on Saturday October 17 2015, @02:27AM (#250936)

      Intel has probably outdone AMD on power-sensitive systems like laptops as well, which helps them in modern times, when people have a Personal Laptop rather than a Personal Computer.

      AMD really needs to aim at owning the games market in the short/medium term.

    • (Score: 1, Informative) by Anonymous Coward on Saturday October 17 2015, @02:29AM

      by Anonymous Coward on Saturday October 17 2015, @02:29AM (#250938)

      x86-64 was a great achievement, but it was born of necessity: in the late '90s, Intel was betting on Itanium to be the 64-bit architecture to replace x86, first on mainframe-scale systems, then servers, then desktops. It never made its way out of the ultra-expensive niche of mainframe-scale systems, mostly due to the need to port software to IA-64, as well as the dog-slow x86 emulation, which IIRC was handled by a coprocessor chip on the mainboard. Microsoft's cynical, grizzled veteran Raymond Chen (arguably an engineer who is about as un-21st-century-Microsoftian as is possible) posted a long, LONG series on the Itanium architecture, which starts here: http://blogs.msdn.com/b/oldnewthing/archive/2015/07/27/10630772.aspx [msdn.com]

      Alongside the release of Itanium, Intel worked on the NetBurst architecture, which became the Pentium 4. When it first came out, it was ridiculously expensive, and required Rambus RIMM modules. I remember John Romero posting on his blog (this was 2000, back when he was supposedly still working on Daikatana) about buying a new Pentium 4 CPU with Rambus RIMMs and its expensive motherboard. I remember thinking, "Wow, what an idiot. Ah well, he has the money for it." The Athlon was a far better bang for the buck, and even had faster FPU performance later on. I remember upgrading my P3 500MHz system to an Athlon Thunderbird 1.33 GHz (and being very, very careful about the heatsink, back in the days when installing it wrong meant the CPU die literally going up in smoke: https://www.youtube.com/watch?v=UoXRHexGIok [youtube.com] ).

      Intel eventually arrived at the x86-64 party late, in 2004, but they were still struggling along with the NetBurst architecture, which, despite its higher clock speeds, had a longer pipeline and ridiculous thermal output for the time (back before video cards became the hottest component in a gaming PC). What eventually saved Intel was Nehalem, which was based on the Core (Yonah) microarchitecture, which was far more scalable and abandoned the strangely long pipeline of NetBurst. Somewhere between the Core 2 Duo and the i7 was the last time that AMD was performance-competitive with Intel in the CPU field.

      With ATI's flaky drivers, and Intel's rock-solid chipsets for almost the past decade, I really haven't looked back at AMD. I don't think they can become competitive again without some magical way to shed their past baggage, but that would mean giving up control of their quality-control pipeline, which would make the quality problem worse. And being the provider of all the game console APUs apparently isn't paying the bills enough. This all seems self-inflicted.

    • (Score: -1, Troll) by Anonymous Coward on Saturday October 17 2015, @02:50AM

      by Anonymous Coward on Saturday October 17 2015, @02:50AM (#250942)

      It is always sad to see a retarded kid think he can beat a pro athlete. AMD has been the retarded kid since Intel's Core2.

      Flush this turd down the toilet and put it out of its misery.

      • (Score: 2, Troll) by meisterister on Saturday October 17 2015, @07:34PM

        by meisterister (949) on Saturday October 17 2015, @07:34PM (#251194) Journal

        ...but they can and did for about 6 years.

        The original Athlon kicked the PIII's and later the P4's ass, and even then AMD was up against a "pro athlete." Hell, even the K6 and K6-2 were still very competitive with Intel's offerings in the mid-to-late '90s.

        The only reason that AMD is in the position it's in right now is because of horrible mismanagement. The K8 core in the Athlon 64 was originally going to be far wider and more ambitious, but internal scuffles ensured that it was just a beefed up K7. AMD also sat still and watched as Intel whipped the Pentium III into shape with the Pentium M, and didn't even bother starting development on a new architecture to take over from K8.

        --
        (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
        • (Score: 0) by Anonymous Coward on Sunday October 18 2015, @08:43PM

          by Anonymous Coward on Sunday October 18 2015, @08:43PM (#251591)

          Reading comprehension ain't your strong suit, I see. Yes, they were ahead until Intel's Core2. Since then AMD has been a turd.

    • (Score: 1, Interesting) by Anonymous Coward on Saturday October 17 2015, @09:55AM

      by Anonymous Coward on Saturday October 17 2015, @09:55AM (#251035)

      AMD has been making the same mistakes since the 70s. And they all come from the top.

      While the founders of Intel were driving around in middle-class passenger cars, the CEO of AMD was driving around in a Ferrari. While Intel was heavily rolling its money back into R&D, AMD was already cutting corners. It is one of the reasons AMD's processors seemed so much faster than Intel's back when they were just a second-source supplier: Intel's testing was much more thorough and their numbers much more conservative.

      Intel has acted like they were taken over by MBAs since the late '80s/early '90s. (They did a number of really scummy things, mostly pushed by the sales department.) However, while Intel's reinvestment strategy allowed them to overtake the entire semiconductor manufacturing industry technology-wise, AMD's cost-cutting, golden parachutes and salaries, and general 'corporate slacker culture' have sunk them ever further into irrelevance.

      That said: I'm going to stock up on a few ECC-capable AMD chips and mobos before they go under. Fuck if I'm going to let a bunch of Intel hardware into my house with signed firmware I can't audit or replace.

  • (Score: 2, Insightful) by linkdude64 on Saturday October 17 2015, @04:01AM

    by linkdude64 (5482) on Saturday October 17 2015, @04:01AM (#250962)

    I hope they make it to Zen in 2016, I really do. The alleged higher IPC might finally sate the PC gaming crowd, and the lack of integrated hardware backdoors in their CPUs (AFAIK) might sate privacy nuts who also need high IPC for other applications. I'm waiting for it, and hope it delivers. I just can't support Intel's business practices anymore.

  • (Score: 2) by boltronics on Saturday October 17 2015, @04:05AM

    by boltronics (580) on Saturday October 17 2015, @04:05AM (#250965) Homepage Journal

    They just can't seem to do anything right.

    I used to buy AMD for all my CPUs. I remember owning an Athlon (K7) 500 slot processor, and later upgraded to a Duron 900MHz which I overclocked the heck out of. Later, I got a number of Athlon XP computers, and those were amazing and lasted quite a few years. My spouse later upgraded to an Athlon 64... and that's around the time AMD started to run out of steam. The Core 2 Duos just blew everything AMD had away, and then the i7s went even further. It's as if AMD wasn't even trying any more, and just gave up. Nothing they produced going forward seemed competitive.

    Today, AMD's CPUs use crazy amounts of power at the high end. Many motherboards don't support the CPUs because the wattage is so high, and you can't even run them with air cooling. Madness! But what really annoys me is that they still don't support DDR4 (and DDR3 is reportedly unsafe [soylentnews.org]), and they still only support up to PCIe 2.0. AMD has been shipping graphics cards for nearly 4 years that aren't even properly supported by its own CPUs!

    And then there's the drivers... what an absolute mess. Especially on GNU/Linux. I understand they wanted to do the whole amdgpu driver architecture thing and are pouring dev resources into that (with the intent to ditch Catalyst for everything but the workstation GPUs), but why start so late? My spouse currently has an R9 285, which only recently became usable with free software drivers - almost a year after the card was released! AMD all but admits Catalyst on GNU/Linux is crap, so why did they take so painfully long to put this plan into action? AMD is basically the only GPU manufacturer that releases cards that don't work properly (or at all) on day one - even Catalyst for GNU/Linux took a few weeks to appear, IIRC. It just makes me want to pull my hair out.

    And the way Fury was marketed? OMG. Firstly, 4 GB of RAM, while the cheaper GPUs had 8? That's a hard sell for early adopters, even with the HBM. Everyone's just going to wait until the 8 GB HBM version comes out, or at least wait to see if 8 GB is really proven to be unnecessary. And if the card can't quite compete with a similarly priced Nvidia GPU, why not sell it a bit cheaper? Unless you have very low yields, why would you price it like that? How does that generate demand?

    I had a laptop stolen a couple of weeks back, so I purchased a new one with an AMD APU (a quad-core A4-5000 at 1.5 GHz with an integrated Radeon HD 8330) because I want to support AMD and free software (and I needed something small and lightweight), and I'm very pleased with it. I had an AMD E-350 APU before that, and the A4-5000 just annihilates it in performance - but I'm obviously not looking to run anything too demanding on it. I'm also going to need to upgrade my gaming machine soon. I'm *trying* to hold off for Zen, since hopefully by the time it's out, the Fury (which should be cheaper by then) will have an amdgpu driver that is sufficient to retire Catalyst and I can just go AMD all the way... but it's hard, because I really don't know that I have enough trust left in AMD to pull it off.

    And if an AMD fan such as myself has these doubts, it's no wonder they are in trouble.

    --
    It's GNU/Linux dammit!
    • (Score: 2) by takyon on Saturday October 17 2015, @05:19AM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Saturday October 17 2015, @05:19AM (#250978) Journal

      You got a laptop with an A4-5000? I found an A8-6410 for $250 [soylentnews.org] the other day.

      http://www.cpubenchmark.net/compare.php?cmp[]=2005&cmp[]=2266 [cpubenchmark.net]

      Oh, you wanted "small and lightweight".

      Indications are that Zen will come out in Q4 2016. That's a long time to wait, but AMD lovers should wait, because AMD claims it will blow Bulldozer/Excavator out of the water [semiaccurate.com]. AMD APUs are already competitively priced in the low-end segments. If they can add a 40% IPC increase, you wouldn't want an Intel chip in many laptops.

      About the Fury. Maybe the 4 GB was a poorly conceived move, but benchmarks show that almost nobody needs more than 4 GB of VRAM right now, and the cards use memory a bit more efficiently than their predecessors. Here are the results:

      http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/7 [anandtech.com]

      To be clear, we can without fail “break” the R9 Fury X and place it in situations where performance nosedives because it has run out of VRAM. However of the tests we’ve put together, those cases are essentially edge cases; any scenario we come up with that breaks the R9 Fury X also results in average framerates that are too low to be playable in the first place. So it is very difficult (though I do not believe impossible) to come up with a scenario where the R9 Fury X would produce playable framerates if only it had more VRAM.

      [...] Meanwhile with GTA5 we can break the R9 Fury X, but only at unplayable settings. The card already teeters on the brink with our standard 4K “Very High” settings, which includes 4x MSAA but no “advanced” draw distance enhancements, with minimum framerates well below the GTX 980 Ti. Turning up the draw distance in turn further halves those minimums, driving the minimum framerate to 6fps as the R9 Fury X is forced to swap between VRAM and system RAM over the very slow PCIe bus.

      But in both of these cases the average framerate is below 30fps (never mind 60fps), and not just for the R9 Fury X, but for the GTX 980 Ti as well. No scenario we’ve tried that breaks the R9 Fury X leaves it or the GTX 980 Ti running a game at 30fps or better, typically because in order to break the R9 Fury X we have to run with MSAA, which is itself a performance killer.

      If you can't reach 20-30 FPS in the situations where the card would run out of VRAM, it doesn't matter that it has only 4 GB.
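
      To put rough numbers on that "very slow PCIe bus" point, here's a back-of-envelope sketch in Python (spec-sheet bandwidths, not measurements, so treat the exact figures as assumptions):

          # Why running out of VRAM craters minimum framerates.
          HBM1_BANDWIDTH = 512.0   # GB/s; R9 Fury X: 4 stacks x 128 GB/s each
          PCIE3_X16 = 15.75        # GB/s; PCIe 3.0 x16, one direction

          ratio = HBM1_BANDWIDTH / PCIE3_X16
          print(f"Local VRAM is ~{ratio:.0f}x faster than system RAM over PCIe")
          # -> ~33x. A resource pulled over the bus mid-frame stalls the GPU
          # for vastly longer than a VRAM hit, which is how you get 6 fps
          # minimums while average framerates merely look "low".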

      The good news is that AMD beat NVIDIA to HBM by a year, and is using HBM 1.0. Both of them will be using HBM 2.0 in 2016, [fudzilla.com] and it should allow up to 16 GB (4-Hi) or 32 GB (8-Hi) if 4 stacks are used like with Fury.

      I somehow doubt we will see 4 stacks of 8-Hi (32 GB) on mainstream/enthusiast cards in the next 2 years. But with HBM2, AMD and NVIDIA will be able to double VRAM to 8 GB while halving the amount of stacks needed to just two. AMD will learn its lesson and 8 GB may become the new minimum amount of VRAM for high-end graphics cards (with the absolute lowest-end possibly using a single stack of 4 GB, or just GDDR5).
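
      And the capacity math there is just stacks x layers x die density. A quick sketch, assuming the published per-layer densities (2 Gbit for HBM1, 8 Gbit for HBM2):

          # HBM capacity = stacks x layers ("-Hi") x Gbit per layer / 8
          GBIT_PER_LAYER = {"HBM1": 2, "HBM2": 8}

          def stack_gb(gen, layers):
              return GBIT_PER_LAYER[gen] * layers / 8.0

          print(4 * stack_gb("HBM1", 4))  # Fury X today: 4 stacks of 4-Hi ->  4.0 GB
          print(4 * stack_gb("HBM2", 4))  # same 4 stacks on HBM2          -> 16.0 GB
          print(4 * stack_gb("HBM2", 8))  # 8-Hi stacks                    -> 32.0 GB
          print(2 * stack_gb("HBM2", 4))  # two 4-Hi stacks for 8 GB cards ->  8.0 GB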

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by boltronics on Saturday October 17 2015, @06:01AM

        by boltronics (580) on Saturday October 17 2015, @06:01AM (#250988) Homepage Journal

        Also, this is in Australia, so I'm not exactly spoiled for choice. I also had strict requirements for the physical dimensions and weight, and this was the fastest AMD machine I could get for under AU$500 that was suitable. I know for the price I could have gotten something faster, but it would have been too big and heavy to fit in my bike pannier. The one I got is a tight squeeze as it is.

        I think it's fair to say that the people who will drop the kind of cash required for a new Fury will also have no problem affording a 4K monitor. It's true that most people don't need 4 GB of VRAM, but it's also true that most people don't game at those high resolutions. It was well advertised that HBM can help mitigate the need for so much RAM, but enough to get by with half the VRAM of some of the lesser cards in the R9 300 series? That just sounds doubtful.

        I had not read the AnandTech review you quoted, but I'm also somewhat sceptical that this won't be an issue for future games at high resolutions that are better optimised for the Fury architecture. In any case, I feel that putting 8 GB of RAM on the other, lesser 300-series cards was a really bad move purely from a marketing perspective.

        But I agree with you on your other points, and I hope we don't have to wait too long to see what HBM 2.0 brings.

        --
        It's GNU/Linux dammit!
    • (Score: 0) by Anonymous Coward on Saturday October 17 2015, @05:25AM

      by Anonymous Coward on Saturday October 17 2015, @05:25AM (#250981)

      Catalyst is crap everywhere.

  • (Score: 2) by turgid on Sunday October 18 2015, @08:18PM

    by turgid (4318) Subscriber Badge on Sunday October 18 2015, @08:18PM (#251584) Journal

    I seem to remember reading a couple of years ago or so that the PHBs at AMD had the brilliant idea that they could offshore a lot of CPU design work to China, because they figured cheaper and less experienced people could do the work with "automated layout tools." I believe this led to hotter-running CPUs and a lack of direction regarding future designs.

    Back in the day when HP killed the Alpha [wikipedia.org] in favour of the magnificent itanic, there were all kinds of stories about where the Alpha engineers went. I know for a fact that several went to Sun, but people also said that AMD snapped up a bunch of them, and they were responsible for the Athlon [wikipedia.org] (x86-32) and the Opteron [wikipedia.org] (x86-64) when AMD started kicking intel's behind.

    I'm not an AMD fanboy, but I am a loyal customer, and I did work for Sun at the time that the Opteron came out and we put together some boxes for doing Solaris 10 x86-64 ("x64") builds on. I've used Dell servers, Sun servers and workstations, and HP and Dell (x86) workstations. Back in those days a 2.8GHz Pentium IV Xeon was getting its bottom resoundingly spanked by a 1.6GHz Opteron. And the intel boxes didn't scale linearly with multiple CPUs. More than two CPUs in an intel box was a waste of money and power. Opteron and UltraSPARC [wikipedia.org] did scale, though.

    Also, in my experience, AMD CPUs/systems feel better when heavily loaded, i.e. the multi-tasking seems better. I put that down to a better cache/MMU/TLB/interconnect architecture than the intel stuff. Up until a year ago I was using a lot of intel i7 systems with stupidly fast (expensive) RAM running 64-bit Linux of various flavours, and I wasn't that impressed.

    AMD brought out Hypertransport [wikipedia.org] (similar to the Cray/SGI/Sun interconnect) with the Opteron in 2003. I think it was five more years before intel Quickpath [wikipedia.org] came along...

    I've been buying AMD CPUs for my own machines since 1999 (before that I used to believe the pro-intel FUD): K6-2/400, K6-2/500, Athlon XP 2000+, Athlon 64 3200+(?) at 2GHz, dual-core Athlon 64 (2.6GHz?), Phenom II X4 3.0GHz and finally a Phenom II X6 2.7GHz. We've also got a dual-core AMD laptop from about 3 years back. I've always been pleased with them, and they always perform well. Anecdotally, they hold their own against the recent intel stuff for what I do. I use Linux almost exclusively, so everything I run is compiled with gcc (not intel's C compiler or Microsoft Visual whatever Windows uses) and it's fine. I've bought good motherboards that have lasted years and taken many CPUs.

    So I don't believe the benchmark hype as much as many, and I don't care about binaries compiled with other compilers used when doing the official benchmarks. I care about gcc-compiled binaries on x86-64 Linux.

    I've heard it said that the AMD Bulldozer [wikipedia.org] ("Faildozer") design is very reminiscent of the Alpha 21264 [wikipedia.org], which was a great idea back in the late '90s. I planned to buy one this year, but events overtook me and the money had to be used for other things; my 6-core 2.7GHz machine is still plenty fast enough anyway. Even despite the negativity surrounding this CPU, it's actually not that bad, and is very good for the price.

    The Zen [wikipedia.org] architecture looks really exciting, and I will definitely get one (after the lunatic fringe have found the bugs in the first batch...). Sixteen real cores on a die looks amazing, especially if I can afford one for home.

    My phone has a quad-core ARM in it and I have a Raspberry Pi model B (ARM) running Slackware on my LAN which I use as a compilation target for some home-made software. ARM is definitely still rising, so AMD is backing a winner there.

    Finally, my Athlon XP 2000+ (running Slackware) from 2002 is still going strong as a printer server and a 32-bit compile target (I love ssh). It's running a binary-only driver for my laser printer.