
SoylentNews is people

posted by azrael on Sunday August 03 2014, @11:01AM   Printer-friendly
from the what-is-this-L-A-N-party-you-mention? dept.

Advanced Micro Devices wants to help gamers build cheaper, smaller desktops through new processors the company started shipping on Thursday. AMD's A10, A8 and A6 chips -- code-named Kaveri -- are targeted at small-form-factor and larger desktops starting at US$399, said Adam Kozak, AMD's desktop product marketing manager.

The chips have up to eight integrated graphics cores and are capable of rendering 4K video, which has a resolution of 3840 x 2160 pixels, four times that of traditional high-definition video. Most gaming titles can be played on desktops with the chips, Kozak said.

Users could build quiet, cool systems that could be easily carried to LAN parties, a reference to gatherings of gamers competing in multiplayer games, Kozak said. "Lot of gamers are looking at these systems," Kozak said. "It's quiet, it's cool."

But enthusiasts won't be able to build desktops as small as Intel's NUC (Next Unit of Computing) mini-desktops, which are small PCs that can be held in one hand, Kozak said.

The A10, A8 and A6 chips have the same GCN (Graphics Core Next) graphics technology used in Microsoft's Xbox One and Sony's PlayStation 4 gaming consoles. The chips are compliant with AMD's Mantle technology, which, like Microsoft's DirectX, helps improve video effects in PCs.

  • (Score: 3, Informative) by jasassin on Sunday August 03 2014, @12:54PM

    by jasassin (3566) <jasassin@gmail.com> on Sunday August 03 2014, @12:54PM (#76889) Homepage Journal
    --
    jasassin@gmail.com GPG Key ID: 0xE6462C68A9A3DB5A
    • (Score: 5, Interesting) by visaris on Sunday August 03 2014, @01:30PM

      by visaris (2041) on Sunday August 03 2014, @01:30PM (#76891) Journal

      The AMD chips are impressive. In CPU-only loads, the AMD A10-7850 beats out the Core i3 in most cases. Further, let's not forget that the entire point of this chip is decent GPU performance as well. Even the review you posted yourself confirms this. On page 6 (somehow I doubt you managed to read that far...) the results clearly show the new AMD chip beating _everything_ else in graphics performance, except for one of the Core i7 chips that costs at least 1.6x as much (not even counting the fact that one has to buy an extra GPU to pair with that Intel chip, while AMD has the integrated GPU). Intel costs over 1.6x the price in that case and doesn't get 1.6x the performance either. Page 7 doesn't look much better for Intel: AMD is again beating out the much more expensive Intel product in all but one or two tests.

      If you look at the price/performance ratio, AMD is doing very well. For those of us who have followed these two CPU vendors for some time, this does not come as a surprise. In the consumer space, AMD has always been where one goes to get good value, and Intel is where one goes to waste money on the size of one's e-peen...

      • (Score: 4, Interesting) by Hairyfeet on Sunday August 03 2014, @02:36PM

        by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Sunday August 03 2014, @02:36PM (#76903) Journal

        I would also point out that you can't trust the benches unless they list which compiler they used, because if it's compiled with ICC it's as rigged as quack.exe [theinquirer.net] and worthless. If you test AMD chips with real-world programs (like I have at the shop) you'll find that in most tasks you are looking at a couple of percentage points [youtube.com], with the highest-end Intel chips only finishing tasks around 15% faster than the top-end AMD. And when you consider that the Intel is 300% more expensive? The bang for the buck is still clearly in the AMD camp.

        That said, what I don't like about the current AMD roadmap is that it tops out at quad core, which I think is a serious mistake, as having more cores at a given price was a big advantage for AMD. If the roadmap leaked online is to be believed, the hexa- and octocore chips are the last for AM3+, and when they are gone, so too are the hexa- and octocore options. Sure, at this point quad core is the standard in gaming, but most people use their PCs for more than gaming, and having more real cores to throw at tasks is really nice. But for your everyday tasks? The new APUs are great, and they have chips for every market. If you need to get rid of some P4s in your office, the quad Jaguar boards turn a power hog into a power sipper (they also make a great media tank), and if you want to game, the A series will let you build a cheap system that can game at lower settings OOTB while still giving you the ability to add a discrete card later.

        Oh, and for those that don't know: if you do go for a discrete card later, get an AMD discrete, because thanks to AMD's ZeroCore Power, which is supported on any of the GCN chips, the card can turn itself off when not in use, dropping power down to a couple of watts. When paired with an AMD APU, the APU can do the video decoding, leaving the GPU asleep longer, which means less power usage and heat, which is always a plus. All in all, I've built plenty of FM2/FM2+ systems and several of the new Jaguar APU systems since their release, and the customers are VERY happy with the bang for the buck.

        --
        ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
        • (Score: 0) by Anonymous Coward on Sunday August 03 2014, @09:03PM

          by Anonymous Coward on Sunday August 03 2014, @09:03PM (#76978)

          > I would also point out that you can't trust the benches unless they list which compiler they use

          It is Phoronix. The dude is obsessive about benchmarks.
          He used gcc, with -march=native to compile the binaries for each specific system.

          • (Score: 2) by opinionated_science on Sunday August 03 2014, @09:19PM

            by opinionated_science (4031) on Sunday August 03 2014, @09:19PM (#76981)

            It would be nice to see some LINPACK or other scientific benchmarks, which are objective tests of floating-point performance.

            These tests are far more portable, and they also allow comparison to the supercomputers being built out of certain combinations of components.

            I'm waiting for a 10 TFLOPS PC so I can run gromacs at > 10 us/day....!

            • (Score: 2) by Hairyfeet on Monday August 04 2014, @12:53AM

              by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Monday August 04 2014, @12:53AM (#77039) Journal

              If someone wants to do some truly fair benchmarking, I'd say the easiest way would be to download the source for whatever programs they wish to use for the benchmark and then use the AMD compiler. Unlike the Intel compiler, the AMD compiler (which is based on GCC and has source available) merely adds support for the latest CPU extensions like SSE4 and is CPU-agnostic. It's been probably a year and a half since I looked at GCC, but last I checked it was a version or two behind when it came to CPU extensions (which is understandable, seeing as how it supports multiple arches), which makes the AMD compiler probably the best and certainly the most CPU-agnostic option for x86-64, so it would be perfect for the task.

              But until someone does that, frankly the only real way to test is to do as in the video I linked to: simply place an AMD and an Intel system side by side with as close to identical parts as possible, run real-world programs on both, and see the results first hand. I have done this several times at the shop when an Intel system got traded in, and from those tests I learned TWO important things...

              1. CPUs are so insanely overpowered now that it's almost never the CPU that is the bottleneck. I have placed unlocked Athlon quads and Phenom II X4s and X6s against the latest FX and A series, and C2Ds and C2Qs against the latest i series, and honestly? The scores are DAMN close, really damned close, close enough that I seriously doubt anybody could tell the difference if I put all these systems behind a curtain on a KVM switch.

              2. Most importantly for my customers and family: the amount of performance gained by using Intel is NOT WORTH THE EXTRA MARKUP you pay for the Intel over the AMD, not even close. I don't have time to Google it, but Tom's Hardware did a test of C2Ds and C2Qs versus the latest i chips and found the same thing: the performance gain was 10% at best while the cost was as much as 300% more, and it was the same story with the Athlons and Phenoms versus the FX and A series.

              So unless you are one of the 3% of PC users doing insanely CPU-bound tasks that require every cycle you can squeeze (like the guy I spoke to who was doing high-speed crash physics simulations for guardrail design), you will end up with a more powerful system overall if you buy AMD and use the money you save to add a faster GPU/RAM/SSD etc. I can vouch for this, as my X6 is 5 years old now and still plays the latest games and just chews through A/V tasks like a hot knife through butter.

              --
              ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
              • (Score: 2) by opinionated_science on Monday August 04 2014, @08:47PM

                by opinionated_science (4031) on Monday August 04 2014, @08:47PM (#77353)

                Biophysics could use every CPU cycle on Earth for the next 10^9 years. There is no such thing as "too much CPU", just impractical computer networking!!!

                Seriously, there are a lot of scientific applications that have the same constraints as many games (e.g. physics). There are some key density thresholds that might be achieved in the near term as CPUs and GPUs become more tightly coupled; AMD seems to have a plan, Intel sort of has MIC, and NVidia is playing too.

                Predictions are difficult, especially about the future...

                • (Score: 2) by Hairyfeet on Monday August 04 2014, @11:22PM

                  by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Monday August 04 2014, @11:22PM (#77397) Journal

                  If that is the case, then frankly you should be using an AMD workstation with a 2- or 4-socket G34 board, as you'll be able to add more cores than Intel at a given price. If you are using programs that aren't compiled with the cripple compiler, you'll find that AMD is on average within single digits of Intel performance-wise, which means having 64 AMD cores versus 48 Intel cores should give you a pretty decent advantage, even if Intel cores are 10% faster apiece.

                  --
                  ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
        • (Score: 2) by sjames on Monday August 04 2014, @04:43AM

          by sjames (2882) on Monday August 04 2014, @04:43AM (#77096) Journal

          I am all too aware that icc cheated with the cripple AMD function, and that they have been ordered to stop doing that, but I can't find any information on if/when they have actually done that and which versions of icc are fixed. Any idea?

          • (Score: 2) by Hairyfeet on Monday August 04 2014, @07:45AM

            by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Monday August 04 2014, @07:45AM (#77123) Journal

            The answer is simple... they didn't. All they did was add a line to the documentation saying, in effect, "this compiler isn't suitable for non-Intel processors", and because AMD can't afford another round of lawsuits and the DoJ doesn't have teeth anymore, that's it. The only hope now is that the EU puts the smackdown on Intel, because it's obvious the DoJ is too weak to do squat anymore.

            --
            ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
            • (Score: 2) by sjames on Monday August 04 2014, @03:50PM

              by sjames (2882) on Monday August 04 2014, @03:50PM (#77245) Journal

              I was afraid that would be the case. The really sad part is the way a bunch of people who should know better have fallen for the benchmarks compiled with ICC.

              Fingers crossed for the EU then.

              • (Score: 2) by Hairyfeet on Monday August 04 2014, @06:13PM

                by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Monday August 04 2014, @06:13PM (#77295) Journal

                This is why I hope you will do as I do and continue to point out the rigging whenever and wherever benches are brought up, because if you run tests without the rigged compiler? The results are VERY different, with AMD in most cases staying within single-digit range of most Intel offerings. If someone wins fair and square, like Intel with the P3 over the K5 or AMD with the Athlon versus the P4? Then I wish them nothing but good fortune. It's when they start rigging the market, like Intel bribing OEMs to take the P4 over the Athlon, or rigging their compiler and then paying benchmark companies to use it (as was recently found to be the case with Cinebench), that I start throwing the red flag.

                BTW, you may hear from Intel apologists that "oh, Intel just knows their arch better, so it's only natural that their compiler runs better on their chips", but we already have the smoking gun that proves without a doubt this is a lie... the Pentium 3. The year before Intel released the cripple compiler, the Pentium 3 was curbstomping P4s that had as much as 40% higher clocks, beating them by over 30% on pretty much every benchmark... can you guess what happened when Intel released the cripple compiler and started paying benchmark companies to use it? Yep, the very next year the same P3s were pitted against the same P4s, and the P4s were "winning" by over 30% across the board. If anybody wants to see how badly the compiler rigs things, all you have to know is that with an ICC-rigged program, a first-gen Intel Atom will beat a third-gen AMD E-450, even though the E-450 is 25% faster, is out-of-order compared to the in-order Atom, and has nearly 30% higher clocks AND turbo clocking.

                The reason is that ICC will look at the CPUID, and if it doesn't return Intel, it throws ALL math functions into x87 mode... as in the 80487. That's right: it will force any AMD chip to use the old 487 math path, which, just FYI, was deprecated as legacy back in 1998. Naturally this ties a boat anchor to any AMD chip, since it has to spend several cycles processing math that should be done in half a cycle, thanks to the lack of SSE. So you can see it's not just crippled; the code ICC puts out is broken at the most fundamental level when run on anything but Intel-approved chips.

                --
                ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
                • (Score: 2) by sjames on Monday August 04 2014, @09:13PM

                  by sjames (2882) on Monday August 04 2014, @09:13PM (#77364) Journal

                  Even more damning for Intel, if you defeat the check for GenuineIntel with a binary patch, suddenly the AMD looks much better.

                  You will be happy to know, I am sometimes called upon to make recommendations for cluster supercomputers and have frequently explained the whole benchmark debacle to people, influencing CPU and compiler selection. It gets hard to stay non-political in the face of the DOJ, courts, FTC, and on and on failing utterly to do more than shake their finger at Intel.

                  • (Score: 2) by Hairyfeet on Monday August 04 2014, @11:15PM

                    by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Monday August 04 2014, @11:15PM (#77396) Journal

                    Well, with clusters AMD does even better, as not only is AMD opening up their CPUs AND their GPUs (Intel has been buying GPUs from PowerVR, which is notorious for how lousy its support for open standards is), but they are the only one building chips with both x86 and ARM cores. This, combined with their new HSA (which will allow a programmer to simply write a program and have the chip divide the work between CPU and GPU depending on which would do the task faster), should give AMD some pretty big advantages in the cluster market.

                    It is just a shame that regulatory capture and outright bribery have made the DoJ a toothless org, as IMHO what Intel is doing makes the MSFT of the 90s look like the Care Bears. You have OEMs outright admitting under oath that Intel paid them kickbacks to not sell AMD chips at a time when the Athlon was clearly superior in every metric to the P4; you have Intel rigging their compiler and then bribing benchmark companies, with everything from increased advertising to corporate sponsorship, to use the rigged compiler; you have Intel refusing to let Nvidia access their chips in order to kill Nvidia's chipsets so Intel could take that market for themselves... you have a looong history of flagrant disregard for antitrust laws and pretty blatant market rigging and collusion, yet our DoJ won't lift a single finger to do anything about it. If the EU doesn't step in, Intel can continue to rig the tests, rig the market, and profit handsomely from these illegal acts.

                    --
                    ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
  • (Score: 1) by doublerot13 on Sunday August 03 2014, @02:10PM

    by doublerot13 (4497) on Sunday August 03 2014, @02:10PM (#76897)

    This is good to see. I think I still prefer the i7-4770R in the GB-BXi7-4770R. Iris Pro 5200 goes a long way.

  • (Score: 1, Interesting) by Anonymous Coward on Sunday August 03 2014, @07:08PM

    by Anonymous Coward on Sunday August 03 2014, @07:08PM (#76946)

    I'm still looking to get a board with:
    AMD Opteron (Kyoto) / Jaguar (4-core)

    X1150 (no GPU) 2.0 GHz / 9-17 W
    X2150 (+GPU) 1.9 GHz / 11-22 W

    It was announced a year ago? Short of getting an HP Moonshot rack, I cannot seem to find any normal mainboard with them.
    Where are these chips hiding? Anyone know?

  • (Score: 0) by Anonymous Coward on Sunday August 03 2014, @08:25PM

    by Anonymous Coward on Sunday August 03 2014, @08:25PM (#76968)

    The chips have up to eight integrated graphics cores and are capable of rendering 4K video, which has a resolution of 3840 x 2160 pixels, four times that of traditional high-definition video

    4K is a digital cinema format: 4096 pixels wide, at a 2.39:1 CinemaScope aspect ratio. UHD is 3840 x 2160 at an aspect ratio of 16:9; it is twice the linear resolution of HD, and twice the linear resolution equals four times as many pixels.

    Is it too much to ask that geeks get the basic maths right?

  • (Score: 2, Interesting) by Walzmyn on Sunday August 03 2014, @09:57PM

    by Walzmyn (987) on Sunday August 03 2014, @09:57PM (#76992)

    Does it ship with better Linux drivers?