Submitted via IRC for TheMightyBuzzard
The race for 4K gaming has begun. PlayStation 4 Pro is in the marketplace, and while success in supporting ultra HD gaming varies dramatically between releases, an established set of techniques is already capable of effectively serving 4K resolution with a comparatively modest level of GPU power. In the wake of its E3 2016 reveal of the new Project Scorpio console, Microsoft began to share details with developers on how it expects to see 4K supported on its new hardware. A whitepaper was released on its development portal, entitled 'Reaching 4K and GPU Scaling Across Multiple Xbox Devices'. It's a fascinating outlook on Microsoft's ultra HD plans - and it also reveals more about the Scorpio hardware itself. For starters, Xbox One's contentious ESRAM is gone.
No link provided to the whitepaper referred to in the article.
Related Stories
Microsoft's mid-cycle refresh for the Xbox One, the Xbox One X, has been announced. Graphics performance is quadrupled (and then some) to allow for 2160p gaming:
As far as the hardware itself goes, thanks to Microsoft's ongoing campaign, we already know the bulk of the details of the console. The 16nm SoC at the heart of the new Xbox One design is meant to be significantly more powerful than the original and S versions of the Xbox One, vaulting MS from having the least powerful console to the most powerful console. All told, the Xbox One X will offer almost 4.3x the GPU compute throughput of the Xbox One S, while the CPU cores have received a healthy 31% clockspeed boost (Interesting aside: Microsoft is still not calling it Jaguar, unlike the XB1/XB1S). The memory feeding the beast has also gotten a great deal faster, with Microsoft switching out their 8GB of DDR3 for a large and very fast 12GB of GDDR5, with a combined memory bandwidth of 326GB/sec.
AKA the X-OX. Can it run NetHack in 4K?
Previously: PlayStation Neo and Xbox "Project Scorpio" to Bring 4K Resolution and VR to Console Gaming
The Race for 4K: How Project Scorpio Targets Ultra HD Gaming
More Details About the "Project Scorpio" Xbox One Successor
Microsoft has detailed the system-on-a-chip powering its refresh of the Xbox One (the company's answer to the PS4 Pro, which was released in November 2016):
Today at the Hot Chips conference, the company released schematics and details about the internal workings of the SoC that is set to power the upcoming 4K-ready gaming console. We already knew much of what the company discussed at the Hot Chips presentation, including the core count, clock speed, and bandwidth specifications of the CPU, GPU, and memory used in the system, but now we know how the components interact with each other.
[...] The Scorpio Engine is a monster of an SoC developed by AMD, featuring a 359mm2 die with seven billion transistors built on TSMC's 16nm FinFET+ technology. The GPU compute units (the yellow section of the layout) consume most of the large die's surface area. The Scorpio Engine's GPU components include four shader arrays that each offer 11 compute units. Microsoft said that one compute unit per shader array is left inactive to compensate for yield problems that may occur.
The right side of the SoC die features the two four-core 2.3GHz CPU clusters (represented in dark green on the diagram). A pair of cache controllers flanks each CPU cluster. Twelve GDDR5 memory controllers line the top, bottom, and right edges of the SoC. The retail Xbox One X features 12GB of memory. Developer kits offer 2GB per channel for a total of 24GB system memory.
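The 326GB/sec figure from the earlier story follows from those twelve memory controllers. A quick sanity check, assuming the commonly reported 32-bit GDDR5 channels running at 6.8Gbps per pin (neither figure appears in the quote above, so treat both as assumptions):

```python
# Hypothetical sanity check of the Xbox One X memory bandwidth figure.
# Assumed, not from the quoted article: 32-bit channels, 6.8 Gbps per pin.
controllers = 12
bits_per_channel = 32
pin_rate_gbps = 6.8  # effective GDDR5 data rate per pin

bus_width_bits = controllers * bits_per_channel          # 384-bit bus
bandwidth_gbs = round(bus_width_bits / 8 * pin_rate_gbps, 1)

print(bus_width_bits)  # 384
print(bandwidth_gbs)   # 326.4 -- matching the quoted ~326GB/sec
```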
[...] When Microsoft announced Project Scorpio, the company boasted that the new console would be the first to deliver 6Tflops of 32-bit floating point performance. During the Hot Chips presentation, the company said that it managed to squeeze out "just a hair more than 6Tflops." Each of the 40 compute units can perform 128 floating point operations per clock. Multiplied by the 1,172MHz core clock, that's a total of 6,000,640 Flops. [sic - see comment below -- Ed.(FP)]
[...] The new console features an eight-core Jaguar-derived CPU like the one found in the Xbox One S console, but it operates 31% faster than the previous version. Microsoft said that most of the CPU performance optimizations revolve around memory latency improvements of the main memory controllers (up to 20%). The company attributes the improvement to tripling the available memory channels and increasing the number of main memory banks by a multiple of six. It also credits the rearrangement and enlargement of the TLB cache, and the introduction of a redesigned and larger Page Descriptor Cache, which "caches information about nesting page translations" and improves performance by "up to 4.3%."
The image in question from the article.
Previously: PlayStation Neo and Xbox "Project Scorpio" to Bring 4K Resolution and VR to Console Gaming
The Race for 4K: How Project Scorpio Targets Ultra HD Gaming
More Details About the "Project Scorpio" Xbox One Successor
Xbox One X, Formerly Project Scorpio, to be Released November 7th for $499
(Score: 2) by takyon on Sunday January 29 2017, @06:58PM
I like unnecessary progress for profit's sake just fine, as long as the hardware gets better. ZEN ZEN ZEN!
What is the 32 MB limit exactly?
Basically just what those 900p titles needed to jump to 2160p (except that they say later on that a 5.76x increase would reflect the pixel increase, but close enough). Obviously, other tricks can help hit the higher targets, and some of them are mentioned. I guess the ghetto 4K will be... 1800p (3200×1800).
It's interesting that you have 3 paths for using this increased power; try to hit 4K resolution, try to double framerate to about 60 FPS, or try to increase framerate even further to the 90 FPS desired for VR. And it looks like none of them will be completely achievable, with 2160p30 or 1080p60 possible only in some games or with some detail thrown away.
Consoles with more RAM than my laptop? It's almost time to upgrade.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Interesting) by VLM on Sunday January 29 2017, @09:11PM
What is the 32 MB limit exactly?
The PS4 memory design is medium-speed main memory and no cache. The Xbox One memory architecture is slow main memory and 32 megs of very fast on-chip cache.
There's a lot of hand waving about memory bandwidth and what should vs should not be enough but in practice cache management seems to be a bottleneck somewhere above 720p and below or around 1080p much less 4K resolutions.
There's not a hell of a lot of difference in specs between the two systems other than the memory architecture.
And software devs find it easier to program around a constant medium speed of memory access, PS4 style, than around access that is extremely fast when it hits the cache and slowish otherwise.
There's an infinite amount of internet BS on the topic of what MS should do to fix it: invest an extra $5 in making main memory faster, perhaps as fast as the PS4's, and tell devs not to sweat it; or invest $XXXXXXXXXX in improving the system-on-chip to have "enough cache memory that even dumb programmers don't reach the limit". And of course there are suggestions that the Xbox One just sux, or that maybe we're in a post-resolution world, and all the usual gamer drama, some of which might even be true.
Usually these kinds of "accidents" don't happen at the EE technical level. Someone higher up made an application-level decision that the average gamer in 2017 wasn't going to shovel as much raw memory bandwidth as predicted. Maybe they expected Xbox games to look more like Wii U games, or everyone to be playing Minecraft instead of FPS titles; I donno the details, but it was obviously a higher-level mgmt mistake, not an EE-level mistake.
I have no dog in the fight; I own neither. But I've always been interested in high performance computing, and this is a classic example in that field of ur memory architecture sux (at least WRT the Xbox One playing FPS sequels).
(Score: 2) by opinionated_science on Sunday January 29 2017, @09:22PM
I only have experience with scientific viz screens, which can be physically very large and hi-res.
Is there an established size threshold for games at which the size (not just the pixels, but the physical screen dimensions) improves the experience?
My impression of the room sized viz lounges are that even *without* the 3d glasses (which were available), the illusion of a huge screen really helped immersion.
A brief look at Google tells me that buying a 55in 4K screen seems doable; is this preferable to monitors?
(Score: 2) by takyon on Sunday January 29 2017, @09:25PM
One thought is that 4K resolution may require less anti-aliasing.
(Score: 0) by Anonymous Coward on Monday January 30 2017, @03:37AM
In the short term that's probably going to be the biggest difference. In the long term as memory and disk space get to be larger, they'll probably be able to make better use of the extra pixels.
But, at 720p, I found that the aliasing was the second biggest ruiner of immersion just behind the rendition of human expressions. Just about everything else looked great, but then you'd see things like electric wires between poles getting aliased and similarly any plants and it would ruin the immersion if you weren't engaged in the story.
(Score: 2) by VLM on Sunday January 29 2017, @09:43PM
Google image search for something like monitor size vs resolution. You'll get the same endlessly stolen copyrighted graph.
I'm not a big TV watcher. My living room TV is small and the room is normal size, so 480p content looks the same as 4K; until people start getting bionic eyeballs it won't matter. Color rendition and black level are way more important for quality viewing at my house. Glare reduction with a nice matte finish would help a lot more than higher res.
For your 55 inch 4K example, the graph shows that the window would be 3 to 7 feet. More than 7 feet from a 55 inch screen, unless you have bionic eyes, you won't be able to see the difference between 480p and 4K; closer than 7 feet, 4K looks better than 1080p. If you stood 3 feet away from the five-foot display, you'd start being able to see 4K pixels, and presumably some kind of 8K or 16K would look better.
WRT visual retina resolution, there is more to life than just seeing a pixel or whatever. Staring into an emacs screen of text, my eyes hurt from anti-aliasing (something to do with tricking the focusing algorithm in my eyes), so I prefer all the artistic fuzzing and blurring technologies shut off, with the result that higher res means better font quality. All of which boils down to: a crappy action movie has much lower eyeball performance requirements than a screen full of emacs and source code. Going the other direction, my sight is 20/20, but frankly most people's is not, and 4K is less important for them; my MiL may as well stare into a one-pixel flashlight for all it matters. So the "4K size graphs" are fine for relative comparison, but using a ruler is probably excessive precision.
(Score: 2) by opinionated_science on Monday January 30 2017, @02:08AM
Well at work, we're looking at 55in/4K screens for viz, since they have chromecast built in. Good for phones and tablets etc...
At home, until I get some decent BW, I'm not so bothered, but I'm tempted by Chromecast and upscaled DVDs....
(Score: 2) by Celestial on Monday January 30 2017, @09:09PM
I regularly re-watch "Babylon 5." It is my favorite non-Star Trek television show. Unfortunately, it is stuck in 480i Hell as Warner Bros. sees no point in spending the money remastering it to 1080p or Heaven forbid 4K Ultra HD. On a 46" 1080p television, the close-up shots look fine, but on shots where the camera is scanned back and there's a lot going on, it looks really fuzzy. On a 60" 4K Ultra HD television, forget it. The close-up shots look fuzzy, and the scanned back camera shots look like a smear. It's pretty much unwatchable.
(Score: 2) by VLM on Sunday January 29 2017, @09:27PM
For starters, Xbox One's contentious ESRAM is gone.
This was inevitable, wasn't it? It wasn't that much faster than commodity external DRAM, but it burned up a lot of die space and thermal budget, and of course it's expensive to put that cache on chip. So now, to meet budget, you have crap-slow external memory and programmer headaches, because the ratio of cache to memory is fixed and likely not ideal for whatever app you're trying to run. The commodity DRAM industry will always outperform your little on-chip cache on average, even if for a short time your on-chip can be faster than commodity DRAM.
It's a sucker bet to "save money" on main memory by making the cache more complicated and expensive than just buying better memory. Been that way since the 70s, maybe late-60s mainframes.
There is this "microcontroller-ization" meme that all devices will have precisely one integrated chip and maybe in decades that'll happen but people have been stuck in the microcontroller meme for decades now and it never happens. Folks who specialize in dram are always going to be better at squirting out bits than you will, even if you put your crappy 32MB or whatever on the same die.
The real world seems to end up with processors and memory both being faster and the architecture between them staying a vaguely constant level of agonizing complexity and extremes of memory architecture end up paying a heavy price in the marketplace, every time.
(Score: 2) by EvilSS on Sunday January 29 2017, @10:20PM
(Score: -1, Flamebait) by Anonymous Coward on Sunday January 29 2017, @10:01PM
I am thinking about upgrading my Radeon 2400, which plays Sauerbraten just fine on a 1024x768 CRT, all because the updated proprietary driver does not support it anymore. Richard Stali.. Stallman was right again.