

posted by martyb on Wednesday June 26 2019, @03:53AM
from the all-the-better-to-display-double-wide-and-double-tall-characters dept.

Intel beats AMD and Nvidia to crowd-pleasing graphics feature: integer scaling

Intel Gen11 and next-gen graphics will support integer scaling following requests by the community. Intel's Lisa Pearce confirmed that a patch will roll out sometime in August for Gen11 chips, adding support for the highly requested functionality in the Intel Graphics Command Center, with future Intel Xe graphics expected to follow suit in 2020.

Enthusiasts have been calling for the functionality for quite some time, even petitioning AMD and Nvidia for driver support. Why, you ask? Essentially, integer scaling is an upscaling technique that multiplies each pixel by a whole number: going from 1080p to 4K, every 1080p pixel becomes a 2x2 block of four identical pixels. Because the resulting 4K pixel values are identical to the original 1080p values, the image keeps its clarity and sharpness.
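
To make the idea concrete, here is a minimal sketch of integer scaling in C (purely illustrative, not Intel's driver code; the function name and buffer layout are assumptions): each destination pixel maps straight back to exactly one source pixel, so no new colour values are invented.

    #include <stddef.h>
    #include <stdint.h>

    /* Replicate every source pixel into a factor x factor block.
     * dst must hold (src_w * factor) * (src_h * factor) pixels. */
    void integer_scale(const uint32_t *src, uint32_t *dst,
                       size_t src_w, size_t src_h, size_t factor)
    {
        size_t dst_w = src_w * factor;
        size_t dst_h = src_h * factor;
        for (size_t y = 0; y < dst_h; y++) {
            for (size_t x = 0; x < dst_w; x++) {
                /* Each destination pixel maps back to exactly one source pixel. */
                dst[y * dst_w + x] = src[(y / factor) * src_w + (x / factor)];
            }
        }
    }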

Current upscaling techniques, such as bicubic or bilinear filtering, interpolate colour values for pixels, which often renders lines, details, and text blurry in games. This is particularly noticeable in pixel art games, whose art style relies on a sharp, blocky image. Nearest-neighbour interpolation does much the same thing as integer scaling, but it also operates at non-integer ratios, where pixels end up unevenly sized and image quality suffers in a similar way.
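
For comparison, this is roughly what bilinear upscaling does: each output pixel is a weighted average of the four nearest source pixels, which is exactly where the in-between, blurry colour values come from. A single-channel sketch, not any vendor's implementation:

    #include <stdint.h>

    /* Sample a greyscale image at fractional coordinates (fx, fy) by blending
     * the four surrounding source pixels; the blend produces colour values that
     * were never in the original image, hence the softened edges. */
    static uint8_t bilinear_sample(const uint8_t *src, int src_w, int src_h,
                                   float fx, float fy)
    {
        int x0 = (int)fx, y0 = (int)fy;
        int x1 = x0 + 1 < src_w ? x0 + 1 : x0;   /* clamp at the right edge  */
        int y1 = y0 + 1 < src_h ? y0 + 1 : y0;   /* clamp at the bottom edge */
        float tx = fx - (float)x0, ty = fy - (float)y0;

        float top    = src[y0 * src_w + x0] * (1.0f - tx) + src[y0 * src_w + x1] * tx;
        float bottom = src[y1 * src_w + x0] * (1.0f - tx) + src[y1 * src_w + x1] * tx;
        return (uint8_t)(top * (1.0f - ty) + bottom * ty + 0.5f);
    }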

April AMD thread.

It's baffling that this feature hasn't been available for years.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by Kilo110 on Wednesday June 26 2019, @03:57AM (5 children)

    by Kilo110 (2853) Subscriber Badge on Wednesday June 26 2019, @03:57AM (#859985)

    This sounds incredibly simple to implement. Am I missing something?

  • (Score: 2, Touché) by Anonymous Coward on Wednesday June 26 2019, @04:06AM

    by Anonymous Coward on Wednesday June 26 2019, @04:06AM (#859988)

    yes. money.

  • (Score: 3, Informative) by shortscreen on Wednesday June 26 2019, @05:47AM (3 children)

    by shortscreen (2252) on Wednesday June 26 2019, @05:47AM (#860006) Journal

    Integer upscaling is a special case of nearest-neighbor, when the dimensions of the source image divide evenly into those of the destination image. This is what you get by using the StretchDIBits call in every version of Windows ever.
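
    Roughly like this, for anyone who hasn't touched GDI in a while (just a sketch; it assumes a 32-bit top-down DIB and an existing window handle, and the wrapper name is made up):

        #include <windows.h>

        /* Blit a low-res framebuffer to a window at an integer multiple via GDI. */
        void present_integer_scaled(HWND hwnd, const void *pixels,
                                    int src_w, int src_h, int factor)
        {
            BITMAPINFO bmi = {0};
            bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
            bmi.bmiHeader.biWidth       = src_w;
            bmi.bmiHeader.biHeight      = -src_h;    /* negative height = top-down rows */
            bmi.bmiHeader.biPlanes      = 1;
            bmi.bmiHeader.biBitCount    = 32;        /* 0x00RRGGBB pixels */
            bmi.bmiHeader.biCompression = BI_RGB;

            HDC hdc = GetDC(hwnd);
            SetStretchBltMode(hdc, COLORONCOLOR);    /* plain replication, no halftoning */
            StretchDIBits(hdc,
                          0, 0, src_w * factor, src_h * factor,  /* destination rect */
                          0, 0, src_w, src_h,                    /* source rect */
                          pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
            ReleaseDC(hwnd, hdc);
        }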

    Upscaling being accelerated by the video driver is also not new. OpenGL 1.x's glDrawPixels gets me the same result on my antique laptop with ATI graphics, more quickly than StretchDIBits. (I don't think Intel supports OpenGL 1.x in their drivers "because old", but AMD/Nvidia still do.)
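
    The GL 1.x version is even shorter (again just a sketch; it assumes a current GL context with default matrices and a viewport already sized src_w*factor by src_h*factor, and the wrapper name is made up):

        #include <GL/gl.h>

        /* glPixelZoom with a whole-number factor makes glDrawPixels replicate each
         * source pixel into a factor x factor block, i.e. integer scaling. */
        void draw_integer_scaled(const void *pixels, int src_w, int src_h, float factor)
        {
            glClear(GL_COLOR_BUFFER_BIT);
            glRasterPos2i(-1, -1);            /* lower-left corner of the viewport */
            glPixelZoom(factor, factor);
            glDrawPixels(src_w, src_h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        }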

    Last but not least, upscaling low resolutions for a high-res display is not new either. VGA has done scandoubling since 1987. However, at some point in the mid '00s, when analog VGA was being replaced by DVI (and the other, DRM-plagued interfaces), GPU makers decided they would upscale low resolutions on the card instead of producing native VGA scanrates, and this is filtered rather than nearest-neighbor. So my guess is that's what Intel is going to do: output 2160p from a 1080p framebuffer, essentially bringing back scandoubling, since 2160p is the new hotness but text is awfully hard to read at 300dpi.

    • (Score: 0) by Anonymous Coward on Wednesday June 26 2019, @08:14AM

      by Anonymous Coward on Wednesday June 26 2019, @08:14AM (#860019)

      Ah... now I see. Being short gets you into this doubling business...

    • (Score: 2) by FatPhil on Wednesday June 26 2019, @04:16PM (1 child)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday June 26 2019, @04:16PM (#860134) Homepage
      Yeah, this is not just an "already solved" problem, it's a "the naive solution being asked for is totally obsolete" problem.
      They're asking for "nearest" here, whilst complaining about bilinear and bicubic:
          https://upload.wikimedia.org/wikipedia/commons/c/ce/Pixel-Art_Scaling_Comparison.png
      Most of the alternatives are superior to "nearest" at achieving what they're actually asking for.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by DeVilla on Saturday June 29 2019, @11:38PM

        by DeVilla (5354) on Saturday June 29 2019, @11:38PM (#861472)

        To be fair though, bilinear & bicubic were both worse in the example you provided. The first 2 rows were pretty bad. There were much better options too if you weren't trying to preserve the blocky nature of the image. The choice then comes down to cost: "nearest" sounds cheap, and it looks good if you want the blocky, pixel art look.