
SoylentNews is people

posted by Fnord666 on Wednesday February 05 2020, @12:09PM   Printer-friendly
from the next-up-the-zapruder-film dept.

Neural Networks Upscale Film from 1896 to 4K, Make It Look Like It Was Shot on a Modern Smartphone:

When watching old film footage that's plagued with excessive amounts of grain, gate weave, soft focus, and a complete lack of color, it's hard to feel connected to the people in the clip, or what's going on. It looks like a movie, and over the years that medium has taught our brains that what they're seeing on screen might not actually be real. By comparison, the experience of watching videos of friends and family captured on your smartphone is completely different thanks to 4K resolutions and high frame-rates. Those clips feel more authentic and while watching them there's more of a connection to the moment, even if you weren't actually there while it was being shot.

[...] L'Arrivée d'un train en gare de La Ciotat doesn't have the same effect on modern audiences, but Denis Shiryaev wondered if it could be made more compelling by using neural network powered algorithms (including Topaz Labs' Gigapixel AI and DAIN) to not only upscale the footage to 4K, but also increase the frame rate to 60 frames per second. You might yell at your parents for using the motion smoothing setting on their fancy new TV, but here the increased frame rate has a dramatic effect on drawing you into the action.
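Conceptually, the pipeline has two stages: spatial upscaling (more pixels per frame) and temporal interpolation (more frames per second). A minimal NumPy sketch of both stages follows; this is a crude stand-in for illustration only — Gigapixel AI and DAIN are learned models, not pixel repetition or linear blending:

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Crude spatial upscale: repeat each pixel factor x factor times."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def interpolate_frames(a: np.ndarray, b: np.ndarray, n: int) -> list:
    """Insert n linearly blended frames between frames a and b.

    A stand-in for motion-aware interpolation like DAIN, which
    estimates per-pixel motion instead of cross-fading.
    """
    return [(1 - t) * a + t * b for t in (i / (n + 1) for i in range(1, n + 1))]

frame0 = np.zeros((2, 2))              # two tiny consecutive frames
frame1 = np.ones((2, 2))
big = upscale_nearest(frame0, 2)       # 2x2 -> 4x4
mids = interpolate_frames(frame0, frame1, 3)  # e.g. 15 fps -> 60 fps
```

The real systems replace both functions with neural networks trained to hallucinate plausible detail and motion, which is why the result looks dramatically better than this kind of naive resampling.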

[...] The results are far from perfect; we're hoping Shiryaev applies one of the many deep learning algorithms that can colorize black and white photos to this film as well, but the obvious potential of these tools to enhance historical footage to increase its impact is just as exciting as the potential for it to be misused.


Original Submission

Related Stories

Facebook Researchers Show Off Machine Learning-Based Upsampling Technique 10 comments

Neural SuperSampling Is a Hardware Agnostic DLSS Alternative by Facebook

A new paper published by Facebook researchers just ahead of SIGGRAPH 2020 introduces neural supersampling, a machine learning-based upsampling approach not too dissimilar from NVIDIA's Deep Learning Super Sampling. However, neural supersampling does not require any proprietary hardware or software to run and its results are quite impressive as you can see in the example images, with researchers comparing them to the quality we've come to expect from DLSS.

Video examples on Facebook's blog post.

The researchers use some extremely low-fi upscales to make their point, but you could also imagine scaling from a resolution like 1080p straight to 8K. Upscaling could be combined with eye tracking and foveated rendering to reduce rendering times even further.

Also at UploadVR and VentureBeat.

Journal Reference:
Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, Anton Kaplanyan. Neural Supersampling for Real-time Rendering. Facebook Research. (DOI: https://research.fb.com/publications/neural-supersampling-for-real-time-rendering/)

Related: With Google's RAISR, Images Can be Up to 75% Smaller Without Losing Detail
Nvidia's Turing GPU Pricing and Performance "Poorly Received"
HD Emulation Mod Makes "Mode 7" SNES Games Look Like New
Neural Networks Upscale Film From 1896 to 4K, Make It Look Like It Was Shot on a Modern Smartphone
Apple Goes on an Acquisition Spree, Turns Attention to NextVR


Original Submission

  • (Score: 3, Funny) by Anonymous Coward on Wednesday February 05 2020, @12:14PM (3 children)

    by Anonymous Coward on Wednesday February 05 2020, @12:14PM (#954189)

    1896 is an odd resolution for video. I wonder what device it was videoed with? Perhaps some off-brand Chinese video camera from Walmart?

    Upscaling to 4k is good though. I wonder if the aspect ratio is preserved.

    • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @05:19PM (2 children)

      by Anonymous Coward on Wednesday February 05 2020, @05:19PM (#954301)

      I guess the editor forgot to include an important word like "year" before 1896. It's a steam locomotive filmed in 1896.

      • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @06:30PM (1 child)

        by Anonymous Coward on Wednesday February 05 2020, @06:30PM (#954336)

        AC you replied to here.

        The year is 1896? A.D.? [ibras.dk]

        It's a joke, son. [youtube.com] Pay attention boy!

        • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @07:40PM

          by Anonymous Coward on Wednesday February 05 2020, @07:40PM (#954363)

          AC that replied to you here.
          No, it's 125 B.B. (Before Bernie)

  • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @12:42PM (1 child)

    by Anonymous Coward on Wednesday February 05 2020, @12:42PM (#954190)

    I like the cleaning up that the neural network did, but my eye says that 4K is overkill.
    It actually has artifacts that remind me of video tape, although it's better quality than video tape.

    • (Score: 4, Informative) by zocalo on Wednesday February 05 2020, @01:31PM

      by zocalo (302) on Wednesday February 05 2020, @01:31PM (#954202)
Note that TFA's "original" version (also used on Ars' version [arstechnica.com] of the story) is already considerably cleaned up and gives the impression that the Lumière brothers were much further ahead of their time than they were. Even so, given where the photographic industry was in 1896, the quality of the *actual* original [youtube.com] footage is quite remarkable in its own right, with amazing clarity and tonal range, even by the standards of the then state of the art.

4K might be overkill, but that the original even supports that level of upscaling without serious visual issues is a highly impressive demonstration of the AI algorithm. I'd really love to see one of those split montages with each stage of the refinement shown in a separate strip of the screen (original, cleaned up, upscaled, FPS increased, colourized) to make it clear just how much difference each enhancement pass made; a bit of a shame that Shiryaev doesn't seem to have thought of that.
      --
      UNIX? They're not even circumcised! Savages!
  • (Score: 3, Funny) by wisnoskij on Wednesday February 05 2020, @01:44PM (1 child)

    by wisnoskij (5149) <{jonathonwisnoski} {at} {gmail.com}> on Wednesday February 05 2020, @01:44PM (#954205)

    But can it be used to remove the grain and complete lack of color in art films?

    • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @07:42PM

      by Anonymous Coward on Wednesday February 05 2020, @07:42PM (#954365)

      No, but it can remove warts in pron.

  • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @01:54PM (2 children)

    by Anonymous Coward on Wednesday February 05 2020, @01:54PM (#954208)

    Obviously thinks the Star Wars "special editions" are better than the originals

    • (Score: 4, Interesting) by takyon on Wednesday February 05 2020, @02:37PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday February 05 2020, @02:37PM (#954219) Journal
      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 1) by Ethanol-fueled on Thursday February 06 2020, @01:57AM

      by Ethanol-fueled (2792) on Thursday February 06 2020, @01:57AM (#954552) Homepage

There's a person in a classic video game forum who points out the same analogy, except with audio, using StarFox games as an example. In StarFox 64 all the voice acting has been put through a filter that gives it that scratchy lo-fi sound like you'd hear from people talking on handheld radios.

      In the later StarFox games, that effect is removed, so all the voices sound super hi-fi. The effect is confusing because it gives that face-to-face vibe over distance communication. I guess the latter works well in Star Trek while they're actually hailing and talking face-to-face, but there is nothing better for the combat communication experience than 20th century Earth handheld radio style.

  • (Score: 3, Interesting) by Mojibake Tengu on Wednesday February 05 2020, @01:56PM (4 children)

    by Mojibake Tengu (8598) on Wednesday February 05 2020, @01:56PM (#954209) Journal

What would neural networks say about the religious books they read?

    --
    Rust programming language offends both my Intelligence and my Spirit.
    • (Score: 3, Insightful) by zocalo on Wednesday February 05 2020, @02:58PM (2 children)

      by zocalo (302) on Wednesday February 05 2020, @02:58PM (#954224)
Interesting notion, but I suspect the end result will be either the neural network equivalent of insanity or dying of laughter, depending on how well they handle all the contradictions and totally ludicrous notions. Either way, the blue smoke is probably going to be coming out.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by bzipitidoo on Wednesday February 05 2020, @05:45PM (1 child)

        by bzipitidoo (4388) on Wednesday February 05 2020, @05:45PM (#954315) Journal

        Oh no, I don't think so. Instead, the AI will easily perceive that religion is a human construct, made to suit our needs and tastes. We tend to be too close to it all to have a truly objective view. Though religion seems very irrational, it actually has an appeal to rationality. If some natural force is actually controlled by a god, a being of considerable intelligence and supernatural ability to communicate with humans, then perhaps that god can be reasoned with or appeased or something. For millennia, people have been trying to talk to natural disasters, begging them not to strike or to go elsewhere, offering various sacrifices and such like. "Rain, rain go away. Come again some other day."

        There's no particular reason for the universe to be monotheist, polytheist, or atheist. So why did most people choose polytheism, and then gradually shift to monotheism? That's the way they want the universe to be. Then they put in lots of effort trying to make the facts fit with their decisions.

        Sometimes religion has helped. I have read that we are actually by nature mildly promiscuous. So why do so many practice monogamy? I often imagine that the whole monogamy thing was to keep the young men from killing over the young women, which would weaken the tribe's ability to defend itself from outside attack. Another huge plus is reducing disease transmission. Nice too that it also resulted in some basic fairness, and more genetic diversity than if only the biggest and strongest man fathers all the children while the rest of the men are denied.

        • (Score: 2) by zocalo on Wednesday February 05 2020, @07:11PM

          by zocalo (302) on Wednesday February 05 2020, @07:11PM (#954354)
          Sure, the underlying messages of religions can be very rational once the more theological aspects are stripped away, and it's particularly interesting when you put the evolution of religions alongside the evolution of civilization as you can clearly see how it parallels the accepted social norms of the day and also the level of understanding of how things are believed to work at the physical/chemical/biological level - later religions tend to be progressively less "fanciful" in things that science has provided better answers for. Equally, a lot of the religious prohibitions are sensible advice for the time; many non-kosher/halal foods would have entailed a considerable health risk with the catering standards of the day, for instance.

          The question though would be whether a neural net would be able to grasp things at that level of abstraction and comprehend the underlying messages. Given we don't *really* understand how deep learning networks actually work - e.g. we often can't reliably predict how they are going to respond to a given set of previously unseen inputs - I think we'd struggle to actually train it to do something useful in terms of gaining additional insights into meaningful teachings that we don't already know but, regardless of what happens on that front, I do think the results might be interesting from the perspective of the construction and teaching of deep learning systems.
          --
          UNIX? They're not even circumcised! Savages!
    • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @10:59PM

      by Anonymous Coward on Wednesday February 05 2020, @10:59PM (#954467)

Nothing; you need a concrete termination condition, and your question is abstract. Play them at Go instead.

  • (Score: 3, Insightful) by nitehawk214 on Wednesday February 05 2020, @03:22PM (3 children)

    by nitehawk214 (1304) on Wednesday February 05 2020, @03:22PM (#954232)

This is a bad thing, right? Like adding shitty shaky-cam effects, emphasis on megapickles over actual image quality, vertical video?

    From the article:

    Those clips feel more authentic and while watching them there’s more of a connection to the moment, even if you weren’t actually there while it was being shot.

    Does it, though?

    --
    "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
    • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @03:37PM

      by Anonymous Coward on Wednesday February 05 2020, @03:37PM (#954239)

      That was my thought, but not terribly surprising. I'm just surprised they didn't complain that the film was in landscape rather than portrait.

    • (Score: 2) by nobu_the_bard on Wednesday February 05 2020, @05:07PM

      by nobu_the_bard (6373) on Wednesday February 05 2020, @05:07PM (#954291)

      Huh? I don't get it, how is it more authentic? It seems pretty authentic for when it was made.

    • (Score: 2, Touché) by Ethanol-fueled on Thursday February 06 2020, @01:59AM

      by Ethanol-fueled (2792) on Thursday February 06 2020, @01:59AM (#954554) Homepage

      You didn't like the shaky camera effects in Saving Private Ryan?

  • (Score: 4, Interesting) by bzipitidoo on Wednesday February 05 2020, @05:53PM

    by bzipitidoo (4388) on Wednesday February 05 2020, @05:53PM (#954320) Journal

    I wonder what kind of job this can do on blurry photos?

An interesting manual method I came across to fix motion blur is to "fight blur with blur". I would have thought more blur would just make a blurry photo worse, but it mostly worked. You have to blur just right, though: figure out the direction and size of the original motion blur, then apply motion blur in the opposite direction. The grain in the bricks sharpened up amazingly well. The part that didn't do so well was the creation of heavy shadows. It certainly should be possible to do better, and I expect AI could.
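The principled version of this counter-blur trick is deconvolution with a known kernel. A minimal 1-D Wiener deconvolution sketch in NumPy (illustrative only, not the commenter's exact method; the kernel and noise level here are made-up values):

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Undo a known blur kernel via Wiener filtering in the frequency domain."""
    n = len(blurred)
    K = np.fft.fft(kernel, n)          # kernel spectrum, zero-padded to n
    B = np.fft.fft(blurred)
    # conj(K) / (|K|^2 + noise) inverts the blur where the kernel is strong
    # and backs off where it is weak, instead of amplifying noise.
    H = np.conj(K) / (np.abs(K) ** 2 + noise_power)
    return np.real(np.fft.ifft(B * H))

signal = np.zeros(32)
signal[10] = 1.0                        # one sharp feature
kernel = np.ones(5) / 5                 # 5-sample linear motion blur
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, 32)))
restored = wiener_deconvolve(blurred, kernel)
```

In practice the hard part is exactly what the comment describes: estimating the direction and length of the motion blur; once the kernel is known, the inversion itself is routine.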

  • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @06:19PM (1 child)

    by Anonymous Coward on Wednesday February 05 2020, @06:19PM (#954329)

    In general I scoff at vinyl and tube amps and 24p and I'm certainly not a nostalgia-ist, but to me the upscaled version just looks plain bad. I see lots of sharpening artifacts like an overcompressed jpg and the motion looks wonky. Yes it is sometimes sharper but unevenly so.

    • (Score: 0) by Anonymous Coward on Wednesday February 05 2020, @06:51PM

      by Anonymous Coward on Wednesday February 05 2020, @06:51PM (#954347)

It's probably not that different from digital remastering, but I think when a human actually does more of the work it probably looks better (and takes longer and costs more, since you have to pay someone to do it).

      The thing is you can't add quality to a bitmap. You can extrapolate what you think those pixels would be if they did exist in a higher resolution but such extrapolation is going to have differences from what the actual higher quality image would be.
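The point about extrapolation can be shown in a few lines of NumPy: once fine detail has been averaged away, a round trip through a lower resolution cannot recover it, only estimate it (a hypothetical toy example, not from the source):

```python
import numpy as np

# A 4x4 checkerboard: fine detail at the single-pixel level.
orig = np.indices((4, 4)).sum(axis=0) % 2

# Downscale by averaging 2x2 blocks -- the detail is destroyed here...
small = orig.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# ...then "upscale" back by pixel repetition. The checkerboard is gone:
# every pixel comes back as 0.5, an estimate rather than recovered detail.
restored = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
```

A learned upscaler would fill those pixels with *plausible* detail instead of flat grey, but it is still guessing, which is the commenter's point.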

  • (Score: 2) by dltaylor on Wednesday February 05 2020, @08:07PM

    by dltaylor (4693) on Wednesday February 05 2020, @08:07PM (#954376)

    Shooting a film or still image properly in greyscale is an art of its own. The lighting, camera placement, performers' makeup, set dressing and other factors all contribute to setting the mood of the film or still image, and are usually quite different from how they would be done for color. Turning these films/images into color loses all of that work and those qualities.

  • (Score: 2) by VLM on Thursday February 06 2020, @01:08PM

    by VLM (445) on Thursday February 06 2020, @01:08PM (#954722)

They seem to have missed depth-of-focus differences, which is the main difference between quarter-inch-aperture cell phone cams and half-foot-wide-lens cinematic equipment.

That's the main difference that immediately jumps out at me, aside from the pitiful low-light performance of cell phones.

  • (Score: 0) by Anonymous Coward on Thursday February 06 2020, @01:57PM

    by Anonymous Coward on Thursday February 06 2020, @01:57PM (#954735)

    Am I the only one that felt like the people had been filmed on a green screen (each separately) and then composited into the shot?

    At least that is the feeling I got from the short.

    Overall, I felt that the update felt fake (which makes sense, it was), and that made me feel less attached to it than the original.
