
posted by cmn32480 on Thursday August 13 2015, @10:33AM
from the don't-let-the-smoke-out-of-the-chips dept.

Tom's Hardware conducted an interview with Palmer Luckey, the founder of Oculus VR. The defining takeaway? Virtual reality needs as much GPU horsepower as can be thrown at it:

Tom's Hardware: If there was one challenge in VR that you had to overcome that you really wish wasn't an issue, which would it be?

Palmer Luckey: Probably unlimited GPU horsepower. It is one of the issues in VR that cannot be solved at this time. We can make our hardware as good as we want, our optics as sharp as we can, but at the end of the day we are reliant on how many flops the GPU can push, how high a framerate can it push? Right now, to get 90 frames per second [the minimum target framerate for Oculus VR] and very low latencies we need heaps of power, and we need to bump the quality of the graphics way down.

If we had unlimited GPU horsepower in everybody's computer, that will make our lives very much easier. Of course, that's not something we can control, and it's a problem that will be solved in due time.

TH: Isn't it okay to deal with the limited power we have today, because we're still in the stepping stones of VR technology?

PL: It's not just about the graphics being simple. You can have lots of objects in the virtual environment, and it can still cripple the experience. Yes, we are able to make immersive games on VR with simpler graphics on this limited power, but the reality is that our ability to create what we are imagining is being limited by the limited GPU horsepower.

[...] The goal in the long run is not only to sell to people who buy game consoles, but also to people who buy mobile phones. You need to expand so that you can connect hundreds of millions of people to VR. It may not necessarily exist in the form of a phone dropping into a headset, but it will be mobile technologies -- mobile CPUs, mobile graphics cards, etc.

In the future, VR headsets are going to have all the render hardware on board, no longer being hardwired to a PC. A self-contained set of glasses is a whole other level of mainstream.


An article about AMD's VR hype/marketing at Gamescom 2015 lays out the "problem" of achieving "absolute immersion" in virtual reality:

Using [pixels per degree (PPD)], AMD calculated the resolution required as part of the recipe for truly immersive virtual reality. There are two parts of the vision to consider: there's the part of human vision that we can see in 3D, and beyond that is our peripheral vision. AMD's calculations take into account only the 3D segment. For good measure, you'd expand it further to include peripheral vision. Horizontally, humans have a 120-degree range of 3D sight, with peripheral vision expanding 30 degrees further each way, totaling 200 degrees of vision. Vertically, we are able to perceive up to 135 degrees in 3D.

With those numbers, and the resolution of the fovea (the most sensitive part of the eye), AMD calculated the required resolution. The fovea sees at about 60 PPD; combining that with 120 degrees of horizontal vision and 135 degrees of vertical vision, then multiplying by two (because of two eyes), tallies up to a total of 116 megapixels. Yes, you read that right: 116 megapixels. The closest resolution by today's numbers is 16K, or around [132] megapixels.

While 90 Hz (albeit with reduced frame stuttering and minimal latency) is considered a starting point for VR, AMD ultimately wants to reach 200 Hz. Compare that to commercially available 2560×1440 @ 144 Hz monitors, or to HDMI 2.0, which recently added the ability to transport 3840×2160 @ 60 Hz. The 2016 consumer version of Oculus Rift will use two 1080×1200 panels, for a resolution of 2160×1200 refreshed at 90 Hz. That's over 233 million pixels per second. 116 megapixels times 200 Hz is 23.2 billion pixels per second. It's interesting (but no surprise) that AMD's endgame target for VR would require almost exactly one hundred times the performance of the GPUs recommended for the Rift (an NVIDIA GTX 970 or AMD Radeon R9 290).
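The arithmetic is easy to reproduce. Here is a minimal sketch in Python; every input is one of the AMD/Oculus figures quoted above, nothing is measured:

```python
# Back-of-the-envelope check of AMD's "absolute immersion" numbers.
PPD = 60             # pixels per degree resolved by the fovea
H_FOV = 120          # degrees of 3D (binocular) horizontal vision
V_FOV = 135          # degrees of vertical vision
EYES = 2

pixels_per_eye = (H_FOV * PPD) * (V_FOV * PPD)    # 7200 x 8100
total_pixels = pixels_per_eye * EYES
print(f"Required resolution: {total_pixels / 1e6:.1f} MP")   # ~116.6 MP

# Throughput at AMD's 200 Hz target vs. the 2016 Rift (2160x1200 @ 90 Hz).
target_rate = total_pixels * 200                  # ~23.3 Gpix/s
rift_rate = 2160 * 1200 * 90                      # ~233 Mpix/s
print(f"Target/Rift ratio: {target_rate / rift_rate:.0f}x")  # ~100x
```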

In conclusion, today's consumer VR might deliver an experience that feels novel and worth $300+ to people, and thanks to higher framerates and innovations like virtual noses, it might not even make them queasy. But if you have the patience to wait for 15 years or so of early adopters to pay for stone/bronze age VR, you can achieve "absolute immersion," also known as enlightenment.


Original Submission

 
  • (Score: 0, Touché) by Anonymous Coward on Thursday August 13 2015, @10:40AM

    by Anonymous Coward on Thursday August 13 2015, @10:40AM (#222211)

    Tap the other 90% of the users' brains to drive the GPU. They won't even notice.

  • (Score: 2, Informative) by Anonymous Coward on Thursday August 13 2015, @11:04AM

    by Anonymous Coward on Thursday August 13 2015, @11:04AM (#222217)

    If we had unlimited GPU horsepower in everybody's computer, that will make our lives very much easier. Of course, that's not something we can control, and it's a problem that will be solved in due time.

    No, it will never ever be solved.

    Besides that, Oculus VR is a company owned by spy corporation Failbook. I will never give them privileged access to my Cartesian theater.

    • (Score: 2) by takyon on Thursday August 13 2015, @04:19PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday August 13 2015, @04:19PM (#222371) Journal

      Besides that, Oculus VR is a company owned by spy corporation Failbook. I will never give them privileged access to my Cartesian theater.

      It doesn't matter. There are several competitors offering a similar product and some degree of interoperability. Oculus itself will probably not reach as many people as phone manufacturers will.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 1, Redundant) by jimshatt on Thursday August 13 2015, @11:09AM

    by jimshatt (978) on Thursday August 13 2015, @11:09AM (#222219) Journal
    I don't have a simple solution, but here's a thought. For MMO applications, why does every device (PC / smartphone) need to have the entire scenery in memory and render the entire thing when there's likely to be tremendous overlap? Maybe there is some way to cache previously rendered scenery and only apply a delta for the changeable objects.
    • (Score: 0, Funny) by Anonymous Coward on Thursday August 13 2015, @11:17AM

      by Anonymous Coward on Thursday August 13 2015, @11:17AM (#222222)

      Ooh yeah this is a good porno. Aw hell my Internet connection is lagging. Fuck! My dick is stuck and I can't pull it out!!

      • (Score: 1) by Ethanol-fueled on Thursday August 13 2015, @05:39PM

        by Ethanol-fueled (2792) on Thursday August 13 2015, @05:39PM (#222422) Homepage

        While this has been modded troll, I could see the technology becoming a problem for porn addicts.

        Traditionally, the porn addict couldn't do their thing in public because the only porn available was in magazines, on VHS tapes and DVDs, or technologically confined to desktop computers. Even with the advent of decent laptops, watching porn in public (or even in a public restroom) was too unwieldy and cumbersome.

        Phones changed the game drastically. Much productivity was lost on the job as employees began watching porn on their phones and masturbating in the restrooms of their employers and the public. They walk everywhere among us now, sweaty and grunting and breathing heavily and kneading themselves in between their restroom porn fixes.

        And that's the way things are now. Imagine all those people who would now have something like that affixed to their heads at all times. You'd be seeing them everywhere, doing what they do now except non-stop, with splotches of semen all over the crotches of their pants. Everything would devolve back to the caveman days as porn-addled mankind communicates in moans and grunts, because that's all they see, all day, every day.

        Yeah, no. We saw what happened with the Glassholes. The Oculus and all others like it must be publicly shamed accordingly.

        • (Score: 3, Informative) by takyon on Thursday August 13 2015, @06:06PM

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday August 13 2015, @06:06PM (#222432) Journal

          Pfft. Like hell people will wear Oculus in public. That's mugging 3.1. Employers will not adopt VR because they won't figure out how to squeeze productivity out of it. AR might get some adoption in niche workplaces like health care, but the employer will hold back the cum tsunami. VR's big market will be gaming, with a side market of immersive content that targets the baseline of Cardboard/phone holders.

          What happens in the basement stays in the basement. Only your entire home is your basement once you are on the floor wearing Oculus and twitching your fingers. The crowd that can play shitty MMOs for 20+ hours a week will adopt Oculus fast. Some porn users will adopt Oculus, but not a significant proportion will blow $300 on a new head-mounted display. Instead, the MMO crowd will heavily overlap with the Oculus porn segment, and some of this porn will be gamified.

          The public shaming of Oculus will never happen, because Oculus will shame itself into the basement and stay there for 420 hours a week. Unlike Google Glass, the initial Oculus has no front-facing camera (although v2 will have one), so it will never be associated with the easy recording that Google Glass was. Will the Cardboard users be shamed? I think they will have trouble finding a significant amount of VR content to view on the bus or park bench, and they will be relieved of their wallets if they try to feel the immersion. Cardboard could be seen as an evolution of the restroom porn fix, but that won't decrease productivity any further, because the TTJ (time to jizz) will be the same.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Insightful) by gman003 on Thursday August 13 2015, @12:55PM

      by gman003 (4155) on Thursday August 13 2015, @12:55PM (#222257)

      The overlap would be extremely minimal, basically just render list generation, maybe shadow map generation if they're using a shading method like that. Once you start on the actual hard-work portions of the frame render, it becomes independent for each camera.

      Plus, latency is an absolute killer for VR. I don't think adding 100ms of network latency will be tolerable when they've gone to great lengths to cut the display latency down from 16ms to 11ms.

      • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @03:11PM

        by Anonymous Coward on Thursday August 13 2015, @03:11PM (#222337)

        Agreed, we're playing a game, not watching TV or streaming a movie enhanced with fancy CGI. Our actions will be interactive, as opposed to simply "3D".

        Considering the bandwidth needed to transmit graphics between cards in SLI or CrossFire, and the bandwidth needed to actually deliver it to a digital monitor, the cloud is not the place I want my real-time stream of highly complex computational data and results to be.

        Something has to get the data to the GPUs in whatever datacenter. Something has to manage the workload across GPUs, then put it on the network to deliver to me.

        If people use wireless, will they lose frames while playing because the video stream dropped? Where is the game itself hosted?

        If we're talking about a Phantom-console type of streaming, or SteamOS/Big Picture streaming, then these VR glasses are not even applicable beyond the local LAN due to the bandwidth needed... and how it'd compete against everything else it encounters. In real time.

        Turn your head, and that action has to go to "The Cloud" and then send it back...

        Getting a high-speed internet connection at home would not be enough unless the content being accessed is hosted nearby on a similarly fast, low-latency connection... people using Netflix in your neighborhood/apartment complex will surely impede such VR use...

      • (Score: 2) by jimshatt on Friday August 14 2015, @08:53AM

        by jimshatt (978) on Friday August 14 2015, @08:53AM (#222741) Journal
        I agree it wouldn't work. I just had a nagging feeling of wastefulness with everyone carrying around an entire set of the same models and scenery and whatnot. I suppose it can't be avoided.
  • (Score: -1, Redundant) by Anonymous Coward on Thursday August 13 2015, @11:26AM

    by Anonymous Coward on Thursday August 13 2015, @11:26AM (#222225)

    Eye resolution is not uniform. Couldn't you just follow the focus point, put more pixels there and interpolate the living shit out of everything outside that area?

    • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @11:31AM

      by Anonymous Coward on Thursday August 13 2015, @11:31AM (#222226)

      Yes! Let's make Oculus VR work exactly like idTech 5! MegaInterpolate the living shit out of it! No one will complain that rendering quality has gone to shit these days! Truly we live in a bad future!!!

    • (Score: 2) by ledow on Thursday August 13 2015, @12:07PM

      by ledow (5567) on Thursday August 13 2015, @12:07PM (#222239) Homepage

      So rather than a single flat high-res screen, you want a tiny high-res screen surrounded seamlessly by low-res screens, capable of following some of the fastest movements a human being makes, right in front of the most sensitive instrument the human body has, in such a way that the eye can't detect the movement down to the resolution of your tiny high-res screen?

      You just made a difficult problem even closer to impossible.

      • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @12:53PM

        by Anonymous Coward on Thursday August 13 2015, @12:53PM (#222256)

        No, "interpolate the living shit out" means there are physical pixels there. The point is that in the near future (~20 years) actually computing all the 116 million pixels at 200fps consumes more power than you can feasibly insert into ridiculously lightweight glasses that people want to actually wear and it might not be the smartest way to go.

        I highly doubt that you have required level of knowledge to throw that idea out without even attempting. Mostly, people don't have that even regarding way simpler issues like standard code optimizations, thus warranting the mantra, profile, profile, profile.

      • (Score: 1) by islisis on Thursday August 13 2015, @02:14PM

        by islisis (2901) on Thursday August 13 2015, @02:14PM (#222305) Homepage

        http://www.roadtovr.com/fove-eye-tracking-vr-headset-hands-on-ces-2015/ [roadtovr.com]
        https://en.wikipedia.org/wiki/Foveated_imaging [wikipedia.org]

        Eye tracking ought to be the next big target in input devices.

      • (Score: 2) by acid andy on Thursday August 13 2015, @05:01PM

        by acid andy (1683) on Thursday August 13 2015, @05:01PM (#222397) Homepage Journal

        And right there you've just ended any hope of people sharing half-decent screenshots or videos of their VR experience. The game review industry would be dead. I suppose a hotkey could momentarily force a full-quality render of the whole scene for a screenshot, but consider that a lot of gaming screenshot applications aren't natively supported by the game.

        Also, I can only see your idea working when the resolution in front of the eye's point of focus is much, much higher than what's currently available. On current VR headsets you can easily make out individual pixels. When it's that noticeably blocky at the eye's point of focus, good luck reducing the res even further around it. I can see how it could work in principle with the kind of tech Palmer wants in TFA.

        Also while I admire the ambition of striving for 90 fps, there are plenty of people loving VR at waaaay lower frame rates today. Once you get your VR legs it's not so bad.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
        • (Score: 2) by takyon on Thursday August 13 2015, @06:10PM

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday August 13 2015, @06:10PM (#222434) Journal

          Actually I like the anon's idea. There are two things to consider: screenshots and live streaming. Screenshots won't do the VR experience justice, and if you do need a screenshot, you can temporarily drop a hundred frames in order to capture 1 image. For the live streaming/twitch/youtube crowd, the framerate can be set lower, or the Twitch user can buy a better GPU with STREAMTUBE CROWDGOLD.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by Immerman on Thursday August 13 2015, @04:09PM

      by Immerman (3985) on Thursday August 13 2015, @04:09PM (#222367)

      A (potentially) excellent idea. Render a region around the focal point at maximum detail, then render the wide field of view at much, much lower resolution and perform crude upscaling, potentially even on the display controller itself. Our eyes only have high enough resolution for reading in an angular spot about 1-2 degrees across (the fovea). And I have no doubt that gaze tracking could be an awesome auxiliary input as well; it seems some of the fringe VR headsets are already using it to good effect.

      The problem though is that the fovea moves *very* quickly between resting spots (saccades): up to 900 degrees/second in humans, with a typical resting time in the range of only 20-200ms (20-30 when reading, though under certain conditions it can be half that). If you've got a 10ms lag time in your eye-scanning -> final render pipeline, that's likely going to be VERY obvious. And probably horribly nauseating, as the world constantly "pops" from a blurry color field into a clear image.

      On the other hand, if we can get lag times down below, I don't know, maybe a single millisecond? Then we could potentially radically reduce the rendering overhead, maybe even to the point that it's almost within reach of current technology. But I suspect that kind of lag reduction is going to be brutal to achieve.

      If nothing else though, it does offer a tantalizing goal for the future - at some point, as resolution continues to improve and lag continues to fall, we'll hit a point where the necessary pixel-pushing horsepower suddenly falls by probably something around a thousandfold. And high-end VR will go from requiring a hefty high-performance rendering station to something that can be embedded in the headset itself in a single generation. And I suspect THEN we'll see VR really take off among the masses.
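      To put rough numbers on that lag argument, a quick sketch (Python; the saccade speed and fovea size are the figures above, the lag values are illustrative assumptions):

      ```python
      # How far the gaze moves during the eye-scanning -> render pipeline lag,
      # compared against the ~2 degree high-acuity fovea window.
      SACCADE_DEG_PER_S = 900   # peak human saccade speed (per the figures above)
      FOVEA_DEG = 2             # approximate width of the high-resolution spot

      for lag_ms in (1, 5, 10):                  # illustrative pipeline lags
          drift = SACCADE_DEG_PER_S * lag_ms / 1000
          verdict = "inside" if drift <= FOVEA_DEG else "OUTSIDE"
          print(f"{lag_ms:2d} ms lag -> gaze drifts {drift:4.1f} deg ({verdict} the fovea)")
      ```

      At ~1ms the freshly rendered patch still lands under the gaze; at 10ms the eye has already moved on, which is exactly the "popping" scenario described above.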

      • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @04:51PM

        by Anonymous Coward on Thursday August 13 2015, @04:51PM (#222391)

        Now _this_ is the answer I was looking for.

        My quick thoughts are:

        - If you go all the way with that idea, you might be able to obtain ridiculously low latency, because the needed pixel count is quite low indeed. The catch is that it absolutely requires such latency in order to work at all. You might even need a display tech that can physically move the sharp spot around, because 16K@0.5ms sounds like a pretty tough requirement no matter how low-level the implementation is.

        - The opposite end is where you render plenty of extra area at high resolution and even the rest at a "sufficient" level. If you moved immediately from one end to the other you'd see some blur, but smaller motions wouldn't. This still needs maybe 1/10 of the pixels of full rendering.

        Skipping focus-following, there's another hack: render to a spheremap and use custom hardware very near the display to grid-warp it on the fly at a very high fps, detached from the "actual" render rate. Maybe not that sensible at 60fps, but it could be something at higher rates.
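        (That grid-warp idea is essentially what consumer headsets later shipped as asynchronous timewarp/reprojection.) A toy sketch of the decoupling (Python; the rates, function names, and single-axis yaw are all illustrative assumptions):

        ```python
        # Re-warp the last rendered panorama at display rate, using the freshest
        # head pose, so tracking latency is one refresh rather than one render.
        RENDER_HZ = 30      # assumed scene render rate
        DISPLAY_HZ = 120    # assumed display refresh rate

        def render_panorama(t):
            """Stand-in for an expensive wide-FOV scene render."""
            return t, f"panorama@{t:.3f}s"

        def head_yaw(t):
            """Stand-in head tracker: a steady 90 deg/s turn."""
            return 90.0 * t

        frame_t, pano = render_panorama(0.0)
        for i in range(8):                        # a few display refreshes
            t = i / DISPLAY_HZ
            if t - frame_t >= 1 / RENDER_HZ:      # fresh scene only at 30 Hz
                frame_t, pano = render_panorama(t)
            # Every refresh: crop/warp the old panorama at the *latest* pose.
            print(f"{t:.3f}s: show {pano} warped to yaw {head_yaw(t):5.1f} deg")
        ```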

        • (Score: 3, Interesting) by Immerman on Friday August 14 2015, @12:00AM

          by Immerman (3985) on Friday August 14 2015, @12:00AM (#222594)

          UPDATE: After typing all this out I remembered they changed axes when they introduced 4K, so a 16K screen would only have 8x the linear pixels of 1080p, not 16x, and all the calculations I did would thus be for a 32K screen instead. I don't want to redo the math, so instead I'm changing all references from 16K to 32K....

          I don't think there's actually much of a catch - I'm assuming you mean 32K in the "4K UHD TV format" sense, and sure, that would be a ridiculous number of pixels to push every 0.5ms - what, 530M pixels, 2000 times a second? I want the hardware with that kind of pixel-pushing power!

          But there's a much easier way to do it, though we would need specialized screen-control electronics. Physically moving a sub-screen fast enough is unlikely to be viable, thanks to that 900°/second saccade speed, but we don't need to, because current LCD refresh behavior is a historical artifact rather than a technological limitation. Let's say we have a physical 32K-equivalent display in the glasses; the trick is twofold:

          1) At the beginning of each frame we render the complete FOV at low resolution, *maybe* HD, and we could likely get away with even lower. Even stretched across a 200°x100° FOV, most of our retina has pretty lousy resolution, so it's a non-issue. Send that to the display, and have a simple ultra-low-lag upscaling circuit translate the low-resolution framebuffer to the entire display. Even linear interpolation would likely be more than good enough; we might even be able to get away with giant square "superpixels". Either way, if it's implemented in the circuitry responsible for communicating the framebuffer to the display matrix, it should incur minimal overhead.

          2) Then, multiple times per frame, we render a much smaller "fovea-view" window at high resolution. None of the camera or scenery information is changed; we're literally just rendering a tiny portion of the exact same image at much higher resolution. If each eye gets a half-billion pixels evenly spread across the roughly 20,000 square degrees covering the FOV, covering the maybe 4 square degrees of fovea will require 5000x fewer pixels, or about 100,000 total, maybe a little 370x280 pixel block. We then send *just* this tiny updated patch of pixels to the display, and only that small sub-block of the display gets updated. This is already a common feature in the control circuitry for e-ink displays, where power conservation is a high priority.

          That's it. If we render 20 fresh fovea-views per frame (2M pixels) plus the single HD "background" (2M pixels), we've still rendered and transmitted around 132x fewer pixels than needed to fully render the 32K display just once. If we assume a frame rate of 100 FPS, we've got a rate of 2000 "fovea views per second", or 0.5ms per update. Delivering an effectively 32K, 100FPS display at a pixel rate equivalent to driving a 1080p display at 200FPS.
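          The budget above checks out to within rounding. A quick verification sketch (Python; the constants are the figures given above, the variable names are mine):

          ```python
          # Sanity check of the foveated-rendering pixel budget sketched above.
          FULL_W, FULL_H = 1920 * 16, 1080 * 16   # "32K": 16x the linear pixels of 1080p
          FOV_SQ_DEG = 200 * 100                  # ~20,000 square degrees of FOV
          FOVEA_SQ_DEG = 4                        # ~2 x 2 degrees of sharp vision

          full = FULL_W * FULL_H                  # ~530 MP for one full render
          fovea = full * FOVEA_SQ_DEG // FOV_SQ_DEG          # ~106k px (~370x280)

          FPS = 100
          VIEWS_PER_FRAME = 20                    # fovea updates per frame
          background = 1920 * 1080                # one low-res full-FOV pass per frame

          per_frame = background + VIEWS_PER_FRAME * fovea   # ~4.2 MP per frame
          print(f"savings vs full render: {full / per_frame:.0f}x")               # ~126x
          print(f"pixel rate: ~1080p @ {per_frame * FPS / (1920 * 1080):.0f} FPS")  # ~200
          ```

          The exact ratio comes out near 126x rather than ~132x, purely because the two "2M pixel" figures above are rounded.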

      • (Score: 2) by SlimmPickens on Thursday August 13 2015, @10:08PM

        by SlimmPickens (1056) on Thursday August 13 2015, @10:08PM (#222553)

        I don't think that moving the scene in sync with the saccades would work. As I understand it, the whole point of the saccades is to drag 'feature detectors' across the scene, feature detectors being ganglion cells specialised for detecting left/right movement, light/dark edges, etc.

        • (Score: 2) by Immerman on Friday August 14 2015, @12:51AM

          by Immerman (3985) on Friday August 14 2015, @12:51AM (#222608)

          Hmm, I remember it differently - that very little processing occurs *during* the saccade. That may only be on a conscious level though. Worth doing more research.

          • (Score: 2) by SlimmPickens on Friday August 14 2015, @06:04AM

            by SlimmPickens (1056) on Friday August 14 2015, @06:04AM (#222707)

            You're right that very little processing occurs *during* the saccade (at least I think I read that in Jeff Hawkins' book). I just mean that the saccade is a functional thing and you probably don't want the VR to track the saccades just as the real world doesn't.

            • (Score: 2) by Immerman on Friday August 14 2015, @02:45PM

              by Immerman (3985) on Friday August 14 2015, @02:45PM (#222843)

              In that case I'm not sure I understand your objection. It's not like you'd be changing the view in any way in response to the saccades (that would probably cause issues), you'd just be re-rendering the exact same image at higher resolution in the small spot you're actually looking at. As I understand it even the saccade "target correction" mechanism is entirely internal with no reference to retinal input.

  • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @12:28PM

    by Anonymous Coward on Thursday August 13 2015, @12:28PM (#222248)

    tbh what i would really want (and probably would go take off the shelf in the store tomorrow) is the visual equivalent of audio headphones.
    go on a bus/subway .. turn on the music and plug those tiny speakers into the ear.
    same but for movies: go to bed (or bus or subway), "strap" on some glasses and watch a movie.
    no need for huge monster TV sets (power consumption) and monster speaker setups (to annoy the neighbors)...
    i don't even really care if it's 3D : )

    so .. visual headphones! WHEN?

    • (Score: 0) by Anonymous Coward on Thursday August 13 2015, @12:41PM

      by Anonymous Coward on Thursday August 13 2015, @12:41PM (#222250)

      when your eye is able to focus on something an inch away, i.e. never, or after some serious surgery.

      • (Score: 1, Touché) by Anonymous Coward on Thursday August 13 2015, @03:04PM

        by Anonymous Coward on Thursday August 13 2015, @03:04PM (#222333)

        You know what people do if their lens cannot focus on usual distances? Yes, exactly: They put an extra lens in front of their eyes which corrects the focus.

        You know what people do when something is too small to be seen with the naked eye? Right, they take a magnifying glass.

        You know what people do to see far things they cannot see with bare eyes? Right, they put a collection of lenses in front of their eyes.

        You know what all those examples have in common? Right, the use of lenses to allow seeing clearly something which the eye alone wouldn't be able to see clearly.

        So what would you do to enable people to see a screen in front of them, that is too close to see clearly with the naked eye?

  • (Score: 2) by Hairyfeet on Thursday August 13 2015, @01:37PM

    by Hairyfeet (75) <{bassbeast1968} {at} {gmail.com}> on Thursday August 13 2015, @01:37PM (#222274) Journal

    Because every time I've tried a VR setup the puke factor was waaay too high! Maybe it's just me, but my body KNOWS it isn't moving the way the VR says it is (some have told me it's the inner ear telling me I'm sitting instead of running, or micro-latencies throwing off my sense of balance), but using one always made me feel sick to the stomach after a little bit. Thinking about the game I'm currently into (War Thunder), what with all the bouncing and rocking of the tank, I can't even imagine using a Rift on it without a barf bucket handy. So, for those with a Rift: how bad is the puke factor?

    --
    ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
    • (Score: 2, Funny) by Anonymous Coward on Thursday August 13 2015, @01:39PM

      by Anonymous Coward on Thursday August 13 2015, @01:39PM (#222276)

      No, still owned by Facebook.

  • (Score: 3, Funny) by miljo on Thursday August 13 2015, @02:29PM

    by miljo (5757) on Thursday August 13 2015, @02:29PM (#222309) Journal

    Isn't this the exact same thing we were talking about 25 years ago, the last time VR had promise?

    --
    One should strive to achieve, not sit in bitter regret.
  • (Score: 2) by gringer on Thursday August 13 2015, @07:49PM

    by gringer (962) on Thursday August 13 2015, @07:49PM (#222478)

    How about unlimited detail? Sounds like a job for the good old 3D vapourware company, Euclideon:

    http://www.euclideon.com/technology-2/ [euclideon.com]

    [which is, by the way, still around and has sufficient commercial interest to keep going]

    --
    Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]