
SoylentNews is people

posted by martyb on Tuesday July 11 2017, @09:34AM   Printer-friendly
from the just-pining-for-the-fjords dept.

Facebook has cut the price of the Oculus Rift for the second time this year. It debuted at $800, was cut to $600 in March, and is now $400. Is there real trouble in the virtual reality market, or is it just a normal price correction now that early adopters have been served?

It means that the Rift now costs less than the package offered by its cheapest rival, Sony, whose PlayStation VR currently totals $460 including headset and controllers.

Even so, it's not clear that it will be enough to lure people into buying a Rift. A year ago, our own Rachel Metz predicted that the Rift would struggle against Sony's offering because the former requires a powerful (and expensive) gaming computer to run, while the latter needs just a $350 PlayStation 4 game console.

Jason Rubin, vice president for content at Oculus, tells Reuters that the reduction isn't a sign of weak product sales, but rather a decision to give the headset more mass market appeal now that more games are available. Don't believe it: this is the latest in a string of bad news for the firm, which has also shut down its nascent film studio, shuttered in-store demo stations of its hardware, and stumped up $250 million as part of a painful intellectual property lawsuit in the last six months.

Here's a February story about the Oculus demo stations at Best Buy stores being shut down.

Previously: Facebook/Oculus Ordered to pay $500 Million to ZeniMax
Google Partnering With HTC and Lenovo for Standalone VR Headsets


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by LoRdTAW on Tuesday July 11 2017, @05:52PM (8 children)

    by LoRdTAW (3755) on Tuesday July 11 2017, @05:52PM (#537704) Journal

    Sound is only limited by your headphones and software/audio streams, because you only have two ears, not 5.1, 7.1, or 22.2 ears.

    <ranton>
    This is something that has always bothered me. I've bought headphones that are 5.1, but I have 2.0 ears. Our ears and brain are set up in such a way that we can locate sound sources in 3D space without our vision. This is because sound refracts off objects just like light. Thanks to the shape of our ears, we can not only hear these refracted sounds but also judge their direction. Sound card technology has stagnated so badly that it's a complete afterthought for most people. You used to buy a dedicated card based on features and quality. Now just about EVERY motherboard comes with that fucking crabby Realtek garbage.

    Audio is still stuck with primitive 1980s surround-sound technology. Instead of "1D" stereo, where you have a single axis of sound, you have a 2D plane made up of four speaker quadrants. Then they throw in a fifth channel for dialogue and a bass channel, and play back different audio samples at different points on the 2D plane. You end up with something resembling 3D, but you never feel immersed. Then they insulted us and pushed for even more speakers in 7.1 systems, as if that were any better. 7.1 was a gimmick to sell more speakers.

    True 3D audio only needs two well-placed speakers or, more properly, headphones. The same 3D geometry data of the game world could be piped to a GPU or other processor to perform some sort of ray tracing or other 3D audio refraction technique. Perhaps we could even have synthesized audio that uses algorithms to form sounds: an explosion, say, by calculating what the shockwave would sound like along with all the reverberations and refractions. Even sounds like knocking on wood, walking on metal, or splashing water could in theory be described mathematically and rendered, with a little noise thrown in for randomness. That would solve a lot of sound issues, since today every game sound is pre-recorded or pre-synthesized; instead you would assign a sound effect to an event and let the hardware synthesize it. It's probably more complex than 3D rendering. But that's the problem: it's a lot of effort to throw at something everyone stopped caring about long ago.
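    The reflection part of this idea can be sketched in a few lines. This is a hypothetical toy, not any real engine's technique: a crude image-source model where each reflection path contributes a delayed, attenuated copy of one mono sample, and the copies are summed. All names and numbers below are made up for illustration.

```python
# Toy "layered cake" reverb: sum delayed, attenuated copies of a mono
# sample, one copy per reflection path (direct path included).
SPEED_OF_SOUND = 343.0   # m/s, speed of sound in air
SAMPLE_RATE = 44100      # samples per second

def render_reflections(sample, paths):
    """sample: mono samples (list of floats).
    paths: list of (path_length_m, gain), one entry per reflection path."""
    max_delay = max(int(round(d / SPEED_OF_SOUND * SAMPLE_RATE))
                    for d, _ in paths)
    out = [0.0] * (max_delay + len(sample))
    for dist, gain in paths:
        # Longer path -> later arrival; gain models absorption losses.
        delay = int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
        for i, s in enumerate(sample):
            out[delay + i] += gain * s
    return out

click = [1.0, 0.5, 0.25]                       # tiny mono "click"
paths = [(1.0, 1.0), (4.0, 0.3), (9.0, 0.1)]   # direct hit + two bounces
wet = render_reflections(click, paths)
```

    A real engine would trace those path lengths and gains out of the level geometry every frame; here they are hard-coded just to show the mixing step.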

    Part of the problem is raw processing power. We are still starved for processing power, and what little we have is put towards graphics. There is little left for audio, AI, and physics, so the gameplay and graphics have to make up the difference. If people can barely run the latest AAA title at 60 FPS at 1080p or better, then there is no chance of implementing anything more realistic. I suppose we have to wait until we have more GPU power, or crazy many-core CPUs and better languages or APIs to deal with concurrency and parallelism.
    </rantoff>

  • (Score: 2) by takyon on Tuesday July 11 2017, @06:01PM (6 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday July 11 2017, @06:01PM (#537710) Journal

    AFAIK, stealth games have implemented good audio, or at least placed a much greater emphasis on it than other genres. And you can get realistic sounding audio by using binaural recording [wikipedia.org]. Maybe that method doesn't even need fancy software techniques?

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by LoRdTAW on Tuesday July 11 2017, @06:35PM (5 children)

      by LoRdTAW (3755) on Tuesday July 11 2017, @06:35PM (#537727) Journal

      Binaural recording sounds more like a remote set of ears.

      I'm talking about real-time refraction: take a pre-recorded mono sample and, using the game's 3D geometry, create a new sample based on the multiple paths of reflection. You get a layered cake of the same sample with slight delays and differences in the left-right mix, all of which account for the shape of the human ear as well. Now you have a sound that can be "located" in 3D space. That is well beyond the simple left/right/front/rear we have now.

      Picture yourself in an older home, the kind where the floors creak, with a second story. You're in a first-floor room and someone walks above you on the second floor. I bet you that IRL you can tell their location and heading from that sound. Now picture that in a game: you hear the footsteps of an enemy above you walking across the floor. Now you can shoot through the floor or predict where he's going. That's a real game changer.
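      The "slight delays in the left-right mix" part of this is well understood. As a hedged sketch, here is Woodworth's classic approximation for the interaural time difference, i.e. how much later a sound arrives at the far ear; the head radius is an assumed average, not something from this discussion.

```python
import math

HEAD_RADIUS = 0.0875     # m, assumed average human head radius
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg):
    """Interaural time difference via Woodworth's approximation:
    ITD = r/c * (theta + sin(theta)), azimuth measured from dead ahead."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))

# A source dead ahead gives zero delay; one at 90 degrees gives roughly
# 0.65 ms, which the brain reads as "directly to one side".
```

      Delaying and attenuating the per-ear copies by amounts like this is the cheap part; the hard part is the ear-shape filtering discussed further down the thread.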

      • (Score: 2) by takyon on Tuesday July 11 2017, @07:00PM (4 children)

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday July 11 2017, @07:00PM (#537741) Journal

        So EAX [wikipedia.org], TrueAudio [wikipedia.org] et al. are not sufficient? Bummer.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by LoRdTAW on Wednesday July 12 2017, @12:31AM (3 children)

          by LoRdTAW (3755) on Wednesday July 12 2017, @12:31AM (#537870) Journal

          EAX: Dead.
          AMD's TrueAudio appears to be AMD-only, and I had not heard of it until now; I must have been under a rock or something. I'll have to see if I can demo it on my AMD Linux box. Otherwise, my Windows gaming rig is Intel/Nvidia.

          • (Score: 1) by purple_cobra on Wednesday July 12 2017, @08:52PM (2 children)

            by purple_cobra (1435) on Wednesday July 12 2017, @08:52PM (#538327)

            Wasn't EAX's death due to some change Microsoft made to their driver model? OpenAL was supposed to make it work (or even take over from it), but I don't think there's much interest in OpenAL.
            The only game I remember using EAX well was Thief: Deadly Shadows (Thief 3); it added so much to the game, plus it made the Cradle level even more disconcerting. I tried running the Steam version with OpenAL, but it just crashed to desktop, sadly.

            • (Score: 2) by LoRdTAW on Thursday July 13 2017, @03:18PM (1 child)

              by LoRdTAW (3755) on Thursday July 13 2017, @03:18PM (#538724) Journal

              Wasn't EAX's death due to some change Microsoft made to their driver model?

              Yes. From the EAX wikipedia article [wikipedia.org]:

              As of 2010, EAX is rarely used, with modern games utilizing the CPU to process 3D audio rather than relying on dedicated hardware.

              And further down we find:

              Because hardware acceleration for DirectSound and DirectSound3D was dropped in Windows Vista, OpenAL will likely become more important for game developers who wish to use EAX in their games.

              Looks like AMD is trying to bring the idea back, but an AMD-only option is a dead end in my opinion. Either we start using the GPU, or we make use of all those cores we have coming.

              • (Score: 1) by purple_cobra on Friday July 14 2017, @09:05AM

                by purple_cobra (1435) on Friday July 14 2017, @09:05AM (#539029)

                Thank you for finding that Wikipedia link that I was too lazy to dig out. :)
                I agree that anything that can't be freely implemented is just a postponed dead end. Using the CPU or GPU makes sense as the market for third-party sound cards for gaming is small and getting smaller; onboard sound is improving and most people seem happy with it, plus you have external sound cards that don't require any fiddling around inside the case. I wonder if AMD wanting their own implementation is tied to the upcoming APUs based on their Zen architecture?

  • (Score: 2) by Immerman on Saturday July 15 2017, @03:39AM

    by Immerman (3985) on Saturday July 15 2017, @03:39AM (#539454)

    Unfortunately you need more than two speakers, probably a lot more, because you need to actually recreate all the directions those reverberations and reflections are coming from. You can *try* to fake it with headphones, or better yet earbuds, as EAX tried to do (and I kept my EAX card for many years, until Windows stopped being compatible with the drivers), but it's inherently going to be a poor approximation.

    The problem is that all those wrinkles in your ears do an extremely subtle and highly spatially-sensitive pre-processing before the sound reaches your eardrums - and then your brain does a bunch of post-processing to determine the origin of the sound based on the distortions introduced by your ears.

    To fake that you need to first simulate the reflections and other interactions of the sounds with the environment to determine exactly what sounds are reaching your in-game ears from what direction, and when (including appropriately delayed reverberations if you want to be able to hear the shape and size of the rooms). But that's the easy part - the next part verges on impossible.

    Option one is to deliver the sound to your ears from the appropriate directions: a 3D array of speakers calibrated to deliver the sound as precisely as possible. The more speakers the better, since any attempt to fake a sound coming from between speakers is going to be noticeable.

    Option two is to also simulate the pre-processing done by your ear-wrinkles and then deliver the sound directly to your eardrums, bypassing your real ear-wrinkles as much as possible (earbuds should do the job). Unfortunately, that's going to be extremely difficult, because no two people have ears with exactly the same shape, and the spatially-based distortions they introduce are subtle enough that minor inaccuracies will have much larger effects on how your brain interprets them. Every user would have to get a highly detailed scan of their own ears and feed it into their sound drivers.
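    Mechanically, option two is just a per-ear convolution. Here is a toy sketch with made-up two-to-four-tap impulse responses standing in for a real measured HRTF (a real one would be hundreds of taps, measured per listener and per direction): each ear hears the mono source convolved with that ear's response.

```python
def convolve(signal, ir):
    """Plain direct-form convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# Made-up head-related impulse responses for one source direction:
hrir_left = [0.9, 0.2]               # near ear: early, strong arrival
hrir_right = [0.0, 0.0, 0.6, 0.15]   # far ear: delayed, attenuated arrival

mono = [1.0, 0.0, 0.5]               # mono source signal
left_ear = convolve(mono, hrir_left)
right_ear = convolve(mono, hrir_right)
```

    Feed `left_ear` and `right_ear` to earbuds and the brain hears the source off to one side; the catch, as above, is that the impulse responses have to match *your* ears to fool *your* brain.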