As reported by The Register:
A Purdue University undergraduate has found a way to stop virtual reality from inducing motion sickness: program in a virtual nose.
Fixed-reference objects help to stop the sickness, Whittinghill says, but not every simulation lends itself to the inclusion of something like the window frames in a cockpit to give the brain something to latch onto.
While discussing this problem, undergraduate Bradley Ziegler piped up with the idea of programming in a virtual nose. The idea is that we're all used to our hooters haunting our field of vision, so much so that we take it for granted that it's always possible to see a slice of schnoz.
Subjects given the virtual nose staved off simulation sickness longer than their noseless counterparts across a variety of simulations, including a sickness-inducing roller coaster ride. The original source provides more detail, including the finding that test subjects didn't notice the virtual nose during testing, and were even skeptical that it had been present when told about it in post-test debriefings.
(Score: 3, Interesting) by martyb on Friday March 27 2015, @07:36PM
I noticed from the screen capture that one side was lighter than the other, but I could not tell whether that was a static image, or a dynamic effect based on scene lighting.
Given the description of how they came to test it, I could well imagine that they just put up a randomly lit nose as a first pass, to see whether their hypothesis played out (pun intended). I find it interesting that most (all?) of the participants were unaware that the nose was there, even when told about it after the testing. Hence my assumption of a static image: what need is there for dynamic lighting if they don't even know it's there? And the results demonstrated it had a positive effect regardless.
So, my thought was that a more accurate simulation, using lighting that matches the scene, may have an additional beneficial effect. And those for whom the benefit was less dramatic (assuming a static image) may have subconsciously picked up on the incorrect lighting cues, and so failed to derive as much of a benefit as they might have.
tl;dr: if it was already dynamically lit -- yay! If it was a static image, I would very much like to see what effect, if any, would result from dynamic nose shading.
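For the curious, the dynamic shading being speculated about here is cheap to approximate. A minimal sketch, assuming a simple Lambertian (angle-based) diffuse term, where the two halves of the nose overlay get brightness values from the scene's light direction; all names and the normal vectors are made up for illustration, not from the study:

```python
import math

def lambert_brightness(light_dir, surface_normal, ambient=0.2):
    """Lambertian diffuse term: brightness from the angle between the
    light direction and the surface normal, plus an ambient floor."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    l = norm(light_dir)
    n = norm(surface_normal)
    # Dot product, clamped so light from behind contributes nothing.
    diffuse = max(0.0, sum(a * b for a, b in zip(l, n)))
    return min(1.0, ambient + (1.0 - ambient) * diffuse)

def shade_nose_halves(scene_light_dir):
    """Return (left, right) brightness for the two nose halves, whose
    outward normals point roughly left and right of the viewer's gaze
    (+z toward the viewer). Normals are illustrative guesses."""
    left_normal = (-0.7, 0.0, 0.7)
    right_normal = (0.7, 0.0, 0.7)
    return (lambert_brightness(scene_light_dir, left_normal),
            lambert_brightness(scene_light_dir, right_normal))

# Light coming from the viewer's right: the right half reads brighter,
# while the left half falls back to the ambient floor.
left, right = shade_nose_halves((1.0, 0.0, 0.0))
```

In a real engine you would sample the scene's dominant light per frame and feed it in, rather than hard-coding a direction; the point is only that per-half shading is a couple of dot products, so testing it against the static image would cost almost nothing.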
Wit is intellect, dancing.