Anonymous Coward writes:
Two years ago John Carmack tweeted, "I can send an IP packet to Europe faster than I can send a pixel to the screen. How f'd up is that?" And if this weren't John Carmack, I'd file it under the interwebs being silly.
Not convinced? You aren't alone, but when called out, Carmack showed up to defend the claim.
We looked further and found this informative article from AnandTech about input lag.
Transatlantic Ping Faster than Sending a Pixel to Screen
(Score: 4, Informative) by WizardFusion on Friday March 28 2014, @09:56AM
So, not only is this a dupe from a week ago (or thereabouts), but it's also two freaking years old.
Come on, this should have been binspam'd from the get go.
(Score: 5, Funny) by lx on Friday March 28 2014, @10:07AM
Yeah but with the Oculus takeover this is now a Facebook story!
(Score: 1, Offtopic) by WizardFusion on Friday March 28 2014, @10:30AM
+1 funny
+1 sad
(Score: 5, Funny) by mattyk on Friday March 28 2014, @12:49PM
Proof that it takes ages to update the screen?
_MattyK_
(Score: 1) by darinbob on Friday March 28 2014, @09:23PM
The real problem is that Carmack thinks this is a problem. It's a freaking GAME, have some perspective!
But yeah, USB is a stupid protocol; most people who've worked with it realize how messed up it is. It was initially intended for slooow devices, though, and a stupid protocol is not necessarily bad if it has redeeming features (like keeping the consortium happy and profitable). You just gotta love it, though, that USB interrupt pipes are polled and everyone manages to keep a straight face about it.
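As a back-of-envelope sketch of what that polling costs: an interrupt endpoint is only serviced every bInterval milliseconds, so a keypress can sit in the device until the next poll. The bInterval of 10 ms below is a typical HID keyboard descriptor value, not a universal one; actual devices and host schedulers vary.

```python
# Extra latency added by USB interrupt polling, assuming a keyboard whose
# interrupt endpoint advertises bInterval = 10 (milliseconds).
def usb_poll_latency_ms(b_interval_ms):
    worst = b_interval_ms          # keypress lands just after a poll
    average = b_interval_ms / 2.0  # arrival uniformly distributed in the interval
    return worst, average

worst, avg = usb_poll_latency_ms(10)
print(f"worst ~{worst} ms, average ~{avg} ms")  # worst ~10 ms, average ~5.0 ms
```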
(Score: 1, Insightful) by alioth on Friday March 28 2014, @10:39AM
On my 1982 Sinclair Spectrum (with a Z80 CPU running at 3.5 MHz) I can send a pixel to the screen much faster than that. (In fact, I can get the entire screen updated in under 20 ms; a transatlantic IP packet takes about 60 ms.)
The fastest I can send a pixel to the screen on said computer would be something like this program, which will set the top left pixel on the screen:
ld a, 0x80      ; bit 7 set = leftmost pixel of the byte
ld (0x4000), a  ; 0x4000 = start of the Spectrum display file
This takes approximately 5 microseconds.
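As a sanity check of that figure, from the Z80 manual's instruction timings (LD A,n = 7 T-states, LD (nn),A = 13 T-states) at a 3.5 MHz clock; this ignores ULA memory contention, which slows writes into the 0x4000-0x7FFF region on a real Spectrum:

```python
# Sanity-check the "~5 microseconds" figure from Z80 T-state counts.
CLOCK_HZ = 3_500_000
t_states = 7 + 13  # LD A,n (7 T) + LD (nn),A (13 T)
microseconds = t_states / CLOCK_HZ * 1e6
print(f"{t_states} T-states = {microseconds:.1f} us")  # 20 T-states = 5.7 us
```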
(Score: 5, Informative) by Anonymous Coward on Friday March 28 2014, @12:00PM
> The fastest I can send a pixel to the screen
> on said computer would be something like this
> program, which will set the top left pixel on
> the screen:
>
> ld a, 0x80
> ld (0x4000), a
>
> This takes approximately 5 microseconds.
That doesn't send a pixel to the screen, it only updates memory that represents the screen. To actually see the new pixel, the electron gun in your CRT has to sweep through that pixel. That will take somewhere around 8 milliseconds on average, with a worst case around 17 ms.

Your fast update also excludes the whole "read user input and figure out what to do with it" part. TFA suggests ~10-100 ms for that on a modern computer; I'd guess 2-5x that on your Sinclair, for a contemporary game.

TFA argues that there's maybe 20 or so ms of lag associated with transmitting a frame to an LCD and actually displaying that frame, lag which is not present in direct-drive CRTs, but is really a small part of the user-input-to-screen-update cycle.
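Those scanout numbers fall straight out of the refresh rate: at 60 Hz, the beam revisits any given pixel once per ~16.7 ms frame, so a memory write waits anywhere from almost nothing to one full frame before it becomes light.

```python
# Rough model of CRT scanout delay at a 60 Hz refresh: after a pixel is
# written to screen memory, the beam reaches it between "almost immediately"
# and one full frame later.
REFRESH_HZ = 60
frame_ms = 1000.0 / REFRESH_HZ  # worst case: just missed the beam
average_ms = frame_ms / 2       # uniformly distributed write times
print(f"average wait ~{average_ms:.1f} ms, worst case ~{frame_ms:.1f} ms")
```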
(Score: 0) by Anonymous Coward on Friday March 28 2014, @01:24PM
I wouldn't be so sure about that. I don't know about the Sinclair, but on the C64 the keyboard was laid out (electrically) in an 8x8 matrix, connected to two memory-mapped 8-bit parallel ports. Although you do say "game", which is completely different from Carmack's test. His test only needs to poll one key, so you could set up one port ahead of time to select the correct row (assuming interrupts are disabled), after which polling the port is a single instruction plus a branch if the port reads as zero. Updating the screen background color is a third instruction. Along with a jump back to the test, that's four instructions for the entire test routine (after setting everything up), and the jump is not time-critical.
Even with 1 MHz, three instructions is pretty fast.
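Putting hedged numbers on that four-instruction loop (LDA port / BEQ / STA border / JMP back), using standard 6502 cycle counts: LDA absolute = 4 cycles, BEQ = 2 untaken (3 taken), STA absolute = 4, JMP absolute = 3, at the C64's ~1 MHz clock. The exact branch polarity depends on how the matrix row is wired, so treat this as an order-of-magnitude sketch.

```python
# Cycle arithmetic for the four-instruction C64 poll loop described above.
CLOCK_HZ = 1_000_000

respond_cycles = 4 + 2 + 4        # LDA port, branch falls through, STA border
loop_cycles = respond_cycles + 3  # plus JMP back to the test
print(f"respond in ~{respond_cycles / CLOCK_HZ * 1e6:.0f} us, "
      f"full iteration ~{loop_cycles / CLOCK_HZ * 1e6:.0f} us")
```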
(Score: 2) by Koen on Friday March 28 2014, @03:24PM
And thanks to Z80's LDIR (load, increment & repeat) instruction one could send whole screens (or parts of it) very fast.
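For scale, a straight LDIR copy costs 21 T-states per byte (16 for the final one) on an uncontended 3.5 MHz Z80, and the Spectrum's display file is 6144 bitmap bytes plus 768 attribute bytes. That puts a plain LDIR screen copy at a few frames' worth of time; the really fast full-screen copies on the machine used stack (PUSH/POP) tricks instead.

```python
# Back-of-envelope LDIR timing on a 3.5 MHz Z80, ignoring memory contention:
# 21 T-states per iteration while BC != 0, 16 T-states for the final byte.
CLOCK_HZ = 3_500_000

def ldir_ms(n_bytes):
    t_states = 21 * (n_bytes - 1) + 16
    return t_states * 1000.0 / CLOCK_HZ

print(f"bitmap only (6144 bytes): {ldir_ms(6144):.1f} ms, "
      f"with attributes (6912 bytes): {ldir_ms(6912):.1f} ms")
```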
/. refugees on Usenet: comp.misc
(Score: 5, Informative) by Boxzy on Friday March 28 2014, @11:16AM
Analog CRT screens were always superior for input lag. They had actual variable resistors to control things like contrast, brightness, hue, and volume.
Now every display has to have its own computational engine, capable of painting the menu system on the screen, because it's cheaper. Bad engineering means the stream of pixels has to be sent through the same computational pipeline as the menu. There's no real reason the signals can't still be passed through, other than money and engineering complexity (which is still money).
Go green, Go Soylent.
(Score: 2) by nitehawk214 on Friday March 28 2014, @04:27PM
You seem to be making two separate arguments.
First, that direct controls are superior to the stupid OSD on-screen controls. I fully agree with this. Even if the monitor makers could design a UI worth a damn, physical controls are almost always superior. The only issue is that modern monitors have tons of settings that can be changed, so some sort of on-screen display will almost always be necessary.
Secondly, and independently, you argue that analog displays are superior to digital. Do you even recall consumer-grade analog crt screens? Even brand new they were shit, and after a few years of heavy use they were ready for the scrapheap.
Arguing that your >$1000 professional super-display was superior is useless. Those high-end analog displays are easily outmatched in size and quality by mid-range LCD displays of today. Spend >$1000 on a current screen and you get... well, I don't know. I haven't needed to spend more than $250 on a display in more than 10 years, and I haven't had to discard one in as long, either.
"Don't you ever miss the days when you used to be nostalgic?" -Loiosh
(Score: 2) by nitehawk214 on Friday March 28 2014, @04:29PM
Addendum: Actually I do know something you get on modern super expensive displays... lots of monitor controls! Just like in the old days you can have separate buttons for brightness, contrast, etc. Are physical buttons really that expensive?
"Don't you ever miss the days when you used to be nostalgic?" -Loiosh
(Score: 1) by Boxzy on Friday March 28 2014, @08:08PM
The only place where I argue CRT was ever superior is input lag. In virtually every other respect LCDs have exceeded CRTs. LCDs introduced a problem CRTs never suffered from, except in a few minor edge cases. Personally I have never been too bothered by tiny amounts of lag, until lip-sync becomes a problem.
Go green, Go Soylent.
(Score: 1) by cybro on Monday March 31 2014, @03:17AM
You forgot black levels.
(Score: 2) by Boxzy on Monday March 31 2014, @08:04AM
Sure, that would be one of those minor edge cases. Not everybody obsesses about how black is black. "I'll stop wearing black when they invent a darker colour!"
Go green, Go Soylent.
(Score: 4, Insightful) by Foobar Bazbot on Friday March 28 2014, @04:36PM
This is bullshit. The presence of an internal framebuffer is strongly correlated to LCD vs. CRT, and not at all correlated to OSD vs. non-OSD.
The "dumb pass-through screens" were CRTs, fed off a VGA or component source. They received a pixel at a time, and displayed a pixel at a time.
Almost every current screen is an LCD of some sort, and LCDs don't display a pixel at a time: they display a whole row/column (or, depending on the matrix design, some large fraction (usually 1/2) of a row/column) at a time. Neither a true pixel-at-once (VGA/component) nor DVI/HDMI's pixel-serialized-over-ten-bits connection is suitable for directly driving these; there must be at least a line buffer (or, I suppose, an insanely wide parallel video interface that transfers a line at once). Therefore no LCD screen can really be a "dumb pass-through screen".

Moreover, if the screen is to be useful at any resolution other than the panel's native one, as is commonly required, you need at least a ring buffer of multiple lines for good vertical scaling, and it's simplest with a full framebuffer.
Note that many CRTs weren't dumb screens with a half-dozen pots to twiddle, but had menu systems so everything could be adjusted with the front panel buttons/wheels and the OSD. Yet they implemented OSD overlays without routing the video signal through a framebuffer, because adding a framebuffer, ADCing the video signal into it, blitting an OSD on, and DACing it back out to the CRT proper would have been worse and more expensive than the genlocked overlay these screens actually used. Doing the OSD menu in a framebuffer only became cheaper once the framebuffer was already there (and already causing input lag) for scaling reasons, and that was only needed with LCDs.
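The latency cost of those buffering stages is easy to put numbers on. Assuming a 1080p60 signal with the common 1125-total-scanline timing (active plus blanking), a line buffer holds the signal for roughly one scanline, while a full framebuffer holds it for roughly one frame; the three-orders-of-magnitude gap is why the framebuffer, not the line buffer, is what shows up as input lag.

```python
# Approximate hold time of a line buffer vs. a full framebuffer at 1080p60,
# assuming 1125 total scanlines per frame (active + blanking).
REFRESH_HZ = 60
TOTAL_LINES = 1125

frame_ms = 1000.0 / REFRESH_HZ          # one frame of delay from a framebuffer
line_us = frame_ms * 1000.0 / TOTAL_LINES  # one scanline of delay from a line buffer
print(f"line buffer ~{line_us:.0f} us, framebuffer ~{frame_ms:.1f} ms")
```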
(Score: 5, Informative) by wonkey_monkey on Friday March 28 2014, @11:29AM
Firstly, this is old, old news. The tweet was two years old, and the other linked article is 5 years old.
Secondly, the headline should actually read:
systemd is Roko's Basilisk