The rise and fall of FireWire—IEEE 1394, an interface standard boasting high-speed communications and isochronous real-time data transfer—is one of the most tragic tales in the history of computer technology. The standard was forged in the fires of collaboration. A joint effort from several competitors including Apple, IBM, and Sony, it was a triumph of design for the greater good. FireWire represented a unified standard across the whole industry, one serial bus to rule them all. Realized to the fullest, FireWire could replace SCSI and the unwieldy mess of ports and cables at the back of a desktop computer.
Yet FireWire's principal creator, Apple, nearly killed it before it could appear in a single device. And eventually the Cupertino company effectively did kill FireWire, just as it seemed poised to dominate the industry.
The story of how FireWire came to market and ultimately fell out of favor serves today as a fine reminder that no technology, however promising, well-engineered, or well-liked, is immune to inter- and intra-company politics or to our reluctance to step outside our comfort zone.
(Score: 2) by kaszz on Friday June 23 2017, @05:56AM (7 children)
What surprises me is that USB is such a fuckery for a peripheral port..
Couldn't they at least have designed it without most of the obvious crap?
It is shit!
* Polled 1000 times per second (hello? hello?)
* Half-duplex, with back-to-back congestion that can physically burn the port.
* Relies on a single host at the top of the tree to control the network. All communication is between the host and one peripheral.
* Complex packet content and unusual payload sizes.
* Single-ended electrical signaling for out-of-band indications.
* Limited to a maximum cable length of 5 meters.
* A complete mess when using hubs with different USB versions.
* Weak power at 2.5 watts.
* No electrical isolation.
* Mechanically weak connectors.
Now there are newer versions of USB. But it will always be stuck with the shitty legacy because you can't know what will be plugged in.
(Score: 2) by Immerman on Friday June 23 2017, @03:42PM (6 children)
Not that I'm a huge fan of the USB design, but...
> Polled 1000 times per second (hello? hello?)
Seems outlandish by human reference frames, but to the computer that's only once every few million cycles... Granted, an interrupt-based design would be even more efficient.
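To put a number on "a few million cycles" (a rough back-of-envelope sketch; the 2 GHz CPU clock is an assumed figure, not from the thread):

```python
# Cycles elapsed between USB polls, assuming a 2 GHz CPU.
# USB full speed issues one frame per millisecond (1000 Hz).
cpu_hz = 2_000_000_000
poll_hz = 1_000
cycles_per_poll = cpu_hz // poll_hz
print(cycles_per_poll)  # 2000000 cycles between consecutive polls
```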
>Relies on a single host at the top of the tree to control the network. All communications are between the host and one peripheral.
That does simplify things dramatically on the device side, which, given the plethora of crappy barely-compliant-enough peripherals out there, is probably a good thing. It also sidesteps a lot of security and compatibility issues - for example, you can be sure your phone and PC aren't corrupting your external hard drive by modifying the file system simultaneously.
> A complete mess using hubs with different USB versions.
Really? I've had whole "trees" of USB hubs of various ages connected and never encountered any problems. Obviously everything downstream from an old hub would be limited to the old standards, but that's true of pretty much anything.
(Score: 2) by kaszz on Friday June 23 2017, @04:03PM (5 children)
The polling is an issue because it causes latency. And when there are many devices on the bus, spread across different hubs, that latency multiplies.
A single host at the top of the tree controlling the network makes it impossible to relieve the controller as an I/O bottleneck. And it adds latency. If a feature is a security problem, then make it conditional on operator approval. Not impossible.
Mixed USB versions cause some trouble when the hub is doing protocol translation.
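The "latency multiplies" claim can be illustrated with a toy model (this is a deliberately simplified round-robin sketch, not the real USB bandwidth scheduler): if each 1 ms frame only has room for a fixed number of interrupt transactions, a device's worst-case service interval grows with the number of devices sharing the bus.

```python
import math

def worst_case_interval_ms(n_devices: int, slots_per_frame: int) -> int:
    # Round-robin over frames: each device gets one slot
    # every ceil(n_devices / slots_per_frame) frames (1 frame = 1 ms).
    return math.ceil(n_devices / slots_per_frame)

print(worst_case_interval_ms(3, 4))   # 1 -> the bus keeps up
print(worst_case_interval_ms(12, 4))  # 3 -> latency has tripled
```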
(Score: 0) by Anonymous Coward on Friday June 23 2017, @05:04PM (1 child)
Sure, they should poll faster. 1000 Hz is borderline for music.
The UHCI interface is also kind of defective. The polling should happen fully in hardware, without needing to involve the OS.
6250 Hz, 8000 Hz, 8192 Hz, 10000 Hz, 11025 Hz, 12000 Hz, or 12500 Hz would have been much nicer (with various different clock-compatibility tradeoffs).
But of course affordable chips (like mouse and keyboard chips) were slow back in the day, so the sort-of-acceptable 1000 Hz was chosen.
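One way to see why 1000 Hz is borderline for music (an illustrative sketch, not from the comment): at a 1 ms service interval, audio samples have to be batched per poll, and sample rates that don't divide evenly by 1000 force uneven packet sizes or resampling.

```python
# Samples that must be delivered per 1 ms poll at common audio rates.
# Non-integer ratios mean packet sizes can't be constant.
for rate_hz in (8000, 11025, 44100, 48000):
    per_poll = rate_hz / 1000
    note = "even" if per_poll.is_integer() else "uneven packets"
    print(f"{rate_hz} Hz -> {per_poll} samples/poll ({note})")
```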
(Score: 2) by kaszz on Sunday June 25 2017, @02:16AM
They could have allowed an option to specify a faster polling speed. But then Winteloys and foresight are incompatible, as proven by history...
(Score: 2) by Immerman on Saturday June 24 2017, @03:09PM (2 children)
True. But then, USB was initially designed as more of a peripheral bus, for applications where latency was basically a non-issue. You're not going to notice 1 ms of latency in your mouse, keyboard, or external floppy drive. Even today it's a rare situation where it's going to matter, though granted, it's a pretty lousy interface for a data-heavy simulation datastore or other latency-constrained application.
(Score: 2) by kaszz on Sunday June 25 2017, @02:23AM (1 child)
It's braindead by design. Any signal measurement and characterization device will easily suffer. And it's not that it would cost a lot to make it sane.
(Score: 2) by Immerman on Sunday June 25 2017, @01:45PM
Only if the device requires real-time feedback from the PC it's feeding data to - so long as it's only reporting what it measured, a few ms of delay in getting the information is probably irrelevant. And if you're trying to precisely synchronize between devices, then polling versus interrupts is irrelevant; what you need is a shared clock (which, honestly, I have no idea if USB provides in a useful manner).
As for not costing a lot to make it sane - maybe not in relation to a piece of signal-processing equipment, but it would cost quite a bit more in relation to a $1 mouse (or $5 back when the standard was first created). And changing the standard today would quite likely cost backward compatibility, which is far more expensive.