SpallsHurgenson writes "Steve Perlman is ready to give you a personal cell phone signal that follows you from place to place, a signal that's about 1,000 times faster than what you have today because you needn't share it with anyone else.
"It's a complete rewrite of the wireless rulebook," says Perlman. The technology is now called pCell - short for "personal cell" - and it allows streaming video and other data to phones with a speed and a smoothness you're unlikely to achieve over current cell networks.
Perlman's invention - formerly known as DIDO - discards the current arrangement of cells shared by many users, giving each phone its own tiny cell, a bubble of signal that goes wherever the phone goes. This "personal cell" provides just as much network bandwidth as today's cells, Perlman says, but you needn't share the bandwidth with anyone else. The result is a significantly faster signal."
(Score: 5, Informative) by Foobar Bazbot on Sunday February 23 2014, @05:11AM
The idea of DIDO (distributed input, distributed output) is simple, and not wrong. If I have a bunch of transmitters capable of arbitrary transmission in a certain band, I can use them to construct a field for a bunch of receivers such that each receiver, sampling that field at one point, sees its own single, full-bandwidth signal. As long as I have more transmitters than receivers, this is generally possible -- it's just a big hairy radio interferometry demo, like SKA running backwards. More receivers than transmitters means I don't have enough DoF, obviously -- exactly equal is theoretically enough, but completely impractical. (And obviously, it works both ways. If I replace all the transmitters with transceivers labeled "T" and all the receivers with transceivers labeled "R", then I can discriminate all the "R" units' transmissions at once, if I have more "T" units listening than there are "R" units.)
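In linear-algebra terms, a minimal numpy sketch (the channel matrix, signal values, and dimensions are all made up for illustration): with more transmitters than receivers, the central precoder just solves Hx = s for the transmit vector x, e.g. via the pseudoinverse, and each receiver's one-point sample of the field comes out as its own signal.

```python
import numpy as np

rng = np.random.default_rng(0)

R, T = 3, 5                      # 3 receivers, 5 transmitters (T > R, so enough DoF)
# Made-up complex channel matrix: entry (r, t) is the propagation from
# transmitter t to receiver r.
H = rng.normal(size=(R, T)) + 1j * rng.normal(size=(R, T))

s = np.array([1.0, -2.0, 0.5])   # the per-receiver signals we want delivered

# Precode: pick a transmit vector x with Hx = s (minimum-power solution
# via the pseudoinverse). This is the central, all-at-once computation.
x = np.linalg.pinv(H) @ s

received = H @ x                 # what each receiver actually samples
print(np.allclose(received, s))  # True: each receiver sees only its own signal
```

With T > R and a generic H, the system is underdetermined, so a solution exists and there is DoF to spare; with R > T it generally has no solution, which is the "more receivers than transmitters" failure described above.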
But there are several serious practical considerations. Zeroth, you need as many towers as there are users. That's kinda obvious, so it kinda doesn't count. But still, think about filling, say, New York City, with enough antennas that there are more antennas than people... That's a mind-boggling infrastructure commitment.
First, there's the sheer computing power needed for real-time synthesis of all the signals for each transmitting antenna -- each transmitter's required signal is a function of the signals to be sent to every receiver, so this must all be calculated centrally.
Second, you need good knowledge of the transfer function from each transmitter to each receiver -- which, yes, can be gained by having them transmit a known signal, since the transfer function is generally symmetric; no need to completely characterize the propagation and use GPS positions or anything. But that's still a gigantic complex matrix that has to be kept track of and used in the aforementioned real-time computations.
Third, there's the backhaul requirements -- again, the computations are central, so a full-bandwidth signal must be reliably transported to each transmitter, regardless of network utilization. AIUI, the sort of leased-line and point-to-point-microwave infrastructure this dictates is perfectly normal for cell-tower backhaul, but the point is you can't mitigate the expense of one-transmitter-per-user by going with best-effort backhaul over cable modems and such -- each one needs reliable, full-bandwidth backhaul.
Fourth, it's highly centralized. This is something of a value judgement, but I feel we should be moving away from this sort of centralization, not towards it. The "practical" way of saying it is to suggest that centralization makes it less robust, more prone to a failure bringing down the whole network, etc. -- those are valid concerns, but they're also mitigable by redundancy, by putting the central signal-processing in some cave where it won't get hit by errant cars, etc. Besides, my real concern isn't that it's not robust, but that the centralization is an ugly, backward, design choice.
So it's possible, but I'm quite skeptical that it actually makes sense economically. If you're up for rolling out millions of cell towers, why not have an increased number of cells operating just as they do now, which gets you fewer users per cell, and gives proportionate performance improvements with fractional rollout, instead of this DIDO scheme that doesn't work at all without the whole infrastructure?
(Score: 3, Informative) by frojack on Monday February 24 2014, @01:51AM
Quote: Zeroth, you need as many towers as there are users.
No, not really. You just need an array of tiny antennas on the towers, steerable (electronically) would be ideal.
This isn't exactly new tech; it's been in use (in its simplest form) since 1905, and is on just about every Navy ship bigger than a harbor tug.
http://en.wikipedia.org/wiki/Phased_array [wikipedia.org]
Aereo has already developed a system with a massive number of tiny individual antennas for reception, and subtle modifications would allow using something similar for transmission.
And remember, the system works because at the intersection of two radio beams from different directions there is a spot where they reinforce each other. Aiming that intersection at your phone (with a phased array) is a simple task for computers.
Changing that aim point (since nothing physically moves) is easy too. So when your phone does not need data, they can use the antenna pairs for a different phone, maybe closer or farther or just off to the left or right.
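That aiming step is just applying per-element phase offsets. A minimal numpy sketch of steering a uniform linear array (carrier frequency, spacing, and element count are illustrative, not anything pCell-specific):

```python
import numpy as np

# Steer a uniform linear array: phase each element so its carrier arrives
# in phase at angle theta off broadside. All numbers here are illustrative.
c = 3e8
f = 2.0e9                      # 2 GHz carrier
lam = c / f
d = lam / 2                    # half-wavelength element spacing
N = 8                          # 8 elements
theta = np.deg2rad(30)         # steer 30 degrees off broadside

n = np.arange(N)
phases = 2 * np.pi * d * n * np.sin(theta) / lam   # per-element phase offsets
weights = np.exp(-1j * phases)                     # complex weights to apply

def gain(angle):
    # Array response in a given direction: coherent sum of the weighted elements.
    steer = np.exp(1j * 2 * np.pi * d * n * np.sin(angle) / lam)
    return abs(weights @ steer)

print(gain(theta))                  # ~N = 8, the full coherent gain on target
print(gain(np.deg2rad(-30)) < 8.0)  # True: off-target directions get less
```

Re-aiming is just recomputing `weights` -- nothing physically moves, which is frojack's point about time-sharing the array between phones.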
So you don't need new towers, and you certainly don't need one tower per phone.
You just need different antennas on those existing towers.
(Well, you probably do need slightly greater tower density than we have now, because in a LOT of places your phone can only "see" one usable tower. For this to work, you really need at least two.)
No, you are mistaken. I've always had this sig.
(Score: 1) by Foobar Bazbot on Monday February 24 2014, @09:53PM
Yes, one tower per independent transmitter is an oversimplification. No, two towers with phased arrays is not the same thing.

Picture two towers with at least two DoF each, so you can transmit desired signals on two beams each. (In this example, steering nulls and/or more beams won't help.) We have two arbitrarily placed clients.

s1 and s2 are the desired signals to be received by clients 1 and 2. A1, A2, B1, B2 are the transmitted signals in each beam from tower A or B to client 1 or 2. Ignoring propagation delay and loss, we get:

/s1\   /1 1 0 0\ /A1\
\s2/ = \0 0 1 1/ |B1|
                 |A2|
                 \B2/
(Score: 1) by Foobar Bazbot on Monday February 24 2014, @09:55PM
Please disregard above; complete post to follow...
(Score: 2, Informative) by Foobar Bazbot on Monday February 24 2014, @11:19PM
Yes, one tower per independent transmitter is an oversimplification. No, two towers with phased arrays is not the same thing. Note that DIDO is a particular, extreme subset of MIMO -- while using multiple towers with phased arrays is definitely MIMO, it's not the same as DIDO and doesn't have the same characteristics.
Picture two towers with at least two DoF each, so you can transmit desired signals on two beams each. (In this example, steering nulls and/or more beams won't help.) We have two arbitrarily placed clients.

s1 and s2 are the desired signals to be received by clients 1 and 2. A1, A2, B1, B2 are the transmitted signals in each beam from tower A or B to client 1 or 2. Ignoring propagation delay and loss, we get:

/s1\   /1 1 0 0\ /A1\
\s2/ = \0 0 1 1/ |B1|
                 |A2|
                 \B2/
which is trivially solved (e.g. A1=s1, B2=s2, B1=A2=0) with 2 DoF to spare.
Adding client 3 at the intersection of beams A2 and B1 yields:

/s1\   /1 1 0 0\ /A1\
|s2| = |0 0 1 1| |B1|
\s3/   \0 1 1 0/ |A2|
                 \B2/
which is also trivially solvable: letting A1=s1 and B1=0, then A2=s3 and B2=s2-A2=s2-s3.
But adding a fourth client at the intersection of beams A1 and B2 yields:

/s1\   /1 1 0 0\ /A1\
|s2|   |0 0 1 1| |B1|
|s3| = |0 1 1 0| |A2|
\s4/   \1 0 0 1/ \B2/
which is, of course, singular (rows 1+2 and rows 3+4 both sum to (1 1 1 1)), and has no solution. Suddenly our four DoF aren't enough to serve four clients, and increasing the DoF on each array doesn't help as long as they're localized in two towers and the four clients are in this geometric arrangement.
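That singularity is easy to check numerically -- a numpy sketch of the same 4x4 beam-incidence matrix:

```python
import numpy as np

# The four-client geometry from above: rows are clients, columns are
# beams (A1, B1, A2, B2); a 1 means that beam reaches that client.
M = np.array([[1, 1, 0, 0],    # client 1: beams A1 + B1
              [0, 0, 1, 1],    # client 2: beams A2 + B2
              [0, 1, 1, 0],    # client 3: beams B1 + A2
              [1, 0, 0, 1]])   # client 4: beams A1 + B2

print(np.linalg.matrix_rank(M))   # 3, not 4: the system is singular
# Rows 1+2 and rows 3+4 both sum to (1 1 1 1), so one equation is redundant
# and four independent signals can't be delivered.
```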
The radial extent over which an array or cluster of transmitters stays coherent grows with range: at long range (relative to the array size), you get purely radial beams and nulls extending to infinity. Those are usually what's desired, which is why a compact array (small relative to target distance, though still large compared to the wavelength) is commonly chosen. But for DIDO, those beams can intersect in multiple places, yielding singularities like the one in the example above. Since the idea of DIDO is not to form beams for their own sake, but to form local "hotspots" of coherent signal, it's far better to have the array quite large compared to the target distance -- in other words, to have the transmitters distributed (hence the name) throughout the volume in which the clients operate.
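A rough numerical illustration of that point (all geometry and numbers are made up): phase N transmitters to add coherently at a target 100 m away, then measure how much of a radial line through the target stays within half the peak field amplitude. A compact array stays coherent along almost the whole radial line (a beam), while a distributed array confines the hotspot tightly.

```python
import numpy as np

lam = 0.15                               # ~2 GHz wavelength, metres
target = np.array([0.0, 100.0])          # focus point, 100 m from the array line

def spot_width(tx_positions):
    # Phase each transmitter so its carrier arrives in phase at the target...
    d_target = np.linalg.norm(tx_positions - target, axis=1)
    # ...then sample field magnitude along a radial line through the target.
    rs = np.linspace(50.0, 150.0, 2001)
    pts = np.stack([np.zeros_like(rs), rs], axis=1)
    d = np.linalg.norm(tx_positions[:, None, :] - pts[None, :, :], axis=2)
    field = np.abs(np.exp(1j * 2 * np.pi * (d - d_target[:, None]) / lam).sum(axis=0))
    # Width: fraction of the 100 m line within half the peak amplitude.
    return (field > field.max() / 2).mean() * 100.0   # metres

N = 16
compact = np.stack([np.linspace(-1, 1, N), np.zeros(N)], axis=1)      # 2 m array
spread = np.stack([np.linspace(-100, 100, N), np.zeros(N)], axis=1)   # 200 m array

# The compact array's "spot" runs nearly the whole line (a radial beam);
# the spread array pins the hotspot down to a tiny radial extent.
print(spot_width(compact), spot_width(spread))
```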
In practice, lumping a few transmitters on each tower, but still having several towers in view at any time, gets you enough distribution -- one per tower (or even three per tower, with sector antennas) isn't needed.
Sure, you've just described TDMA. But that's what we're (supposedly) trying to get away from -- DIDO gives each client all the bandwidth, all the time. If you argue that each client doesn't need that (at least WRT mobile phones), because we're each actually using full bandwidth only a small fraction of the time, (and not all the same fraction), you're absolutely right, and that's one reason why building out more smaller cells makes more sense than DIDO -- you can stop when 99% of clients have enough bandwidth 99% of the time.
I'm not sure DIDO makes any practical sense for anything, but if it does, it's for applications where only very limited bandwidth is possible, and going with smaller cells containing fewer users is for one reason or another impractical. The NVIS example shown in the DIDO white paper is the sort of thing that could make sense.