Arthur T Knackerbracket has found the following story:
Skywatchers in Spain recording meteors being transformed into brilliant streaks of light by atmospheric compression are a bit miffed – as their view was rudely interrupted by a slew of Elon Musk’s Starlink satellites.
Below is a short clip of what it looked like above La Palma, one of Spain’s Canary Islands, last week. Meteors from the Alpha Monocerotids shower crisscrossed the sky, though they become hard to spot once the satellites come flooding in.
SpaceX's table-sized Starlink birds, which sport reflective solar panels, are closer and brighter as they zip across the camera’s line of sight like machine gun bullets.
Starlink satellites during a meteor shower on Nov. 22. pic.twitter.com/wJVk1qu49E
— Patrick Treuthardt, Ph.D. (@PTreuthardt)
Denis Vida, a geophysics PhD student at the University of Western Ontario, Canada, who wrote the code used to generate the footage above, captured by one of the Global Meteor Network’s cameras, said the obstruction happens every day.
“Note that this was not a one time occurrence,” he told The Register. “We see this every day before dawn with about half the cameras in our network. During that time we effectively lose about half our field of view because of this.
[...] “These satellites will most definitely interfere with important astronomical observations which can have implications on predicting future meteor shower outbursts. Accurate meteor shower predictions are essential for understanding the hazard they pose to spacecraft – do you see the irony? – and astronauts in orbit.”
(Score: 0) by Anonymous Coward on Monday December 02 2019, @01:19PM (3 children)
What makes this impractical (assuming, of course, you've worked out the technical details of determining the absolute positions of your telescopes and holding their relative positions to within a handful of nanometers) is that your signal is limited by how much of your virtual telescope aperture is actually filled with collecting area. Sure, putting them spaced very large distances apart gets you the angular resolution of an aperture with a diameter of that separation (but only in the direction between the two!), but your signal-to-noise is horrible because the combined collection area of the telescopes is only a very, very small fraction of the virtual aperture. Your signal for the virtual telescope goes as the ratio of the actual aperture area to the virtual aperture area.
For instance, take two Hubble telescopes (2.4-meter diameter; call it D_h). If you wrap some space-grade duct tape around them and fly the two tubes, the circle that circumscribes them has a diameter of 2 Hubbles (D = 2*D_h), giving a collection efficiency (e = (D_h/D)^2) of 25%. Now separate them so that they are 10 Hubble diameters apart (measured from their outer edges, to make the math easier), and the collection efficiency drops to 1%. You can see where this is going REAL fast. So to improve your signal-to-noise, you fly more telescopes (or you integrate that much longer), but in general you'll never fly enough to overcome the signal collection problem except for some very specific limiting cases. You've got a much better chance doing it at RF wavelengths (basically, moving the VLBA into space) because you don't need to actively maintain the relative phases of the apertures (you can time-tag the data as it is collected and phase it up in post-processing), but you still need their relative position knowledge to a fraction of the wavelength.
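To make the scaling concrete, here is a rough Python sketch (mine, not the commenter's) of the fill-factor arithmetic. It reproduces the comment's single-dish ratio e = (D_h/D)^2 and also reports the total for N dishes, which just doubles the numbers here without changing the trend:

    import math

    def fill_factor(d_dish, d_virtual, n_dishes=1):
        """Ratio of real collecting area to the area of the circumscribing
        'virtual aperture' of diameter d_virtual."""
        dish_area = math.pi * (d_dish / 2.0) ** 2
        virtual_area = math.pi * (d_virtual / 2.0) ** 2
        return n_dishes * dish_area / virtual_area

    D_H = 2.4  # Hubble primary mirror diameter, meters

    # Two Hubbles taped side by side: the circumscribing circle is 2*D_H wide.
    print(fill_factor(D_H, 2 * D_H))      # 0.25, the comment's per-dish figure
    print(fill_factor(D_H, 2 * D_H, 2))   # 0.50, counting both mirrors

    # Spread so the virtual aperture spans 10 Hubble diameters.
    print(fill_factor(D_H, 10 * D_H))     # 0.01
    print(fill_factor(D_H, 10 * D_H, 2))  # 0.02, still a sliver of the virtual area

Either way you count it, the efficiency falls off as 1/D^2, which is the whole point.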
(Score: 2) by fyngyrz on Monday December 02 2019, @01:41PM (2 children)
In a deep-space array, you can put the units anywhere you want relative to each other. At any time, really. So as you increase the size of the array over time, you can vary the aperture as appropriate for the task at hand.
You could arrange the units at the same spacing they would have had in orbit (or better it), but without the observational penalties of being in orbit: considerably less gets in the way of a deep-space array's observations than an orbiting array's, so it can inherently observe more things, more easily, with more of the constellation's units active at any one time.
The significant problem with a deep-space array is mainly getting the units out there — the initial cost is higher. But the performance advantages are clear. There's an additional advantage as well... in orbit, there's a practical limit to the number of units one can deploy, and higher risk to them as well; there are already way too many bits of broken/obsolete junk and currently active tech zipping around up there. And these things interfere with ground-based observation and pollute the view of the sky, too. But moving away from Earth orbit, the available deployment opportunities skyrocket, no pun intended, but well, there it is. 😊
Ideally, such an observational system would be established sooner rather than later, even if small, and then added to constantly, as there is always a benefit to be had by doing so, although that benefit decreases in magnitude as the array gets larger. Still, more units and a larger baseline will always be better to some degree. Such an array would ideally be added to automatically and cost-free once robotic space-based manufacturing finally gets going, which is something we really need to do anyway.
--
No, I didn't trip. That was a random gravity check.
(Score: 0) by Anonymous Coward on Monday December 02 2019, @08:21PM (1 child)
If we're talking about optical wavelengths here, I think you don't have an appreciation for the technical challenges that are involved. Getting them into a deep space orbit is by far the easiest part of the whole thing. Even just doing two telescopes on separated spacecraft is one of those "maybe next decade" problems. The thing that gets you is that this kind of coherent beam combination needs to be done in real time, before you take the data, not after the fact like you can do with RF wavelengths. So you need to phase up the light from a distant source, which means measuring and maintaining the optical path length between all of the separate telescopes to the tens-of-nanometers level. Not only do you need to hold the relative pathlengths stable, you need to make them equal. It is easy to picture this with a 2-telescope system where their beams are combined in the middle. If the beam combiner is exactly halfway between the two telescopes and they are looking at a distant source directly in front of them, it is easy to visualize how that would be done.
Now let's say the source is 45 degrees off to the side in the direction of one of the telescopes. If your telescope separation is L, then without moving anything you've now added 0.707*L of optical pathlength between the light hitting each telescope, because that is the extra distance the light has to travel to reach the farther-away telescope. So you now need to make up this pathlength somehow. Two things you can do are to rotate the whole array so that the new source is directly in front of it again, or to add optical path length to the telescope that gets hit with the light first. Let's say your telescopes are spaced 1 km apart, so somehow you need to add 707 meters of delay. The way they do it with the Navy Precision Optical Interferometer [lowell.edu] is that they have a set of little cars on tracks with mirrors on them, where each one can move something like 30 meters, so as a star passes overhead, the delay lines move in and out to maintain the same optical pathlength between each telescope.
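The 707 meters is just the baseline times the sine of the off-axis angle; here is a quick Python sketch (my own, with the comment's numbers plugged in):

    import math

    def geometric_delay_m(baseline_m, off_axis_deg):
        """Extra optical path, in meters, that the wavefront travels to reach
        the farther telescope: baseline * sin(off-axis angle)."""
        return baseline_m * math.sin(math.radians(off_axis_deg))

    print(geometric_delay_m(1000.0, 45.0))  # ~707.1 m of delay to make up

    # A delay-line cart with ~30 m of travel, as described above for NPOI,
    # covers only a small slice of that, before even worrying about holding
    # the path to tens of nanometers.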
What is implied in the above is that you can in principle measure the separation between the two telescopes to tens of nanometers, and even hold that relative position, but you can't measure the out-of-plane differences between the two. Think of the case where your two telescopes are not in a plane, but one is a little bit closer to the star than the other, and you have to measure that distance from within the plane. You quickly end up with your "small" separate telescopes now being large spacecraft to handle all the pathlength interferometers you need to make those measurements, plus the pathlength-correcting piezoelectric elements, plus all the station keeping and (very high) precision gyros, etc., etc., etc.
I'll leave as a problem for the student to consider pointing stability and accuracy (hint: turn it all into pathlength differences).
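One way to read that hint, as a back-of-the-envelope sketch (my numbers, not the commenter's): a pointing error of epsilon radians tilts the wavefront across the aperture and produces roughly D * epsilon of edge-to-edge path difference, so a tens-of-nanometers path budget on a Hubble-sized dish already implies milliarcsecond-class pointing.

    import math

    MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0  # radians -> milliarcseconds

    def max_tilt_mas(aperture_m, path_budget_m):
        """Largest tilt (in milliarcseconds) that keeps the edge-to-edge path
        error across the aperture within the budget (D * epsilon <= budget)."""
        return (path_budget_m / aperture_m) * MAS_PER_RAD

    print(max_tilt_mas(2.4, 50e-9))  # ~4.3 mas for a 50 nm budget on a 2.4 m dish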
It is a really really really hard thing to do, and we're just talking about two individual telescopes (plus a third to be the beam combiner).
I should note that the LIGO-in-space ideas are a lot easier to do because you are not trying to phase up on a distant source, but on each other. But keep in mind that even though that is "a lot easier", it is still a really hard engineering problem that will take many years of development to pull off (and those LIGO-like spacecraft won't be little cubesats).
(Score: 2) by fyngyrz on Wednesday December 04 2019, @12:50AM
No doubt; I never thought otherwise. But in the end, that part of it is just an engineering problem.
Engineers will be happy to solve those things.
The real problems are money and politics. But I repeat myself.
--
Some drink from the fountain of knowledge. Others gargle.