Hubble In Trouble As Satellite Trails Start Affecting It Too
The idea that we can save astronomy from satellite interference by putting telescopes in space has run into an obstacle, or more precisely 8,500 of them.
A study of images taken by the Hubble Space Telescope finds that more than one in 40 are crossed by satellite trails. In some cases these interfere with the science, wasting the exceptionally valuable time spent taking the image. Although the affected proportion is small, it's growing, undermining the claim that we can solve the problems satellites are causing for astronomers by putting the large telescopes in space.
Spotting a satellite was once rare enough to be an exciting addition to a night under the stars away from the city lights. Today, it's become an annoying impediment to enjoying the beauty of everything else. It's not only wrong to wish on space hardware; if you start, you'll never do anything else.
For astronomers the problem is not just a loss of beauty. It's becoming increasingly common for satellite trails to destroy images, often ruining precious time a scientist had to fight hard to get and holding up important research. Although this issue is getting considerable attention, a new paper in Nature Astronomy addresses an aspect that has been largely ignored.
Elon Musk, among others, has responded to concerns about satellites' effect on astronomy by saying, "We need to move telescopes to orbit anyway", but that's not necessarily a complete solution.
The Hubble Space Telescope orbits at 540 kilometers (340 miles), which is above the majority of objects humanity has put in orbit, but there are 8,460 objects more than 10 centimeters (4 inches) across above it. A team led by Dr Sandor Kruk of the Max Planck Institute for Extraterrestrial Physics recruited citizen scientists through the Hubble Asteroid Hunter project, to study Hubble's archive from 2002 to 2021 and distinguish satellite trails from asteroids.
Journal Reference:
Kruk, Sandor, García-Martín, Pablo, Popescu, Marcel, et al. The impact of satellite trails on Hubble Space Telescope observations [open], Nature Astronomy (DOI: 10.1038/s41550-023-01903-3)
(Score: 5, Insightful) by Beryllium Sphere (r) on Thursday March 09, @06:53PM (11 children)
What am I missing? Can't planning software tell you whether you'll see a satellite if you point in a given direction at a given time?
(Score: 2, Insightful) by RS3 on Thursday March 09, @07:07PM (5 children)
Purely speculating, but are _all_ satellite orbits known? I'm thinking some military ones might not be? Russian, Chinese, who-knows-whose?
I don't really know Hubble's orbit distance, but you'd think it would be above most satellite tracks.
(Score: 3, Informative) by EvilSS on Thursday March 09, @07:25PM
https://www.google.com/search?q=spy+satellite+tracking [google.com]
https://www.google.com/search?q=Hubble%27s+orbit+distance [google.com]
https://www.google.com/search?q=leo+satellite+altitudes
(Score: 2) by JoeMerchant on Thursday March 09, @07:44PM (3 children)
>I don't really know Hubble's orbit distance, but you'd think it would be above most satellite tracks.
Whatever its orbit is, it could be higher.
40 is a big number, is it too much of a stretch to ask TFS to give us a denominator?
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 3, Insightful) by janrinok on Thursday March 09, @09:34PM (2 children)
The TFS says:
I.e. more than 1 in 40 images are crossed by satellites. What 'denominator' do you want?
The Hubble Space Telescope orbits at 540 kilometers (340 miles), which is above the majority of objects humanity has put in orbit, but there are 8,460 objects more than 10 centimeters (4 inches) across above it.
It was only 6 short paragraphs....
(Score: 4, Interesting) by JoeMerchant on Thursday March 09, @10:54PM (1 child)
Sorry, skimming quickly between work distractions... gotta have some priorities in life, should just keep my fingers still when I've skimmed too quickly, I guess.
1/40 seems... unlikely to me for a mere (round numbers) 10,000 objects "above" the Hubble. What's an average Hubble exposure time? Google says 1,200 seconds (that may be just the deep field camera). The Hubble field of view varies tremendously; for the deep field camera it's 1/24-millionth of the sky.

Now, during those 1,200 exposure seconds, those 10,000 objects might be represented as streaks. They'll be moving slower than space station orbit, but just give them credit for a 90-minute orbital period, so they might complete up to about 1/4 of an orbit during the exposure. 10,000 objects: that's kind of like one streak completing 2,500 orbits. If the impact is 1/40 of images taken, that means it would take a streak 100,000 orbits long to make an impact.

Now, for some really, really bad math, take the square root of that 1/24-millionth of the sky: it's looking at roughly 1/5,000th of the sky on a side, about 2.2 arc minutes? My simple square root wasn't too far off; Google says the deep field measures 2.6 arc minutes on a side. So that's an aperture about 1/4,000th of the 180 degrees you can choose to look at, pole to pole.
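A sketch of the same back-of-envelope estimate, using only the round numbers from the comment (all of them assumptions, not measured values):

```python
# Back-of-envelope reproduction of the streak-crossing estimate above.
# Every input is a rough round figure assumed in the comment.

n_objects = 10_000           # objects above Hubble, round number
exposure_s = 1_200           # assumed typical exposure, seconds
orbit_period_s = 90 * 60     # generous 90-minute orbital period
fov_arcmin = 2.6             # deep-field camera field of view, per Google

# Fraction of an orbit each object completes during one exposure (~1/4)
orbit_fraction = exposure_s / orbit_period_s

# Total streak length laid across the sky, in orbit-equivalents (~2,200-2,500)
total_streak_orbits = n_objects * orbit_fraction

# Field of view as a fraction of the 180-degree pole-to-pole span (~1/4,000)
fov_fraction = (fov_arcmin / 60) / 180

print(f"streak length per exposure: ~{total_streak_orbits:.0f} orbit-equivalents")
print(f"FOV spans ~1/{1 / fov_fraction:.0f} of the sky, side to side")
```

Same conclusion as the comment: a couple of thousand orbit-equivalents of streak against a window that's about 1/4,000 of the sky wide.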
Actually, if you've got 100,000 streaks running across the sky and your 2.6 arc minute window only gets intersected once... that's sort of unbelievably good.
There will be times the objects aren't illuminated on the Hubble side, but probably not too often since that would usually mean the Hubble is looking sunward. I would tend to believe that, more often, these streaking objects are themselves so small and far away that they're insignificant. Again from the article: 2.6 arc minutes is a tennis ball at 100 meters, so if you've got a tennis ball at 100 km, it's going to be waaay smaller than a pixel in the image, though if it's bright enough it could still affect the image.
Of course, if we're going to bitch about 8460 manmade objects, how many asteroids and other natural objects are out there also affecting our imaging?
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by ChrisMaple on Saturday March 11, @03:21AM
There are also times when the space junk is in the Earth's shadow.
(Score: 3, Interesting) by zocalo on Thursday March 09, @09:20PM
UNIX? They're not even circumcised! Savages!
(Score: 3, Informative) by Anonymous Coward on Thursday March 09, @10:20PM (3 children)
Basically, no. First off, Hubble and other imaging satellites don't have anything like that written into their mission planning or onboard systems, so how would you get that information into any part of the system? Next, satellite orbits change all the time. You can get ephemerides for any satellite from various sources, but that only tells you about now; the orbit will be significantly different a day or more later. That's why the crossing distances for satellite collision warnings sound so big (a kilometer or so): the error bars on the satellite positions are not known very well. And they are not known very well because it is not easy to propagate orbits into the future. The Earth has an atmosphere and is not a gravitational point source, so all sorts of nonuniformities and drag mess up your predictions.
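A toy illustration of that last point (not a real propagator, and the acceleration value is purely an illustrative assumption): even a small unmodeled along-track acceleration, e.g. from drag mis-modeling, compounds into kilometers of position error within a day.

```python
# Toy kinematics, not a real orbit propagator: a small unmodeled
# acceleration grows quadratically into a large position error.
# The 1e-6 m/s^2 figure is an illustrative assumption for LEO drag error.

accel_error = 1e-6   # m/s^2, assumed unmodeled drag acceleration
t = 86_400           # one day, in seconds

position_error_m = 0.5 * accel_error * t ** 2   # simple (1/2) * a * t^2
print(f"position error after one day: ~{position_error_m / 1000:.1f} km")
```

Kilometer-scale error after a single day, which is why those collision-warning crossing distances sound so big.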
(Score: 3, Insightful) by aafcac on Friday March 10, @05:03AM
Yes, and there's also the issue of the massive increase in the number of satellites being launched. Most of them are below satellites like Hubble that are intended to be up there for decades, but there are probably more above it as well. Spacing satellites out to prevent collisions also makes it harder to find time to use telescopes that need a completely clear view: more spacing means less total clear sky available at any given time.
(Score: 0) by Anonymous Coward on Friday March 10, @12:19PM (1 child)
The Air Force (Space Force?) constantly tracks all of these objects in real time. They routinely provide warnings to the people managing the ISS, which then makes small changes in orbit to avoid collisions. And the Hubble doesn't take pictures at random; human beings on the ground plan every picture it takes.
Maybe someone should put those two groups in the same room and have them figure out how to combine resources to make sure satellite tracks don't screw up their pictures. That's a better solution than canceling all these satellites and losing the benefits they provide to everyone, just to appease a handful of astronomers.
(Score: 2) by aafcac on Friday March 10, @02:17PM
Those are conflicting goals. Safety dictates spacing the satellites out, but imaging requires being able to take the pictures when the subject is in position. Which is a lot harder as the number of satellites increases and the space between them decreases.
(Score: 3, Funny) by Anonymous Coward on Thursday March 09, @07:17PM (3 children)
The Webb telescope doesn't have this problem, does it?
(Score: 4, Informative) by takyon on Thursday March 09, @07:38PM
Most Starlink orbits going forward are going to be below 540 km, around 340 km. Maybe less in the future if it can improve service.
Meanwhile Hubble will eventually be allowed to fail and/or deorbit and burn up if it doesn't get another servicing mission [spacenews.com].
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 4, Informative) by ElizabethGreene on Friday March 10, @03:35PM (1 child)
Short answer: No.
Long answer: JWST is in a halo orbit at the Sun-Earth L2 Lagrange point. Normally when you talk about orbits, you're going around a big physical thing with a big honking gravity well, like a star or planet. Lagrange points are different: they are places where the gravity of two big honking things adds up to create a virtual point you can orbit instead. The Sun-Earth L2 Lagrange point is on the opposite side of the Earth from the Sun. JWST's orbit out there puts it 160,000 to 500,000 miles away from us, depending on where it is in that orbit. Even at its closest approach it's many hundreds of times farther away than Hubble, and still more than five times farther out than the geosynchronous orbit where most "deep space" satellites are concentrated.
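For anyone who wants to check those ratios, a quick sketch using the round figures from the text (altitudes in kilometers; the closest-approach figure is the 160,000-mile end of the range):

```python
# Sanity check of the distance comparisons above, all in kilometers.
hubble_alt = 540               # Hubble's orbital altitude
geosync_alt = 35_786           # geosynchronous altitude
jwst_closest = 160_000 * 1.609  # 160,000 miles converted to km

print(f"JWST / Hubble:  ~{jwst_closest / hubble_alt:.0f}x")   # many hundreds
print(f"JWST / geosync: ~{jwst_closest / geosync_alt:.1f}x")  # more than five
```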
(Score: 0) by Anonymous Coward on Saturday March 11, @04:40AM
Yeah well, Voyagers 1 and 2 might spoil a shot or two
(Score: -1, Troll) by Anonymous Coward on Thursday March 09, @08:09PM (1 child)
Did they try turning it off and on again? It seems to have worked for that other space-thing. Or can't they get some photoshop-image-AI to remove the satellite trails from the images?
(Score: 3, Insightful) by janrinok on Thursday March 09, @09:37PM
Please read one of the linked articles. All your questions are answered there.
(Score: 4, Interesting) by NotSanguine on Friday March 10, @07:54AM (12 children)
One option is to put multiple telescopes at the various Lagrange Points [wikipedia.org] in Earth's orbital path around the sun, as we did with the JWST [wikipedia.org].
Which removes the telescopes from the growing piles of garbage [arstechnica.com] in low earth orbit.
An added advantage is that combining data from all those telescopes would yield the resolution of a single telescope the size of Earth's orbit around the sun.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Friday March 10, @12:14PM (11 children)
No, unfortunately not, at least if you are talking about general telescopes and not radio frequencies. You need to phase the wavefronts of all those signals to get that ultimate resolution, and we can't do that. Radio astronomers can, because the wavelengths they deal with are so long, and atomic clocks are sufficiently precise and accurate, that they can time-tag their data as it comes in from each dish and then go back after the fact and phase up all the signals (they can actually figure out which wavefront from Telescope A matches up with that of Telescope B). JWST phases the light from each telescope segment actively, but its segments are held on a common structure; they spent a lot of the commissioning time making the sub-micron positional changes to get each segment into the proper position, and an active system monitors any movement and adjusts them on the fly. To do that on a larger scale, or using multiple spacecraft, would be its own major dedicated mission.
(Score: 2) by NotSanguine on Friday March 10, @02:40PM (10 children)
Aperture synthesis [wikipedia.org] can be used with EM radiation regardless of wavelength, although the mechanisms for doing so do vary based on the wavelength.
cf. https://en.wikipedia.org/wiki/Astronomical_interferometer [wikipedia.org]
Yep. It would. But that doesn't mean we shouldn't do so.
A system of telescopes at the Lagrange points would provide an effective aperture of ~186,000,000km. Probably not enough to see (10s of km) surface details on an exoplanet, but certainly enough to provide significant details of other solar systems, and definitely enough to do spectroscopy on the atmospheres of exoplanets.
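As a sketch of what such a baseline would mean in the ideal case, here's the Rayleigh diffraction limit alone for an aperture spanning Earth's orbit (~186 million miles), ignoring the phasing and light-gathering problems raised elsewhere in this thread. The 550 nm wavelength and 10-light-year target distance are illustrative assumptions:

```python
# Idealized Rayleigh diffraction limit for a synthetic aperture whose
# baseline equals the diameter of Earth's orbit. This deliberately ignores
# phasing and photon-collection limits; inputs are illustrative assumptions.

wavelength = 550e-9            # visible light, meters (assumed)
baseline = 186e6 * 1609.0      # ~186 million miles, in meters

theta = 1.22 * wavelength / baseline   # Rayleigh criterion, radians

light_year = 9.461e15          # meters
distance = 10 * light_year     # a nearby exoplanet system, ~10 ly (assumed)

print(f"angular resolution: {theta:.2e} rad")
print(f"smallest resolvable feature at 10 ly: ~{theta * distance:.2f} m")
```

The diffraction limit alone comes out absurdly fine at that baseline (sub-meter at 10 light years), so in practice it would be phasing and photon collection, not diffraction, that set the real limit.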
That seems like a good investment to me.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by NotSanguine on Friday March 10, @02:45PM
My apologies, the diameter (roughly) of Earth's orbit is ~186,000,000 miles, not ~186,000,000 kilometers
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Friday March 10, @04:04PM (8 children)
Aperture synthesis requires you to phase up the light from the separate apertures, and that's the very hard part. Whatever approach you take, to get the resolution benefit of the separated apertures you need to maintain the phase between the light coming from each of them. If you don't, you're combining them incoherently: you still get the benefit of a brighter signal from looking at the same thing with multiple telescopes, but you don't get the resolution. Say you have two 1-meter telescopes and you separate their centers by 9 meters, making an effective 10-meter-diameter telescope (at least along the line through them). If you phase them up, you get the resolving power of a 10-meter telescope (again, only along that line); if you don't coherently phase them, you only get the resolving power of an individual 1-meter telescope.
The way to picture this: take a nice parabolic imaging mirror. The parabola is a shape such that all parallel rays entering the telescope travel the same distance to reach the parabolic focus, meaning all the light rays arrive in phase. Once you deviate from that shape, the rays travel different distances, arrive out of phase, and degrade your imaging quality. The amount you can deviate before it really starts to hurt scales as a fraction of the wavelength you're using. Since JWST operates mostly in the IR, at wavelengths on the order of a micron or longer, they have to position and hold their mirror segments to a fraction of a micron, which is what all of that segment phasing during commissioning was about. If any of those mirrors deviates by, say, 100 nm from where it's supposed to be, the imaging quality suffers. It is a lot easier for RF astronomers because they work with wavelengths of meters or tens of meters, and they don't need to actively phase their telescopes: they can just record the RF signal, time-tag it with an atomic clock, and figure out after the fact which signals pair up. For shorter wavelengths we can't detect and time-tag the signals, so we have to phase these things in real time.
This is why we can't yet do this with separated telescopes: we need to actively phase up their signals, which means measuring and correcting the relative optical paths through them to a fraction of a wavelength. We could probably do it with separated RF telescopes in space, but not anything shorter in wavelength. And there are lots and lots of other issues that make this very hard, such as the fact that the combined area of those two 1-meter telescopes is 1/50th that of a filled 10-meter telescope, which means you need exposure times 50 times longer to get the same signal intensity you'd get from a real 10-meter telescope.
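The numbers behind that 1-meter/10-meter example check out directly (the 550 nm wavelength is an assumed visible value):

```python
import math

# Coherent phasing buys resolution along the baseline, but collecting
# area (hence exposure time) still scales with glass, not baseline.

wavelength = 550e-9                  # visible light, meters (assumed)
res_1m = 1.22 * wavelength / 1.0     # Rayleigh limit, single 1-m telescope
res_10m = 1.22 * wavelength / 10.0   # phased pair on a 10-m baseline

area_pair = 2 * math.pi * 0.5 ** 2   # two 1-m apertures
area_10m = math.pi * 5.0 ** 2        # one filled 10-m aperture

print(f"resolution gain when phased: {res_1m / res_10m:.0f}x")
print(f"exposure penalty vs filled 10 m: {area_10m / area_pair:.0f}x")
```

So phasing gets you the 10x resolution along the baseline, while the 1/50th collecting area is what drives the 50x exposure penalty.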
(Score: 2) by NotSanguine on Friday March 10, @04:26PM (7 children)
Except we can [nature.com]. And [sciencedirect.com] we [wikipedia.org] do [optica-opn.org]. Right now*.
*Note the dates on the papers linked above.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by NotSanguine on Friday March 10, @04:31PM
Oops. Forgot this one [harvard.edu].
There are many other papers/implementations of optical aperture synthesis that have been done over the past thirty years or so as well.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 1, Informative) by Anonymous Coward on Friday March 10, @05:57PM (5 children)
No, we haven't done it in any manner like you are suggesting, and everything I've stated still stands. The easiest way to try it yourself is to poke some holes in a lens cap and put it over your camera: what you get are multiple-aperture phased images. Yes, we've done it in the lab, as in two of the papers you linked. It was actually done by none other than Michelson himself back in the early 20th century, using what is essentially the same technique as poking holes in your lens cap (in that case, as with the lens cap, your signals are guaranteed to be in phase because you are only masking a single main imaging system). It is even done in microscopy, but also using the masking technique.
For the reasons I've mentioned, it has not been done in a flexible, dynamic system with a path to space. NPOI can do it [lowell.edu], but it relies on using the top of a mesa as a huge optical table. There have been a number of NASA missions proposed over the years too, but they have all been pitched as 30-50 years out because the technology isn't there. Places like Keck and the Large Binocular Telescope do something similar, but again rely on huge stable infrastructure, as JWST does. Any demonstrations have been done on an optical bench and/or using monochromatic light (which gets around something called the coherence-length problem that I didn't get into earlier). Antoine Labeyrie, who has done a LOT of work in this field, has been pitching a "hyper-telescope" idea [sciencesconf.org] that would be a leap beyond the NPOI concept, but again one that uses the stability of the ground to help with the phasing problem.
If you want to follow the progress in this field, you'll want to use search terms like "sparse aperture telescope" and "Fizeau imaging" or "Fizeau interferometry."
(Score: 0) by Anonymous Coward on Friday March 10, @06:04PM
I should correct my "30-50 years out" to more like "20 years out," but that was the consensus 20+ years ago and we still haven't demonstrated enough to justify a mission. We do know how to do it in principle (we know all the degrees of freedom we need to correct, and to what level), but it has not been deemed worth the effort to overcome the challenges. JWST is the design it is because that was what was deemed achievable for the mission (working at longer IR wavelengths and using a rigid common backing structure to hold the mirror segments), and even that took a hell of a lot of effort. Earlier designs were more ambitious but were not achievable. Where it is done on Earth, as at Keck, it requires literally tons of mass to make things stable.
(Score: 2) by NotSanguine on Saturday March 11, @12:40AM (3 children)
I didn't say "We have linked space-based telescopes right now creating huge synthetic apertures."
I said optical synthetic apertures are *currently* in use. You claimed that was only possible with long (infrared and longer) EM wavelengths. That's not true: it's not only possible but, as you acknowledge, is being done today, albeit over much smaller distances.
But creating such apertures with large (millions of km) separation between mirrors is an *engineering* problem, not a science problem.
I'd also point out that one way to deal with the phase/timing issues would be to collect the data (with femtosecond timestamps) then collate and synthesize all of the data back on Earth.
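For scale, a quick sketch of what "femtosecond timestamps" would have to resolve, assuming visible light at 550 nm (an illustrative choice): preserving phase means timing to a fraction of one optical cycle.

```python
# Rough scale of the timing requirement for after-the-fact phasing.
# Assumption (illustrative): green light at 550 nm.

c = 2.998e8           # speed of light, m/s
wavelength = 550e-9   # meters

period_fs = wavelength / c * 1e15   # one optical cycle, in femtoseconds
print(f"one optical cycle at 550 nm: ~{period_fs:.1f} fs")
```

One cycle is itself only about 2 fs, so femtosecond tags are roughly the period of the wave; holding phase would need timing good to a small fraction of that.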
No, we don't have a fleet of space-based telescopes at the Lagrange points today, and we won't for some time. But so what? We have the scientific understanding to design such large synthetic apertures; we just lack the engineering know-how to build them.
As such, it's time, technology, and political will, not physics, that's holding us back.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 2) by ChrisMaple on Saturday March 11, @03:44AM (1 child)
I'm just guessing here; I don't really know. If you're dealing with femtosecond timestamps from objects not visible to the naked eye, aren't you trying to label frequency and phase of single photons on 2 detectors many miles from each other? Isn't Heisenberg going to raise a theoretical objection?
(Score: 2) by NotSanguine on Saturday March 11, @05:50AM
In the scenario I'm envisioning, I don't think so.
You'd just be collecting photons with multiple CCDs in multiple locations. The interferometry would happen later, after the data has been downloaded to Earth.
As such, I don't see how there could be an issue. But I'm not a physicist either.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0) by Anonymous Coward on Saturday March 11, @04:51AM
Absolutely trivial compared to the *finance* problem, but we can start by putting a few more Webb telescopes out there. We'll see the universe in 3D