
Amateur Astronomers Amassed Into Virtual Telescope Array

posted by Woods on Thursday June 26 2014, @07:35PM
from the alliteration-is-hard dept.

Not sure how many people here are into Amateur Astronomy. This is a neat project if it works as advertised.

Amateur astronomers worried that Big Astronomy would render them obsolete can relax: the kinds of techniques used to create huge virtual telescopes are now being applied to the huge collections of astro-pics published on the Internet. As keen astronomy-watchers know, the effective aperture of telescopes can be expanded by linking multiple instruments in different parts of the world. In radio astronomy, this is the principle behind the Square Kilometre Array, and the same techniques can be applied to optical telescopes.

What's different about the proposal in this paper at arXiv, whose authors are led by Dustin Lang of Carnegie Mellon University (along with David Hogg of New York University and Bernhard Schölkopf of the Max Planck Institute in Germany), is that they want to correlate and combine the vast store of astronomy images that amateurs publish on the Internet.

Example image here

The top row shows some of the input images Lang used to create the final composite. The final tone-mapped consensus image, bottom right, shows debris from the galactic cataclysm that isn't visible in any of the individual source images.

  • (Score: 2) by Tork on Thursday June 26 2014, @07:59PM

    by Tork (3914) Subscriber Badge on Thursday June 26 2014, @07:59PM (#60534)
    "Amateur Astronomers Amassed Into Virtual Telescope Array"

    I found a picture [tumblr.com] of the inner workings of this telescope.
    --
    🏳️‍🌈 Proud Ally 🏳️‍🌈
    • (Score: 3, Insightful) by Woods on Thursday June 26 2014, @08:17PM

      by Woods (2726) <woods12@gmail.com> on Thursday June 26 2014, @08:17PM (#60541) Journal

      At least it is way better than TFA: "Sky-scraping boffins mash amateur astronomers into huge virtual telescope."

      On topic: I am very excited to see what this project turns up; the example image alone is pretty amazing. I think the picture on the bottom left of the image is all the photos added together, with the bottom right being the finished result. If that is the case, I wonder if we could try our hand at an amateur-astronomer deep field or something. Might be worth a shot.

      • (Score: 0) by Anonymous Coward on Thursday June 26 2014, @08:40PM

        by Anonymous Coward on Thursday June 26 2014, @08:40PM (#60550)

        Thanks for the feedback -- title has been updated!

  • (Score: 3, Insightful) by BsAtHome on Thursday June 26 2014, @08:10PM

    by BsAtHome (889) on Thursday June 26 2014, @08:10PM (#60538)

    The results are very nice. Not up to par with the big telescopes, but still showing very nice detail not usually obtainable by a lone amateur. The results may improve considerably when a group of amateurs decides to make a coordinated imaging effort. If they all calibrate their recordings, the result will improve further. I look forward to a brand new class of cosmic images with fantastic detail.

    • (Score: 3, Interesting) by umafuckitt on Thursday June 26 2014, @08:24PM

      by umafuckitt (20) on Thursday June 26 2014, @08:24PM (#60544)

      The whole point is that this, in effect, is a coordinated imaging effort. Frankly, the quality of the M51 shot is quite close to the HST image [robgendlerastropics.com] in terms of seeing the faint stuff. This is reasonable, since amassing large numbers of amateur shots is similar to taking a very, very long exposure. Where the technique falls short is in resolution: this approach is not the same as a large array of telescopes. The mirrors on each scope are small and they're not linked, and they are all hobbled by atmospheric distortion. Consequently, this approach will always get its ass kicked by larger instruments and those with adaptive optics or in orbit.

      • (Score: 3, Informative) by BsAtHome on Thursday June 26 2014, @08:49PM

        by BsAtHome (889) on Thursday June 26 2014, @08:49PM (#60555)

        The paper says that they need to adapt the images separately to make them compatible. The "long exposure" trick of combining images only works if the images can be summed in a consistent way. The paper describes the enhancement algorithm they use to make the images compatible, including spatial resampling.
        The discussion starts with: "We have proposed a system that can automatically combine uncalibrated, processed night-sky images to produce high quality, high dynamic-range images covering wider fields." If that does not qualify as uncoordinated, then what does?

        My point being: if you have camera spectral and spatial calibration values before you sum the images, then the convergence will improve and so will the result. That, combined with the described method, would make the result even more spectacular.
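
        None of this is the authors' actual pipeline, but the resample-then-combine idea is easy to sketch. A minimal numpy/scipy version, assuming the per-image alignment transforms have already been found (e.g. by plate-solving with something like astrometry.net) and using sigma clipping to keep bad frames from dominating:

        ```python
        import numpy as np
        from scipy import ndimage

        def combine_calibrated(images, transforms, shape):
            """Resample each image onto a common pixel grid, then take a
            sigma-clipped mean so outliers (satellite trails, hot pixels,
            one bad frame) don't ruin the stack."""
            resampled = []
            for img, t in zip(images, transforms):
                # affine_transform maps output coordinates back to input
                # coordinates; t is a 3x3 homogeneous alignment matrix.
                resampled.append(ndimage.affine_transform(
                    img, t[:2, :2], offset=t[:2, 2], output_shape=shape))
            stack = np.stack(resampled)

            # Reject pixels more than 3 sigma from the per-pixel median.
            med = np.median(stack, axis=0)
            sigma = np.std(stack, axis=0) + 1e-9
            keep = np.abs(stack - med) < 3 * sigma
            return (stack * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
        ```

        The paper's real contribution is handling uncalibrated, nonlinearly processed images, which this sketch ignores; with proper calibration data, a plain robust average like the above already goes a long way.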

  • (Score: 2) by wonkey_monkey on Thursday June 26 2014, @10:17PM

    by wonkey_monkey (279) on Thursday June 26 2014, @10:17PM (#60613) Homepage

    The top row shows some of the input images Lang used to create the final composite. The final tone-mapped consensus image, bottom right, shows debris from the galactic cataclysm that isn't visible in any of the individual source images.

    I'll have to take your word for that, because the example image is tiny. The final image doesn't look much different from input image #2.

    --
    systemd is Roko's Basilisk
    • (Score: 3, Informative) by gringer on Friday June 27 2014, @12:08AM

      by gringer (962) on Friday June 27 2014, @12:08AM (#60651)

      I'll have to take your word for that, because the example image is tiny. The final image doesn't look much different from input image #2.

      The resolution of the image in the paper [arxiv.org] is a bit better. It also mentions under the image that it was tone-mapped to match image #2, so it's not too surprising that they look similar.

      --
      Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
      • (Score: 2) by wonkey_monkey on Friday June 27 2014, @09:33AM

        by wonkey_monkey (279) on Friday June 27 2014, @09:33AM (#60782) Homepage

        Hmm. #2 still looks better to me.

        Never been a fan of images in PDFs; I like to know I'm viewing 1:1 pixels!

        --
        systemd is Roko's Basilisk
  • (Score: 3, Informative) by BradTheGeek on Friday June 27 2014, @12:33AM

    by BradTheGeek (450) on Friday June 27 2014, @12:33AM (#60657)

    Not sure how many people here are into Amateur Astronomy. This is a neat project if it works as advertised.

    I cannot afford amateur astronomy, nor do I live in a place conducive to it now. But I love it. In fact, I have astronomy-related submissions here and on the site that shall not be named.

    Keep em coming please!

  • (Score: 1) by axsdenied on Friday June 27 2014, @02:01AM

    by axsdenied (384) on Friday June 27 2014, @02:01AM (#60689)

    The example given is not the best.

    Stacking lots of short-exposure images does not work as well as one long exposure, i.e. 10 x 10 sec is not the same as 1 x 100 sec of exposure (the fine details get lost in the noise of the shorter exposures).

    In the example, the original image 2 is by far the best. Adding several lower-quality images won't help much, and that's why there is not much improvement over image 2. However, if they had several images of the quality of image 2, the result would be much better.

    Actually, I think that any improvement visible in the resulting image could be obtained just by processing image 2.

    But I have my doubts about how well this will work. There is a great variety of images on the Internet, some with lots of background and camera noise. They will need to be very, very picky to achieve good results. One bad image can ruin the final result.
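
    To put rough numbers on the short-versus-long argument: with shot noise alone, N short exposures stack to the same SNR as one exposure N times as long, but every readout adds read noise on top. A back-of-the-envelope check (the source flux, sky flux, and read noise below are invented for illustration):

    ```python
    import numpy as np

    signal_rate = 2.0   # photons/sec from a faint source (assumed)
    sky_rate = 10.0     # photons/sec of sky background (assumed)
    read_noise = 8.0    # electrons RMS added by each readout (assumed)

    def snr(n_frames, t_per_frame):
        total_t = n_frames * t_per_frame
        signal = signal_rate * total_t
        # Variance = shot noise of source + sky, plus read noise once per frame.
        noise = np.sqrt(signal + sky_rate * total_t + n_frames * read_noise**2)
        return signal / noise

    print("1 x 100 s :", snr(1, 100))   # ~5.6
    print("10 x 10 s :", snr(10, 10))   # ~4.7 -- same total time, worse SNR
    ```

    Drop the read-noise term and the two come out identical, so how much stacking costs you depends entirely on the noise sources in play.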

    • (Score: 2) by umafuckitt on Friday June 27 2014, @06:38AM

      by umafuckitt (20) on Friday June 27 2014, @06:38AM (#60748)

      Stacking lots of short exposures works badly because you get hit by the CCD read noise each time. In this study they're stacking many long exposures, so it's a different situation from the one you describe. The noisy-image scenario can be dealt with by pre-screening for good images, which can be done automatically.
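
      One simple way such pre-screening could be automated (not from the paper; the threshold here is arbitrary) is to rank frames by a robust estimate of their background noise and drop the outliers:

      ```python
      import numpy as np

      def background_sigma(img):
          """Robust noise estimate via the median absolute deviation,
          which is insensitive to stars and other bright outliers."""
          return 1.4826 * np.median(np.abs(img - np.median(img)))

      def screen_frames(frames, max_factor=2.0):
          """Keep frames whose background noise is within max_factor
          of the quietest frame in the set."""
          sigmas = [background_sigma(f) for f in frames]
          best = min(sigmas)
          return [f for f, s in zip(frames, sigmas) if s <= max_factor * best]
      ```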

      • (Score: 1) by axsdenied on Friday June 27 2014, @03:40PM

        by axsdenied (384) on Friday June 27 2014, @03:40PM (#60905)

        No.

        1. The CCD readout noise is removed by subtracting bias frames when stacking. If done correctly it should be almost completely removed.

        In short exposures the weak signal is at the level of the noise and cannot be distinguished from it. Remember that noise is not only caused by the CCD/CMOS but also by the statistical nature of photon counting. Only in longer exposures does the weak signal rise significantly above the noise. That's why stacking short exposures does not work as well as longer exposures if you want faint details.
        (Short exposure here means an exposure that captured (much) less light than the other exposures; it could still be hours.)

        2. I was not talking about the study but about the example images they provided. If you look at images 2 and 3, you will notice much more detail than in images 1 and 4. They must either have a longer exposure or have been taken with a larger telescope that collects more light.

        In particular, image 4 hardly shows any spiral arms and no surrounding nebulosity. Adding this image will not contribute much to the others and may even make the result worse.

        They should have picked a different example.

        • (Score: 2) by umafuckitt on Friday June 27 2014, @08:07PM

          by umafuckitt (20) on Friday June 27 2014, @08:07PM (#61048)

          The CCD readout noise is removed by subtracting bias frames when stacking. If done correctly it should be almost completely removed.

          I think we're talking about different things. The bias frames remove consistent differences between pixels, i.e. they remove bias. The read noise is different every time, so you can't subtract it away; it's the noise floor. The read noise is what is left if you take the average of many bias frames and subtract that from a single bias frame. The smaller your signal (the shorter the exposure), the larger your read noise in comparison to the signal. This is why stacking many short exposures works less well: each readout adds its own read noise. If there were no read noise, only photon noise, then stacking many short exposures would be the same as a single long one, because there'd be no extra source of noise in the multiple short exposures.
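
          This is easy to demonstrate with a simulated sensor: give every pixel a fixed offset pattern (the bias) plus fresh Gaussian noise on each readout (the read noise). All numbers below are invented:

          ```python
          import numpy as np

          rng = np.random.default_rng(0)
          shape = (256, 256)
          bias = 100 + 5 * rng.random(shape)  # fixed per-pixel offset pattern
          read_noise = 8.0                    # electrons RMS, fresh every readout

          def read_bias_frame():
              return bias + rng.normal(0, read_noise, shape)

          # Master bias: the average of many frames. The random part
          # averages away, leaving (almost) just the fixed pattern.
          master_bias = np.mean([read_bias_frame() for _ in range(100)], axis=0)

          residual = read_bias_frame() - master_bias
          print(np.std(residual))  # ~8: the bias is gone, the read noise is not
          ```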

          It's hard to quickly eyeball an image and say whether adding it will contribute much. E.g., averaging 10 sub-frames may show details not apparent in any single frame; you don't chuck out the single frames just because individually they don't show the signal you are looking for. The images I work with produce horrifically noisy-looking single frames, but once you align and average them, a huge amount of detail comes out. What matters is the size of the signal compared to the noise floor, and minimising stray light.

          • (Score: 1) by axsdenied on Friday June 27 2014, @11:08PM

            by axsdenied (384) on Friday June 27 2014, @11:08PM (#61167)
            From Deep Sky Stacker:

            Bias Frames (aka Offset Frames) The Bias/Offset Frames are used to remove the CCD or CMOS chip readout signal from the light frames. Each CCD or CMOS chip is generating a readout signal which is a signal created by the electronic just by reading the content of the chip.

            • (Score: 2) by umafuckitt on Saturday June 28 2014, @06:53AM

              by umafuckitt (20) on Saturday June 28 2014, @06:53AM (#61276)

              Bias frames remove "bias", hence the name. They don't remove readout *noise*, which is a separate thing.

              • (Score: 1) by axsdenied on Saturday June 28 2014, @12:16PM

                by axsdenied (384) on Saturday June 28 2014, @12:16PM (#61322)

                So you are saying that the Deep Sky Stacker page is wrong:
                "The Bias/Offset Frames are used to remove the CCD or CMOS chip readout signal from the light frames".

                I can find heaps more examples basically saying the same thing as DSS.

                • (Score: 2) by umafuckitt on Saturday June 28 2014, @01:24PM

                  by umafuckitt (20) on Saturday June 28 2014, @01:24PM (#61328)

                  The readout noise and the readout signal (or bias) are two totally different things. The quote you supply says the bias frames remove the readout signal, i.e. something structured. The pixel bias is structured, so the DSS help page is correct: it's saying that subtracting the bias frames removes pixel bias. The description you post is not saying readout noise is removed, is it? It can't say that, because you can't remove the readout noise by subtracting a bias frame. This is evident from first principles: the circuitry in the CCD produces readout noise (IIRC from the amplifiers at each pixel) each time a frame is pulled off the device. This is independent Gaussian noise, so the noise in your sub-exposure is totally uncorrelated with the noise in the bias frame. For that reason it's impossible to get rid of the noise in the sub by subtracting the bias frame: if you subtract noise from noise you just get noise. The only way you can remove independent noise is by averaging/integrating. Look halfway down this page [qsimaging.com]; it's a graphical description of the readout noise and how you can see it by subtracting the mean of many bias frames from one bias frame. As you can see, there is always noise.
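
                  The arithmetic is compact: independent noises add in quadrature, so subtracting one noisy frame from another increases the noise by sqrt(2), while averaging N frames shrinks it by sqrt(N). A quick check (sigma chosen arbitrarily):

                  ```python
                  import numpy as np

                  rng = np.random.default_rng(1)
                  sigma, n = 8.0, 10_000
                  a = rng.normal(0, sigma, n)
                  b = rng.normal(0, sigma, n)

                  # Subtracting one noisy frame from another makes it worse:
                  print(np.std(a - b))  # ~11.3 = sigma * sqrt(2)
                  # Averaging 100 frames shrinks the noise instead:
                  print(np.std(rng.normal(0, sigma, (100, n)).mean(axis=0)))  # ~0.8 = sigma / 10
                  ```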

                  • (Score: 1) by axsdenied on Monday June 30 2014, @05:12AM

                    by axsdenied (384) on Monday June 30 2014, @05:12AM (#61809)

                    I was misreading/confusing signal and noise and didn't understand what you were trying to say before.

                    Thank you for the great description/explanation and for the link. It was a very nice read.

  • (Score: 1) by axsdenied on Friday June 27 2014, @03:45PM

    by axsdenied (384) on Friday June 27 2014, @03:45PM (#60909)

    To be very critical of the paper: faint nebulosity is often hidden and can be brought out by histogram stretching and other post-processing techniques. The stacked images usually have 16- or 32-bit depth per RGB colour, so lots of detail is hidden in there.
    Image 2 seems to have most of the signal, and I am not sure whether it could have been "stretched" to look like the final image. The PDF does not help; without the original images we can only take their word for it.
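
    For reference, the kind of stretch that digs out faint nebulosity is nearly a one-liner; an asinh stretch (of the sort popularized for survey imagery by Lupton et al.) boosts the faint end while compressing bright cores. The softening parameter below is arbitrary:

    ```python
    import numpy as np

    def asinh_stretch(img, softening=0.02):
        """Nonlinear stretch: roughly linear below the softening level,
        logarithmic above it, so faint structure becomes visible
        without saturating bright star cores."""
        x = (img - img.min()) / (img.max() - img.min())  # normalize to [0, 1]
        return np.arcsinh(x / softening) / np.arcsinh(1.0 / softening)
    ```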

    Also, they claim that the two images were tone-matched, but it does not look like that to me. The dark background in the final image seems brighter (look in the corners with no stars), and hence you would expect more of the faint stuff to show in image 2 if you were to match them.

    And their final image of M51 is much blurrier than image 2. When I take images with my telescope, the setup is always the same, at any time of the year. Unless they are picking images taken with the same magnification, camera resolution, etc., they will be rescaling objects and suffering significant degradation in sharpness/resolution.

    Having said that, image 2 looks better to me.

    • (Score: 2) by umafuckitt on Friday June 27 2014, @08:12PM

      by umafuckitt (20) on Friday June 27 2014, @08:12PM (#61057)

      I agree, it's blurrier. It's also not very professional to have the subs smaller in size than the final image. And I think they should have some metric that actually quantifies the improvement, rather than just showing the images.