
posted by Fnord666 on Tuesday February 11 2020, @11:56AM   Printer-friendly
from the nothing-quite-like-first-light dept.

Arthur T Knackerbracket has found the following story:

The tension was high: the team waited for the first images from the space telescope, both in front of a large screen at the house near Madrid where the members of the Consortium participating in the commissioning of the satellite live, and at the other institutes involved in CHEOPS. "The first images that were about to appear on the screen were crucial for us to be able to determine if the telescope's optics had survived the rocket launch in good shape," explains Willy Benz, Professor of Astrophysics at the University of Bern and Principal Investigator of the CHEOPS mission. "When the first images of a field of stars appeared on the screen, it was immediately clear to everyone that we did indeed have a working telescope," says Benz happily. Now the remaining question is how well it is working.

Preliminary analysis has shown that the images from CHEOPS are even better than expected. However, better for CHEOPS does not mean sharper, as the telescope has been deliberately defocused. Spreading the light over many pixels ensures that the spacecraft's jitter and the pixel-to-pixel variations are smoothed out, allowing for better photometric precision.
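
As a rough illustration of why that helps, here is a minimal numpy sketch (the function names, the 1% gain spread, and the 0.3-pixel jitter are all invented for illustration; this is not the mission's pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    gain = 1 + 0.01 * rng.standard_normal((N, N))  # 1% pixel-to-pixel gain variation

    def star_image(center, sigma):
        """Gaussian stellar PSF sampled on the pixel grid, unit total flux."""
        y, x = np.mgrid[0:N, 0:N]
        img = np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))
        return img / img.sum()

    def photometric_scatter(sigma, n_frames=500, jitter=0.3):
        """Relative RMS of the summed stellar flux across jittered frames."""
        fluxes = []
        for _ in range(n_frames):
            c = 32 + jitter * rng.standard_normal(2)  # jittered star position
            fluxes.append(np.sum(star_image(c, sigma) * gain))
        fluxes = np.array(fluxes)
        return fluxes.std() / fluxes.mean()

    print("focused   (sigma = 0.7 px):", photometric_scatter(0.7))
    print("defocused (sigma = 6.0 px):", photometric_scatter(6.0))

With a tight PSF, jitter shuffles the star's light among a few pixels with slightly different gains, and the summed flux inherits that gain noise; with a wide PSF, each frame averages over many pixels and the sum barely changes.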

"The good news is that the actual blurred images received are smoother and more symmetrical than what we expected from measurements performed in the laboratory," says Benz. High precision is necessary for CHEOPS to observe small changes in the brightness of stars outside our solar system caused by the transit of an exoplanet in front of the star. Since these changes in brightness are proportional to the surface of the transit planet, CHEOPS will be able to measure the size of the planets. "These initial promising analyses are a great relief and also a boost for the team," continues Benz.

How well CHEOPS is working will be tested further over the next two months. "We will analyze many more images in detail to determine the exact level of accuracy that can be achieved by CHEOPS in the different aspects of the science program," says David Ehrenreich, CHEOPS project scientist at the University of Geneva. "The results so far bode well," says Ehrenreich.


Original Submission

 
  • (Score: 2) by takyon on Tuesday February 11 2020, @01:23PM (11 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday February 11 2020, @01:23PM (#956815) Journal

    I didn't know about the blurriness aspect. That's a neat trick.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Informative) by FatPhil on Tuesday February 11 2020, @02:05PM (10 children)

    by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday February 11 2020, @02:05PM (#956831) Homepage
    I think it's broken.

    Defocussing is a horribly unpleasant thing to undo; it's pretty much the least reversible operation you can do to data (the inverse filter's gain blows up towards infinity at the spatial frequencies the blur suppresses), so it can be considered an information-losing operation.
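
    A toy 1-D numpy sketch of that ill-conditioning (all numbers invented for illustration): a naive Fourier-domain inverse of a Gaussian blur amplifies even 1e-6-level noise catastrophically, because the blur's transfer function is nearly zero at high spatial frequencies.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 256
        x = np.zeros(n)
        x[100], x[140] = 1.0, 0.5  # two 1-D "point sources"

        # Gaussian blur kernel (sigma = 4 px) and its transfer function
        t = np.arange(n)
        k = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)
        k /= k.sum()
        K = np.fft.fft(np.fft.ifftshift(k))

        blurred = np.fft.ifft(np.fft.fft(x) * K).real
        noisy = blurred + 1e-6 * rng.standard_normal(n)  # tiny sensor noise

        restored = np.fft.ifft(np.fft.fft(noisy) / K).real  # naive inverse filter

        print("max inverse-filter gain:", np.abs(1 / K).max())  # astronomically large
        print("restoration error (RMS):", np.std(restored - x))  # the noise wins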

    So why would you do that - deliberately throw away information at source?

    If you want blurred data - get the cleanest data you can, and then apply precisely the blur you want to that clean data. Who knows - you might change the level of blur you want - you can only do that if you can go back to the raw data.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by FatPhil on Tuesday February 11 2020, @02:21PM (2 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Tuesday February 11 2020, @02:21PM (#956832) Homepage
      And to be 100% unambiguous - "jitter" is very easy to undo (heck, it's been standard in Photoshop for nearly a decade), and "the pixel-to-pixel variations" are even easier. I would always choose a jittery image from a sensor with pixel-to-pixel variations over a defocussed image. Every single time.
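
      In software terms, undoing jitter is frame registration and undoing pixel-to-pixel variation is flat-fielding. A minimal numpy phase-correlation sketch (integer-pixel shifts only; the function names are invented, and real pipelines refine to sub-pixel, which requires interpolation):

          import numpy as np

          def estimate_shift(ref, frame):
              """Shift to apply to `frame` to align it with `ref` (phase correlation)."""
              R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
              corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
              dy, dx = np.unravel_index(corr.argmax(), corr.shape)
              if dy > ref.shape[0] // 2: dy -= ref.shape[0]  # wrap to signed shifts
              if dx > ref.shape[1] // 2: dx -= ref.shape[1]
              return dy, dx

          def align(frame, shift):
              """Undo an integer-pixel shift (np.roll needs no interpolation)."""
              return np.roll(frame, shift, axis=(0, 1))

          # e.g. frame = np.roll(ref, (2, 3), axis=(0, 1))
          #      estimate_shift(ref, frame) -> (-2, -3); align undoes it exactly.
          # Pixel-to-pixel gain variation comes out with a flat-field division:
          # corrected = align(frame, shift) / flat  # flat = normalized uniform image
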
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 1, Insightful) by Anonymous Coward on Tuesday February 11 2020, @03:13PM (1 child)

        by Anonymous Coward on Tuesday February 11 2020, @03:13PM (#956856)

        If someone submits a paper where they are using Photoshop to do photometry, I would move to have them scientifically disbarred. You can keep your backyard astronomy tricks to make pretty pictures. If I'm trying to really figure out the number of photons I'm capturing, I want my point spread function spread over multiple pixels.

        And the Internet is filled with all sorts of imagery crap from people taking Mars rover images, cycling through Photoshop's zoom and sharpen buttons, and seeing all sorts of interesting effects. Photoshop will happily keep interpolating and resampling all the way down past the point where your original pixel is larger than your image.

        • (Score: 2) by Osamabobama on Tuesday February 11 2020, @07:41PM

          by Osamabobama (5842) on Tuesday February 11 2020, @07:41PM (#956943)

          people taking Mars rover images and cycling through Photoshop zoom and sharpen buttons and seeing all sorts of interesting effects

          Headline: Amateur xenoarchaeologist finds amazing artifacts in Mars image

          --
          Appended to the end of comments you post. Max: 120 chars.
    • (Score: 2, Insightful) by Anonymous Coward on Tuesday February 11 2020, @02:37PM (1 child)

      by Anonymous Coward on Tuesday February 11 2020, @02:37PM (#956839)

      > I think it's broken.

      Hmmm, who do I believe? You (FP), shooting from the hip, or the scientists and engineers who designed this space telescope for its very specific purpose?

      Hint -- star photography/photometry is not like normal/terrestrial photography.

      • (Score: 2) by FatPhil on Wednesday February 12 2020, @09:41AM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday February 12 2020, @09:41AM (#957146) Homepage
        Well, only one of us has "DSP Engineer" on their CV. Yes, I've been working in image processing since the times that DSPs were called DSPs, rather than "graphics cards".
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2, Informative) by Anonymous Coward on Tuesday February 11 2020, @03:26PM (1 child)

      by Anonymous Coward on Tuesday February 11 2020, @03:26PM (#956862)

      They do not intend to undo the blur; it is there by design. They are not looking for high-spatial-frequency information, they are looking for photometric changes over long observation times. The expected noise from a focused spot moving back and forth across pixel boundaries due to spacecraft jitter is larger than the noise from smearing the signal over a larger number of pixels, where the intensity distribution changes a bit but largely stays on the same pixels.

      If you want more information, check out sections 4.4.3 (Defocused PSFs performance) and 4.6 (Noise Budget) [esa.int].

      • (Score: 2) by FatPhil on Wednesday February 12 2020, @09:40AM

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday February 12 2020, @09:40AM (#957145) Homepage
        I did not say that they would or should need or want to undo the blur. You're arguing against a straw man. Reread for comprehension this time.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2, Informative) by nitehawk214 on Tuesday February 11 2020, @09:40PM (2 children)

      by nitehawk214 (1304) on Tuesday February 11 2020, @09:40PM (#956978)

      It is a lot more complicated than that. Using a higher-resolution camera doesn't necessarily mean higher-resolution images. By taking lots of pictures and doing math, you can enhance the useful resolution.

      Scientists working on Hubble invented DRIZZLE [wikipedia.org] to cope with this. The final image has higher resolution than the camera alone could deliver. It can't exceed the angular resolution of the telescope itself, however.

      Also, CHEOPS isn't up there to take pictures; it is searching for planets using the transit technique. It's the dimming of the star that matters, not its angular resolution. Stars are point sources, so defocusing them across multiple pixels lets the team average out the noise and ultimately get a better signal.
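
      DRIZZLE itself maps each input pixel's footprint onto the finer grid with careful weighting; this much-simplified shift-and-add sketch (a toy stand-in, not the STScI algorithm) conveys how dithered low-resolution frames combine into a finer one:

          import numpy as np

          def shift_and_add(frames, shifts, upscale=2):
              """Toy super-resolution: interlace dithered frames onto a finer grid.
              `shifts` are each frame's known sub-pixel offsets in [0, 1) pixels.
              DRIZZLE instead maps overlapping pixel footprints with proper weights."""
              h, w = frames[0].shape
              out = np.zeros((h * upscale, w * upscale))
              hits = np.zeros_like(out)
              for frame, (dy, dx) in zip(frames, shifts):
                  oy = int(dy * upscale) % upscale  # sub-pixel offset -> fine-grid row
                  ox = int(dx * upscale) % upscale
                  out[oy::upscale, ox::upscale] += frame
                  hits[oy::upscale, ox::upscale] += 1
              return out / np.maximum(hits, 1)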

      --
      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
      • (Score: 2) by FatPhil on Wednesday February 12 2020, @09:38AM (1 child)

        by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Wednesday February 12 2020, @09:38AM (#957144) Homepage
        Yes, and defocussing can be done in DSP rather than mechanically; the science has been around for many, many decades. But if the defocus is applied mechanically rather than in DSP, you can never later choose a different amount of defocus. It simply sounds like they didn't have a DSP engineer on the team, only astronomers.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 1, Insightful) by Anonymous Coward on Wednesday February 12 2020, @03:17PM

          by Anonymous Coward on Wednesday February 12 2020, @03:17PM (#957200)

          I suggest you work up an error budget and get smart on interpixel and sampling noise. It is completely asinine to suggest getting around pixel and electronic noise by using a DSP. Putting aside the effectiveness (or lack thereof) of reducing your noise that way, why the hell would you want to take on a whole round of DSP and code development, with all the preflight hardware and software development, certification, and testing, when you can just put in a slight defocus and get better results?

          Based on this and some of your other comments, I really don't think you understand why they are putting defocus there in the first place.