Using machine learning, a team of researchers has enhanced the first image ever taken of a distant black hole. Notably, the updated image realizes the full resolution of the telescope array for the first time.
[...] The machine learning model has sharpened the otherwise blurry image of M87*, the supermassive black hole at the center of the galaxy M87, showcasing the utility of machine learning in improving radio telescope images. The team's research was published today in the Astrophysical Journal Letters.
"Approximately four years after the first horizon-scale image of a black hole was unveiled by EHT in 2019, we have marked another milestone, producing an image that utilizes the full resolution of the array for the first time," said Dimitrios Psaltis, a researcher at Georgia Tech and a member of the EHT collaboration, in an Institute for Advanced Study release. "The new machine learning techniques that we have developed provide a golden opportunity for our collective work to understand black hole physics."
[...] But even using radio telescopes around the world doesn't give astronomers a complete view of the black hole; by incorporating a machine learning technique called PRIMO, the collaboration was able to improve the array's effective resolution. What appeared as a bulbous, orange doughnut in the 2019 image has now taken on the delicate, thin circle of The One Ring.
PRIMO (principal-component interferometric modeling) was used to study over 30,000 simulated images of black holes in the process of accreting gas. It's the accretion of such superheated material that gives imaged black holes their eerie silhouettes. The patterns in the simulations were then used to boost the resolution of the fuzzy image released in 2019.
"We are using physics to fill in regions of missing data in a way that has never been done before by using machine learning," said Lia Medeiros, a researcher at the Institute for Advanced Study and the lead author of the paper, in an institute release. "This could have important implications for interferometry, which plays a role in fields from exo-planets to medicine."
(Score: 4, Insightful) by JoeMerchant on Sunday April 16, @04:50PM
Based on observations of other black holes, the enhanced image is what AI _thinks_ the data is indicating.
Such imagery is great, but it should be clearly labeled as an extension of the actual observation, one based on myriad other data that may or may not be enhancing the current image's resolution and accuracy.
AI can also be trained to develop confounding datasets that make the enhanced images as false as possible...
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by bzipitidoo on Sunday April 16, @08:18PM
Artistic depictions often show black holes as sharp-edged objects against whatever is in the background. But I wonder if that's likely. For a fairly close observer, might a black hole's edge look fuzzy?
(Score: 2, Disagree) by Frosty Piss on Monday April 17, @12:09AM (3 children)
This of course is complete bullshit. This is not what a Black Hole looks like. They took some numbers, interpreted them, and added pretty colors that could just as well have been pink and purple and green. Now they let a computer create something virtually unrelated. This is not what a Black Hole looks like. It's a completely imaginary artistic interpretation of speculation.
(Score: 0, Flamebait) by Anonymous Coward on Monday April 17, @02:05AM
So what does a black hole look like then?
(Score: 2, Offtopic) by Jeremiah Cornelius on Monday April 17, @06:20AM (1 child)
Nothing "looks" like anything. The eye is not a camera, nor is vision a passive process of receiving photons on a surface like a silhouette lamp.
Vision is a complex and synthetic process, with objects in the "visual field" rendered by the brain. This rendering bears little resemblance to any supposed (but forever only hypothetically knowable) reality, as distinct from our sensory qualia.
The universe is filled not with objects and singular masses, but instead by various densities of particles. Except they aren't particles, they are pulsations of energy, except when they aren't. Oh yes! The objects which aren't objects, which form these pulsations, aren't specifically locatable. You get the idea.
There is no objective visual "image" of a black hole or anything else, and that's not only at the level of quantum ambiguity. Qualia are not objective externalities, and if there's no one to hear it, then a tree DOES NOT make a sound when falling in the forest. :-)
It is perfectly acceptable to render a black hole in a form as if we were able to process this ourselves as visual qualia. It makes use of our vision faculty for both witnessing and interpreting — and for kinds of visual analysis that would be otherwise inaccessible. We can see what we like to, and leave ontological reality out of the meaningful domain.
You're betting on the pantomime horse...
(Score: 1, Insightful) by Anonymous Coward on Monday April 17, @12:52PM
There are two different ways to argue this. One takes the literal approach, arguing about what things "actually look like"; you'll see some people complaining about the "false coloring" applied to data, particularly NASA data. I think you answer that pretty well. If you limit your data to only what your eyes can see, then it would be impossible to show 99% of anything that comes out of JWST, because that is all invisible to our eyes.
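To make the false-color point concrete, here's a toy sketch (hypothetical data; any colormap works just as well as any other):

    # False-color rendering: a radio image is one scalar per pixel; the
    # orange (or pink, or green) comes from a chosen colormap, not the sky.
    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder intensity map: a bright ring, vaguely like M87*.
    y, x = np.mgrid[-1:1:256j, -1:1:256j]
    intensity = np.exp(-((np.hypot(x, y) - 0.5) ** 2) / 0.01)

    fig, axes = plt.subplots(1, 3, figsize=(9, 3))
    for ax, cmap in zip(axes, ["afmhot", "cool", "viridis"]):
        ax.imshow(intensity, cmap=cmap)           # same data, three palettes
        ax.set_title(cmap)
        ax.axis("off")
    plt.show()

Same data, three different "looks"; the underlying measurement never changes.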
The other argument is the complaint that they've used machine learning on other data to fill in the details here. On its face that argument has merit and is worth further investigation, but presumably that is what the meat of this paper is all about. They'd have to make a convincing argument that they can fill in the details of this supermassive black hole by looking for common traits seen in all their other black hole data.
The OP sounds to be mixing the two different arguments.
(Score: 2) by hendrikboom on Monday April 17, @11:05PM
They train the AI on zillions of synthetic images computed from theory.
Then they feed in the actual data from the actual black hole.
And the AI makes it look the way it should, based on its training.
To what extent is the resulting image an artifact of the theory?
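Here's a toy illustration of the worry, under made-up assumptions (synthetic data, not the real pipeline): fit the same sparse measurements with bases trained on two different "theories" and you get two different images.

    # Toy check of theory-dependence: fit the SAME sparse "measurements"
    # with bases learned from two different simulation sets ("theories").
    # Everything here is synthetic and purely illustrative.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    n, d = 2000, 256
    theory_a = rng.normal(0.0, 1.0, (n, d))       # stand-in simulations, model A
    theory_b = rng.normal(0.5, 2.0, (n, d))       # stand-in simulations, model B

    signal = rng.normal(size=d)                   # the "actual" observations
    mask = rng.random(d) < 0.25                   # sparse coverage

    def reconstruct(pca, y, m):
        # Least-squares coefficients on observed entries, full synthesis.
        B, mu = pca.components_, pca.mean_
        c, *_ = np.linalg.lstsq(B[:, m].T, y[m] - mu[m], rcond=None)
        return mu + c @ B

    img_a = reconstruct(PCA(n_components=20).fit(theory_a), signal, mask)
    img_b = reconstruct(PCA(n_components=20).fit(theory_b), signal, mask)
    # By construction each result lies in the span of its training basis,
    # so different training "theories" give different filled-in images.
    print("disagreement:", np.linalg.norm(img_a - img_b))

In that sense the filled-in parts are always an artifact of the theory; the question is how much the actual measurements constrain them.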