May 3

(Darker is better in this case)

Four years ago, researchers from the Event Horizon Telescope (EHT) project presented the first-ever photo of a black hole. The image was the culmination of years of work and gave the world a peek at the enormous object sitting 55 million light-years away in the Messier 87 (M87) galaxy. Seeing a black hole in space, an object that’s not only very far away and very massive but by its very nature doesn’t want to be seen, is as much a challenge of technology and data as it is of physics.

Every image and model of a black hole before that had been an estimation or simulation, and the EHT achievement opened up a new era of visualization. A few years later, building on the same methodology, the EHT team produced an image of the black hole in our own Milky Way galaxy, known as Sagittarius A* (Sgr A*). Beyond these two milestones, some of the team members at the Institute for Advanced Study in Princeton, N.J. began wondering whether advances in data processing and artificial intelligence could be used to produce a better image from the data. Those researchers have now revealed a clearer, improved version of the original M87 photo, made using machine learning techniques.

Big Telescopes and Big Data

The process of getting from EHT observation to picture isn’t simply aiming a device at the sky and snapping a photo; it involves a sequence of steps, a lot of data capture, and a lot of analysis. The Event Horizon Telescope isn’t just one telescope but a network of telescopes situated around the globe that combine to act as one virtual, planet-sized dish. Data was collected over a number of nights onto numerous hard drives, then brought together to be correlated and analyzed. How much data must you amass and compute to get an image of an object so far away, and so many times more massive than our own Sun, from a network of telescopes that make a dish the size of Earth? The first EHT capture came in at about 5 petabytes (PB). The second milestone, capturing Sgr A*, a smaller but faster-moving black hole, took about 3.5PB. That’s a lot of observation data that then must be transported to centralized locations where supercomputers and scientists analyze and process it.
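To get a feel for how a haul like that accumulates, here’s a back-of-envelope sketch. The per-station recording rate, the station count, and the observing hours below are illustrative assumptions, not EHT specifications; the point is simply that petabytes pile up quickly when many telescopes record wideband signals for hours at a time.

```python
# Back-of-envelope only: these numbers are assumptions for illustration,
# not the EHT's actual specifications.
stations = 8            # assumed number of participating telescopes
gbit_per_s = 32         # assumed per-station recording rate (gigabits/second)
hours = 40              # assumed total on-sky hours for the campaign

# Convert bits to bytes and hours to seconds, then sum across stations.
bytes_per_station = gbit_per_s * 1e9 / 8 * hours * 3600
total_pb = stations * bytes_per_station / 1e15
print(f"~{total_pb:.1f} PB recorded")   # lands in the same ballpark as the ~5PB cited above
```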

Earth-sized Dish: The Event Horizon Telescope is made up of a worldwide network of telescopes, illustrated here as locations on a globe connected by lines across the curvature of the Earth. Learn more in our previous posts about the M87 data stack system and the Sagittarius A* collection process.

The Possibility for More Data

Researchers wondered whether the correlation process would produce better results if there were even more data from another source: images generated using machine learning. Creating the final image of the black hole requires combing through the collected data and removing noise caused by the atmosphere and the instruments. At the same time, there are spots where data is missing, so smoothing and algorithmic assumptions must be made (which contributes to some of the blurriness of the final image). But what if models could be used to generate simulated black hole image data at a large scale and then applied to “fill in the gaps”? The new approach started with a library of more than 30,000 high-fidelity, high-resolution simulated images. A technique known as principal component analysis (PCA) was then applied to that library to extract its most significant patterns. The resulting method, coined PRIMO (principal-component interferometric modeling), uses those components to reconstruct the final image directly from the observations. The simulated data, combined with the EHT observations, produced a reconstructed image with improved resolution and accuracy.
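As a rough illustration of the idea (not the team’s actual pipeline), the sketch below runs PCA on a stand-in library of simulated images and then fits a handful of component coefficients to a sparse set of observed samples. The library size, image dimensions, sampling scheme, and least-squares fit are all assumptions for the sake of a runnable example; PRIMO itself works with the telescope network’s interferometric data rather than raw pixels.

```python
# A minimal sketch of the PCA-based "fill in the gaps" idea, under the
# assumptions described above; it is not the PRIMO team's code.
import numpy as np
from sklearn.decomposition import PCA

side = 64                                      # assumed image size (64x64 pixels)
library = np.random.rand(3_000, side * side)   # stand-in for the simulated image
                                               # library (the real one has 30,000+)

# Step 1: learn a compact basis of "eigen-images" from the simulations.
pca = PCA(n_components=20)
pca.fit(library)
basis = pca.components_                        # shape: (20, side*side)

# Step 2: pretend we only observe a sparse subset of samples, a crude stand-in
# for the incomplete coverage of the real telescope array.
rng = np.random.default_rng(0)
truth = library[0]
seen = rng.choice(side * side, size=400, replace=False)

# Step 3: solve for the component coefficients that best explain the sparse
# observations, then use the full basis to reconstruct the whole image.
design = basis[:, seen].T                      # (400 samples) x (20 components)
coeffs, *_ = np.linalg.lstsq(design, truth[seen] - pca.mean_[seen], rcond=None)
reconstruction = (pca.mean_ + coeffs @ basis).reshape(side, side)
print(reconstruction.shape)                    # (64, 64)
```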

Reconstruction Process: (Left) The image of the M87 black hole based on the 2017 observation data. (Middle) The result of applying the PRIMO reconstruction process to that data. (Right) The final PRIMO result, which shows qualities similar to the EHT image, including its blurred shape, but with a greater range and clearer evidence of a darker center. Image from the Lia Medeiros et al. report published in Astrophysical Journal Letters 947, 2023.

Final Image and Implications

What was already a feat of science, technology, and data now includes machine learning in its sphere of influence. The final image, rendered from petabytes of data down to a picture just a few megabytes in size, has revealing results: a thinner “donut” shape with a darker center, making it look even more “black hole-like” than the original. The implications for continuing research on black holes and other celestial objects are potentially big. With boosted resolution, the EHT images can offer more of the long-sought insights into the mass, size, and other characteristics of black holes. In research so heavily built on estimates and models of data, any chance at refining and getting a clearer picture is useful.

Animation: The M87 black hole image, first produced by the Event Horizon Telescope collaboration in 2019, dissolves into the refined image generated by the PRIMO machine-learning technique using the same data set. Credit: The New York Times; source: Medeiros et al., 2023.

As folks who also spend a lot of time looking for things in the dark, we love to see this expanded use of data in new ways, and we say congrats to the PRIMO team!

Sources/Further Reading

Lia Medeiros et al., Astrophysical Journal Letters 947, 2023

Category: data recovery
