By definition, a black hole cannot be observed directly. However, because it is said to attract light and then absorb it, what can be observed is a donut-like halo of light in which the black hole is the dark center of the donut. In the normal course of things, you observe stars that are bright in the center and dark around the edges; donut-shaped forms of light are the anomaly.
To take a picture of this halo you would need a telescope as large as the Earth. Since that is practically impossible, the next best option is to distribute smaller telescopes all over the Earth. Again, practically speaking, you cannot cover the entire Earth with telescopes, so the data collection inherently produces a very grainy image even when dozens of telescopes are used.
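A toy model can make this graininess concrete. The sketch below is my own illustration, not the actual telescope pipeline: a full Earth-sized aperture would sample every point of the image's Fourier plane, while a few scattered telescopes sample only a small fraction of it, and reconstructing from those sparse samples yields a degraded, grainy image. All sizes and the 5% coverage figure are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A donut-shaped "true" sky: bright ring, dark center.
n = 64
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
ring = ((r > 12) & (r < 20)).astype(float)

# What a full Earth-sized aperture would measure: the complete Fourier plane.
full_vis = np.fft.fft2(ring)

# A handful of telescopes measures only a sparse subset of Fourier samples
# (here modeled crudely as a random 5% of the plane).
mask = rng.random((n, n)) < 0.05
sparse_vis = np.where(mask, full_vis, 0)

# Reconstructing directly from the sparse samples gives a grainy image.
dirty = np.real(np.fft.ifft2(sparse_vis))

print(f"Fourier coverage: {mask.mean():.1%}")
print(f"mean reconstruction error: {np.abs(dirty - ring).mean():.3f}")
```

With only a few percent of the Fourier plane measured, the raw reconstruction is far from the true ring, which is exactly why the algorithmic step discussed next is needed.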
This gives rise to the need for algorithms that can predict the correct form from such a grainy picture. Just as you might try to decipher a person's face from a grainy, pixelated photograph, computer scientists have been using algorithms to make the prediction.
Now, we can raise some methodological issues about whether observation over time through many telescopes is valid. For example, if there were a rotating object (orbiting a center at a much smaller radius than what you would expect from gravitational force), and you observed it through tiny snapshots over time and then combined them, you could get a donut.
Similarly, the image-recognition algorithm operates on pixelated data, which means a lot of information is missing. The algorithm then tries to fill in the picture on the basis of the training it has received for supplying missing pixels. Most of the research focus would be on whether this algorithm is good enough, and one answer to that question is that if the algorithm correctly recognizes other kinds of pixelated images, then we can assume that it correctly predicts the donut as well. How good that assumption is would be a highly technical matter.
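To make the "filling in" step concrete, here is a deliberately simple stand-in for it, not the actual algorithm used for the black-hole image: a basic diffusion inpainting, where observed pixels are held fixed and unobserved ones are repeatedly replaced by the average of their neighbors. The smooth test image, the 30% coverage, and the iteration count are all invented for illustration; the point is only that any such method supplies the missing pixels from assumptions about what images look like.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth "true" image to recover (a Gaussian blob).
n = 32
y, x = np.mgrid[:n, :n]
truth = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 50)

# Pretend only 30% of pixels were actually observed.
known = rng.random((n, n)) < 0.3
img = np.where(known, truth, 0.0)

# Diffusion inpainting: hold observed pixels fixed, repeatedly replace the
# rest with the average of their four neighbors (np.roll wraps at the edges,
# which is harmless for this centered test image).
for _ in range(200):
    nb = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
          + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
    img = np.where(known, truth, nb)

err = np.abs(img - truth).mean()
print(f"mean error after inpainting: {err:.4f}")
```

The filled-in image is much closer to the truth than the raw 30%-observed one, but only because the smoothness assumption baked into the method happens to match this particular test image. That is precisely the kind of assumption whose adequacy, as the text says, is a highly technical matter.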
So, there are many assumptions involved in this process: (1) that the donut is actually produced by light being sucked out of an otherwise uniform source at its center, rather than the source being natively donut-shaped; (2) that the donut effect is not something we are constructing out of snapshots of an object going around in a much smaller orbit, in contradiction to gravitational theory; and (3) that our algorithms are good enough to fill in the massive amounts of missing data to create a picture.
This is as far as classical physical effects are concerned. If we add quantum effects, it is possible that there is indeed a round luminous object whose center simply does not interact with the telescopes on Earth, so that we take it to be a black hole. Personally, I'm more inclined toward this type of explanation, because I believe that nature (even macroscopic objects) has to be described using quantum rather than classical physics. But it contradicts a fundamental assumption in physics, namely, that light spreads uniformly everywhere.
The data underdetermines the theory, so to interpret the data we have to supply many assumptions. In this case, we are adding the assumptions of classical uniformity, of gravitational theory, that a picture taken over time is not many pictures superposed, and that our predictive algorithms are indeed correct. If all these are granted, then yes, the data validates 'black holes'. But it is not the only possible explanation of the data, as we can see by taking out one assumption after another.