New Deepfake Identification Technique Detected Using Astronomical Methods

An Astronomical Look at Deepfakes

AI-generated fake images, or "deepfakes," are becoming increasingly realistic, and that raises a serious problem: their malicious use, particularly to spread disinformation. To meet this challenge, scientists at the University of Hull have developed an innovative method for detecting deepfakes, based on techniques borrowed from astronomy.

By analyzing the reflections of light in human eyes, researchers can spot signs of image manipulation. In a genuine photograph, the reflections in the two eyes should be similar, though not identical, says Kevin Pimbblet, professor of astrophysics and director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull. The research was presented at the Royal Astronomical Society's National Astronomy Meeting in the UK.

Physics of Light Reflections: Astronomical Techniques for Detecting Deepfakes

The images can be checked for "consistent physics," Pimbblet explains: the reflections in the eyeballs are consistent for a real person, but incorrect, from a physics point of view, for a fake one.

The researchers analyzed these reflections using astronomical techniques such as the CAS system (Concentration, Asymmetry, Smoothness) and the Gini index. These methods, originally developed to study galaxies, measure how light is distributed within an image.
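As an illustration only (the article does not publish the team's code), the two measures named above can be sketched for a small patch of pixel intensities. The patch values and function names here are hypothetical:

```python
def gini(patch):
    """Gini coefficient of a 2-D patch of pixel intensities:
    0 means the light is spread perfectly evenly, values near 1 mean
    it is concentrated in a few pixels (a measure from galaxy studies)."""
    x = sorted(v for row in patch for v in row)
    n = len(x)
    return sum((2 * i - n - 1) * v for i, v in enumerate(x, 1)) / (n * sum(x))

def asymmetry(patch):
    """CAS-style asymmetry: compare the patch with its 180-degree
    rotation; 0 for a perfectly symmetric light distribution."""
    rotated = [row[::-1] for row in patch[::-1]]
    diff = sum(abs(a - b) for r1, r2 in zip(patch, rotated)
                          for a, b in zip(r1, r2))
    total = sum(abs(v) for row in patch for v in row)
    return diff / (2 * total)

# A bright central highlight, roughly like a reflection in an eye:
patch = [[0.0, 0.1, 0.0],
         [0.1, 0.9, 0.1],
         [0.0, 0.1, 0.0]]
print(round(gini(patch), 3))  # 0.718: light concentrated in the centre
print(asymmetry(patch))       # 0.0: the highlight is symmetric
```

A concentrated, symmetric highlight is what one would expect from a single light source reflected in a curved cornea; the real study applies such statistics to actual eye regions, not toy grids.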

Adejumoke Owolabi, a data scientist at the University of Hull, applied these techniques to images of faces, both real and AI-generated, and was able to predict with about 70% accuracy whether an image was fake.
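The underlying idea, that the two eyes of a real face should show physically consistent reflections, can be sketched as a toy comparison. The threshold and patches below are invented for illustration and are not the study's actual classifier:

```python
def gini(patch):
    """Gini coefficient of pixel intensities (0 = even, 1 = concentrated)."""
    x = sorted(v for row in patch for v in row)
    n = len(x)
    return sum((2 * i - n - 1) * v for i, v in enumerate(x, 1)) / (n * sum(x))

def reflections_disagree(left_eye, right_eye, threshold=0.15):
    """Flag an image when the light distributions in the two eye
    reflections differ by more than a (hypothetical) threshold."""
    return abs(gini(left_eye) - gini(right_eye)) > threshold

# A sharp highlight versus a flat, featureless patch:
highlight = [[0.0, 0.1, 0.0],
             [0.1, 0.9, 0.1],
             [0.0, 0.1, 0.0]]
flat = [[0.3] * 3 for _ in range(3)]

print(reflections_disagree(highlight, highlight))  # False: consistent eyes
print(reflections_disagree(highlight, flat))       # True: inconsistent eyes
```

In practice, a real pipeline would first locate the eyes and extract the specular highlights before comparing them, which is where most of the engineering effort lies.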

Advantages and Limitations

Despite the promising results, challenges remain. Brant Robertson, an astrophysicist at the University of California, Santa Cruz, acknowledged the importance of the research to Nature, but warned: "If you can measure how realistic a deepfake image is, you can also train AI models to produce better fake images by optimizing that measure."

The researchers also believe the technique could complement existing methods that look for other anomalies and inconsistencies, for example in lighting or shadows. While the method is not yet infallible, it represents a step forward and a promising avenue for combating malicious deepfakes.
