An exhibition of paintings that deceived a facial recognition system has opened online

Source: https://cobaltstrike.net/2022/04/11/an-exhibition-of-paintings-that-deceived-the-facial-recognition-system-has-opened-online/



A very unusual exhibition has opened online: one hundred copies of the same painting, Leonardo da Vinci's Mona Lisa. There is a catch, however. What appears to the human eye to be a hundred identical images, the facial recognition system identifies as portraits of a hundred different celebrities.

The exhibition is organized by the startup Adversa, which specializes in detecting and mitigating inherent vulnerabilities in artificial intelligence (AI) technologies. The aim of the project is to demonstrate weaknesses in facial recognition systems.

As Adversa's experts explained, the AI sees a hundred different faces in what are, in fact, identical images because of biases and vulnerability to adversarial examples, which cybercriminals could potentially exploit to attack facial recognition systems, autonomous cars, medical imaging systems, financial algorithms, and more.

The collection of Mona Lisa images is based on photos of 8,631 celebrities gathered from open sources. The facial recognition model is Google's FaceNet, trained on the popular VGGFace2 dataset.

VGGFace2 is a face recognition dataset covering a wide range of poses and ages. It contains more than 3 million images spread across more than 9,000 identities, which makes it well suited for training deep facial recognition models.
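To see why such a model can be fooled, it helps to recall how an embedding-based recognizer like FaceNet identifies a face: it maps a photo to a vector and picks the known identity whose vector is nearest. The sketch below is purely illustrative (not Adversa's or Google's code); the celebrity names and the tiny 4-dimensional "embeddings" are made up, standing in for the high-dimensional vectors a real model produces.

```python
import math

def euclidean(a, b):
    """Distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=1.0):
    """Return the gallery identity whose embedding is nearest to the
    probe embedding, or None if no one is close enough."""
    name, dist = min(((n, euclidean(probe, e)) for n, e in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

# Toy gallery of known identities (hypothetical vectors).
gallery = {
    "celebrity_a": [0.9, 0.1, 0.0, 0.2],
    "celebrity_b": [0.1, 0.8, 0.3, 0.0],
}

probe = [0.85, 0.15, 0.05, 0.25]  # embedding of the query photo
print(identify(probe, gallery))
```

Because the decision depends only on where the photo lands in embedding space, an attacker who can nudge the pixels enough to move that vector can change the answer without visibly changing the image.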

Notably, none of the images presented at the exhibition is an exact copy of the Mona Lisa. Each has been modified in a special way so that the AI recognizes it as the portrait of a different celebrity, even though to the human eye they all remain the same Mona Lisa.

“To make the classifier recognize a stranger, an adversarial patch can be added to a photo of a person. This patch is generated by a special algorithm that adjusts pixel values in the photo so that the classifier produces the desired output. In our case, the photo forces the face recognition model to see a celebrity instead of the Mona Lisa,” Adversa's experts explained.
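The principle behind such an attack can be sketched in a few lines. This is our own toy illustration, not Adversa's actual algorithm: a tiny fixed linear "model" stands in for the face embedding network, and all numbers are invented. Gradient descent nudges the input pixels so the model's output moves toward a chosen target embedding, while each pixel stays within a small bound of the original so the image still looks unchanged.

```python
# Fixed toy "model": embedding(x) = W @ x  (made-up weights).
W = [[0.6, -0.2, 0.1],
     [0.3, 0.5, -0.4]]

def embed(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def attack(x0, target, eps=0.5, lr=0.1, steps=200):
    """Projected gradient descent on ||embed(x) - target||^2, keeping
    every pixel within +/-eps of the original so the change is subtle."""
    x = list(x0)
    for _ in range(steps):
        err = [e - t for e, t in zip(embed(x), target)]
        # Gradient of the squared error w.r.t. each input pixel.
        grad = [2 * sum(W[j][i] * err[j] for j in range(len(W)))
                for i in range(len(x))]
        x = [xi - lr * g for xi, g in zip(x, grad)]
        # L-infinity clamp: project each pixel back into [x0 - eps, x0 + eps].
        x = [max(x0i - eps, min(x0i + eps, xi)) for x0i, xi in zip(x0, x)]
    return x

original = [0.2, 0.4, 0.1]        # "Mona Lisa" pixels (toy values)
target = embed([0.7, 0.1, 0.5])   # embedding of a "celebrity" (toy values)

adv = attack(original, target)
print(dist2(embed(adv), target) < dist2(embed(original), target))
```

The perturbed input stays within the pixel bound yet lands far closer to the target embedding, which is exactly the effect the exhibition demonstrates at scale: a hundred near-identical Mona Lisas, each steered toward a different identity.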
