Leonardo da Vinci’s famous painting, the Mona Lisa, has been “brought to life” by Samsung researchers using artificial intelligence.
As the BBC notes, the video, generated from a single photo, shows the subject of the portrait moving her head, eyes and mouth. It is an example of so-called “deepfake” technology, and it emerged from Samsung’s artificial-intelligence research laboratory in Moscow.
Samsung’s algorithms were trained on a public database of 7,000 celebrity images gathered from YouTube. As New Atlas explains, the system takes a series of photographs of a person and runs them through a “face landmark tracker” to locate the eyes, eyebrows, nose and other features. It then does the same with a separate “driving” video source, going frame by frame to monitor how those features move.
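The frame-by-frame step above can be illustrated with a toy sketch. The coordinates and feature names below are invented for illustration only; a real tracker (dlib’s 68-point shape predictor is one common choice) would extract far denser landmark sets directly from the video pixels:

```python
import numpy as np

# Hypothetical landmark coordinates for two consecutive frames of a
# "driving" video. Each row is an (x, y) point; the four rows stand in
# for left eye, right eye, nose tip and mouth centre.
frame_a = np.array([[120.0, 95.0],    # left eye
                    [150.0, 95.0],    # right eye
                    [135.0, 120.0],   # nose tip
                    [135.0, 150.0]])  # mouth centre
frame_b = np.array([[122.0, 96.0],
                    [152.0, 96.0],
                    [137.0, 121.0],
                    [136.0, 154.0]])

# Frame-to-frame displacement of each feature: this motion signal,
# rather than the raw pixels, is what drives the still photograph.
motion = frame_b - frame_a

# Largest single shift (here the mouth, as it would be while talking).
max_shift = float(np.abs(motion).max())
```

Repeating this over every pair of consecutive frames yields a motion track for each facial feature, which can then be replayed on the landmarks of the target photo.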
Beyond that, several neural networks are trained for different jobs on a large video dataset of “talking heads”: one composes the output images, while another tries to distinguish original footage from the artificially generated frames. Because the networks compete, many rounds of training improve both of them, and the end result improves accordingly.
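The competitive setup described above is the generative-adversarial idea. As a minimal sketch of that dynamic, and nothing like Samsung’s actual model, the toy below pits a one-parameter “generator” against a logistic-regression “discriminator” on 1-D data (all names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# "Real" data comes from N(4, 1); the generator has one parameter mu and
# emits samples from N(mu, 1). The discriminator D(x) = sigmoid(w*x + b)
# tries to score real samples near 1 and generated samples near 0.
w, b = 0.0, 0.0
mu = 0.0                      # generator starts far from the real mean
lr_d, lr_g = 0.05, 0.05

for _ in range(2000):
    xr = rng.normal(4.0, 1.0, 64)          # real batch
    xf = mu + rng.normal(0.0, 1.0, 64)     # generated batch
    pr, pf = sigmoid(w * xr + b), sigmoid(w * xf + b)
    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w += lr_d * (np.mean((1 - pr) * xr) - np.mean(pf * xf))
    b += lr_d * (np.mean(1 - pr) - np.mean(pf))
    # Generator ascends log D(fake): it shifts mu to fool the critic.
    xf = mu + rng.normal(0.0, 1.0, 64)
    pf = sigmoid(w * xf + b)
    mu += lr_g * np.mean(1 - pf) * w
```

The generator typically ends up near the real mean of 4: each time the discriminator learns to spot its samples, the generator is pushed to produce less distinguishable ones, which mirrors how the competing image networks sharpen each other.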
It is worth mentioning that the system was also used to create videos of Salvador Dali, Albert Einstein, Fyodor Dostoevsky and Marilyn Monroe.
In a paper describing the work, the Samsung researchers refer to “realistic neural talking heads”. The video sparked mixed reactions, and some viewers described an experience that brought to mind futuristic threats such as “Skynet”, the artificial intelligence of the “Terminator” films.
Researchers from Tel Aviv University demonstrated something similar in 2017, the same year another researcher produced a fake video of Barack Obama.