Deepfake and Fake Videos: How to Protect Yourself

The term “deepfake” combines “deep”, from “deep learning”, with “fake”.

“Deep learning” is an advanced Artificial Intelligence (“AI”) method that uses multiple layers of machine learning algorithms to extract progressively higher-level features from unstructured data, such as images of the human face.
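As an illustration of what “multiple layers” means in practice, here is a minimal, hypothetical PyTorch sketch (not taken from any deepfake tool) in which each layer extracts progressively higher-level features from a face image:

```python
import torch
import torch.nn as nn

# A toy feature extractor: each layer builds on the previous one,
# moving from raw pixels toward higher-level facial features.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edges, color gradients
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # textures, simple shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # parts: eyes, nose, mouth
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                                  # compact face descriptor
)

face = torch.randn(1, 3, 128, 128)  # a dummy 128x128 RGB image
features = feature_extractor(face)
print(features.shape)  # torch.Size([1, 64])
```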

Manolis Sfakianakis, President and Founder of the Institute of International Cybersecurity (CSI Institute), commented:

“Deepfake” is becoming an increasingly powerful threat, one that flourishes mainly in countries without a clear legal framework. Its nature is such that it can drive people to distraction unnecessarily and without provocation, turning them into targets of blackmail. And as “deepfakes” become more accessible, people will grow more anxious, sensing yet another technological risk just around the corner. We will urgently need the immediate assistance of specialized companies to create an effective digital security program that can both identify and report findings concerning this emerging phenomenon. The phenomenon is reaching our country through applications known to all of us and, most importantly, it has already affected at least 100,000 internet users worldwide. I hope that a relevant legal framework will be established in our country immediately, and that the various social networking companies and popular search engines will not rest on their laurels either.”

For example, an “AI” system can collect data about your physical movements. That data can then be processed to create a “deepfake” video through a “GAN” (Generative Adversarial Network), another type of specialized machine learning system. Two neural networks compete with each other: one learns to distinguish real examples from a “training set” (for example, photos of faces), while the other learns to create new data with the same features (new “photos”).
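To make that competition concrete, here is a minimal, illustrative PyTorch skeleton of the adversarial setup. It is a toy sketch, not any actual deepfake pipeline, and all sizes and names are invented for the example:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 128 * 128  # toy sizes for illustration

# Generator: noise -> fake "face" vector (a flattened image)
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: image -> probability that it is real
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_faces):
    batch = real_faces.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator: reward accepting real photos and rejecting fakes.
    fakes = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real_faces), real) + bce(D(fakes), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: reward fooling the discriminator.
    fakes = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fakes), real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# A dummy "training set" of flattened face images scaled to [-1, 1]:
print(train_step(torch.rand(8, img_dim) * 2 - 1))
```

Because the generator is graded against an ever-improving discriminator, each round of this loop pushes the fakes closer to the real data, which is exactly why the output keeps getting more convincing.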

“Because such a network keeps testing the images it creates against the “training set”, the fake images become more and more convincing; this makes the “deepfake” an ever more powerful threat,”

warns Vassilis Vlachos, Kaspersky Channel Manager for Greece and Cyprus.

“In addition, a “GAN” can falsify data other than photos and videos; in fact, the same “deepfake” machine learning and synthesis techniques can be used to falsify voices. It is easy to see how digital criminals could use this to their advantage.”

How can we protect ourselves from “deepfake”?

Legislation has already begun to address the threat of “deepfake” videos. In the state of California, for example, two bills have been passed that make certain uses of “deepfakes” illegal: “AB-602” bans the use of synthesized human images for pornography without the consent of the people depicted, and “AB-730” bans the distribution of manipulated images of political candidates within 60 days of an election.

Digital security companies keep developing better detection algorithms. These analyze the video image and spot the tiny distortions created in the “spoofing” process. For example, current “deepfake” synthesizers model a two-dimensional face and then distort it to fit the three-dimensional perspective of the video; looking at where the nose points is a basic way of detecting this.
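The “where the nose points” check can be approximated with standard head-pose estimation. The sketch below is illustrative, not any vendor's actual detector: it assumes six 2D facial landmarks have already been extracted by some detector (the coordinates here are placeholders) and uses OpenCV's solvePnP with a generic 3D face model. A 2D face warped into a 3D scene tends to produce pose angles that are inconsistent from frame to frame.

```python
import numpy as np
import cv2

# Generic 3D face model points (nose tip, chin, eye corners, mouth corners)
# in millimetres; a common approximation, not a per-person measurement.
MODEL_3D = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def nose_direction(landmarks_2d, frame_w, frame_h):
    """Estimate where the nose points from six 2D landmarks (pixels)."""
    focal = frame_w  # crude focal-length guess from the image width
    camera = np.array([[focal, 0, frame_w / 2],
                       [0, focal, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d, camera, None)
    if not ok:
        return None
    # Project a point 1000 mm "out of" the nose to get a direction arrow.
    nose_end, _ = cv2.projectPoints(
        np.array([(0.0, 0.0, 1000.0)]), rvec, tvec, camera, None)
    return nose_end.reshape(2)

# Placeholder landmark coordinates for a 640x480 frame:
pts = np.array([(320, 240), (320, 380), (230, 180),
                (410, 180), (270, 320), (370, 320)], dtype=np.float64)
print(nose_direction(pts, 640, 480))
```

Run frame by frame, a sudden jump in the projected nose direction while the head appears still is the kind of tiny distortion these detectors look for.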

“Deepfake” videos are still at a stage where you can often spot the telltale marks yourself. Look for features such as:

  • awkward or jerky movement
  • lighting that changes from one frame to the next
  • changes in skin tone
  • strange blinking, or no blinking at all (see the blink-rate sketch after this list)
  • lips that are not synchronized with the speech
  • other digital artifacts in the image.
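The blinking cue can even be checked programmatically. The sketch below uses the eye aspect ratio (EAR), a standard heuristic from the blink-detection literature; it assumes six landmarks per eye are already available from some detector, and the input trace here is hypothetical:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR for six eye landmarks ordered: corner, two top, corner, two bottom.
    Open eyes give roughly 0.25-0.35; the ratio collapses during a blink."""
    eye = np.asarray(eye, dtype=np.float64)
    vertical = (np.linalg.norm(eye[1] - eye[5]) +
                np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series; a real face at normal
    frame rates should blink several times per minute."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Hypothetical EAR trace: open eyes with one two-frame blink in the middle.
trace = [0.30, 0.31, 0.29, 0.12, 0.10, 0.28, 0.30, 0.29]
print(count_blinks(trace))  # 1
```

A clip several minutes long with zero detected blinks, or blinks at an unnatural rate, is a reasonable red flag for further scrutiny rather than proof of a fake.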

Vassilis Vlachos insists that good security procedures are the best protection:

“Good basic security procedures are extremely effective against “deepfakes”. For example, incorporating automated checks into any disbursement process would have stopped many “deepfake” scams and similar frauds. Everyone needs to be trained and well informed on how to identify a “deepfake”, while companies need to ensure that employees know how “deepfaking” works and the challenges it can pose. They should also have good basic protocols based on the “trust but verify” rule. A skeptical attitude towards audio messages and videos does not guarantee that employees will never be deceived, but it can help them avoid many pitfalls.”

How convincing have “deepfakes” become?

The early “deepfake” videos looked ridiculous, but the technology has evolved enough to make such media frighteningly convincing. One of the most notable examples from 2018 was a fake Barack Obama talking about “deepfakes”. In mid-2019, a short video circulated of a fake Mark Zuckerberg being curiously candid about the state of user privacy on his platform.

In October, research into a “bot” service that creates fake nudes revealed that the most urgent and dangerous “deepfake” trend on the Internet is not misinformation but revenge porn. The “deepfake” tracking company “Sensity”, formerly known as “Deeptrace”, revealed that it had discovered a large operation disseminating nude images of women, and in some cases underage girls, created by “AI”. The service operated mainly on the encrypted messaging application “Telegram”, using an “AI”-powered “bot”. Users could send the “bot” a photo of a woman they wanted to see naked; the bot then generated a fake nude body combined with the woman's face from the original image.

“Sensity” reported last year that 96% of “deepfake” videos online were non-consensual pornography. “Sensity” CEO Giorgio Patrini told “Business Insider” that that percentage has not changed.

Source: pestaola.gr

