It is highly probable that while browsing the Internet, every one of us has at some point stumbled upon a deepfake video. Deepfakes usually depict well-known people doing really improbable things – like the Queen of England dancing on a table, or Ron Swanson from Parks and Recreation starring as every single character in Full House. These AI-generated and at times highly realistic videos containing manipulated imagery are easily spotted, and they were never meant to be taken seriously in the first place. But the technology to produce such footage is already in wide use, and anyone with enough interest and time on their hands can try to create one. This is where the topic gets serious and potentially dangerous. Until recently, it was fairly easy to spot an AI-crafted video by being on the lookout for one of the following dead giveaways:
- unnatural or absent eye blinking;
- blurring or flickering around the edges of the face;
- mismatched lighting and skin tones;
- lip movements that do not quite match the audio;
- strange artifacts around hair, teeth and glasses.
But as AI models advance, these little glitches will no longer help us tell the real deal from a fake. First, though, let's find out how those videos are actually created.
How are deepfakes made?
Not long ago, we discussed the role of generative adversarial networks (GANs) in the creation of fake imagery. In the case of deepfake videos, an artificial neural network (ANN) called an autoencoder analyses videos and photos of the subject from different angles and compresses them into the essential features it discovers – pose, expression, lighting and so on. From these features, a decoder network can generate new images of the subject. To swap one face for another, a second decoder is trained on samples of the person whose face we want to insert, while both decoders share the same encoder. Feeding a frame of the original subject through the shared encoder and then through the other person's decoder reconstructs that person's face, mimicking the expressions and speech movements of the original footage. Afterwards, a GAN seeks out flaws and polishes the result into near perfection.
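The shared-encoder, two-decoder idea above can be sketched in a few lines. This is a deliberately tiny toy (random, untrained weights, hypothetical dimensions) – just enough to show where the face swap actually happens:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: flattened 8x8 grayscale faces, 16-dim latent code.
FACE_DIM, LATENT_DIM = 64, 16

# Shared encoder weights: both subjects pass through the SAME encoder,
# which forces it to learn generic facial features (pose, expression, lighting).
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.1

# One decoder per subject: each learns to reconstruct only its own face.
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    return W_dec @ code

def swap_face(face_a):
    """The core deepfake trick: encode subject A's frame,
    then decode it with subject B's decoder."""
    return decode(encode(face_a), W_dec_b)

frame = rng.standard_normal(FACE_DIM)   # a stand-in for one video frame of subject A
swapped = swap_face(frame)
print(swapped.shape)                    # (64,) – a frame-sized image of "subject B"
```

In a real system the weights would of course be trained (reconstruction loss per subject), but the swap itself really is just "encode with the shared encoder, decode with the other decoder".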
And here lies the problem of deepfake detection – since deepfakes are created using adversarial training, the algorithm creating the fakes gets better every time it is introduced to a new detection system. It is a race that cannot be won, because adversarial networks are designed to keep improving each other.
Misuse of deepfakes and emerging problems
As with every invention, the generation of artificial images or speech is a double-edged sword. Machine learning is getting better at everything it does, and although telling the real deal from the AI's work can still be easy at times, GANs improve constantly – it is only a question of time until there is no way to tell them apart just by looking or listening. We are talking about audio and video recordings that look perfectly genuine but are not.
Cases of fraud in which computer-generated media played a major role have already been reported. One example is a company whose employee was scammed into wiring a considerable amount of money. He received a call in which what seemed to be his superior instructed him to do so, followed by an email confirming the transaction. Little did he know that the voice he was hearing was not his boss's, but a very good imitation generated by scammers.
Another example of AI misuse, and a growing problem, is the creation of authentic-looking but fake pornography, where the victim's face is used to generate fake nude images. This includes revenge porn as well as fake celebrity porn; the damage it can cause to the victims is obvious.
Moreover, deepfakes can be weaponized on social media to misinform and manipulate viewers. Imagine a viral video of a politician saying things he or she never said, tricking viewers into thinking the footage is real.
Deepfakes also pose a potential threat to identity verification technology, possibly allowing scammers to bypass biometric facial recognition systems.
This is why deepfake detection software has become of great interest.
The problem with deepfake detection models
AI researchers are doing their best to develop algorithms that spot deepfake videos, but this is a technically demanding and difficult challenge. Some of the interesting forgery detection approaches include:
- models that look for unnatural eye-blinking patterns;
- models that detect inconsistencies in head pose and facial landmarks;
- models that search for the warping and blending artifacts left by the face-swapping process;
- models that hunt for the characteristic "fingerprints" GANs leave in generated images.
So far, it seems we are on our way to winning the war on deepfakes. But wait, there is a catch. As we said before, the deep networks responsible for generating this fake imagery can themselves be trained to avoid detection. This leads to a cat-and-mouse situation: every time a new detection model is presented, a better-trained deepfake generator follows shortly after. An actual example of this is the model that detected fakes by assessing the subject's eye-blinking patterns. Shortly after the paper describing this detection model was published, deepfake generators corrected that flaw.
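The blinking example shows why publishing a detector can backfire: once its rule is known, the generator only has to optimize against it. Here is a deliberately simplified toy of that dynamic (the threshold and blink rates are hypothetical, and the "training" is just a nudge loop):

```python
# Toy illustration of the cat-and-mouse dynamic: a published detector flags
# videos whose (simulated) blink rate is below a threshold, and the generator
# simply adjusts its output until the detector passes it.

BLINKS_PER_MINUTE_THRESHOLD = 10.0  # the detector's published rule (hypothetical)

def detector_says_fake(blink_rate: float) -> bool:
    """The detector's logic, now public knowledge."""
    return blink_rate < BLINKS_PER_MINUTE_THRESHOLD

def adapt_generator(blink_rate: float) -> float:
    """One round of adversarial adaptation: keep nudging the generated
    blink rate upward as long as the detector still catches it."""
    while detector_says_fake(blink_rate):
        blink_rate += 1.0
    return blink_rate

rate = 2.0                       # early deepfakes barely blinked at all
rate = adapt_generator(rate)
print(detector_says_fake(rate))  # False – the published detector no longer helps
```

Real generators adapt through gradient-based adversarial training rather than a hand-written loop, but the incentive structure is exactly this one.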
The Deepfake Detection Challenge
Until recently there was a lack of large datasets or benchmarks to train detection models on. And we say until recently because, thanks to the Deepfake Detection Challenge (DFDC) organized by Facebook together with other industry leaders and academics, a huge dataset of more than 100,000 videos was shared publicly. Using this dataset, DFDC participants could train and test their detection models. More than 2,000 participants submitted over 35,000 models for the competition. The results were announced last year, and the winning model achieved an accuracy of about 65% on previously unseen videos – meaning roughly one in three videos was classified incorrectly. Let us be honest, these numbers are not too impressive…
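To see what a score like that means in practice, it helps to spell out the confusion-matrix arithmetic. The counts below are illustrative, not the actual DFDC results – they are just chosen so a detector can land near the 65% mark:

```python
# Toy confusion-matrix arithmetic for a hypothetical deepfake detector
# evaluated on 1,000 videos (illustrative numbers, not the DFDC results).
true_pos  = 330   # deepfakes correctly flagged
false_neg = 170   # deepfakes that slipped through
true_neg  = 320   # real videos correctly passed
false_pos = 180   # real videos wrongly flagged as fakes

total = true_pos + false_neg + true_neg + false_pos

accuracy  = (true_pos + true_neg) / total      # share of ALL videos classified correctly
precision = true_pos / (true_pos + false_pos)  # share of FLAGGED videos that really are fakes

print(f"accuracy:  {accuracy:.2f}")   # 0.65
print(f"precision: {precision:.2f}")  # 0.65
```

Note that accuracy and precision answer different questions: with these numbers, 180 of the 510 flagged videos (about a third) are real footage wrongly accused of being fake – a serious problem if the detector is used to take content down automatically.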
DARPA’s SemaFor Program
DARPA, the US agency famous for the development of innovative technologies, decided to jump on the deepfake detection train by launching a program called SemaFor (Semantic Forensics). Its objective is to design a system that automatically detects all types of manipulated media by combining three different types of algorithms: text analysis, audio analysis and video content analysis. The algorithms will be trained on 250,000 news articles and 250,000 social media posts, including 5,000 fake items.
Microsoft’s Video Authenticator
In September 2020, the tech giant Microsoft released a tool designed to help distinguish fake videos by providing a confidence score – a numeric probability that the media was manipulated by an AI. The tool will not be released to the public directly, because deepfake creators could otherwise use its code to teach their models to evade detection.
Beyond deepfake detection
Every time a new media-manipulation detection method is published, it is only a question of time before it is surpassed by a better, smarter fake-creating algorithm. This is why, in order to lower the risks associated with the spread of forged multimedia, a more holistic approach needs to be taken. The solution seems to lie in a combination of:
- deepfake detection algorithms;
- media authentication, i.e. verifying that content has not been tampered with since it was captured or published;
- media provenance, i.e. tracking where a piece of media originated and how it has been modified along the way.
The ability to detect fake multimedia is among the top challenges we currently face in the world of technology. Ironically, every time a new detection model is published, it drives improvements in the fake-generating models, so we can expect to see far more believable and realistic deepfakes in the future. To fight the misuse of such media, additional measures such as media authentication and media provenance need to be adopted.
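Media authentication, unlike detection, does not have to win an arms race: it only has to prove that specific bytes are unchanged since capture. A minimal sketch of the idea, using an HMAC as a stand-in for whatever signing scheme a trusted camera or publisher might use (the key and payload here are of course hypothetical):

```python
import hashlib
import hmac

# Hypothetical sketch: a trusted device signs media at capture time,
# and anyone can later verify that the bytes were not manipulated.
# SECRET_KEY stands in for a signing key held by that device.
SECRET_KEY = b"hypothetical-device-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce an authentication tag for the original footage."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Re-compute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"\x00\x01 raw video frames \x02\x03"   # stand-in for a video file
tag = sign_media(original)

print(verify_media(original, tag))            # True: untouched footage
print(verify_media(original + b"fake", tag))  # False: any edit breaks the tag
```

Real provenance systems use public-key signatures rather than a shared secret, so that anyone can verify without being able to forge, but the principle is the same: a single flipped byte anywhere in the file invalidates the tag.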
Image by Gerd Altmann from Pixabay