At the 8th Cyber Security Weekend-META 2023, organized by Kaspersky in Kazakhstan this year, experts shared insights into developments in the Middle East, Turkey and Africa (META) region and the global digital threat landscape.

Kaspersky Expert Data Scientist Vladislav Tushkanov said that many fraud schemes now rely on deepfake and voice-imitation applications, and that various measures should be taken to guard against them.

Emphasizing that companies should take precautions in this regard, Tushkanov said: “There may be those who want to defraud you with voice imitation. Large amounts of money should never be transferred on the strength of a phone call alone. There must be protocols in place so that you do not fall into such traps. People who try to deceive you with fake voices and images say, ‘I don’t want to talk about these issues over the phone. Give me your corporate e-mail address.’”
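As a rough sketch of the kind of protocol Tushkanov is calling for (a hypothetical example, not a procedure Kaspersky has published), the Python snippet below refuses to approve large payment requests that arrive by phone call alone unless they are confirmed over an independent, trusted channel. The threshold and channel names are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical policy values, for illustration only.
LARGE_AMOUNT_THRESHOLD = 10_000  # currency units above which extra checks apply
TRUSTED_CHANNELS = {"in_person", "signed_email", "internal_ticket"}

@dataclass
class PaymentRequest:
    amount: float
    request_channel: str                     # e.g. "phone_call", "signed_email"
    confirmation_channels: set = field(default_factory=set)  # independent confirmations

def requires_out_of_band_confirmation(req: PaymentRequest) -> bool:
    """Large requests made over a phone call alone must be confirmed elsewhere."""
    return req.amount >= LARGE_AMOUNT_THRESHOLD and req.request_channel == "phone_call"

def is_approved(req: PaymentRequest) -> bool:
    if not requires_out_of_band_confirmation(req):
        return True
    # At least one trusted, independent channel must confirm the same request.
    return bool(req.confirmation_channels & TRUSTED_CHANNELS)

# A caller claiming to be the boss asks for a large transfer by phone only:
suspicious = PaymentRequest(amount=243_000, request_channel="phone_call")
print(is_approved(suspicious))  # False -> escalate and verify before paying
```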


Some deepfake videos cannot be detected

Vladislav Tushkanov noted that some deepfake videos can be detected easily while others cannot, and made the following assessment:

“The people in some deepfake videos never blink. We can spot those easily, but there are also very well prepared deepfake videos. For example, the deepfake video of Tom Cruise was genuinely professional. The technology for detecting deepfake videos is improving, but the technology for creating them is improving as well. For this reason, it is unwise to rely entirely on technologies that claim to detect deepfakes.”
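The “never blink” cue Tushkanov mentions can be checked programmatically with the eye aspect ratio used in common blink-detection approaches: the ratio of vertical to horizontal eye-landmark distances drops sharply when the eye closes. The sketch below is a minimal illustration, assuming six eye landmarks per frame have already been extracted by some face-landmark model; the threshold values are illustrative and, as the quote warns, a low blink count is only a weak signal.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.21, min_closed_frames=2) -> int:
    """Count blinks as runs of consecutive frames with EAR below the threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

# A real talking-head clip typically shows several blinks per minute;
# an unusually low count over a long clip is one hint, not proof, of a deepfake.
```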


“400,000 new malicious files are distributed over the internet every day”

Kaspersky Expert Data Scientist Tushkanov pointed out that there is so much information on the internet that it is impossible for humans to check and verify it all.

Pointing out that machine learning plays a very helpful role in the fight against cyber attacks, Tushkanov said:

“Every day we face a large number of phishing attacks. 400,000 new malicious files are distributed over the internet every day. No human can handle this many attacks. We ran an experiment to see whether ChatGPT could detect phishing attacks. It did identify phishing successfully, but we also saw it make some mistakes. So although I don’t fully trust these systems, I think they have potential.”
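As a rough illustration of the kind of experiment Tushkanov describes (not the setup Kaspersky actually used), the sketch below prompts a large language model for a one-word verdict on whether a message looks like phishing. The OpenAI Python client and model name are assumptions, and, as the quote warns, the verdict should be treated as a hint rather than a final decision.

```python
from openai import OpenAI  # assumes the openai package (>=1.0) and an API key in the environment

client = OpenAI()

def looks_like_phishing(message_text: str) -> bool:
    """Ask an LLM for a one-word verdict; treat anything but 'LEGITIMATE' as suspect."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a security assistant. Answer with exactly one word: "
                        "PHISHING or LEGITIMATE."},
            {"role": "user", "content": message_text},
        ],
    )
    verdict = resp.choices[0].message.content.strip().upper()
    return verdict != "LEGITIMATE"

# Example: an urgent request to 'verify' credentials via a link is a classic phishing pattern.
sample = "Your mailbox will be suspended today. Verify your password here: http://example.com/login"
print(looks_like_phishing(sample))
# The model can be wrong in both directions, so use it as a pre-filter, not a final decision.
```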


$243k scam in the UK

The use of voice-imitation applications in fraud schemes has raised concerns in recent years.

The fraud case, revealed by The Wall Street Journal in 2019, is known as “the first major fraud case involving voice imitation”.

The CEO of an unnamed energy company headquartered in the United Kingdom transferred $243,000 to fraudsters on instructions given over the phone by someone he believed was his boss. The scammers had cloned the boss’s voice with a voice-imitation application.
