
By:
IDi Team

References

- "24 million euros stolen with an artificial intelligence scam", El Confidencial, https://www.elconfidencial.com/tecnologia/
- Messi Messages (PepsiCo Inc.)
- Sora (OpenAI)

Deepfake: Deception and 20 million euros lost

"An employee of a Hong Kong finance company recently transferred 23.7 million euros to what he believed to be his company's UK subsidiary." This is how the story in the newspaper El Confidencial begins. On a first reading of this paragraph, the main cause appears to be a lack of security awareness planning: once again, we see the exploitation of "the weakest link".

Reading on: "The scammers used Deepfake technology to impersonate the CFO and other colleagues in a video conference, Hong Kong police have reported." "(In the) video conference with multiple people, it turns out they were all fakes"

In the analysis of what happened, it is explained that an employee received an email from the "financial director" of the UK subsidiary requesting a "secret" money transfer. The employee, trained to consider the possibility of dealing with a cybercriminal, became suspicious of the requested transaction and, quite sensibly, requested a call for confirmation. On that video call he saw and conversed with the "CFO" and other employees, and was able to recognize the appearance and tone of voice of his colleagues. Once the operation was thus "validated", more than 23 million euros were transferred.

Image of a meeting generated with a free service using a single prompt and without preparation


Deepfake

Deepfake is an advanced artificial intelligence technique that allows the creation or manipulation of multimedia content, such as video or audio, to make it appear that a person said or did something that never actually happened. Typically, neural networks and deep learning algorithms are trained on large volumes of visual and audio data to mimic the appearance, voice and gestures of specific individuals with a high degree of realism.
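As a rough illustration of how such models are trained, the classic face-swap architecture uses one shared encoder and two decoders, one per identity: each decoder learns to reconstruct its own identity, and the "swap" decodes identity A's encoding with identity B's decoder. The sketch below is a toy with random vectors standing in for images and plain linear layers standing in for deep networks; it only shows the data flow, not a real Deepfake model.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, LATENT = 32, 8                      # toy "image" size and latent size
encoder = rng.normal(scale=0.1, size=(DIM, LATENT))
decoder_a = rng.normal(scale=0.1, size=(LATENT, DIM))
decoder_b = rng.normal(scale=0.1, size=(LATENT, DIM))

faces_a = rng.normal(size=(64, DIM))     # stand-in for identity A's images
faces_b = rng.normal(size=(64, DIM))     # stand-in for identity B's images

def step(x, dec, lr=0.01):
    """One gradient step minimizing ||x @ encoder @ dec - x||^2."""
    global encoder
    z = x @ encoder                      # shared encoding
    err = z @ dec - x                    # reconstruction error
    dec -= lr * (z.T @ err) / len(x)     # update this identity's decoder
    encoder -= lr * (x.T @ (err @ dec.T)) / len(x)  # update shared encoder
    return float((err ** 2).mean())

loss_start = step(faces_a, decoder_a)
for _ in range(200):
    step(faces_a, decoder_a)             # decoder A learns identity A
    step(faces_b, decoder_b)             # decoder B learns identity B
loss_end = step(faces_a, decoder_a)

# The "deepfake": encode a face of A, decode it with B's decoder.
fake = faces_a @ encoder @ decoder_b
print(f"reconstruction loss: {loss_start:.3f} -> {loss_end:.3f}")
```

Real systems replace the linear maps with deep convolutional networks and train on thousands of frames, but the shared-encoder/dual-decoder structure is the same.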



Challenge for organizations

From a cybersecurity perspective, Deepfakes represent a significant challenge due to their potential to spread disinformation, compromise the authenticity of information and enable malicious activity: fraud like the case described above, but also large-scale manipulation of political and social events.

Effective detection of Deepfakes and educating users about their existence and associated risks are key to mitigating their negative impact, although compensating controls should also be considered to reduce the residual risk associated with this technology.

Turning to the risks arising from the misuse of these technologies, we can find the following:

Impersonation: Imitating the voice or image of key people within the organization, such as executives or system administrators, to authorize fraudulent transactions, disclose sensitive information or manipulate employees into performing compromising actions.

Disinformation and manipulation: Creating false content that appears authentic can damage the organization's reputation, manipulate market perceptions, destabilize the company's stock or spread falsehoods about products and services, affecting customer confidence and investor relations.

Spear-phishing: Its use in phishing campaigns represents a level of personalization and realism that can fool even technically sophisticated users, increasing the effectiveness of these attacks to obtain access credentials, financial information or confidential data.

Legal and compliance risks: The circulation of Deepfakes involving the organization or its employees may lead to legal implications, including litigation for defamation, copyright violations or non-compliance with privacy and data protection regulations.

Corporate espionage: In critical sectors or organizations working in areas of national interest, Deepfakes can be used by state actors or competitors for corporate espionage or even for destabilization or influence on democratic processes and political decisions.

Tampering with evidence and records: Deepfakes can be employed to alter visual or auditory evidence in legal or internal investigations, complicating the attribution of responsibility and the administration of justice.

Impact on biometric authentication: As Deepfake techniques become more sophisticated, there is a risk that they could be used to circumvent security systems based on facial or voice recognition, compromising access control to critical information infrastructures.

Some conclusions

Awareness plans should include a focus on the misuse of Deepfake technology. The case described is no longer a novelty; it is just one more added to similar incidents in the wild, and such misuse is increasingly likely given the wealth of technological resources within reach. No large-scale statistics are needed to notice this when anyone can use this type of technology to send a greeting to their friends on behalf of Messi.

Online generator of personalized messages from Lionel Messi (available February 2024)


Recently, OpenAI has shared progress on a new development that allows the generation of multimedia content from text instructions. Beyond its incredible breakthrough, we can note, in the context of this publication, the associated risks.

Scene of a man reading a book sitting on a cloud generated with SORA (OpenAI)


All in all, truly mitigating the risks associated with Deepfake technology calls, as with most risks, for a multipronged approach: updating security policies, investing in artificial intelligence solutions that can identify anomalies and Deepfake-specific signatures (not always within reach), and simpler measures such as revising processes to include an additional validation step or the verification and authentication of information sources.
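One such process revision can be sketched in a few lines: a high-risk request (for example, a wire transfer) is only approved with a confirmation code obtained through a separate, pre-established channel, so a convincing video call alone is never sufficient. This is a hypothetical sketch, not a real system; the names (`confirmation_code`, `approve_transfer`) and the HMAC-based modeling of the second channel are assumptions for illustration.

```python
import hashlib
import hmac
import secrets

# Pre-shared secret, exchanged in person or through another trusted
# out-of-band channel; it models the "second channel" of verification.
SHARED_SECRET = secrets.token_bytes(32)

def confirmation_code(request: str) -> str:
    """Code the requester must obtain via the secondary channel."""
    return hmac.new(SHARED_SECRET, request.encode(), hashlib.sha256).hexdigest()

def approve_transfer(request: str, code: str) -> bool:
    """Reject any request, however convincing, without a valid code."""
    return hmac.compare_digest(code, confirmation_code(request))

req = "transfer 23700000 EUR to account GB00-0000"
assert approve_transfer(req, confirmation_code(req))
assert not approve_transfer(req, "deepfaked video call says it is fine")
```

The design point is that approval depends on something a Deepfake cannot reproduce (possession of the out-of-band secret), not on recognizing a face or a voice.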

It is also important to foster a security culture within the organization that promotes vigilance and healthy skepticism towards suspicious content.

Collaboration with other companies and regulatory bodies to share knowledge and best practices can also help create a more secure environment against the threats that Deepfakes pose.