
Cybersecurity, deepfakes and the human risk of AI fraud

Tan KW
Publish date: Fri, 03 May 2024, 12:42 PM

On a March 2024 National Association of State Chief Information Officers call with government and corporate IT leaders, an old security problem that has evolved into a top current threat was highlighted: cybersecurity awareness training for end users is back near the top of government cybersecurity concerns. We've seen this play out before. Or have we?

A new generation of AI-generated phishing attacks, arriving through emails, texts, voice messages and even videos, is targeting government organisations in unprecedented ways. These clever new cyberattacks pose fresh challenges for organisational defenders because they arrive without the typos, formatting errors and other mistakes seen in past targeted phishing and spear-phishing campaigns.

Even scarier are the AI-generated deepfakes that can mimic a person’s voice, face and gestures. New cyberattack tools can deliver disinformation and fraudulent messages at a scale and sophistication not seen before.

Simply put, AI-generated fraud is harder than ever to detect and stop. Examples from 2024 include fake messages imitating President Biden, Florida Gov. Ron DeSantis and private-sector CEOs. Beyond election and political impacts, a deepfake video of a multinational company's CFO recently fooled staff into making bank transfers - leading to a US$26mil loss.

So how can enterprises address these new data risks?

Over the past few years, there has been an industry push to move beyond traditional security awareness training for end users toward a more holistic set of measures to combat cyberattacks directed at people.

Put simply: Effective security awareness training truly changes security culture. People become engaged and start asking questions, they understand and report risks, and they realise that security is not just a workplace issue. It’s about their personal security and their family’s security as well.

The term that many are now adopting is “human risk management” (HRM). Research and consulting firm Forrester describes HRM as “solutions that manage and reduce cybersecurity risks posed by and to humans through: detecting and measuring human security behaviours and quantifying the human risk; initiating policy and training interventions based on the human risk; educating and enabling the workforce to protect themselves and their organisation against cyberattacks; building a positive security culture”.

So what does this mean for addressing the immediate threat of AI-generated deepfakes?

First, we must (re)train employees to detect this new generation of sophisticated phishing attacks. They need to know how to authenticate both the source and the content of what they receive. This includes showing them what to look for and which habits to build, such as:

* Inconsistencies in audio or video quality

* Mismatched lip-syncing or voice synchronisation

* Unnatural facial movements

* Uncharacteristic behaviour or speech patterns

* Source verification

* Enhancing detection skills

* Use of watermarks for images and videos (see the sketch after this list)
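
To make the last item concrete, here is a minimal, illustrative sketch in Python (assuming the Pillow library; the file name is hypothetical) that inspects an image's metadata. Genuine photographs usually carry camera EXIF data while many AI-generated images do not, so its absence is a weak signal worth combining with the checks above; dedicated watermarking and provenance standards such as C2PA go further than this simple heuristic.

```python
# Minimal sketch (assumes Python with Pillow installed: pip install Pillow).
# Genuine photos usually carry camera EXIF metadata; many AI-generated
# images carry none. Absence is only a weak signal, never proof of a fake.
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return the human-readable EXIF tags found in an image file."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_summary("received_image.jpg")  # hypothetical file name
if not tags.get("Make") and not tags.get("Model"):
    print("No camera metadata found - treat this image with extra suspicion.")
else:
    print("Camera metadata present:", tags.get("Make"), tags.get("Model"))
```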

Second, provide tools, processes and techniques to verify message authenticity. Where such tools are not yet available, establish a process that empowers employees to question the legitimacy of messages through a management-endorsed verification procedure. Also, report deepfake content: if you come across a deepfake that involves you or someone you know, report it to the platform hosting the content.
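
As one concrete example of such a verification technique, the sketch below (a minimal illustration assuming Python with the open-source dkimpy library; the file name is hypothetical) checks an email's DKIM signature, which confirms the message really was signed by the domain it claims to come from. A failed check does not prove fraud, but it is a sensible trigger for the escalation process described above.

```python
# Minimal sketch: check an email's DKIM signature with the open-source
# dkimpy library (pip install dkimpy). A valid signature confirms the
# message was signed by the sending domain; a failure should feed into
# the manual verification process described above.
import dkim

def dkim_verified(raw_message: bytes) -> bool:
    """Return True if the raw RFC 822 message carries a valid DKIM signature."""
    try:
        return dkim.verify(raw_message)
    except dkim.DKIMException:
        # Missing or malformed signatures count as unverified.
        return False

# "suspicious_email.eml" is a hypothetical saved copy of the message.
with open("suspicious_email.eml", "rb") as f:
    if not dkim_verified(f.read()):
        print("Sender could not be verified - escalate before acting on it.")
```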

Third, consider new enterprise technology tools that use AI to detect message fraud. That's right - you may need to fight fire with fire, using the next generation of cyber tools to stop these AI-generated messages in much the same way that email security tools detect and disable traditional phishing links and quarantine spam messages. Some new tools let staff check individual messages and images for fraud when this cannot be done automatically for all incoming email.
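
The sketch below is a deliberately tiny illustration of that idea, not any vendor's product: a toy text classifier (assuming Python with scikit-learn; the training examples are invented) that scores incoming messages for fraud so that suspicious ones can be routed for human review.

```python
# Illustrative sketch only (assumes Python with scikit-learn installed:
# pip install scikit-learn). Real AI defences are far more sophisticated;
# this simply shows the "fight fire with fire" idea of scoring text for
# fraud. All training examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: wire US$40,000 to this account before noon",    # fraud
    "Your CEO needs gift cards immediately, keep it quiet",  # fraud
    "Team meeting moved to 3pm, agenda attached",            # legitimate
    "Quarterly report draft ready for your review",          # legitimate
]
labels = [1, 1, 0, 0]  # 1 = suspected fraud, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = "Please transfer funds urgently and tell no one"
score = model.predict_proba([incoming])[0][1]
print(f"Fraud probability: {score:.2f}")  # high scores go to human review
```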

This new generation of cyberattacks using deepfakes to trick humans is undermining trust in everything digital. Indeed, digital trust is becoming harder for governments to earn, and current trends are not encouraging - immediate action is required.

As Albert Einstein once said, “Whoever is careless with the truth in small matters cannot be trusted in important affairs.”

 

 - TNS
