
Five AI-fueled social engineering risks to watch

Tan KW
Publish date: Tue, 05 Mar 2024, 03:34 PM
Social engineering is arguably one of the most potent and persistent forms of cybercrime. There’s a reason why social engineering is so popular with cybercriminals: Hacking people is a lot simpler than breaching software.
To hack software networks, one needs to understand the target environment and how to pry open weaknesses and uncover loopholes - which requires tech skills and resources. On the other hand, hacking humans simply requires basic knowledge of human nature - our susceptibility to greed, lust, curiosity, and impatience.
If you hack the right person, namely someone unaware of phishing lures and their telltale signs, you procure the keys to the kingdom, and your illicit intentions pass undetected.
Technology also plays a role: the more it evolves, the more technology-dependent we become - and the easier it becomes to deceive humans.
First it was email (phishing), then SMS (smishing), then voice calls (vishing), then social media, then QR codes (quishing). Social engineering has evolved in lockstep with technology.
The sudden wave of AI technologies has brought new levels of sophistication to these attack vectors. Let’s examine five upcoming AI developments and the implications they could have for social engineering scams:
1. Professionalised and personalised phishing at scale
According to research by Google Cloud, generative AI is already being used to develop advanced phishing attacks where misspellings, grammatical errors, and lack of cultural context are mostly nonexistent, making these phishes harder to identify and block. Moreover, using automation, attackers can personalise or customise phishing messages to make them appear more authentic and convincing.
2. Weaponisation of voice and video
AI technologies are enabling users to clone audio, superimpose faces on videos, and impersonate people. Convincing attacks have been reported around the world in which adversaries clone audio and fabricate virtual personas to swindle money from organisations via their own employees.
3. More contextualised attacks using MLLMs
Standard LLMs (large language models) process only text. MLLMs (multimodal large language models) present substantial benefits over LLMs because multimodal models can process and associate additional media such as images, video, audio, and sensory data. This enables AI tools to build a deeper awareness of context, leading to more intelligent responses, improved reasoning, and more natural human-computer interactions. Attackers could soon harness MLLMs to create highly contextualised phishing messages, significantly boosting the efficacy of social engineering attacks.
4. Malicious applications of text-to-video technology
Text-to-video (T2V) is another emerging technology in the AI field. As the name implies, T2V enables users to create high-quality visual content by simply providing text inputs. In the hands of threat actors, such technology could be dangerous: they could use it to fabricate false narratives (disinformation), generate deepfakes at scale, deceive people and organisations, and power targeted social engineering attacks.
5. Emergence of AI technology as a service
Google's report predicts that AI tools will soon be offered as a service to assist other threat actors with their insidious campaigns. Malicious AI tools like FraudGPT have already begun surfacing on the dark web, helping cybercriminals craft sophisticated spear phishing emails. As these AI technologies mature and become more accessible, less-skilled bad actors will be able to deploy them, giving rise to a higher volume of AI-powered social engineering attacks.
How businesses can mitigate the risk of AI-generated social engineering attacks
Social engineering attacks are not exclusive to large enterprises. A worker at a business with fewer than 100 employees will experience 350% more social engineering attacks than a worker at a larger business.
What's more, as AI technologies proliferate and businesses transact and interact more digitally than physically, such attacks will become commonplace. Here are best practices that can help mitigate this threat:
1. Improve awareness of AI risks: Through regular communication and reminders, employees must be made aware of emerging AI risks. Document AI risks in security policies so that workers understand how to recognise them, how to handle them, and whom to contact when they encounter a threat.
2. End-user training: The importance of regular (monthly) security awareness training cannot be emphasised enough. Deliver in-person training, give personalised coaching if needed, and run phishing simulation exercises to strengthen employees’ security skills and aptitude. The success or failure of social engineering attacks hinges on employees’ alertness and education.
3. Leverage tools and technology: While social engineering attacks are usually difficult to detect, organisations can implement controls to reduce the risk of identity theft and fraud. For instance, deploy phishing-resistant multi-factor authentication (MFA) to bolster authentication checks. Businesses can also consider employing AI-based cybersecurity tools that can inspect the metadata of email messages to detect evidence of phishing attempts.
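To make the metadata idea concrete, here is a minimal illustrative sketch (not a description of any specific product) of the kind of header checks such a tool might run: comparing the From and Reply-To domains and looking for failed sender-authentication results. The header values, domains, and thresholds in the example message are invented for demonstration.

```python
# Sketch of simple email-header phishing checks, using only the Python
# standard library. Real tools combine many more signals than these two.
from email import message_from_string
from email.utils import parseaddr

def phishing_indicators(raw_message: str) -> list[str]:
    """Return a list of suspicious signals found in the message headers."""
    msg = message_from_string(raw_message)
    signals = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower() if reply_addr else from_domain

    # 1. A Reply-To domain that differs from the From domain is a classic lure.
    if reply_domain != from_domain:
        signals.append(f"Reply-To domain mismatch: {from_domain} vs {reply_domain}")

    # 2. Failed sender authentication recorded by the receiving mail server.
    auth = msg.get("Authentication-Results", "").lower()
    for scheme in ("spf", "dkim", "dmarc"):
        if f"{scheme}=fail" in auth:
            signals.append(f"{scheme.upper()} authentication failed")

    return signals

# Fabricated example message (all addresses and domains are hypothetical).
example = (
    "From: CEO <ceo@example.com>\r\n"
    "Reply-To: payments@attacker.test\r\n"
    "Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=example.com\r\n"
    "Subject: Urgent wire transfer\r\n"
    "\r\n"
    "Please process this payment today.\r\n"
)
print(phishing_indicators(example))
# → ['Reply-To domain mismatch: example.com vs attacker.test', 'SPF authentication failed']
```

AI-based tools extend this idea by scoring many such signals together, along with message content and sender history, rather than relying on any single rule.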
Social engineering is usually phase one in the cyberattack cycle. If organisations learn to harness human intuition developed through repeated phishing exercises, they will be able to detect and block an attack before it can cause material damage.
Along with cultivating the right instincts, it’s equally important for employees to be accountable and act responsibly in reporting suspicious items and incidents. To achieve this, organisations must endeavour to foster a healthy and supportive culture of cybersecurity.
 - TNS