Spear phishing attacks used to be limited to high-profile targets such as CEOs, politicians, and other influential individuals. These attacks required extensive research, preparation, and coordination, making them a resource-intensive endeavor for the attacker. However, the rise of artificial intelligence (AI) and the ubiquity of mobile malware have fundamentally changed the landscape of social engineering attacks. Today, highly sophisticated spear phishing attacks can be created easily and used effectively against millions of everyday individuals.
This blog addresses the topic of malware-supported, AI-generated spear phishing head-on and offers a technical solution to break the cycle of these attacks with minimal investment and effort.
Using Mobile Malware to Gather Data for the Attack
Mobile malware is ubiquitous and, as a result, is a remarkably effective source of data on and control over victims in social engineering attacks.
Mobile malware – particularly overlays, keyloggers, RATs, accessibility malware, and mobile apps with built-in VPN, remote desktop control, or EDR capabilities – can record users’ interactions and provide all the data an attacker needs to carry out a social engineering attack. What’s worse, these tools can also be used to perform pre-attack research on the victim and real-time monitoring of the victim during the social engineering attack. In one highly publicized example, malware installed on victims’ devices, combined with AI speech impersonation, allowed the attacker to interact with victims during a conference call.
New tools drastically reduce the effort attackers need to gather detailed personal information on the intended victim. For example, mobile app overlays are malicious screens that appear over legitimate apps, complete with fake fields and buttons. Overlays can be used to trick users into interacting with the attacker, including entering sensitive information such as login credentials or payment details. Keyloggers, on the other hand, silently record every keystroke made on the infected device, capturing passwords, messages, and other private data. Once the malware has gathered the requisite baseline of data on a victim, control of the attack can be turned over to the attacker’s call center to complete the attack.
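These data-gathering mechanisms leave detectable traces on the device. As a rough illustration of what a defending Android app might look for, here is a minimal Kotlin sketch that lists installed packages requesting the overlay permission and the accessibility services currently enabled. The DeviceRiskSignals type and collectRiskSignals function are hypothetical names used for illustration, not an Appdome or Android API, and on newer Android versions package-visibility rules may limit the installed-package list unless the app declares the appropriate queries.

```kotlin
import android.accessibilityservice.AccessibilityServiceInfo
import android.content.Context
import android.content.pm.PackageManager
import android.view.accessibility.AccessibilityManager

// Hypothetical container for signals commonly abused by overlay and accessibility malware.
data class DeviceRiskSignals(
    val overlayCapablePackages: List<String>,
    val enabledAccessibilityServices: List<String>
)

fun collectRiskSignals(context: Context): DeviceRiskSignals {
    val pm = context.packageManager

    // Packages that request SYSTEM_ALERT_WINDOW can draw screens over this app --
    // the mechanism overlay malware relies on.
    val overlayCapable = pm.getInstalledPackages(PackageManager.GET_PERMISSIONS)
        .filter { info ->
            info.requestedPermissions?.contains(
                android.Manifest.permission.SYSTEM_ALERT_WINDOW
            ) == true
        }
        .map { it.packageName }

    // Enabled accessibility services can read screen content and inject input --
    // legitimate for assistive tools, but also the hook used by accessibility malware.
    val a11y = context.getSystemService(Context.ACCESSIBILITY_SERVICE) as AccessibilityManager
    val enabledServices = a11y
        .getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK)
        .map { it.id }

    return DeviceRiskSignals(overlayCapable, enabledServices)
}
```

In practice, signals like these would feed a broader, layered defense rather than serve as a single yes/no check.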
Using Artificial Intelligence to Complete the Attack
AI technologies have significantly enhanced the believability of social engineering attacks, and their effectiveness has skyrocketed as a result.
For example, with AI, smishing (SMS phishing) attacks have evolved well beyond spam into highly targeted, personalized, and convincing texts aimed at everyday individuals. Moreover, these AI-generated SMS, direct message, and other communications sound eerily human, mimicking regional speech patterns, avoiding grammatical mistakes, and drawing on the data gathered from other sources. AI-generated smishing attacks often masquerade as trusted brands or businesses, making it challenging for recipients to distinguish between legitimate and malicious communications. In several publicized examples, the attack starts with a text message purporting to come from your bank that says, “Hi [Victim], We care about your security. Did you make this purchase?” and asks you to approve or deny the amount shown in the text.
Likewise, AI-based voice cloning has taken social engineering attacks to new heights. Using AI, attackers can create near-perfect replicas of anyone’s voice – eliminating any hint that the caller is not legitimate. This technology can be used to impersonate family members, colleagues, brand representatives, or even executives, adding a layer of credibility to these vishing (voice phishing) attacks. The ability to clone voices so accurately means that attackers can manipulate victims more effectively, often bypassing the initial skepticism that might arise with traditional phishing methods.
Finally, AI-powered chatbots can engage in real-time conversations with victims while they are inside a mobile app. For example, these chatbots can be made to look like customer service representatives of a brand or service. They can reach out, answer questions, provide fake verification, and guide victims through phishing processes, making the scams more interactive and believable. The sophistication of AI chatbots can trick even the most cautious individuals into divulging sensitive information.
Fighting Social Engineering at a Technical Level
By the time an AI-powered social engineering attack, such as a voice clone or smishing campaign, is launched against the victim, the attacker has most – if not all – of the data needed to carry it out. At this point, the attacker needs the victim – the human – to act on the attacker’s behalf. The victim is at an informational disadvantage, and the attacker uses the data to make the attack more believable.
Cybersecurity practitioners advocate ongoing, continuous security training to ensure consumers and employees remain vigilant against these types of attacks. However, AI makes the line between real and fake, legitimate and malicious, thinner by the day. It’s doubtful that brands and enterprises can rely on security awareness training as a first line of defense for much longer.
At Appdome, we believe that brands and enterprises should fight social engineering at a technical level. Detecting the methods attackers use to collect data and to control the user, the mobile application, or the mobile device makes stopping social engineering attacks remarkably easier. Moreover, using the data from a layered mobile defense model inside the mobile app, brands and enterprises can create, alter, or adjust the user experience to break the cycle of manipulation and control over each victim.
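To make the idea concrete, here is a minimal sketch, in plain Kotlin, of how threat signals like those gathered above might drive a responsive workflow before a sensitive action such as a money transfer. The RiskSignals and WorkflowDecision types, the allow-list, and the 60-second cooldown are illustrative assumptions, not a description of Appdome’s implementation.

```kotlin
// Illustrative risk signals, e.g., produced by device-side detection like the earlier sketch.
data class RiskSignals(
    val enabledAccessibilityServices: List<String>,
    val overlayCapablePackages: List<String>,
    val screenSharingActive: Boolean
)

// The workflow either proceeds normally or pauses with human-readable reasons.
sealed class WorkflowDecision {
    object Proceed : WorkflowDecision()
    data class PauseAndWarn(val reasons: List<String>, val cooldownSeconds: Int) : WorkflowDecision()
}

// Hypothetical allow-list of assistive services the enterprise trusts; real ids vary by device.
val trustedAccessibilityServices = setOf(
    "com.google.android.marvin.talkback/.TalkBackService"
)

fun evaluateSensitiveAction(signals: RiskSignals): WorkflowDecision {
    val reasons = mutableListOf<String>()

    if (signals.enabledAccessibilityServices.any { it !in trustedAccessibilityServices })
        reasons += "An app on this device can read your screen and act on your behalf."

    if (signals.overlayCapablePackages.isNotEmpty())
        reasons += "An app on this device can draw screens over this one."

    if (signals.screenSharingActive)
        reasons += "Your screen is currently being shared or mirrored."

    // If any manipulation channel is present, slow the user down and explain why,
    // rather than silently blocking: the goal is to give the human time to think.
    return if (reasons.isEmpty()) WorkflowDecision.Proceed
    else WorkflowDecision.PauseAndWarn(reasons, cooldownSeconds = 60)
}
```

The design choice worth noting is that the workflow warns the user with specific, plain-language reasons and adds a pause, rather than silently blocking the action – which is exactly the breathing room described next.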
Social engineering scams and attacks assume that humans are the weakest link in the cyber-defense model. But what if these same humans were armed with data about the malware and technical methods being used against them in the moment of the attack? What if, armed with this data and with threat-aware, responsive workflows in mobile applications, they were given time to think, compare, and consider their actions before taking them? If this were to happen, they just might end up being the strongest link in the cyber-defense model.
As powerful as AI has become, the human brain still outpaces it.
Conclusion
This democratization of spear phishing means that the threat is no longer confined to the upper echelons of society. Everyday individuals, regardless of their professional or social status, are now at risk. Attackers can easily automate the process of gathering data, targeting victims, and carrying out social engineering attacks, making everyone a potential target. Brands and enterprises now have a way to detect the malware and technical methods of control that attackers rely on in social engineering attacks, and to use that data in their defenses. Armed with this data, brands and enterprises can transform humans into the strongest link in defeating social engineering attacks at scale.