Artificial Intelligence and Organized Crime Sitting In a Tree…

K.I.S.S.I.N.G. First came love, then came marriage, then came the baby in the baby carriage! Sucking his thumb, wetting his pants, doing the hula – hula dance! And the BABY is a Boy!

The Yahoo Boys.

The Yahoo Boys are a notorious group of cyber criminals operating out of West Africa, primarily Nigeria. While most scammers try to stay under the radar, the Yahoo Boys are brazen – they openly advertise their fraudulent activities across major social media platforms like Facebook, WhatsApp, Telegram, TikTok, and YouTube.

An analysis by WIRED uncovered a vast network of Yahoo Boy groups and accounts actively sharing scamming techniques, scripts, and resources. There are nearly 200,000 members across 16 Facebook groups alone, not to mention dozens of channels on WhatsApp, Telegram, TikTok, YouTube, and over 80 scam scripts hosted on Scribd. And this is likely just scratching the surface.

The Yahoo Boys aren’t a single organized crime syndicate, but rather a decentralized collective of individual scammers and clusters operating across West Africa. Their name harks back to the notorious “Nigerian prince” email scams, which originally targeted users of Yahoo services. But their modern operations are vast, spanning romance fraud, business email compromise, and sextortion.

The scams themselves are getting more psychologically manipulative and technologically advanced. Classic romance scams now incorporate live deepfake video calls, AI-generated explicit images, and even physical gifts like food deliveries to build trust with victims. One particularly disturbing trend is the rise in sextortion schemes, with cases linked to dozens of suicides by traumatized victims.

Artificial intelligence (AI) is being exploited by cybercriminals such as the Yahoo Boys to automate and enhance various aspects of social engineering scams.

Here are some ways AI is being used in social engineering attacks:

1. Natural Language Generation: AI models can generate highly convincing and personalized phishing emails, text messages, or social media posts that appear to come from legitimate sources. These AI-generated messages can be tailored to specific individuals or organizations, making them more believable and increasing the likelihood of success.

2. Voice Cloning: AI can be used to clone or synthesize human voices, allowing scammers to impersonate trusted individuals or authorities over the phone. This technique, known as voice phishing or “vishing,” can trick victims into revealing sensitive information or transferring funds.

3. Deepfakes: AI-powered deepfake technology can create highly realistic video or audio content by manipulating existing media. Cybercriminals can use deepfakes to impersonate individuals in video calls or create fake videos that appear to be from legitimate sources, adding credibility to their social engineering attempts.

4. Sentiment Analysis: AI can analyze the language, tone, and sentiment of a victim’s responses during a social engineering attack, allowing the attacker to adapt their approach and increase the chances of success.

5. Target Profiling: AI can analyze vast amounts of data from various sources, such as social media profiles, public records, and online activities, to create detailed profiles of potential victims. These profiles can be used to craft highly personalized and convincing social engineering attacks.

6. Automated Attacks: AI can automate various aspects of social engineering campaigns, such as identifying potential victims, generating and sending phishing emails or messages, and even engaging in real-time conversations with targets.

While AI can be a powerful tool for cybercriminals, it is important to note that these technologies can also be used by security researchers and organizations to detect and mitigate social engineering attacks. However, the ongoing advancement of AI capabilities poses a significant challenge in the fight against social engineering and requires vigilance and continuous adaptation of security measures.
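The same pattern-spotting that defenders apply at scale can be sketched in a few lines. The snippet below is a minimal, hypothetical heuristic – the phrase list, function names, and threshold are illustrative assumptions, not any vendor's actual rules – that flags the pressure tactics described in the list above:

```python
import re

# Hypothetical red-flag phrases drawn from common pressure tactics:
# urgency wording plus requests tied to credentials or money.
URGENCY = re.compile(
    r"\b(act now|urgent|immediately|verify your account|"
    r"suspended|wire transfer|gift card)\b",
    re.IGNORECASE,
)

def red_flag_score(message: str) -> int:
    """Count pressure-tactic phrases; a higher score means more suspicious."""
    return len(URGENCY.findall(message))

def looks_suspicious(message: str, threshold: int = 2) -> bool:
    """Flag a message once it accumulates enough red-flag phrases."""
    return red_flag_score(message) >= threshold
```

Real filtering systems layer many more signals (sender reputation, link analysis, machine-learned classifiers) on top of keyword checks like this; the point is only that the tactics are detectable precisely because they follow a script.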

Insidious Meets Prolific

What makes the Yahoo Boys particularly insidious is their bold presence on mainstream social platforms. They use these as virtual “office spaces,” sharing step-by-step scripts, explicit images and videos of potential victims, fake profiles, and even tutorials on deploying new AI technologies like deepfakes and voice cloning for their scams. It’s a massive con operation happening in plain sight.

Despite social media’s stated policies against fraud and illegal activities, the companies have struggled to keep up with the Yahoo Boys’ prolific output. Although the major platforms removed many of the specific groups and accounts identified by WIRED, new ones continue popping up daily, exploiting gaps in moderation and content policies.

Cybersecurity experts are sounding the alarm that social platforms are providing safe harbor for these transnational cyber criminal gangs to recruit, share resources, and execute increasingly sophisticated frauds with global reach and real-world consequences. While the “Yahoo Boys” moniker implies a relatively harmless group of young tricksters, the reality is a vast and dangerous network of tech-savvy con artists causing significant financial and psychological harm on an industrial scale.

Law enforcement and the tech giants are struggling to get a handle on this viral scamming epidemic. As new AI capabilities get folded into the Yahoo Boys’ arsenal of malicious tools and tactics, the need for a coordinated global crackdown is becoming more urgent. No longer just a nuisance of sketchy email schemes, this criminal community represents an escalating threat operating in the open on our most popular social media platforms.

I personally am getting ready to crawl under a rock, and maybe move into a cave deep in the woods of Montana to escape the onslaught of artificial intelligence scams. But maybe you are tougher than I am. If you are, I suggest adhering to these tips:

Here are 11 tips to protect yourself from AI-powered social engineering scams:

1. Be wary of unsolicited communication, even if it appears to come from a trusted source. Verify the authenticity of the message or request through official channels. You know, pick up the phone. Send them a text message. Meet them in person.

2. Enable multi-factor authentication for your accounts and devices to add an extra layer of security beyond just passwords. This has nothing to do with artificial intelligence scams. You should just do it because it makes you a tougher target.

3. Keep your software and operating systems up to date with the latest security patches to mitigate vulnerabilities that could be exploited. Same, just do it.

4. Be cautious of urgent or high-pressure requests, as these are common tactics used in social engineering attacks. This goes for all social engineering scams.

5. Scrutinize the language and tone of messages for inconsistencies or anomalies that may indicate AI-generated content. If you feel your blood pressure going up, it’s fraud. It’s always fraud.

6. Verify the authenticity of voice calls or video conferences, especially if they involve requests for sensitive information or financial transactions. Again, pick up the phone, be persistent, and meet them in person. Don’t verify by yourself; get others involved.

7. Be skeptical of overly personalized or tailored messages, as AI can analyze your online presence to craft convincing lures. Every communication from a scammer is designed to get you to trust them. Do everything in your power to be skeptical.

8. Educate yourself and stay informed about the latest AI-powered social engineering techniques and scams. Yeah, just read my newsletter. I’ll keep you up to speed.

9. Implement robust security measures, such as email filtering, web content filtering, and endpoint protection, to detect and block potential threats. Your IT people should have systems in place. But even those systems can be compromised by human hacking.

10. Report any suspected social engineering attempts to the relevant authorities and organizations to help identify and mitigate emerging threats. Those relevant authorities start with your internal people.

11. Cyber security awareness training educates employees about threats, best practices, and their role in protecting company data and systems. It reduces human error, promotes a security-conscious culture, mitigates risks, and enhances an organization’s overall cyber resilience.
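The multi-factor authentication in tip 2 is less mysterious than it sounds. Here is a minimal sketch of the time-based one-time password (TOTP) math behind common authenticator apps, per RFCs 4226 and 6238, using only the Python standard library. The Base32 secret shown is a made-up example, not a real credential:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated to N digits."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # low nibble of the last byte picks the slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238: HOTP with the counter derived from the current time."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)

# Example with a made-up secret: a 6-digit code that changes every 30 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

Because the code is derived from a shared secret plus the current time, a scammer who phishes your password still can’t log in without the code from your device, which is exactly why it makes you a tougher target.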

By staying vigilant, verifying information, and implementing appropriate security measures, you can significantly reduce your risk of falling victim to AI-powered social engineering scams.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, #1 best-selling Amazon author of 5 books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.
