San Francisco, CA – October 16, 2024 – A new AI-powered scam is targeting Gmail users, and these aren’t run-of-the-mill phishing emails. The attacks leverage artificial intelligence (AI) to run highly convincing cons, beginning with fake Gmail account recovery notifications followed by AI-generated phone calls that reinforce the deception. Even experienced users have been caught off guard, underscoring how advanced these phishing tactics have become. The scam was uncovered earlier this month by Microsoft solutions consultant Sam Mitrovic, who detailed the scheme in a recent blog post and advised users on how to protect their personal data.
How the Scam Works
The attack begins with an unexpected email that mimics Google’s account recovery system and prompts the user to act quickly. If this initial phishing attempt is ignored, the scam escalates with an AI-generated call from what appears to be an official Google number, impersonating Google staff to convince victims that their account is compromised. These voice calls use AI to sound realistic and urgent, heightening the pressure on users to reveal personal information.
This multi-step approach makes the scam highly effective, blending traditional phishing with cutting-edge technology to trick users into divulging passwords or other sensitive data. The attackers also spoof legitimate phone numbers listed on Google’s support site. When the call comes in, the voice is very realistic, and typing sounds can be heard in the background. The “security expert” from Google tells the user there has been suspicious activity on the account and can even hold a conversation. On request, the caller can even send the user an email that appears to originate from a Google domain. In Mitrovic’s case, the caller ID showed an Australian number and appeared as “Google Sydney” on his device. A polite, American-sounding voice is the icing on the cake of this realistic AI scam call.
Mitrovic did some digging and, upon closer inspection of the email headers, discovered that the scammers are using Salesforce CRM to spoof the sender domain and relay emails through Google’s servers. Salesforce lets administrators customize sender information to anything they want and send those emails via Gmail’s infrastructure. He also found a Reddit user who had fallen victim to the same scam.
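The kind of header check Mitrovic describes can be approximated with a short script. The sketch below, using Python’s standard `email` module, compares the visible From domain against the envelope (Return-Path) domain; a mismatch is a common sign that a message was relayed through a third-party platform rather than sent by the claimed sender. The sample headers here are hypothetical, invented only to illustrate the mismatch, not taken from the actual scam emails.

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical raw headers, modeled on the pattern described above:
# the From line claims google.com, but the Return-Path points to a
# third-party CRM tenant (all domains below are made up).
RAW_HEADERS = """\
From: Google Support <workspace-noreply@google.com>
Return-Path: <bounce@some-crm-tenant.example.net>
Subject: Account recovery attempt

"""

def sender_domains(raw: str) -> tuple[str, str]:
    """Extract the From domain and the Return-Path (envelope) domain."""
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    envelope_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    return from_domain, envelope_domain

def looks_spoofed(raw: str) -> bool:
    """Flag messages whose visible From domain differs from the envelope domain."""
    from_domain, envelope_domain = sender_domains(raw)
    return bool(from_domain and envelope_domain) and from_domain != envelope_domain

print(looks_spoofed(RAW_HEADERS))  # True: google.com vs. some-crm-tenant.example.net
```

A domain mismatch alone is not proof of fraud (mailing-list software legitimately rewrites envelopes), but paired with an unexpected recovery notification it is a strong reason to verify through official channels.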
Y Combinator CEO, Garry Tan, made a PSA on X with an example of how the scam attempt starts:
Public service announcement: You should be aware of a pretty elaborate phishing scam using AI voice that claims to be Google Support (caller ID matches, but is not verified)
DO NOT CLICK YES ON THIS DIALOG— You will be phished
They claim to be checking that you are alive and… pic.twitter.com/60zeuS2lL8
— Garry Tan (@garrytan) October 10, 2024
Who Is Behind It?
Though specific perpetrators have not been named, the scam reflects a broader trend in cybercrime where criminals are increasingly using AI to augment phishing schemes. The FBI has sounded alarm bells about the rising use of AI in scams, enabling criminals to automate and scale social engineering attacks with unprecedented precision.
Impact and Risk
Given Gmail’s massive user base of over 2.5 billion, the potential scale of this scam is alarming. While it’s unclear how many users have been directly impacted, the attack serves as a reminder that even tech-savvy individuals are vulnerable to these advanced tactics.
Protecting Yourself
To mitigate the risk, Google and security experts recommend the following steps:
- Enable Two-Factor Authentication (2FA): This provides an extra layer of security, requiring a code in addition to your password.
- Verify Communications: If you receive a suspicious call or email, log in directly to your Google account or contact Google through official support channels instead of engaging.
- Monitor Account Activity: Regularly check for unauthorized access in your Gmail settings.
- Use Google’s Security Tools: Take advantage of Google’s Security Checkup to identify and fix any vulnerabilities.
- Watch for Urgency Red Flags: Scammers rely on urgency to force mistakes. Take your time to verify email addresses and any requests for sensitive information.
Google’s Response
On October 9, 2024, Google launched the Global Signal Exchange (GSE), an initiative with industry partners to combat such AI-driven threats by sharing intelligence signals in real time, with the aim of safeguarding users against evolving phishing tactics.
This collaboration involves the Global Anti-Scam Alliance (GASA) and the DNS Research Federation, combining expertise from these organizations to detect fraudulent activities more efficiently.
The GSE serves as a centralized platform that allows participants to share and receive scam signals in real time, leveraging Google Cloud’s AI capabilities to spot patterns and intelligently match fraud signals. Initially, the platform has focused on sharing data from Google Shopping URLs flagged under scam policies, but the scope will soon extend to other areas of Google’s ecosystem. The goal is to disrupt scam operations before they can cause significant harm across the internet.
This initiative builds on Google’s broader efforts, such as the Cross-Account Protection service, which safeguards billions of users by sharing alerts about suspicious activities. With GSE, the aim is to streamline intelligence-sharing among companies, governments, and civil society to fight bad actors effectively and protect consumers from fraud.
As scams grow more sophisticated with the integration of AI, initiatives like GSE are essential for faster detection and coordinated action to prevent further harm across platforms. However, by staying vigilant and leveraging the simple protective measures outlined above, users can reduce the risk of falling victim to advanced phishing attacks.