
How Criminals Can Use ChatGPT for More Effective Phishing


In a more sophisticated workaround, BBC researchers used a new ChatGPT feature, intended to let users build their own AI assistants, to create a phishing assistant less constrained by ChatGPT's rules.8 The BBC team asked ChatGPT to design an AI bot called "Crafty Emails" that could craft text using "techniques to make people click on links or download things sent to them."

According to the BBC report, the result was "highly convincing text for some of the most common hack and scam techniques, in multiple languages," created in seconds.


[Chart: State Rankings: Fraud and Other Reports. Source: Federal Trade Commission]

It's easy to imagine how such bots could help cybercriminals churn out a high volume of phishing emails and texts with little effort. Beyond quantity and ease, ChatGPT also helps with accuracy. Spelling and grammatical errors were once a key red flag for identifying a phishing email, but the chatbot virtually eliminates typos.6

Beyond misusing ChatGPT itself, criminals have also developed their own illegal tools modeled on the mainstream chatbot. FraudGPT and WormGPT are two known examples of AI-enabled tools designed specifically to facilitate cybercrime.9


How To Protect Yourself From AI-Powered Phishing

Generative AI-empowered phishing is harder to detect than traditional phishing messages. A few of the tried-and-true methods for identifying them simply no longer apply. These include:10

  • Poor grammar and spelling. As noted above, ChatGPT provides clean, accurate copy.
  • Suspiciously generic copy. AI-generated content can be personalized to the recipient and, therefore, seem more credible. It can also cite recent news events or other contextual details to reinforce a false sense of urgency about responding.

Those facts make the items remaining on the phishing-detection list even more important. These phishing red flags remain, even in AI-generated phishing messages:

  • The sender's email address does not match the organization's website. 
  • The email is unexpected or unsolicited.
  • The website addresses in the email look suspicious. (Be sure to hover over the address without clicking to preview it.)
  • Generic greetings, like "Dear customer," continue to be a red flag.
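The remaining red flags above are mechanical enough that the first and last of them can be checked automatically. As an illustration only, here is a minimal Python sketch of that idea; the sender addresses, domain names, and message text are made-up examples, and real email security tools are far more thorough than this.

```python
def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

# Common generic greetings flagged in the checklist above.
GENERIC_GREETINGS = ("dear customer", "dear user", "dear account holder")

def phishing_flags(sender: str, expected_domain: str, body: str) -> list:
    """Collect simple red flags. A hit signals suspicion, not proof of phishing."""
    flags = []
    # Red flag: sender's domain does not match the organization's website.
    if sender_domain(sender) != expected_domain.lower():
        flags.append("sender domain does not match the organization's website")
    # Red flag: generic greeting instead of the recipient's name.
    if body.strip().lower().startswith(GENERIC_GREETINGS):
        flags.append("generic greeting")
    return flags

# Hypothetical example: a message claiming to be from example.com
# but sent from a look-alike domain, with a generic greeting.
print(phishing_flags("support@examp1e-secure.net", "example.com",
                     "Dear customer, your account is locked."))
```

Both checks fire on the example message; a legitimate message from the real domain, addressed to the recipient by name, would return an empty list.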

Only one technique can truly protect you against phishing attempts, ChatGPT-generated or otherwise:

  • Never respond directly to, or click a link in, any unexpected email or text. If the organization is one you trust, look up its correct website, locate its contact information, and reach out to the business yourself.

As cybercriminals' tools evolve, so must everyone's vigilance against phishing and other digital scams. If you assume anything could be suspicious, you'll be better positioned to spot red flags and protect yourself and your finances.

Important disclosure information

This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.

  1. Amanda Hoover, "Kids Are Going Back to School. So Is ChatGPT," Wired, published August 2023, accessed May 22, 2024.
  2. Bryan Robinson, "Will ChatGPT Lead to Extinction or Elevation of Humanity? A Chilling Answer," published June 9, 2023, accessed May 22, 2024.
  3. Internet Crime Complaint Center, "Internet Crime Report 2023," FBI, published March 6, 2024, accessed May 22, 2024.
  4. Amanda Hetler, "What is ChatGPT?" TechTarget, published December 2023, accessed May 22, 2024.
  5. ChatGPT, "Usage policies," OpenAI, published January 10, 2024, accessed May 22, 2024.
  6. David Gewirtz, "6 things ChatGPT can't do (and another 20 it refuses to do)," published February 16, 2023, accessed May 22, 2024.
  7. Aaron Drapkin, "11 Convincing ChatGPT and AI Scams to Watch out for in 2024," published January 15, 2024, accessed May 22, 2024.
  8. Joe Tidy, "ChatGPT tool could be abused by scammers and hackers," BBC, published December 6, 2023, accessed May 22, 2024.
  9. Julien Lacombe, "The Dark Side of ChatGPT: How Criminals are Using Large Language Models," LinkedIn Pulse post by Fraud, Risk and Compliance, published February 16, 2024, accessed May 22, 2024.
  10. Dean Levitt, "How to spot AI phishing attempts and other security threats," Paubox, published May 9, 2023, accessed May 22, 2024.