How Corporations Can Combat Generative AI-Driven Fraud

Artificial intelligence (AI) scams are among the most damaging threats to a company’s financial health and customer relationships. Fraudsters are further exploiting the technology to create even greater deception. In December 2024, the FBI warned of increasingly sophisticated schemes that use generative artificial intelligence (gen AI) – capable of learning patterns from large datasets and producing variations – for more realistic text, audio and images in the commission of financial crimes.1
“Imposter” or “impersonation” scams were the number one fraud type among Federal Trade Commission consumer complaints in third quarter 2023.2 Charity, romance and grandparent scams are commonly targeted at consumers, as are banking, IRS and tech support schemes. However, impersonation scams aren’t exclusive to consumers. With gen AI becoming more useful and prevalent in committing business fraud, organizations must learn how to detect the nuances and proactively respond to attacks.
What is generative artificial intelligence?
Traditional artificial intelligence follows rules-based or supervised learning to analyze patterns and make highly accurate predictions or decisions needed for complex tasks. These include automation and optimization, forecasting, fraud detection, research, and maintenance planning. Traditional AI’s focus isn’t generating new content.
Gen AI leverages a host of technologies including deep and machine learning (ML), natural language processing (NLP), neural networks and large language models to create new and unique content based on training gleaned from evolving predictive algorithms. Gen AI can also produce software code and is scalable, making it highly effective in launching targeted, wide-ranging social engineering attacks that are believable and difficult to detect.
How do fraudsters commit AI scams?
AI helps criminals replicate the people and organizations victims trust and is used in various forms of deceptive content within AI scams to obtain sensitive data from corporations. However, the technology can also help prevent this type of fraud.
- Phishing attacks appear to be from legitimate sources.
Fraudsters commit phishing attacks by sending emails or social media messages that appear to be from a reliable source, such as a coworker, manager or brand. The messages replicate the impersonated source’s writing style and tone to create a false sense of security, and the communications often contain malware or links to capture sensitive information. Gen AI can also help prevent phishing attacks. For example, software uses ML and NLP to detect suspicious behavior or language, filtering out potential scams. Consistent employee training can also help team members identify phishing attacks and recognize the ways attackers try to evade prevention mechanisms.
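To make the filtering idea concrete, here is a minimal, illustrative sketch of rule-based message scoring in Python. It is a toy built entirely on assumed keywords and thresholds, not how any commercial filter actually works; real products rely on trained ML and NLP models rather than hand-picked rules.

```python
import re

# Illustrative heuristic only -- the word lists and weights below are
# assumptions for demonstration, not a real product's logic.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "act now"}
SUSPICIOUS_TLDS = (".ru", ".tk", ".zip")

def phishing_score(message):
    """Return a rough risk score: higher means more phishing-like."""
    text = message.lower()
    score = 0
    # Urgent language pressures victims into acting without thinking.
    score += sum(2 for w in URGENCY_WORDS if w in text)
    # Links whose domain ends in an unusual TLD are a common red flag.
    for url in re.findall(r"https?://(\S+)", text):
        domain = url.split("/")[0]
        if domain.endswith(SUSPICIOUS_TLDS):
            score += 3
    # Requests for credentials or payment details.
    if re.search(r"password|wire transfer|gift card", text):
        score += 3
    return score

print(phishing_score("URGENT: verify your password at http://login.example.tk/reset"))  # -> 10
print(phishing_score("Lunch at noon on Friday?"))  # -> 0
```

In practice, a score above a tuned threshold would route the message for human review rather than block it outright.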
- Fraudulent advertisements and listings promote fake products or services.
An online source that appears to sell a reputable merchant’s or legitimate brand’s products or services could, in fact, be a fraudulent ad or listing. The criminals want to steal money or data, or infect your organization’s devices with malware. Projections suggest the global cost of fraudulent advertising will nearly double from $88 billion in 2023 to $172 billion by 2028.3 Misspellings in messages and links are common in fraudulent ads and listings. Train employees to recognize and report suspicious content, and use anti-malware software to help reduce risk if an employee clicks a suspicious link.
- Voice cloning replicates human speech.
Fraudsters need only three seconds of audio to replicate someone’s voice,4 which may be one of the most compelling reasons AI voice cloning is increasing. This type of “deepfake” uses ML, NLP and natural language generation to recreate a human voice, often one the intended victim recognizes. Voice cloning programs can mimic a well-known voice, such as a U.S. president’s, and control what it says. ML can also enable real-time processing, so the voice clone responds as if in a normal conversation. Sometimes, cyber criminals use voice cloning to impersonate an employee’s manager or a close family member. Fortunately, there are ways to detect and prevent deepfake attacks. Some AI-driven software solutions claim to detect cloned voices with 98% accuracy.5 Employee training programs should teach staff to listen for unnatural phrasing and to be extra cautious when a caller urgently requests financial information or payments.
A multimodal approach is most effective in combating emerging AI fraud risk
In 2023, U.S. organizations lost $12.3 billion to gen AI fraud.6 The technology is expected to escalate fraud losses to $40 billion by 2027, a compound annual growth rate of 32%.7
Corporations must prioritize fraud prevention to reduce the risk of financial and reputational losses. According to experts, a multimodal effort, with tasks ranging from increasing awareness to implementing preventive technology and fraud prevention policies, is most effective.8
- Foster company-wide awareness.
Every network endpoint and employee is vulnerable to generative AI fraud. Fostering company-wide awareness will enable staff members to contribute to fraud prevention strategies. Corporate leaders should keep employees apprised of the data, tactics and rationale driving fraud prevention planning to further insulate the organization from risk.
- Establish sound generative AI fraud policies.
Companies should implement policies that position employees to practice safe behaviors and contribute to detection efforts. Core functions of a corporate AI fraud prevention policy include roles and responsibilities, restrictions, best practices and training.
- Define roles and responsibilities for generative AI fraud training, reporting and response.
- Establish best practices for hardware and software use.
- Restrict access and permissions for sensitive databases and programs to only a “need-to-know” basis.
- Detail ongoing training requirements.
- Implement strong identity verification tools.
Requiring users to verify their identities is a best practice that is effective in preventing impersonation schemes. Multi-factor authentication (MFA) requires two or more pieces of “proof,” such as a password plus a token or PIN. Real-time checks, in which users provide personally identifying data like photos, are also popular. Biometric scans read fingerprints or facial features to grant access and are also used in MFA.
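As an illustration of how one common MFA factor works under the hood, the sketch below implements the time-based one-time password (TOTP) algorithm from RFC 6238 using only the Python standard library. It is a learning aid, not a hardened implementation; production systems should rely on a vetted authentication library.

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, at, digits=6, step=30):
    """Generate a time-based one-time password (RFC 6238) for time `at`."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int(at // step)
    # HOTP core (RFC 4226): HMAC-SHA1 over the big-endian counter.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the MAC's last nibble selects a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32-encoded
# below) at time 59 seconds yields the 8-digit code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # -> 94287082
```

Because both the server and the user’s device derive the code from a shared secret and the current time, a stolen password alone is not enough to pass verification.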
- Integrate generative AI fraud detection techniques.
A well-known Shakespearean phrase suggests “fighting fire with fire.” Gen AI enables organizations to collect and analyze data, as well as identify suspicious language, giving them an advantage in combating advanced attacks. ML is proficient in pattern recognition, while NLP helps computers understand human language. Together, these tools make generative AI fraud detection easier. They train models on historical data and use that information to find deviations from expected patterns, which can indicate fraudulent activity.
Deep learning, a subset of ML, further enhances detection with convolutional neural networks (CNNs) that can recognize subtle fraud indicators and recurrent neural networks (RNNs) that can analyze multiple transactions that occurred at various times.
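As a toy illustration of the “deviation from expected patterns” idea, the sketch below flags transaction amounts that sit far from the historical mean. The data, helper name and three-standard-deviation threshold are invented for demonstration; real detection models combine many features with trained ML rather than a single statistic.

```python
import statistics

# Toy anomaly check: flag transactions that deviate sharply from historical
# amounts. The 3-sigma threshold is an illustrative assumption.
def flag_outliers(history, new_amounts, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    # A transaction more than `threshold` standard deviations from the
    # historical mean is treated as a deviation worth reviewing.
    return [amt for amt in new_amounts if abs(amt - mean) / stdev > threshold]

history = [102, 98, 105, 99, 101, 97, 103, 100]  # typical payment amounts
print(flag_outliers(history, [101, 450, 99]))  # -> [450]
```

Flagged transactions would typically feed a review queue rather than trigger an automatic block, since legitimate payments can also be unusual.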
As gen AI fraud increases, so do efforts to combat it. Researchers recently announced they’ve developed an AI system that can detect accounting fraud across industries and supply chains. Using ML and graph theory, FraudGCN analyzes financial data patterns between organizations, auditors and industry peers to predict potentially fraudulent activity.9
Synovus can help prevent AI scams.
We recognize the increasing threat of AI-driven fraud. For more information on how our experienced banking and fraud professionals can help your organization detect and prevent AI scams, complete a short form and a Synovus Treasury & Payment Solutions Consultant will contact you with more details. You can also stop by one of our local branches.
Important disclosure information
This content is general in nature and does not constitute legal, tax, accounting, financial or investment advice. You are encouraged to consult with competent legal, tax, accounting, financial or investment professionals based on your specific circumstances. We do not make any warranties as to accuracy or completeness of this information, do not endorse any third-party companies, products, or services described here, and take no liability for your use of this information.
1. Federal Bureau of Investigation, Public Announcement, “Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud,” December 3, 2024
2. Federal Trade Commission, “FTC Consumer Sentinel Network,” November 8, 2024
3. Statista, “Estimated Cost of Digital Advertising Fraud Worldwide in 2023 and 2028,” April 12, 2024
4. McAfee, “Artificial Voice Scams on the Rise with 1 in 4 Adults Impacted,” May 2, 2023
5. PR Newswire, “New Software Identifies AI Voice Cloning with 98% Accuracy as a 27.3% Growth in Voice Cloning Market Poses Increasing Political and Business Risk,” October 29, 2024
6. Deloitte Center for Financial Services, “Generative AI is Expected to Magnify the Risk of Deepfakes and Other Fraud in Banking,” May 29, 2024
7. Ibid.
8. ResearchGate, “The AI Revolution in Financial Services: Emerging Methods for Fraud Detection and Prevention,” May 2024
9. PYMNTS, “New AI System Aims to Detect Financial Fraud Across Corporate Networks,” September 6, 2024