Bad actors are using Gen A.I. to fraudulently open new accounts using investors’ identities.

Grygo is the chief content officer for FTF & FTF News.
FINRA, the self-regulatory organization for broker-dealers, recently issued guidance on how generative artificial intelligence “is making it easier for bad actors to fraudulently open new accounts using investors’ identities and to improperly gain access to investors’ accounts.”
Investment fraud that uses A.I. “is on the rise,” FINRA warns, in part because of the capabilities of generative A.I.: “Gen AI is a type of A.I. that can create new content based on a user’s prompts. For example, tools using Gen AI technology can write essays and computer code; generate realistic images, audio and video; drive chatbots that interact with humans; and perform more functions than database searches that simply return results in response to word or information queries.”
“Fraudsters are using Gen A.I. to exploit traditional identification (ID) verification processes and commit new account fraud and account takeovers in multiple ways,” such as the following:
- Social Engineering: “Social engineering involves tricking or manipulating targets into giving away sensitive information or allowing remote access to their computer.” Fraudsters might use the technology to “analyze social media activity to create highly personalized phishing emails that could lead you to fraudulent websites embedded with malicious links;”
- Voice Clones: “With Gen AI, fraudsters need only three seconds of audio of you speaking to create a credible-sounding imitation, or voice clone. Using this voice clone, they might persuade you to grant access to your accounts, for example, by impersonating a loved one of yours in distress or under financial duress;”
- Fake ID Documents: “Fraudsters can use Gen A.I. to create convincing fake ID documents — such as driver’s licenses or professional credentials — that might also incorporate A.I.-generated images. They can use these documents to verify identity to fraudulently open a new account or to take over an existing account;” and
- Deepfake Selfies: “Some firms have incorporated requests for selfie photos and videos into their customer-verification process. Fraudsters can take images from investors’ social media and use Gen AI to create deepfakes to get around these security checks.”
Broker-dealers can advise their clients to protect themselves by taking the following steps:
- Choose strong passwords: “Create complex passwords that are unique to each account, and avoid using easily guessable passwords and security questions;”
- Use a password manager: “A password manager is a software application that securely stores and manages your login credentials, including usernames and passwords, for various online accounts. The main benefits of using a password manager include secure storage, unique and complex passwords, and centralized management;”
- Enable multi-factor authentication (MFA): “Enabling MFA on all of your accounts adds an extra layer of security beyond just a password by requiring you to also enter a code that’s sent to your mobile phone via text message or through an authentication application to your account. Choose to have MFA codes or messages sent to your phone rather than via email (if given a choice). Ignore or reject all MFA codes and messages that you weren’t expecting, and then follow the steps in the next bullet;”
- Protect against impersonation scams: “If you receive a security-related message over phone, text or email claiming to be from one of your service providers, don’t respond with identity-verifying information or click on links in case the message is a phishing attempt. Instead, contact the firm directly through its website or by a phone number provided on an account statement or its website. This will help you ensure that the company or individual you’re communicating with is legitimate.”
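For readers who want to see what the “strong, unique password” guidance above looks like in practice, here is a minimal Python sketch (not part of the FINRA alert) that uses the standard library’s cryptographically secure `secrets` module to generate a random password; the length and character set shown are illustrative assumptions, not FINRA requirements:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure random source (the secrets module)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Generate a distinct password for each account rather than reusing one.
print(generate_password())
```

In practice, a password manager performs this generation and storage automatically, which is why the guidance pairs the two recommendations.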
The full alert is here: https://shorturl.at/dW2VP