Generative AI is making old scams feel new again, not by inventing brand-new crimes, but by smoothing out the parts that used to trip criminals up. When messages are fluent, confident, and tailored, it gets harder to tell whether you’re talking to a real person or someone running a script at scale.
That shift is showing up most clearly in romance fraud and professional impersonation. In both cases, the playbook is familiar: build trust fast, move the conversation somewhere private, and then apply pressure, whether that’s emotional urgency or a promise of expert help.
AI can translate smoothly, hold a consistent voice over long threads, and remove the “tells” that used to flag many scams, like awkward phrasing or sudden shifts in tone. The result is more volume, more polish, and less friction for a bad actor trying to keep multiple conversations moving at once.
According to Business Insider, OpenAI’s threat intelligence findings outline how scammers are using ChatGPT as an accelerator for that playbook. In one romance-scam example described in the report, actors used AI-generated materials and messaging to support a fake “luxury” dating setup, then pushed targets to Telegram, where “tasks” or “missions” escalated into larger payments.
The move off-platform is often the hinge point. Inside a major app, there are at least some guardrails: rate limits, fraud detection, reporting tools, and moderation. Once you’re in a private chat, the scammer controls the environment and the pace, and the platform’s safety features largely stop applying. If a new match insists on switching apps early, treat it as a reason to slow down and verify who you’re dealing with.
The same report also described clusters of accounts that posed as law firms, individual attorneys, and, in some cases, US law enforcement; some of those accounts asked the model to generate credibility signals such as a fake New York State Bar Association membership card. The danger here is less about a scammer sounding “smart” and more about them looking legitimate long enough to extract money, personal data, or compliance from someone who’s already stressed.
If someone you met online is reluctant to do a simple real-time verification step, treat that as a signal, not a quirk. A quick video call, a live selfie with a specific gesture, or a short voice check can puncture an AI-polished persona instantly.
For legal claims, keep the verification independent. Don’t rely on links, badges, or “directories” the person provides. Look up the attorney through a state bar directory you find yourself, and call the firm using a phone number from a trusted listing, not one pulled from the person’s email signature. If the conversation jumps quickly to payment, especially via gift cards, crypto, or odd “processing fees,” step back and verify before you send anything.
AI can write convincing words. Your job is to confirm the intention behind them.
Also read: a fake Gemini AI chatbot used in a crypto scam is a useful reminder of how quickly fraudsters copy trusted brands.

