The shift to digital tax filing is complete: 94% of individual federal income tax returns were filed electronically in 2022, according to Pew Research. As AI tools become more widespread, many taxpayers are turning to chatbots like ChatGPT for help, but cybersecurity experts warn that this convenience carries significant risks. Roughly 30% of Americans now say they'll use AI for tax prep, and nearly half trust its advice, with younger taxpayers and men among the most trusting, according to recent surveys.
However, these general-purpose chatbots are fundamentally different from the AI tools offered by established tax services like H&R Block. “You don’t want to be using chatbots as your tax consultants,” warns Abhishek Karnik of McAfee’s threat intelligence team. “They’re not the experts.”
Why Chatbots Are Appealing, Despite the Danger
The surge in chatbot use is driven by several factors. The discontinuation of the IRS's Direct File program, combined with complex tax law, has left many taxpayers confused. People seek easy explanations, and AI appears to deliver: "Many individuals see chatbots as an easy way to translate confusing guidance into plain language," explains Christopher Caen, CEO of AI cybersecurity firm Mill Pond Research. Rising costs for professional help and growing comfort with AI in daily life also play a role.
But this convenience hides real dangers.
The Security Risks of Sharing Tax Data with AI
The primary concern is data security. General-purpose chatbots are not designed to handle sensitive financial information securely. Uploading personal documents to platforms like ChatGPT makes them vulnerable to exposure through platform breaches, malicious browser extensions, or compromised devices. Even publicly shared prompts can leak sensitive details.
Experts also warn of phishing scams masquerading as AI tools and spoofed tax sites enhanced by generative AI. “When it comes to taxes, less data shared is always the safer approach,” Caen emphasizes. This includes avoiding uploading full tax forms, Social Security numbers, bank details, employer information, or addresses. The fate of this data is uncertain: “We don’t know where this information is eventually ending up… It’s going somewhere.”
Why ChatGPT Struggles with Taxes
Beyond security, ChatGPT's accuracy is questionable. LLMs often make arithmetic mistakes, misunderstand how marginal tax brackets work, and rely on outdated tax rules. "In general, you can't trust the output," Karnik says bluntly. And the IRS won't accept "the AI told me so" as an excuse for errors on your return.
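A common flavor of this error is treating the top bracket rate as if it applied to all income, when brackets apply marginally. The sketch below illustrates the difference using hypothetical bracket thresholds and rates (the real figures change yearly and are not taken from this article):

```python
# Marginal (progressive) tax calculation with HYPOTHETICAL brackets.
# These thresholds and rates are illustrative only, not real IRS figures.
BRACKETS = [
    (10_000, 0.10),        # first $10,000 taxed at 10%
    (30_000, 0.12),        # income from $10,000 to $30,000 taxed at 12%
    (float("inf"), 0.22),  # everything above $30,000 taxed at 22%
]

def marginal_tax(income: float) -> float:
    """Tax owed under progressive brackets: each slice of income is
    taxed at its own rate, not the whole amount at the top rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

income = 50_000
print(marginal_tax(income))  # correct marginal computation: 7800.0
print(income * 0.22)         # flat-rate mistake a chatbot might make: 11000.0
```

On these hypothetical brackets, the flat-rate shortcut overstates the tax by $3,200, which is exactly the kind of confident-but-wrong arithmetic the experts caution against.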
How to Use AI Responsibly (If You Must)
Experts recommend seeking professional tax help whenever possible. If that's not feasible, practice good digital hygiene: use secure platforms, enable multifactor authentication, and avoid accessing financial tools on public networks.
The best approach is to treat AI as a research assistant, not a consultant. Use it to explain deductions or filing steps, but never provide personal tax details. Then, verify the information with a human professional.
In short: AI can guide your questions, but it shouldn’t replace expert advice when it comes to your taxes.
