The Good, the Bad, and the Risky Truth About Artificial Intelligence in Personal Finance

1. AI is already shaping people's financial decisions, from shopping and banking to advice and recommendations, often without them even noticing it.

2. AI isn't an expert. Used uncritically, it can be wrong, reinforcing bias, optimism, or incomplete information.

3. Better questions and prompts lead to better outcomes. Use AI to challenge ideas, identify risks, and stress-test decisions, not to validate them.

4. AI reduces friction throughout the spending process and increases the risk of fraud, making it easier to overspend or fall for highly personalized scams. Pause, verify, and slow down.

In a recent conversation on the Moolala: Money Made Simple podcast between host and Credit Canada CEO Bruce Sellery and financial educator and behavioural finance expert Preet Banerjee, one thing became clear: AI in personal finance is a double-edged sword. The following content is based on insights from that very conversation, How AI Is Reshaping Financial Decisions in Canada with Preet Banerjee.

For one, artificial intelligence (AI) is no longer something happening quietly behind the scenes. It’s front and centre in everyday life, and increasingly, in how we manage our money. 

From shopping recommendations to banking chatbots, AI is influencing how we spend, borrow, and save. Sometimes in helpful ways, sometimes in ways that are less obvious, and sometimes in ways that should make us take a moment, step back, and analyze what’s happening.

Used well, AI can empower consumers. Used carelessly, it can mislead them and quietly erode their financial control.

Here’s what Canadians need to understand about the good, the bad, and the ugly side of AI and money.

AI Is Already Shaping Our Financial Decisions

Even people who don’t consider themselves “techy” are using AI every day, often without even realizing it. Searching for rebates, comparing products, interacting with financial institutions, or getting spending recommendations all increasingly involve artificial intelligence.

Many consumers have shifted from traditional search engines to AI tools as their first stop for information. Instead of Googling, they ask an AI assistant. That includes questions about financial products, strategies, and advice. (Even when you do Google an answer, the first result is often AI-generated.)

According to Preet Banerjee, some people are starting to use AI to fact-check conversations with banks and advisors. They do this by running recommendations through tools like ChatGPT to see if there are (better) alternative options or red flags. When used this way, AI can raise the bar for consumers by making them more informed and more confident in their decisions.

But that benefit only exists if people understand the tool’s limits, both its technical constraints and the risks that come from how it’s used.

When AI Helps and When It Hurts

AI can absolutely make financial information more accessible. It can summarize complex topics, suggest questions to ask an advisor, or help consumers prepare for important money conversations.

The problem starts when people treat AI as an authority instead of a tool.

AI systems can "hallucinate": they confidently present information that sounds correct but is incomplete, biased, or simply wrong. They can also reflect the user's own biases, so instead of challenging a person's ideas or analyzing them critically, they reinforce them.

For example, someone might ask an AI chatbot, “I'm thinking of starting a side hustle selling recipes online. Is this a good way to make some extra cash?” 

The AI might respond by noting that selling recipes online is a low-cost side hustle. In one test, the response was, "Many creators can earn $5,000 per month within their first year, and startup costs are minimal. With regular social media promotion, this can become a reliable source of income."

On the surface, this sounds helpful, and maybe even like a great idea. But it’s also a good example of an AI hallucination. 

The income numbers the AI provided may be based on a few success stories rather than typical results. The response leaves out how competitive the market is and how much time and effort it actually takes to generate sales. It also ignores important factors, like taxes and the ongoing work a project like this requires.

Without that context, the AI reinforces optimism instead of helping someone think through the risks. Unless the person asks follow-up questions about realistic earnings, time commitment, and potential downsides, they may walk away with false expectations.

Also, if someone feeds an AI a risky investment idea, the response may validate it rather than critically assess it, unless the user intentionally asks for pushback.

That’s why how you prompt AI matters.

Why Better Prompts Lead to Better Financial Outcomes

One of the most important insights from the Moolala podcast conversation is that AI works best when it’s used as a challenger, not a cheerleader.

Instead of asking, “Is this a good idea?” a better prompt might be:

  • “What are the risks of this strategy?”
  • “What assumptions does this plan rely on?”
  • “What questions should I ask before agreeing to this?”

As Preet put it, you should use AI as your “red team,” as an adversarial tool to poke holes in your theories and fact-check recommendations, which can ultimately lead to better decisions. But that requires effort. AI isn’t magic. It demands time, follow-up, and critical thinking. 

While AI works best when treated as a critical sparring partner, many people engage with it very differently.

People tend to open up more to AI because it doesn’t judge. Without fear of embarrassment, users may disclose more than they would to a human advisor. That can be helpful, but it also creates new vulnerabilities.

For example, the information shared with AI can be stored, reused, or used to shape recommendations in ways people might not see or control.

There is also the risk of false reassurance. That's when an AI sounds confident and supportive in its response but unintentionally validates a risky decision. A polished response can feel like expert advice, even when it's based on incomplete or flawed assumptions.

Used well, AI can improve decision-making. Used uncritically, it can quietly reinforce the very behaviours people are trying to change. This is why better prompts lead to better outputs from AI, which can lead to better decision-making for people.

Behavioural Finance Meets AI: An Arms Race

Behavioural finance teaches us that humans are not rational with money. We are emotional, impulsive, and influenced by convenience and urgency.

Marketers understand this, and AI gives them powerful new tools. At the same time, consumers can use AI to better protect themselves with information. 

Preet Banerjee describes this as an arms race: “It's a bit of an arms race when you think about it. You know, the psychology of selling is all about making things easy for people to buy stuff, and what you really wanna do is add friction because it's so easy to buy things now.”

AI, Shopping, and the Disappearing Friction in Spending

One of the most concerning developments in AI is its role in shopping and spending.

New “agentic” AI tools (AI that can take action on your behalf, not just provide you with information) don’t just recommend products. They can browse on a person’s behalf and complete purchases automatically using stored credit card details. With minimal oversight, spending can happen almost effortlessly.

From a behavioural finance perspective, this is dangerous.

Friction, which refers to small moments that force us to pause before spending, has already been eroded from the spending process. 

Preet uses the example of how we’ve moved away from using cash, which immediately feels painful to part with the moment you make a purchase (some children literally cry when they spend their savings for the first time), to credit cards, where the bill comes weeks later, long after you’ve forgotten what you even bought. 

Now, AI threatens to remove even the act of purchasing from our awareness altogether. When spending becomes invisible, self-control becomes much harder. 

As Preet puts it, “I think it’s also a slippery slope because people like to shop. And if you don't have any oversight, what's going to limit that shopping and your spending behaviour?”

Reintroducing friction, like reviewing transactions, setting limits, and requiring confirmation, isn’t a step backward. It’s a protective measure.

Scams, Fraud, and Hyper-Personalized Deception

Perhaps the most alarming risk of AI in personal finance is its use in fraud.

Modern scams are no longer about clumsy emails with obvious typos. AI allows scammers to create hyper-personalized attacks based on publicly available data.

Preet gave an example of a new scam targeting people who have just started a new job. New hires post about the role on LinkedIn; bots scrape that data and trigger a fake HR email directing them to a realistic-looking portal that asks for their banking details for "payroll" purposes. The timing feels natural, the request feels legitimate, and the victim's guard is down.

Scams don’t just hack systems anymore. They hack human behaviour.

Scammers will also often try to pressure you with a sense of urgency, scare you, or make themselves sound like an authority you should trust. That’s why any financial request that pushes you to act right away is a potential red flag.

Pause and double-check requests, no matter how real they seem. Verify their legitimacy through alternative channels: run a separate online search of the company, confirm phone numbers only on official company websites, and check the company with the Better Business Bureau.

Just because something sounds and looks real doesn’t mean it is, especially not in the age of AI.

Where Financial Institutions Are Using AI Well

AI isn’t all bad news, especially within traditional financial institutions. One clear win for AI is its impact on customer service. 

AI-powered chatbots can handle basic tasks 24/7, quickly directing people to the right resources, forms, or human support when they need it. That improves access to education and services while also increasing efficiency. 

For example, Credit Canada’s AI-powered debt management agent, Mariposa, provides the same non-profit credit counselling advice as our certified Credit Counsellors, 24 hours a day, 7 days a week, and within seconds. So clients no longer have to wait for answers to questions that might be keeping them up at night. They can get the support they need instantly.

And a bigger promise lies in advice.

Historically, high-quality financial advice was reserved for people with significant resources (aka lots of money) and assets (like stocks and investments). AI has the potential to bring consistent, informed guidance to everyone, not just a select few.

And as consumers become more informed through AI, institutions may be forced to deliver better value and transparency to their customers.

The Bottom Line: Use AI, But Stay Vigilant

AI isn’t going away. 

Just like there was a time when we were forced to embrace CDs over cassettes, streaming services over DVDs, and iPhones over BlackBerrys (yes, I realize I'm dating myself), we cannot stop the transition towards AI.

The upside is immediate access to information, better questions, and broader advice. The downside is the potential for overspending, manipulation, and increasingly sophisticated fraud schemes.

But the solution isn’t fear. It’s awareness.

We can use AI to challenge ideas, analyze strategies, fact-check theories, and provide alternative options. Yes, AI can make spending feel easy, but in that case, it’s up to us to add friction, like reviewing transactions and avoiding automatic checkouts.

Plus, there are many instances where AI can help move our financial goals forward. Tools like Mariposa can provide answers and guidance in stressful situations when someone might not be ready to speak to a person just yet. It can answer questions about debt relief options, budget planning, and improving credit scores.

When it comes to AI, it’s important to maintain a balance. On the one hand, it can be tempting to embrace AI as the answer to anything and everything. But it’s crucial to understand that AI is a tool, and just like any other tool, it requires proper instruction and use. If not, we risk falling down that “slippery slope” of easy spending Preet mentioned and losing financial control.  

If you’re already struggling with debt and ready to speak to someone about it, our certified Credit Counsellors are just a phone call away. Call 1 (800) 267-2272 for free, non-judgmental support and non-profit advice.

Frequently Asked Questions (FAQs)

1. How is AI changing the way Canadians manage money?

AI is everywhere, from shopping recommendations and banking chatbots to finding rebates and investment tools. It can summarize information, suggest questions to ask advisors, and provide guidance on strategies. Used well, it can help consumers make more informed, confident financial decisions.

2. Can AI give me financial advice?

AI can provide guidance, point out risks, and help you think through options, but it is not a human advisor, nor can it replace one. Its responses may be incomplete, biased, or based on limited information. Treat AI as a tool to challenge your ideas, not as an authority.

3. What are "AI hallucinations" in personal finance?

“Hallucinations” occur when AI confidently presents information that sounds correct but is inaccurate, incomplete, or misleading. For example, AI might give overly optimistic earnings for a side hustle without considering typical results, market competition, or hidden costs.

4. How do I get better answers from AI?

The quality of your answers depends on the prompts you give. Instead of asking, “Is this a good idea?” try asking:

  • “What are the risks of this strategy?”
  • “What assumptions does this plan rely on?”
  • “What questions should I ask before agreeing to this?”

Using AI as a tool to challenge your ideas and fact-check theories, rather than as a cheerleader, can improve your financial decision-making.

5. Are there privacy or security risks with AI?

Yes. AI can store and reuse the information you provide, potentially shaping future recommendations. Because people disclose more to AI without fear of judgment, sensitive details could be at risk if not used carefully.

6. Can AI make me overspend?

Potentially. “Agentic” AI tools can make purchases on your behalf, removing the natural friction that helps you pause before spending. Without review, spending can happen almost effortlessly, which can make self-control more difficult.

7. How does AI impact scams or fraud?

AI enables highly personalized fraud, such as emails or messages that appear legitimate based on publicly available information. Scammers can use urgency, fear, or false authority to trick you. Always pause, verify independently, and use official channels before acting on financial requests.

8. Where is AI actually helpful in financial institutions?

AI shines in customer service and accessibility. Tools like Credit Canada’s Mariposa provide fast, 24/7 advice and support for debt management, budgeting, and credit improvement, bringing guidance to people who might not otherwise have access to a financial counsellor while also protecting their information and data.

9. Should I avoid using AI for finance because of the risks?

No, but use it intentionally. AI can help you ask better questions, explore options, and make informed decisions. The key is vigilance: challenge AI responses, add friction before spending, and double-check anything urgent or emotional.

10. Where can I get human support if I need it?

Certified Credit Counsellors are available at 1 (800) 267-2272 for free, non-judgmental, non-profit advice to help you get out of debt. AI can supplement guidance, but human counselling ensures personalized support and accountability.
