Sift’s Digital Trust Index: AI, Fraud, and the Confidence Paradox


Sift’s Q2 2025 Digital Trust Index should disturb us. The bad guys are getting better at using genAI to scam us. We’re getting worse at spotting it. And we’re becoming less concerned that it’s happening at all.

In the age of AI, when our data is oil, gold, and NVIDIA GPUs rolled into one, we're lackadaisical about how we share it with genAI tools. Over half of respondents have shared their phone numbers, email addresses, or physical addresses with genAI tools. A third have shared personal financial information.

Yet consumer anxiety around AI fraud dropped from 79% in 2024 to just 61% today. Meanwhile, 27% of those targeted by AI scams were successfully defrauded, a 42% year-over-year increase. People feel safer even as the risk accelerates.

That’s the confidence paradox.

Sidebar: Survey Sampling

In my UCLA MBA classes, we talk a lot about what makes research credible. When it comes to surveys, we discuss questions like, Who are you surveying? Why? Why should they care? How are you measuring completeness? What are the hypotheses? 

So, I’d be remiss not to scrutinize this survey’s methods.

The report doesn’t disclose how Researchscape handled weighting, quotas, or panel balance. All we know is the sample size (1,033 adults), demographic (18+ across the US), method (online), and timeframe (May 2025). It’s unclear if the sample was random.

Assuming random sampling, the findings would carry an error rate of roughly ±3% at a 95% confidence level, offering a directional snapshot of the general population.
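That ±3% figure falls out of the standard margin-of-error formula for a simple random sample. A quick sketch in Python, assuming the worst-case proportion of p = 0.5 (which maximizes variance) and the conventional z-score of 1.96 for 95% confidence:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the conservative worst case)
    z: z-score for the confidence level (1.96 ~= 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Sift's reported sample of 1,033 adults
print(f"±{margin_of_error(1033) * 100:.1f}%")  # prints "±3.0%"
```

Note that this math only holds if the sample really was random; with an unweighted online panel, the true uncertainty could be larger.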

However, even without knowing the details of the sampling, the numbers in Sift’s survey are too big to dismiss.

Fraud Exploits Trust Gaps

When engagement gets reduced to transactions, fraudsters thrive. Every cart abandoned, every one-time passcode entered, every password reset is a crack to pry open. When brands treat messaging as transactional, trust doesn’t grow. The customer doesn’t stop to think: “My bank would never send me a random link!” Or, “FedEx wouldn’t text me a package notification.” Because the expectation of trust was never set in the first place.

Fraud, in other words, isn’t a back-office problem. It’s a frontline engagement failure.

Fraud Exploits Habits

The report makes clear that our own habits are part of the problem. 70% of consumers say scams have become harder to identify in the past year. 78% admit to having opened AI-generated phishing emails, and 21% have clicked on malicious links. A third of consumers (33%) say they're confident they can spot AI scams, yet one in five fell for phishing in the past year (I suspect the actual number is higher). Our own overconfidence is making us vulnerable.

Generational differences deepen the point: younger, digitally fluent consumers report higher confidence but also fall victim at higher rates. Older cohorts, less confident but more cautious, are understandably safer.

Partnering to Protect

Amid pages of sobering stats, the survey also offers hope.

Blocked scam content increased by 51% from Q1 2024 to Q1 2025, and adoption of fraud prevention is poised to spike 80% by the year’s end. Together, these signals suggest progress is possible when defenses scale alongside threats.

What it also tells us is that fighting fraud is as much about personal responsibility as it is about data-sharing sophistication.

First-Party Data as Defense

Brands that can recognize the real customer—through behavioral signals, context, and digital body language—can spot when something’s off. 

Sift’s Identity Trust XD framework leans into this idea, linking user activity across industries, devices, and geographies. It’s a reminder that fraud prevention is inseparable from knowing your customer well.

Zero-Party Data and Collaborative Fraud Prevention

Consumers are willing to help fight fraud when the purpose is clear. 

Half of consumers (51%) say they'd opt in to securely share their own data with trusted fraud-prevention networks, provided it's used exclusively for that purpose. That willingness turns zero-party data into a shield, making it easier to identify real users and harder for fraudsters to hide.

Finally

AI is making fraud faster, cheaper, and more scalable. The response can’t just be better tech. There has to be a shift: re-centering engagement on trust, grounding fraud defense in first- and zero-party data, and recognizing that responsibility doesn’t sit with one side alone.