
How people are using AI while trust lags


Overview

AI use has moved into everyday information tasks, but trust has not caught up with adoption.


About 44% of adults say they use ChatGPT, and 42% say they use AI for personal tasks. At the same time, 51% say AI can be trusted to provide correct information only some of the time.

Stacked breakdown

51% say AI can be trusted to provide correct information only some of the time.

How much of the time do you think you can trust artificial intelligence (AI) to provide correct information?

  • None of the time 11.6%
  • Some of the time 50.8%
  • Most of the time 28.6%
  • Just about always 6.4%
  • Always 2.6%

2025 · base n 1,000 · +/- 3.5%


Topline

42% say they use AI for personal tasks.

Do you use artificial intelligence (AI) to get information about any of the following?

  • Personal tasks 41.9%
  • I do not use artificial intelligence 38.9%
  • Health 25.8%
  • Entertainment 25.8%
  • Work related tasks 24.7%
  • Sports 13.9%

2025 · base n 1,000 · +/- 3.5%


AI is useful before it is deeply trusted

Roughly 51% say AI can be trusted to provide correct information only some of the time. Another 12% say none of the time.

That limited trust coexists with practical use. About 42% say they use AI for personal tasks, 26% for entertainment, 26% for health, and 25% for work-related tasks.

Topline

44% say they use ChatGPT, while 38% say they do not use AI chatbots.

Which of the following Artificial Intelligence (AI) chatbots do you use?

  • OpenAI / ChatGPT 43.6%
  • I do not use AI chatbots 38.3%
  • Gemini 26.9%
  • Microsoft Copilot 16.9%
  • Grok 6.9%
  • Something else 4.6%

2025 · base n 1,000 · +/- 3.5%


ChatGPT is the most common chatbot named

ChatGPT is the most commonly reported chatbot, with 44% saying they use it. Gemini follows at 27%, and Microsoft Copilot at 17%.

A large share remains outside chatbot use: 38% say they do not use AI chatbots.

Adults draw limits around synthetic people and government replacement

Nearly half of adults choose the least comfortable rating when asked about generative AI using synthetic people as part of a survey group.

Replacing federal employees with AI systems also faces more opposition than support. About 42% say it would lead to worse public services, and 34% say it would make government less responsive.

Methodology

Mode
Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting
Population
US adults age 18+
Field dates
2025-04-09 → 2025-04-15
Base (unweighted)
1,000
Margin of error
+/- 3.5%
Module
policy
Sponsor
Verasight
Weight variable
weight
Weighting targets
age, race/ethnicity, sex, income, education, region, metropolitan status
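Calibrating a sample to demographic targets like these is typically done by raking (iterative proportional fitting). The report does not publish its weighting code, so the following is only a minimal sketch of the general technique; the `rake` function, the tiny sex/age sample, and the population targets are all invented for illustration.

```python
import numpy as np

def rake(cat_vars, margins, n_iter=100, tol=1e-10):
    """Iterative proportional fitting ("raking"): start from equal
    weights, then cycle through each categorical variable, scaling
    the weights in every level so its weighted share matches the
    target, until all margins agree simultaneously."""
    n = len(cat_vars[0])
    w = np.ones(n)
    for _ in range(n_iter):
        worst = 0.0
        for cats, target in zip(cat_vars, margins):
            for level, share in target.items():
                mask = cats == level
                factor = share * n / w[mask].sum()  # rescale this level
                w[mask] *= factor
                worst = max(worst, abs(factor - 1.0))
        if worst < tol:  # every margin already matches
            break
    return w

# Hypothetical 100-person sample: 60% female, 55% "young",
# raked to invented population targets of 51% and 40%.
sex = np.array(["f"] * 60 + ["m"] * 40)
age = np.array(["young"] * 30 + ["old"] * 30 + ["young"] * 25 + ["old"] * 15)
w = rake([sex, age],
         [{"f": 0.51, "m": 0.49}, {"young": 0.40, "old": 0.60}])
```

After convergence the weighted share of each level matches its target on both variables at once, which is why the method suits a list of marginal targets (age, sex, region, and so on) rather than a full cross-tabulation.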

Sources

  • 01
    How much of the time do you think you can trust artificial intelligence (AI) to provide correct information? Anchors the topic in limited trust in AI accuracy. reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026
  • 02
    Which of the following Artificial Intelligence (AI) chatbots do you use? Shows which chatbots adults report using. reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026
  • 03
    Do you use artificial intelligence (AI) to get information about any of the following? Shows practical AI use for personal, work, health, entertainment, sports, and politics information. reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026
  • 04
    On a scale from 1 to 5 (1 lowest to 5 highest), to what extent do you feel comfortable with generative artificial intelligence (AI) using synthetic “people” to be part of a survey group? Adds a boundary condition around comfort with synthetic survey participants. reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026
  • 05
    Do you support replacing federal employees with Artificial Intelligence systems? Adds a public-sector replacement frame where opposition is more common than support. reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026

Citation

Verasight MPSA Omnibus Survey #2025-026, fielded April 9-15, 2025, N=1,000 US adults age 18+, +/- 3.5%.

https://reports.verasight.io/reports/verasight-mpsa-omnibus-survey-2025-026#how-much-of-the-time-do-you-think-you-can-trust-artificial-intelligence-ai-to-provide-correct-information

Verasight survey methodology

How Verasight conducts surveys.

This page describes Verasight's general survey methodology, separate from how the Data Library packages it. Each wave's specific field dates, sample sizes, and module breakdown are listed in that wave's report.

Mode
Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting.
Population
US adults age 18+.
Sample design
Surveys are run as omnibus or single-topic waves. Omnibus waves are split into modules with their own respondent set, typically around one thousand respondents per module.
Field window
Each wave specifies its own field dates. Most omnibus waves field across roughly two weeks.
Weighting
Per-module weighting to CPS targets including age, race and ethnicity, sex, income, education, region, and metropolitan status.
Partisanship benchmark
Pew Research Center's NPORS benchmarking surveys, three-year running average.
Vote benchmark
2024 presidential vote population benchmarks.
Margin of error
Typically about plus or minus 3.4 to 3.6 percent per module at standard module sizes. Question-level MoE is recomputed when a base shrinks materially below the module baseline.
Reporting
Every wave is published as a standalone report at verasight.io/reports with full instrument and methodology.
Transparency
AAPOR transparency standards.
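The quoted margins of error follow from the standard formula for a proportion, MoE = z · sqrt(deff · p(1 − p) / n), at p = 0.5 and 95% confidence. A minimal sketch: for n = 1,000 the simple-random-sampling figure is about ±3.1%, so the reported ±3.5% implies a modest design effect from weighting (roughly 1.28 — an inference from the two numbers, not a figure Verasight publishes).

```python
import math

def margin_of_error(n, p=0.5, z=1.96, deff=1.0):
    """95% margin of error for a proportion, optionally inflated
    by a design effect to account for unequal weights."""
    return z * math.sqrt(deff * p * (1 - p) / n)

srs = margin_of_error(1000)        # simple random sampling, ~0.031
implied_deff = (0.035 / srs) ** 2  # design effect implied by the stated ±3.5%
```

The same function also shows why question-level MoE is recomputed when a base shrinks: `margin_of_error(500)` is noticeably larger than `margin_of_error(1000)`.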

Wave-specific methodology, full weighting variable lists, and verbatim instrument text live in each report at verasight.io/reports.