
How AI use is spreading while trust trails


Overview

AI use is already broad, but the public is not treating adoption as the same thing as trust.


About 63% of adults say they used AI tools in the past month. At the same time, 50% disagree that tech companies can be trusted to develop AI responsibly, and 60% say they are not confident regulators can keep up with AI rules.


63% have used AI tools in the past month.

Have you used artificial intelligence (AI) tools in your work or personal life at all in the past month?

  • Yes 63.4%
  • No 36.6%

2025 · base n 1,509 · +/- 3.1%

AI Adoption Survey July 2025


Recent AI use is common

About 63% of adults say they have used AI tools in their work or personal life in the past month.

Use is not equally routine, though: the two largest frequency groups, a few times a week and never, are tied at 22% each.


86% have heard of ChatGPT, and 80% have heard of Google Gemini.

Which of the following AI tools have you heard of?

  • ChatGPT 85.6%
  • Google Gemini 79.6%
  • Microsoft Copilot 60.6%
  • Grok 33.1%
  • Deepseek 28.5%
  • Claude AI 18.9%

2025 · base n 1,509 · +/- 3.1%

AI Adoption Survey July 2025


ChatGPT and Google Gemini lead the tool set

ChatGPT is the most recognized tool at 86%, with Google Gemini close behind at 80%.

Recent use follows the same pattern: about 46% used ChatGPT in the last month, and 40% used Google Gemini. In a separate wave, planned future use also leads with those same two tools.


50% disagree that tech companies can be trusted on AI responsibility.

I trust tech companies to develop AI responsibly

  • Strongly agree 6.7%
  • Somewhat agree 17.3%
  • Neutral 26.3%
  • Somewhat disagree 24.4%
  • Strongly disagree 25.3%

2025 · base n 1,509 · +/- 3.1%

AI Adoption Survey July 2025


Additional supporting data for this section.


79% favor independent safety tests for powerful AI models.

Do you favor or oppose requiring AI developers to pass independent safety tests before releasing powerful new models to the public?

  • Strongly favor 52.6%
  • Somewhat favor 26.3%
  • Somewhat oppose 5.0%
  • Strongly oppose 2.5%
  • Not sure 13.7%

2025 · base n 1,509 · +/- 3.1%

AI Adoption Survey July 2025


Adoption does not mean institutional trust

Trust is weaker than use. About 50% disagree that tech companies can be trusted to develop AI responsibly, compared with 24% who agree.

Adults still want guardrails: 79% favor independent safety tests before powerful new models are released.

Confidence in regulators is also limited. Roughly 60% are not very confident or not at all confident that regulators can keep pace with AI rules.

Methodology

Mode: Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting
Population: US adults age 18+
Field dates: 2025-07-30 → 2025-08-04
Base (unweighted): 1,509
Margin of error: +/- 3.1%
Module: AI Adoption Survey July 2025
Sponsor: Verasight
Weight variable: weight
Weighting targets: age, race/ethnicity, sex, income, education, region, metropolitan status
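A reported margin of error can be sanity-checked against the simple-random-sampling formula at p = 0.5. This is a minimal sketch, not Verasight's published calculation; reported MoEs usually also carry a design effect from weighting, which would explain +/- 3.1% on a base of 1,509 exceeding the raw SRS value:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error under simple random sampling.

    n: unweighted base; p: observed proportion (0.5 is the
    conservative worst case); z: 1.96 for a 95% interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Raw SRS values, in percentage points, before any design effect:
print(round(moe(1509) * 100, 1))  # → 2.5 (this wave reports 3.1)
print(round(moe(1000) * 100, 1))  # → 3.1
```

The gap between the SRS 2.5 points and the reported 3.1 points on the same base is consistent with a design effect from weighting.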

Sources

  • 01
    Have you used artificial intelligence (AI) tools in your work or personal life at all in the past month?
    Shows that recent AI use is already common.
    reports.verasight.io/reports/ai-adoption-survey-august-2025
  • 02
    Roughly how often do you use AI tools?
    Adds frequency context, including the large group that never uses AI tools.
    reports.verasight.io/reports/ai-adoption-survey-august-2025
  • 03
    Which of the following AI tools have you heard of?
    Shows ChatGPT and Google Gemini leading both awareness and recent use.
    reports.verasight.io/reports/ai-adoption-survey-august-2025
  • 04
    Please indicate how much you agree or disagree with the following: I trust tech companies to develop AI responsibly
    Adds the clearest trust gap around tech-company responsibility.
    reports.verasight.io/reports/ai-adoption-survey-august-2025
  • 05
    Do you favor or oppose requiring AI developers to pass independent safety tests before releasing powerful new models to the public?
    Pairs support for safety testing with lower confidence in regulator capacity.
    reports.verasight.io/reports/ai-adoption-survey-july-2025
  • 06
    Which of the following AI tools have you used in the last month?
    reports.verasight.io/reports/ai-adoption-survey-august-2025
  • 07
    How confident are you that government regulators can effectively enforce AI rules and keep pace with new technologies?
    reports.verasight.io/reports/ai-adoption-survey-july-2025

Citation

AI Adoption Survey August 2025, fielded September 3-8, 2025, N=1,519 United States adults, +/- 3.3%.

https://reports.verasight.io/reports/ai-adoption-survey-august-2025#have-you-used-artificial-intelligence-ai-tools-in-your-work-or-personal-life-at-all-in-the-past-month

Verasight survey methodology

How Verasight conducts surveys.

This page describes the Verasight general survey contract, separate from how the Data Library packages it. Each wave's specific field dates, sample sizes, and module breakdown are listed in that wave's report.

Mode: Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting.
Population: US adults age 18+.
Sample design: Surveys are run as omnibus or single-topic waves. Omnibus waves are split into modules with their own respondent set, typically around one thousand respondents per module.
Field window: Each wave specifies its own field dates. Most omnibus waves field across roughly two weeks.
Weighting: Per-module weighting to CPS targets including age, race and ethnicity, sex, income, education, region, and metropolitan status.
Partisanship benchmark: Pew Research Center's NPORS benchmarking surveys, three-year running average.
Vote benchmark: 2024 presidential vote population benchmarks.
Margin of error: Typically about plus or minus 3.4 to 3.6 percent per module at standard module sizes. Question-level MoE is recomputed when a base shrinks materially below the module baseline.
Reporting: Every wave is published as a standalone report at verasight.io/reports with full instrument and methodology.
Transparency: AAPOR transparency standards.
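Weighting survey data to marginal targets like the CPS variables above is conventionally done by raking (iterative proportional fitting). The sketch below shows the idea only; the categories and target shares are invented for illustration, not CPS values, and this is not Verasight's actual implementation:

```python
# Raking: repeatedly rescale unit weights so the weighted marginal of
# each dimension matches its population target; iterate until stable.
# All categories and target shares here are illustrative only.
respondents = [
    {"age": "18-44", "sex": "F"}, {"age": "18-44", "sex": "M"},
    {"age": "45+", "sex": "F"}, {"age": "45+", "sex": "M"},
    {"age": "18-44", "sex": "M"}, {"age": "45+", "sex": "F"},
]
targets = {
    "age": {"18-44": 0.45, "45+": 0.55},
    "sex": {"F": 0.51, "M": 0.49},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # fixed iteration cap; real rakers test convergence
    for dim, shares in targets.items():
        total = sum(weights)
        for cat, share in shares.items():
            cur = sum(w for w, r in zip(weights, respondents) if r[dim] == cat)
            factor = share * total / cur  # scale this category to its target
            weights = [w * factor if r[dim] == cat else w
                       for w, r in zip(weights, respondents)]

# After raking, every weighted marginal matches its target share.
total = sum(weights)
for dim, shares in targets.items():
    for cat, share in shares.items():
        got = sum(w for w, r in zip(weights, respondents) if r[dim] == cat) / total
        print(dim, cat, round(got, 3))
```

Each pass nudges one dimension's marginals into place at the cost of slightly disturbing the others, which is why the procedure loops until the adjustments die out.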

Wave-specific methodology, full weighting variable lists, and verbatim instrument text live in each report at verasight.io/reports.