Overview

U.S. adults use chatbots for some political information tasks, but discomfort is stronger when AI is used to help choose a candidate.


Roughly 56% of adults say they never use chatbots to inform themselves about politics. About 63% say they are not comfortable using an AI chatbot to help decide which candidate to vote for in the upcoming 2026 midterm elections.

Stacked breakdown

63% are uncomfortable letting AI help with vote choice.

How comfortable would you be using an AI chatbot to help you decide which candidate to vote for in the upcoming 2026 midterm elections?

  • Very comfortable 11.3%
  • Somewhat comfortable 25.5%
  • Not very comfortable 25.3%
  • Not at all comfortable 37.9%

2026 · base n 1,000 · +/- 3.4%

Module 1: Technology, Finance, & Media


U.S. adults are uncomfortable with AI-assisted vote choice

About six-in-ten U.S. adults (63%) say they are not comfortable using an AI chatbot to help decide which candidate to vote for in the upcoming 2026 midterm elections.

The largest single response is complete discomfort: nearly four-in-ten adults (38%) say they are not at all comfortable using a chatbot this way. Another 25% say they are not very comfortable, 25% are somewhat comfortable, and 11% are very comfortable.

Topline

56% never use chatbots for political information.

When using AI chatbots to inform yourself about politics, what best describes your reason for doing so?

  • I never use chatbots to inform myself about politics 55.6%
  • Fact-checking something I saw online or heard from someone 24.3%
  • Learning more about a current issue that is ongoing 22.6%
  • Processing or understanding recent events that happened in the past week 13.4%
  • Gathering information to win an argument with someone else 7.9%
Percentages sum to more than 100% because respondents could select multiple reasons.

2026 · base n 1,000 · +/- 3.4%


Adults report little political chatbot use

Roughly 56% of adults say they never use chatbots to inform themselves about politics.

Roughly 24% say they use chatbots to fact-check something they saw online or heard from someone. Another 23% use them to learn more about an ongoing current issue. Some 13% use them to process recent events from the past week. About 8% use them to gather information to win an argument with someone else.

Crosstab view

Adults 65 and older are less comfortable with AI vote choice, 22% vs. 48%.

How comfortable would you be using an AI chatbot to help you decide which candidate to vote for in the upcoming 2026 midterm elections? · by age_bucket

Measure · 30-49 · 65+
Comfortable using AI for vote choice · 47.6% · 21.7%
Never use chatbots for political information · 47.4% · 64.5%

2026 · base n 1,000 · +/- 3.4%


Older adults are less comfortable with AI vote choice

Adults ages 65 and older are less likely than adults ages 30 to 49 to say they are comfortable using an AI chatbot to help decide which candidate to vote for, 22% vs. 48%.

Roughly 65% of adults ages 65 and older say they never use chatbots to inform themselves about politics, compared with 47% of adults ages 30 to 49.

Among partisans, 59% of Republicans, 63% of Democrats, and 67% of independents say they are not comfortable using an AI chatbot to help decide their 2026 midterm vote.
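Crosstab figures like these are weighted shares within each age group. Below is a minimal sketch of that computation, assuming a respondent-level dataset with hypothetical `age_bucket`, `comfort`, and `weight` fields (the methodology lists `weight` as the weight variable); the records and values are illustrative, not the actual Verasight microdata.

```python
def weighted_pct(rows, group, value):
    """Weighted share, within age group `group`, whose response equals `value`."""
    total = sum(r["weight"] for r in rows if r["age_bucket"] == group)
    hits = sum(r["weight"] for r in rows
               if r["age_bucket"] == group and r["comfort"] == value)
    return 100.0 * hits / total

# Illustrative respondent records, not real survey data.
respondents = [
    {"age_bucket": "30-49", "comfort": "comfortable", "weight": 1.1},
    {"age_bucket": "30-49", "comfort": "uncomfortable", "weight": 0.9},
    {"age_bucket": "65+", "comfort": "uncomfortable", "weight": 1.0},
    {"age_bucket": "65+", "comfort": "comfortable", "weight": 1.0},
]

print(round(weighted_pct(respondents, "30-49", "comfortable"), 1))  # → 55.0
```

The same pattern, applied per subgroup and response category, yields every cell in the crosstab above.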

Stacked breakdown

80% of adults are concerned about AI bots answering policy and business surveys.

How concerned are you that AI bots, rather than real people, are answering surveys used to inform government policy and business decisions?

  • Very concerned 37.9%
  • Somewhat concerned 41.6%
  • Not very concerned 16.6%
  • Not at all concerned 3.9%

2026 · base n 1,000 · +/- 3.4%


Concern about AI bots reaches policy and business surveys

Eight-in-ten adults (80%) say they are concerned that AI bots, rather than real people, are answering surveys used to inform government policy and business decisions.

This includes 38% who are very concerned and 42% who are somewhat concerned. Roughly one-in-five adults (21%) say they are not concerned.
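Net figures like the 80% are typically computed by summing the unrounded category shares and then rounding, which is why the combined numbers line up with the chart. A sketch, assuming conventional half-up rounding (the report's exact rounding rule is not stated):

```python
import math

def net(*shares):
    """Sum unrounded percentage shares, then round half-up to a whole percent."""
    return math.floor(sum(shares) + 0.5)

concerned = net(37.9, 41.6)     # very + somewhat concerned
not_concerned = net(16.6, 3.9)  # not very + not at all concerned
print(concerned, not_concerned)  # → 80 21
```

The same computation reproduces the 45% disagree and 23% agree nets in the autonomous-speech question below.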

Stacked breakdown

45% disagree and 23% agree on treating harmful political AI as autonomous speech.

To what extent do you disagree or agree with the following statement: "When an AI system generates false and harmful political content, it should be treated as an autonomous speaker under freedom of speech principles rather than as a tool used by humans."

  • Strongly disagree 27.0%
  • Disagree 12.6%
  • Somewhat disagree 5.8%
  • Neither agree nor disagree 31.4%
  • Somewhat agree 8.8%
  • Agree 8.4%
  • Strongly agree 6.0%

2026 · base n 1,000 · +/- 3.4%


Adults reject autonomous-speech treatment for harmful political AI

Roughly 45% disagree that false and harmful political content generated by AI should be treated as autonomous speech under freedom of speech principles rather than as a tool used by humans.

Another 23% agree that harmful political AI should be treated as autonomous speech, while 31% neither agree nor disagree.

Methodology

Full methodology
Mode
Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting
Population
US adults age 18+
Field dates
2026-03-06 → 2026-03-16
Base (unweighted)
1,000
Margin of error
+/- 3.4%
Module
Module 1: Technology, Finance, & Media
Sponsor
Verasight
Weight variable
weight
Weighting targets
age, race/ethnicity, sex, income, education, region, metropolitan status


Citation

Verasight Client Omnibus Survey #2026-044, fielded March 6-16, 2026, N=1,000 US adults age 18+, +/- 3.4%.

https://reports.verasight.io/reports/omnibus-2026-044#q-1-1

Verasight survey methodology

How Verasight conducts surveys.

This page describes the Verasight general survey contract, separate from how the Data Library packages it. Each wave's specific field dates, sample sizes, and module breakdown are listed in that wave's report.

Mode
Verasight panel recruited via random address-based sampling, random person-to-person text messaging, and dynamic online targeting.
Population
US adults age 18+.
Sample design
Surveys are run as omnibus or single-topic waves. Omnibus waves are split into modules with their own respondent set, typically around one thousand respondents per module.
Field window
Each wave specifies its own field dates. Most omnibus waves field across roughly two weeks.
Weighting
Per-module weighting to CPS targets including age, race and ethnicity, sex, income, education, region, and metropolitan status.
Partisanship benchmark
Pew Research Center's NPORS benchmarking surveys, three-year running average.
Vote benchmark
2024 presidential vote population benchmarks.
Margin of error
Typically about plus or minus 3.4 to 3.6 percent per module at standard module sizes. Question-level MoE is recomputed when a base shrinks materially below the module baseline.
Reporting
Every wave is published as a standalone report at verasight.io/reports with full instrument and methodology.
Transparency
AAPOR transparency standards.
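The quoted margins of error are consistent with the conventional formula for a proportion at 95% confidence, inflated by a design effect from weighting. A sketch, using an illustrative design effect of about 1.2 (Verasight's actual design effect is not published here):

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    from a sample of size n with design effect deff."""
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

print(round(margin_of_error(0.5, 1000), 1))            # → 3.1 (simple random sample)
print(round(margin_of_error(0.5, 1000, deff=1.2), 1))  # → 3.4 (with assumed deff)
print(round(margin_of_error(0.5, 500, deff=1.2), 1))   # → 4.8 (MoE grows as base shrinks)
```

The last line illustrates why question-level MoE is recomputed when a base shrinks materially below the module baseline.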

Wave-specific methodology, full weighting variable lists, and verbatim instrument text live in each report at verasight.io/reports.