Methodology

Many surveys, one approach

All of YouGov's surveys are run with consistent methods and values.

YouGov conducts surveys for a wide variety of clients, who have discretion over whether and how the data collected in their surveys is published. Our high methodological standards ensure the reliability and validity of the full range of surveys we conduct for clients, which use a variety of survey designs and data-processing methods tailored to their needs.

YouGov's own editorial staff carry out surveys themselves, with the goal of creating and sharing accessible and neutral public surveys that inform our readers about where Americans stand on the issues of the day. We aim to be transparent in disclosing information on the surveys published by our team, including by sharing results, survey dates, sample sizes, weighting processes, margins of error, and other details of our findings.

How we conduct surveys online

The development of online polling has expanded the ability to gather timely and representative public opinion data. YouGov has been at the forefront of developing this form of polling since 2000, when it began conducting online surveys in the UK. In 2006, YouGov expanded to the U.S., and it now conducts surveys in more than 50 countries around the globe.

All our surveys are conducted online and can be taken on a phone, tablet, or computer — when and where respondents choose. While anyone can join our panel, we choose which panelists we invite to take each survey, and have rigorous processes in place to ensure the quality of responses included in the final results. Our approach enables versatility for the survey maker and survey taker beyond what is easy or feasible over the phone. Surveys can include a variety of question types, as well as images, audio, video, and explanatory text to provide context or gather reactions.

YouGov operates its own panel, providing us with direct management over the sampling process and ensuring a high standard of data quality. When a new panel member joins, they provide demographic information that helps us understand their background. We focus on building long-term relationships with our panelists, which allows us to gather data without repeatedly asking for the same information. This approach saves time, reduces respondent fatigue, and enables us to track changes in their views and behaviors over time.

Our online panel uses what's known as nonprobability sampling, which allows us to collect data quickly and cost-effectively from specific groups of interest. This differs from probability sampling, in which everyone in the population has a known, nonzero chance of being selected into a panel. To ensure our findings are representative, we invite a representative set of panelists to take each survey and apply statistical weighting to adjust for differences between the sample and the target population.

How we recruit and reward panelists

YouGov has a large panel of Americans who take part in our surveys, and anyone who is an adult living in the U.S. is eligible to sign up to participate. Panel members are recruited from a variety of sources, including through standard advertising and strategic partnerships with a broad range of websites. To ensure we accurately represent everyone in the country, we recruit participants from a wide range of backgrounds and offer surveys in multiple languages, including Spanish. Although our surveys are limited to people with internet access, that includes over 95% of Americans.

For completing a survey, respondents are awarded points that eventually can be exchanged for a small amount of money — or the equivalent — at various vendors, which are chosen based on feedback from our panelists. While monetary rewards are one reason participants sign up to take surveys, many people who participate say they're motivated by a desire to contribute to research and have their voice heard.

How we choose who is invited to take a survey

We start by deciding whose opinions, behaviors, and characteristics we're trying to estimate with each survey. This is usually either all U.S. adults or all U.S. adult citizens, but it could also be a subset of Americans — for example, registered voters, or adults under 30. We then compile basic demographic and political benchmarks for the group we are studying, using government data and other sources. These include characteristics such as age, gender, race, education, and voting behavior. Because we collect this information from respondents when they join our panel, we are able to invite survey participants who, as a group, match the characteristics of the population of interest on the factors we target.

When, as is the case for many surveys, we have more than enough panelists who meet the criteria we've identified, we use other factors to decide which respondents to invite. Among the factors are how recently they've taken any survey, how recently they've taken one in the same category, their preference for survey frequency, and their prior rates of response to invitations.
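To make this concrete, here is a minimal Python sketch of quota-based invitation logic of the kind described above. The demographic cells, quota counts, field names, and scoring rule are all hypothetical stand-ins, not YouGov's actual targeting system.

```python
from collections import defaultdict

# Hypothetical invitation quotas per demographic cell for a 1,000-person survey.
quotas = {
    ("18-29", "Woman"): 110, ("18-29", "Man"): 105,
    ("30-44", "Woman"): 130, ("30-44", "Man"): 125,
    ("45-64", "Woman"): 175, ("45-64", "Man"): 165,
    ("65+",   "Woman"): 100, ("65+",   "Man"): 90,
}

def invitation_score(panelist):
    """Rank panelists within a cell: prefer those who haven't been surveyed
    recently and who tend to respond to invitations (hypothetical fields)."""
    return panelist["days_since_last_survey"] * panelist["response_rate"]

def select_invitees(panelists):
    """Fill each demographic cell's quota with its highest-scoring panelists."""
    by_cell = defaultdict(list)
    for p in panelists:
        by_cell[(p["age_bracket"], p["gender"])].append(p)
    invitees = []
    for cell, quota in quotas.items():
        ranked = sorted(by_cell[cell], key=invitation_score, reverse=True)
        invitees.extend(ranked[:quota])
    return invitees
```

In practice the invitation step balances many more factors, and quotas would be set from the population benchmarks described above.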

We determine the sample size for each survey based on the level of precision needed and whether we plan to analyze results for specific subgroups within the population being surveyed. For general population surveys, we typically aim for a sample size of 1,000 to 2,000 respondents, which provides a good balance between statistical reliability and resource efficiency. Most of our surveys are designed to be completed in less than 20 minutes. The time it takes to field a survey is typically 1 to 5 days and depends on the target audience and the complexity of the survey.
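The trade-off between sample size and precision follows from the standard formula for the margin of error of a proportion. As a rough sketch, assuming simple random sampling and the most conservative case (a 50/50 split), the sample size needed for a given target margin at 95% confidence can be computed like this:

```python
import math

def required_sample_size(target_moe, p=0.5, z=1.96):
    """Smallest n such that z * sqrt(p * (1 - p) / n) <= target_moe,
    assuming simple random sampling; weighting inflates this in practice."""
    return math.ceil((z / target_moe) ** 2 * p * (1 - p))

print(required_sample_size(0.03))  # 1068: about +/-3 points
print(required_sample_size(0.02))  # 2401: about +/-2 points
```

Halving the margin of error requires roughly quadrupling the sample size, which is the source of the diminishing returns discussed in the margin-of-error section below.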

How we weight survey results

After our surveys finish, we use a process called weighting to make our results more accurately reflect the population we intend to represent. This method gives more or less weight to each respondent based on their demographic characteristics — such as age, gender, race, and presidential vote. The share of respondents with these characteristics is compared to benchmarks from sources such as the U.S. Census and election results. When a survey has a higher or lower share of people with a certain characteristic than the population we are studying, we adjust respondents' weights — and their responses' influence on the results — accordingly. Our weighting process also takes into account multiple characteristics simultaneously — for example, race and education — to increase our sample's accuracy in reflecting the way demographic characteristics intersect in the real world.
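One common way to implement this kind of weighting is raking (iterative proportional fitting), which repeatedly rescales weights until the weighted sample matches the target share for every category of every weighting variable. The sketch below, with hypothetical variables and targets, shows the core loop; YouGov's production weighting is more elaborate, and weighting on intersecting characteristics such as race and education amounts to raking on a combined race-by-education variable.

```python
import pandas as pd

def rake(df, targets, weight_col="weight", max_iter=50, tol=1e-6):
    """Iterative proportional fitting: rescale weights until the weighted
    share of each category matches its target share."""
    df[weight_col] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for var, shares in targets.items():
            weighted = df.groupby(var)[weight_col].sum()
            weighted /= weighted.sum()
            for level, target in shares.items():
                current = weighted.get(level, 0.0)
                if current > 0:
                    factor = target / current
                    df.loc[df[var] == level, weight_col] *= factor
                    max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # all margins match; stop early
            break
    return df

# Hypothetical benchmarks, e.g. from Census-style sources.
targets = {
    "gender": {"Woman": 0.51, "Man": 0.49},
    "educ": {"No college degree": 0.62, "College degree": 0.38},
}

# Tiny sample that overrepresents men and college graduates.
sample = pd.DataFrame({
    "gender": ["Man"] * 6 + ["Woman"] * 4,
    "educ": (["College degree"] * 3 + ["No college degree"] * 3
             + ["College degree"] * 2 + ["No college degree"] * 2),
})
sample = rake(sample, targets)
```

After raking, the weighted shares of women and of people without a college degree match the benchmarks, so their responses count proportionally more in the published results.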

How we uphold the quality of our results

To ensure that our data accurately reflects American opinion, we employ a strategy that includes monitoring, testing, and refinement. Our approach involves a dedicated team and a set of procedures to catch respondents who are misrepresenting themselves or providing otherwise unreliable data. It starts when a person joins our panel: We require all panelists to activate their account via email; we then run checks on their IP address and verify email addresses. Among the techniques we use is what we call a response quality survey, which gauges the reliability of responses by comparing them against known or highly predictable information about the panelists. Additionally, we use data about respondents' devices and locations to detect misrepresentation.

In addition to verifying the identity of our panelists, all of our surveys incorporate checks — for example, whether respondents complete the survey too quickly, answer in inconsistent ways, or repeatedly give the same answer to similar, consecutive questions — to detect and disqualify potentially fraudulent responses. Respondents failing these quality-control checks are removed from the final sample, and those who repeatedly fail are excluded from the panel altogether. With the same goal of protecting the integrity of our data, we aim for questionnaires that are neutral and easy to understand. We also often randomize the order of questions and response options to reduce bias.
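As a rough illustration, two of the simplest such checks, "speeding" and "straightlining," can be expressed in a few lines. The thresholds and inputs here are hypothetical; real quality-control rules combine many more signals.

```python
def is_speeder(duration_seconds, median_duration, floor=0.3):
    """Flag respondents who finished in less than a set fraction of the
    median completion time (the 30% floor is illustrative)."""
    return duration_seconds < floor * median_duration

def is_straightliner(grid_answers):
    """Flag respondents who picked the identical option for every item
    in a battery of similar, consecutive questions."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

print(is_speeder(90, 600))        # True: 90 seconds vs. a 10-minute median
print(is_straightliner([3] * 8))  # True: same answer to all 8 grid items
```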

How we protect the privacy of panelists

Respondents have the following rights to help protect their privacy and control over their data:

  • They choose the survey invitations they accept and the questions they answer.
  • They can request that we not sell or share their personal data with our clients. (We never sell personal data to data brokers.)
  • They can be notified which categories of personal data we collect and the purposes for which the data is used.
  • They can request a copy of the data that we hold about them.
  • They can ask us to correct any inaccurate data about them.
  • They can request that we delete the personal data we hold about them.
  • They can opt out of cookies, including ones used for targeted advertising and tracking.

Occasionally clients might want more details about respondents, such as contact information to ask them follow-up questions. It is up to panelists whether to provide this kind of information; it is never required.

When we report findings, they are aggregated to a degree that protects any individual respondent from being identified. For sensitive questions we often provide a response option such as "prefer not to say," and respondents are often able to skip questions entirely. See our research privacy and cookies notice and our consumer health data privacy policy.

How we calculate the margin of error

YouGov reports the margin of error on each survey to show the range within which the results would likely fall if we had talked to everyone in the country instead of just a sample. For example, if a survey with a margin of error of ±3% finds that 45% of Americans approve of the president, we can be reasonably confident that approval in the population would fall within 3 percentage points in either direction: between 42% and 48%.

A survey’s sample size strongly influences its margin of error. As sample size increases, the precision of our estimates increases and the margin of error decreases. For example, while a survey of 1,000 people may have a margin of error of roughly ±4%, increasing the sample size to 2,000 will lower the margin of error to about ±2% or ±2.5%. However, there are diminishing returns to increasing sample size. The margin of error we report applies only to the whole sample; when examining subgroups with smaller sample sizes, the margin of error will be higher.
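For reference, the margin of error for a proportion under simple random sampling is z * sqrt(p(1 - p) / n); margins reported for weighted surveys are typically inflated by a design effect. The snippet below uses an assumed design effect of 1.6 (an illustrative value, not a figure YouGov publishes) to land near the rough numbers above.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, deff=1.0):
    """95% margin of error for a proportion; deff > 1 widens the margin
    to account for weighting (the design effect)."""
    return z * math.sqrt(deff * p * (1 - p) / n)

print(round(margin_of_error(1000, deff=1.6), 3))  # 0.039: about +/-4%
print(round(margin_of_error(2000, deff=1.6), 3))  # 0.028: just under +/-3%
```

The same function applied to a 200-person subgroup gives a margin near ±9%, which is why subgroup estimates should be read with extra caution.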

Consumers of polling data should always account for the margin of error. Without this measure of uncertainty, it is difficult to know when differences in opinion between groups are significant. But a survey’s margin of error only describes its sampling variability — the values we can reasonably expect our estimates to fall between, given random sampling deviations in who is interviewed. All surveys can be subject to other sources of error related to how questions are worded, the differences in propensity to take surveys among groups of potential respondents, and respondents answering questions to present themselves in a more favorable light rather than truthfully. And while weighting can help address imbalances in certain demographic variables, it cannot account for characteristics not included in weighting.

How we use modeling to project votes

Our approach to estimating the vote — including in the 2024 election — is based on a multilevel regression with post-stratification (MRP) model. We have used this approach successfully in past elections in the U.S. — including in the 2020, 2018, and 2016 elections — and elsewhere. It uses a statistical model to predict votes for everyone in the national voter file, whether or not they belong to YouGov’s panel. Interviews with our panelists are used to train a model that classifies people as likely to vote for a particular candidate (or to not vote), and this model is then applied to the entire voter file. We then aggregate these predictions — in what is referred to as post-stratification — to estimate votes for all registered voters. The model has three stages: (i) estimate each person's likelihood of voting; (ii) conditional on voting, estimate the probability of choosing a major-party or third-party candidate; and (iii) predict support for each candidate.
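In schematic form, the pipeline looks like the sketch below. For brevity it substitutes a plain logistic regression where the actual approach uses a multilevel model, and the column names and data frames are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression  # stand-in for a multilevel model

FEATURES = ["age_bracket", "gender", "race", "education", "state", "party_reg"]

def fit_stage(survey, outcome):
    """Train one stage of the model on interviewed panelists
    (outcome is assumed to be a 0/1 column)."""
    X = pd.get_dummies(survey[FEATURES])
    return LogisticRegression(max_iter=1000).fit(X, survey[outcome]), X.columns

def apply_stage(model, train_columns, voter_file):
    """Score every record in the voter file with a trained stage."""
    X = pd.get_dummies(voter_file[FEATURES]).reindex(columns=train_columns, fill_value=0)
    return model.predict_proba(X)[:, 1]

# Stage (i): probability of voting, trained on the survey, applied to the file.
# turnout_model, cols = fit_stage(survey_df, "will_vote")
# voter_file["p_vote"] = apply_stage(turnout_model, cols, voter_file)
#
# Stages (ii) and (iii) are fit analogously, conditional on voting. The
# post-stratification step then aggregates person-level predictions:
# dem_share = (voter_file["p_vote"] * voter_file["p_dem"]).sum() / voter_file["p_vote"].sum()
```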

Surveys on our website

Some survey results are published automatically to our website, including:

  • The Daily Questions: On most weekdays, we ask a large sample of panelists at least one set of three topical questions. Anyone can answer the Daily Questions and immediately view live, unweighted results from people who have answered thus far. By the next weekday, final, nationally representative results are published on our website. These only include responses from people who answered the questions as part of a traditional YouGov survey — and not people who opted in to the survey through the website or app. As with other surveys, responses are weighted to be representative of U.S. adults. For these surveys the weighting is based on age, gender, race, education, and political party.
  • Trackers: The data displayed in trackers comes from regular tracking surveys conducted by YouGov. Each survey includes a representative sample of respondents — typically at least 1,000. Respondents are selected from YouGov’s opt-in panel using sample matching: a random sample (stratified by gender, age, race, education, geographic region, and voter registration) is drawn from the most recent American Community Survey, and panelists are matched to its records. The sample is weighted according to gender, age, race, education, 2020 election turnout and presidential vote, baseline party identification, and current voter registration status. Demographic weighting targets come from the 2019 American Community Survey. Trend lines are computed using a LOESS smoother to reduce random fluctuations in the data due to sampling variability, as in the sketch below.
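A LOESS smoother fits a local regression around each point of the series. As a minimal sketch, using the lowess implementation in statsmodels (which may differ from whatever the trackers use internally):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical tracker series: day index and a weighted topline estimate.
rng = np.random.default_rng(0)
days = np.arange(60)
approval = 0.45 + 0.02 * np.sin(days / 9) + rng.normal(0, 0.01, 60)

# frac sets the share of the data used in each local fit;
# larger values give a smoother trend line.
smoothed = lowess(approval, days, frac=0.3)  # array of (day, fitted value) rows
```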