
Forum: Author Hangout

Averaging

PotomacBob

Some organizations, 538 and RCP among them, do not conduct polls themselves but compile many polls conducted by others and then do some sort of averaging.
For example, if Candidate A is ahead in one poll by 2 points and Candidate B is ahead in another poll by 10 points, then the "average" poll has Candidate B ahead by 4 points, although no individual poll said that.
Obviously, being 2 points behind and 10 points ahead cannot both be right.
For those who know enough about polling or statistics to have an informed opinion, is averaging a legitimate mathematical way of determining what is correct - or is it just another imaginary number?
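As a concrete illustration of the averaging being asked about, here is a trivial sketch (using the hypothetical margins from the post, with positive numbers meaning Candidate B is ahead):

```python
# Margins from the example above: A ahead by 2 in one poll (-2),
# B ahead by 10 in another (+10). Positive = B ahead.
margins = [-2, 10]

average_margin = sum(margins) / len(margins)
print(average_margin)  # 4.0 -> "B ahead by 4", though no single poll said that
```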

Ernest Bywater

Polling is just a specialised form of survey, and polls have the same issues. If I want to conduct a poll that shows candidate A well ahead of candidate B, I simply go to the campaign HQ of candidate A and poll the people in the office and those who visit it. Bingo: the poll shows candidate A has a 95% following.

awnlee jawking

@PotomacBob

For example, if Candidate A is ahead in one poll by 2 points and Candidate B is ahead in another poll by 10 points, then the "average" poll has Candidate B ahead by 4 points, although no individual poll said that.

That assumes the two polls are of identical size.

A similar technique is used in medicine for meta- or cohort studies. It's as good as its weakest contributor.

AJ

Replies:   PotomacBob
PotomacBob

@awnlee jawking

That assumes the two polls are of identical size.

What assumes? The 4-point average or something else?

Replies:   awnlee jawking
awnlee jawking

@PotomacBob

The 4 point average. With different sized polls, a weighted average would be better.

AJ

LupusDei

@PotomacBob

I'm not really versed in all the math and methodology involved, but in my lay understanding it is indeed believed to be at least somewhat "better".

Well, if all the polls you average used the exact same methodology and were conducted at the same time, the resulting number would be a legitimate new result from a simply larger sample, and thus the "correct" one, and it would even enjoy a (much) smaller margin of error.

In reality, the polls are each done by different methods at different times, at least some pollsters have known partisan biases, and so forth. You may end up averaging the only accurate poll with a bunch of bullshit pulled out of someone's ass. So a simple mechanical averaging, while in theory it may still give you a somewhat more trustworthy number than any single poll could, doesn't fully make sense.

538 try to account for all that with a known systematic-bias adjustment, by assigning weights to individual polls based on the historical accuracy of the pollster and the age of the poll, and by sometimes dropping polls entirely for suspected fraud. (For example, they just dropped the latest Trafalgar poll, not so much for being exceptionally weird even by usual Trafalgar standards, but for suspected data manipulation; to which Trafalgar responded by cutting public access to that poll's data, so...)

Such a number isn't quite an average, and might better be called a projection, but if all is done right it can in theory be more "true" than even a pure average (all of that is public on their site, with some digging). 538 then further adjust that number with their own data points to arrive at the figure they use for their probabilistic forecast. And, yes, that whole process has quite a few degrees of freedom that may introduce a bias. (Which, while it would normally be assumed to be Democratic given the personalities of the staff there, was regarded as at least a little pro-Trump because of the assumptions inherent in their fundamentals. That component, though, is quickly losing weight at this late phase.)

RCP does a more plain, pure average. However, they do at least some cherry-picking of which polls to average, and their methodology for doing that isn't at all clear. A light conservative bias is normally assumed, but it seems this cycle they are going all out into Trumpland, including phasing out aging good polls showing a Biden lead more quickly than is probably reasonable. It still may be useful as a kind of worst-case average (from a Biden supporter's viewpoint).

Replies:   PotomacBob  REP
PotomacBob

@LupusDei

(Which, while it would normally be assumed to be Democratic given the personalities of the staff there, was regarded as at least a little pro-Trump because of the assumptions inherent in their fundamentals. That component, though, is quickly losing weight at this late phase.)

Not quite sure who you're referring to here. The 538 Staff? The pollster's staff? Somebody else?

Replies:   LupusDei
LupusDei

@PotomacBob

Not quite sure who you're referring to here. The 538 Staff? The pollster's staff? Somebody else?

There are reasons for assuming 538 could be "left" leaning. But it could just be the "liberal bias of reality."

However, the organization they are part of does supposedly have a "left" bias, and quite a few Trump supporters assume that as the default. At the same time, 538 took quite a bit of a beating from real lefties for even giving Trump the almost 1-in-3 chance they did in 2016 (before he indeed won, of course), and they have drawn softer criticism this year for giving Trump an implied incumbency advantage and expecting a tightening of the race at the end (which doesn't seem to be happening, despite RCP's possibly artificial attempts to make it look like it is).

REP

@LupusDei

Well, if all the polls you average used the exact same methodology and were conducted at the same time, the resulting number would be a legitimate new result from a simply larger sample, and thus the "correct" one, and it would even enjoy a (much) smaller margin of error.

Not true.

Assume two polls - A and B. They were conducted using the same methodology, and were conducted at the same time.

Poll A indicates 60% of its responders were in favor and 40% opposed. Poll B indicated 40% were in favor and 60% opposed. The average of the percentages indicates 50% were in favor and 50% opposed.

However, your criteria overlooked one very important thing - the number of responders in each poll.

Assume Poll A surveyed 10,000 people and Poll B surveyed 1,000 people. Combining the number of people in each poll that were in favor of the issue and those opposed gives you 11,000 people surveyed - 6,400 (58%) were in favor and 4,600 (42%) were opposed.

So, considering the number of people for and against the issue gives very different percentages (58% and 42%) than averaging the 2 polls' percentages (50% and 50%).
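REP's arithmetic checks out; pooling the raw respondent counts is equivalent to a weighted average with weights proportional to sample size. A quick sketch using the same hypothetical numbers:

```python
# Pooling raw respondent counts instead of averaging percentages,
# using the hypothetical poll sizes and splits from the post above.
polls = [
    {"n": 10_000, "favor": 0.60},  # Poll A
    {"n": 1_000,  "favor": 0.40},  # Poll B
]

total_n = sum(p["n"] for p in polls)
total_favor = sum(p["n"] * p["favor"] for p in polls)

pooled_favor = total_favor / total_n                      # ~0.582 -> 58%
simple_avg = sum(p["favor"] for p in polls) / len(polls)  # 0.50 -> 50%

print(f"pooled: {pooled_favor:.0%}, simple average: {simple_avg:.0%}")
```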

Replies:   BarBar  LupusDei
BarBar

@REP

REP is correct.

A simplistic averaging of poll results might be misleading, but a weighted averaging of polls where the weighting is based on the size of the poll would result in a more accurate prediction with a lower margin of error.

Other complicating factors would also have to be watched out for. For example, if some company had polled the same individuals 3 times over a 6 month interval to see if their probable vote changed, that would have to be taken as one poll and not 3.

LupusDei

@REP

I didn't emphasize the size of the polls because most are usually around a thousand respondents or fewer, so you can assume it was buried inside the "methodology" requirement. All else being exactly equal, you could probably indeed combine the polls like that, and the size would matter. However little I know about statistical theory, though, the sample size isn't all that critical as long as it is representative. So, with two pollsters claiming adequate representation (and/or weighting), those two polls could indeed be averaged together as equals. I too would tend to think that the larger one might be more accurate and thus receive more weight, but probably not ten times the weight of the smaller poll.

Replies:   bk69  REP
bk69

@LupusDei

However little I know about statistical theory, the sample size isn't all that critical as long as it is representative.

However, the sample is only really representative of the smallest group it fits. In most cases, this translates to "this is what those people with listed phone numbers who were willing to spend time answering a survey, and who live in {specific area}, think".

To have an accurate poll for politics, you first need to know what the voter turnout is gonna be. Then you need to poll about 30 individuals of each combination of gender, location, age, net worth, party affiliation, ethnicity, and educational background. Then you have enough data to assume normal distributions, so you multiply each group's split by the percentage of the people who'll turn up to vote that each group represents, add those all up, and you should almost certainly know how the popular vote will turn out. Once you figure out how the dead people from Chicago plan to vote, that is.
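The "multiply each group's split by the turnout share it represents" step described above is post-stratification; a minimal sketch, where every number is made up purely for illustration:

```python
# Post-stratification sketch: weight each demographic cell's polled split
# by that cell's expected share of voter turnout. All numbers are made up.
cells = [
    # (share of expected turnout, fraction in that cell supporting candidate A)
    (0.30, 0.65),
    (0.45, 0.40),
    (0.25, 0.55),
]

assert abs(sum(share for share, _ in cells) - 1.0) < 1e-9  # shares must cover everyone

estimate_a = sum(share * support for share, support in cells)
print(f"estimated support for A: {estimate_a:.1%}")
```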

REP

@LupusDei

you can assume it was buried inside the "methodology" requirement.

the sample size isn't all that critical as long as it is representative.

I consider methodology to be very different from sample size, and would never consider sample size part of the methodology used to conduct a poll.

There are over 300 million people in the US. I fail to see how a sample size of 1,000 could be representative in a nationwide poll. For a sample size to be representative, it has to be large enough to provide an adequate number of people for each position being sampled. It also has to be large enough to sample the demographics of the population affected by the subject being sampled.

Replies:   Dominions Son
Dominions Son

@REP

There are over 300 million people in the US. I fail to see how a sample size of 1,000 could be representative of a nationwide poll.

Some thoughts on a back of the envelope calculation for an adequate sample.

At least one person from each state, that's 50.

Urban people vote differently from rural people. Make that 2 from each state: 100.

People in DC get to vote for president, but DC is all urban: 101.

Male/female: 202.

Age: here it starts to get harder; how many ways are there to cut age brackets? We are talking about election polling, so cut off the bottom at 18. I'll say 18-30, 31-50, 51-65 and 66+. That's 4 age brackets: 808.

Income: I'll make it simple and pick a percentage band around the median income, with under, middle, and over for three income brackets: 2,424.

Military/civilian government employee/private sector/retired (or on gov assist): 9,696.

Race: How many ways to cut this? I'll pick White/Black/Hispanic/Nat Amer/Asian/Mixed. 58,176

Religion: Protestant/Catholic/Jewish/Islamic/Atheist/Other: 349,056.

I think people can see where this is going.
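The compounding above is easy to verify; a quick sketch of the cell count, and what it implies at the roughly 30-respondents-per-cell rule of thumb mentioned in the follow-up posts:

```python
# Cell counts from the post: 101 state/urban-rural combinations
# (50 states x 2, plus all-urban DC), then each further split multiplies.
cells = 101
for categories in (2, 4, 3, 4, 6, 6):  # gender, age, income, employment, race, religion
    cells *= categories

print(cells)       # 349056 demographic cells
print(cells * 30)  # over 10 million interviews at 30 respondents per cell
```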

Replies:   bk69
bk69

@Dominions Son

It gets worse.

You're only grabbing a single (possible) respondent for each demographic, so you have no idea if that individual is representative of that demographic group. Multiply by 30 and you can assume a normal distribution, which means the vote split is actually representative. Although you still have the issue that you've only got the public opinion of those people willing to answer polling questions. So there's no indication of the opinion of those who wouldn't answer, plus no way to determine the outcome of an election, since you don't know what voter turnout from each demographic is likely to be.

Dominions Son

@bk69

It gets worse.

I know. I stopped once I had established the basic pattern.

In a non-homogeneous population, sampling 1% of the target population might get you a truly representative sample, if your sample selection methods aren't biased.

Tiny samples work for QC testing for a mass production process because the units produced are supposed to be homogeneous.

They don't work for opinion polling of humans.

Replies:   bk69
bk69

@Dominions Son

Yeah, but my point was that even having a representative sample is useless for predicting election results because a) the population you're measuring is "people who'll answer polling questions", not "the entire population of the country", and b) you need to know voter turnout data a priori to even determine who'll win the popular vote. And you then need to look at the state-by-state data to determine how the EC breakdown will go.

richardshagrin

@bk69

you can assume normal distribution,

"No matter how many observations you have (what the n is), you can't guarantee the sampling distribution will be normal (Joliffe, 1995). You would need to know the rate at which these observations were recorded to know what the distribution is. This is true both for Poisson distributions and for summed Poisson distributions. Even with a very large n, it might not be enough to bring the sum total lambda (from the combined Poisson distributions) high enough to approximate a normal distribution.

The CLT can't save everything
The graph below shows a simulated Poisson distribution with 1,000,000 random values, λ=1. Even with an n of 1,000,000, the data is heavily right-skewed. If the rate (λ) is low enough, no sample size can guarantee a normal distribution. This holds true for summed Poisson distributions.

The same is true for a binomial distribution. When P=.01 (the event happens 1 in every 100 trials), the distribution is heavily right skewed. So, the distribution of the statistic of a relatively unlikely event with two potential answers doesn't approximate a normal distribution. This is also true when n=100. Therefore, it shouldn't be analysed using statistics that assume a normal distribution. This clearly shows the CLT doesn't apply without reservations."

Statistics don't lie, but actuaries can give you any answer you want, by using statistics.
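The quoted claim, that a Poisson(λ=1) sample stays heavily right-skewed no matter how large n gets, can be reproduced with the standard library alone. This is only a sketch (Knuth's multiplication method for sampling, with an arbitrary seed and sample size), not the simulation from the quoted article:

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw one Poisson(lam) variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)  # arbitrary seed, for reproducibility
data = [poisson_sample(1.0, rng) for _ in range(100_000)]

# Sample moments: Poisson(1) has mean 1 and skewness 1/sqrt(1) = 1,
# so the skew does not vanish however many values we draw.
mean = sum(data) / len(data)
var = sum((x - mean) ** 2 for x in data) / len(data)
skew = sum((x - mean) ** 3 for x in data) / len(data) / var ** 1.5

print(f"mean = {mean:.3f}, skewness = {skew:.3f}")
```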

Replies:   bk69
bk69

@richardshagrin

Statistics don't lie, but actuaries can give you any answer you want, by using statistics.

One of the universities I went to, the psych department offered its own statistics class. The math department accepted that class as at least equivalent to its own offering, but the psych department didn't accept the math department's version as equivalent. So it seemed interesting (particularly since it was worth more credits). One of the things that stuck out was the claim that, when sampling a human population (and remembering that your sample is representative only of the smallest possible identifiable population), once your sample size is at least thirty you can assume that whatever binary response you're measuring will be roughly normally distributed with respect to the actual population (i.e., ask 30 people "are you an Ashkenazi Jew?" and the odds of the proportion of the actual population you've sampled answering the same way are normally distributed around your response rates).

Switch Blayde

@PotomacBob

is averaging a legitimate mathematical way of determining what is correct

When I went for my army physical, my blood pressure was checked twice. One person found it high; the other found it low. So they averaged it out to normal.

Replies:   awnlee jawking
awnlee jawking

@Switch Blayde

When I went for my army physical, my blood pressure was checked twice. One person found it high; the other found it low. So they averaged it out to normal.

Presumably only one of the testers was wearing a white coat ;-)

AJ

ystokes

As I have said before, the issue I have with polls is when they say that asking 1,600 people tells you what the county, state or country thinks.

BarBar

Also, averaging 3 polls taken from a specific state would not make it any better a predictor of overall USA results than one poll from that state. All it would do is make it a more accurate predictor for that state.

Replies:   LupusDei
LupusDei

@BarBar

Nobody averages state and national polls together, to my knowledge. They do it by state, with national polls in their own category. Although I think 538 have tried to infer national margins by combining state polling and then comparing that to national polls as an analysis exercise, once or twice.

BarBar
Updated:

Having said all that, I believe the accuracy of polls has been declining over the last decade or so, due to the methodology. The failure of the polls to predict Trump's win 4 years ago was not an isolated incident across the advanced democracies of the world.

If the pollsters rely on dialing landlines from published directories, then they are excluding a larger and larger proportion of the population, who are opting out of having a landline and only own a mobile phone.

Also, a larger and larger proportion of the population is sick of being polled and declines participation. As that proportion increases, they become harder to predict for.

Replies:   Dominions Son  LupusDei
Dominions Son

@BarBar

I believe the accuracy of polls have been declining over the last decade or so

The accuracy of polls has never been what the pollsters think and/or claim it is. "Dewey Defeats Truman", anyone?

LupusDei

@BarBar

While a lot of that is true, and a truly representative sample is practically impossible now, polling remains rather accurate, if not actually improving.

Even the infamously "wrong" polling of 2016 was, by polling standards, quite accurate: it had only about a 2% systematic error across many polls (believed to be caused mostly by the unexpected turnout and voting consistency of non-college white men, who were thus weighted "incorrectly" in hindsight) at a declared margin of error of around 4% (which is the usual figure they give). And even historically, the polls weren't at all bad. (That analysis has been done and redone many times in many places.) They did call the winner "wrong", but as far as mathematical accuracy goes, that is immaterial. Within a margin of error of parity, there is just that uncertainty.

Replies:   Dominions Son
Dominions Son

@LupusDei

Even the infamously "wrong" polling of 2016 was, by polling standards, quite accurate:

No, what you describe is precision, not accuracy. They aren't the same thing.

Replies:   PotomacBob
PotomacBob

@Dominions Son

Dominions Son
10/30/2020, 6:34:19 PM

@LupusDei

Even the infamously "wrong" polling of 2016 was, by polling standards, quite accurate:

No, what you describe is precision, not accuracy. They aren't the same thing.

For those of us who don't understand the significance of the distinction, could you please explain.

Replies:   Dominions Son
Dominions Son
Updated:

@PotomacBob

For those of us who don't understand the significance of the distinction, could you please explain.

I'll give you an example using marksmanship.

Fire 10 rounds at a target.

Precision = The size of the grouping, how close all the bullet holes are to each other.

Accuracy = the distance between the center of the grouping and the center of the bulls-eye.

Polls like to claim a 4% margin of error, but that's not the whole story.

1. They never tell you how they arrived at that margin of error, and yes it matters.

2. There should be a confidence interval supplied with that.

95% is the typical confidence level used in most statistical work.

What that means is that, assuming they did everything correctly (and that's a pretty big assumption), there is a 95 percent probability that the real answer is within +/- 4% of the answer they got.

What happens in the other 5% of cases? The real answer is outside that +/- 4%, and it could be anywhere outside it.
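For reference, the "+/- 4%" style of figure usually comes from the textbook normal-approximation formula, which assumes a simple random sample (an assumption this thread rightly doubts); a minimal sketch:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the normal-approximation confidence interval for a
    proportion p from a simple random sample of size n (z=1.96 -> 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# The familiar numbers: ~1,000 respondents gives about +/- 3 points
# at 95% confidence; +/- 4 points needs only about 600.
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # ~3.1%
print(f"n=600:  +/-{margin_of_error(600):.1%}")   # ~4.0%
```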

Replies:   PotomacBob
PotomacBob

@Dominions Son

Maybe you didn't notice, but your reply cut off in mid-sentence.

awnlee jawking

@PotomacBob

your reply cut off in mid-sentence

Perhaps the prison let him out early for good behaviour.

Naaaaaaah ;-)

AJ

Replies:   Dominions Son
Dominions Son

@awnlee jawking

Perhaps the prison let him out early for good behaviour.

My sentence got overturned. :)

Replies:   awnlee jawking
awnlee jawking

@Dominions Son

My sentence got overturned. :)

Grammarly does that! ;-)

AJ

Dominions Son

@PotomacBob

Maybe you didn't notice, but your reply cut off in mid-sentence.

I did notice. And I couldn't remember where I was going with that last sentence so I removed it.

rustyken

Accurate polling results depend on honest answers, which in turn depend on the phrasing of the questions. So from my perspective it is best not to place much value on the results.

ystokes

But do they take into account people lying to a poll?
One rumor out there is that, for whatever reason, a number of Trump supporters don't admit to supporting him.

Replies:   Dominions Son
Dominions Son

@ystokes

for whatever reason, a number of Trump supporters don't admit to supporting him.

There are areas where open Trump supporters have gotten death threats.

Replies:   ystokes
ystokes

@Dominions Son

There are areas where open Trump supporters have gotten death threats.

There is an ad from a GOP candidate's rally where one of his supporters holds up a sign right near him that says "Vote red or wind up dead."

Replies:   Dominions Son
Dominions Son

@ystokes

There is an ad from a GOP candidate's rally where one of his supporters holds up a sign right near him that says "Vote red or wind up dead."

There is a big difference, both qualitatively and legally, between a generalized threat not aimed at a specific person and particular threats targeting a specific person and their family, sent to their home.
