Interesting poll.

A couple of things I've learned from 538 are to look at the margin of error, and whether it's registered voters or likely voters. This particular poll has a 3.6% margin of error and samples registered voters. It's also a phone poll; with so many people on cell phones not answering, I think those are going to have more and more trouble over time.

I'm not 100% sure, but since those where-Amash-pulls-voters-from numbers are below the margin of error, they're essentially meaningless. Especially since the overall N=739 means that the 5% number is only 37 voters - shouldn't the subset have a larger margin of error than the overall poll? I think it does but my statistics knowledge is pretty weak.
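For what it's worth, the textbook normal-approximation formula bears this out (a rough sketch, assuming the worst-case 50/50 split, which is what pollsters usually quote):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Full poll: N=739 at p=0.5 gives ~3.6%, matching the poll's quoted MoE
full = margin_of_error(0.5, 739)
print(f"full poll: +/- {full:.1%}")

# Subgroup: the ~37 Amash supporters on their own
sub = margin_of_error(0.5, 37)
print(f"Amash subgroup: +/- {sub:.1%}")
```

So yes, the subgroup's margin of error (as a share of that subgroup) is several times larger than the overall poll's.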

All this is not to pick on you, @maizeman; I bet you know all of the above. It's more that your post was a good jumping off place to the points above that I felt like making.

No offense taken (although I do appreciate the disclaimer!).

I think I have a few answers, but up front I'll agree with you that this is a single poll and we shouldn't read too much into it until we have more data. But going from zero empirical data to a little is still pretty fun. That said:

1) My understanding is that likely voter filters tend to work well close to an election and not so well months in advance, so pollsters rarely use them this far out and switch them on closer to election day.

2) Poll response rates have gone up a lot during the lockdown. This is particularly true for groups that tend to be hard to reach (and hence have very high error rates) like young people and people who only have cell phones. Before, pollsters might have had to dial on the order of 20-40 numbers to get one person to talk to them, and the people who answered tended to be a very nonrandom sample of the population. So polls are getting cheaper to run (fewer calls for the same number of datapoints) and sampling a wider range of the population.

3) The error rates for small subgroups *as a percentage of those small subgroups* are higher, but smaller as a percentage of the total respondents.

Let's say there are exactly 39 Amash supporters in the poll and 23 switched from Biden, 8 switched from Trump, and 8 from undecided. We'll ignore the undecided and treat this as a binary choice between from-Biden and from-Trump (so 23 and 8, 31 total datapoints). In this case 74% of the voters who switched to Amash came from Biden, and a 95% binomial confidence interval is 55%-88% of voters who switched to Amash came from Biden rather than Trump. That's a **huge** margin of error as a percentage of total Amash supporters, but it works out to roughly +/- 1% of the total poll respondents, much smaller than the margin of error on who is leading in the overall Trump/Biden race.
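If anyone wants to check the arithmetic, here's a stdlib-only sketch using the Wilson score interval (a slightly different method than the exact binomial interval behind the 55%-88% figure above, so it comes out a touch narrower, around 57%-86%):

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 23 of the 31 decided switchers came from Biden
lo, hi = wilson_interval(23, 31)
print(f"{lo:.0%} - {hi:.0%} of switchers came from Biden")

# The same interval expressed as a share of all 739 respondents:
# the wide subgroup interval collapses to roughly +/- 1% of the full sample
total = 739
print(f"{lo * 31 / total:.1%} - {hi * 31 / total:.1%} of all respondents")
```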

This is still simplified, since it assumes the pollster isn't over- or under-weighting certain responses to fit their model of the electorate (which all pollsters do), but hopefully it gives a sense of why small subgroup analyses can have greater uncertainty than the overall poll while still having less uncertainty when expressed as a percentage of total poll respondents.*

*A lot of this is cribbed from arguing with people back when Andrew Yang was pulling 3-4% polling results -- and many other Democrats were consistently polling 0-1% -- who would say that since the poll had a 3-4% MoE, Yang's support was statistically no different from zero.
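The flaw in that argument is that the quoted 3-4% MoE is the worst case, at a 50/50 split; for a candidate polling near 3-4%, the interval around their number is much tighter. A quick illustration (the N=1000 poll size here is just a hypothetical, not from any specific poll):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # hypothetical poll size
print(f"MoE at 50% support:  +/- {moe(0.50, n):.1%}")   # the headline ~3% figure
print(f"MoE at 3.5% support: +/- {moe(0.035, n):.1%}")  # much tighter near zero
```

So a 3.5% result with roughly a +/- 1% interval really is statistically distinguishable from zero, even though the poll's headline MoE is around 3%.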