Confirmation bias? Their conclusion is still supported by the data: far too many people guess the rule before ever getting a "no" result. That holds regardless of what the exact rule is (rising numbers vs. rounded numbers rising).
Will anyone here admit to guessing without getting a "no" answer first? If so, can you tell us about it?
I'm not saying that they are wrong. I'm saying that using their puzzle as a way of demonstrating that their conclusions have merit is, in and of itself, confirmation bias. They believed it would work, they constructed it to work, and when it did work they didn't bother to check whether it was a quirk of the puzzle or genuinely good data.
So the experiment they concocted to demonstrate confirmation bias is itself afflicted with all kinds of confirmation bias.
I was amused is all. Confirmation bias is a thing, no argument here. It isn't limited to laypeople either.
But that really isn't what that article is about.
If I were peer reviewing the experiment, I'd have made the following recommendations:
1. You need to randomize the starting prompt. Your own choice of seed numbers introduces confirmation bias. Preferably, the puzzle's rule should automatically generate three numbers that fit it. Since the test is administered on a computer, there is no rational reason not to do this.
2. You need to randomize the formula. Picking one rule and having every participant try to guess that same rule is not going to be relevant to your thesis. You need at least four, preferably more, and they need mathematical variety. Consult a mathematician about possible issues with mathematically literate folks taking your test.
3. You need a control. Barring that, you need an antithesis test. How do the results compare when almost all initial guesses would return a "No"? How about when there is an actual reward for getting it right? Consider mturk with a $10 award to the first 100 people to guess the correct rule.
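Recommendations 1 and 2 together are easy to implement. Here's a minimal sketch of what I mean, where the rule pool, the rule names, and the number range are all my own illustrative assumptions, not anything from the article: pick a rule at random, then rejection-sample a starting triple that satisfies it.

```python
import random

# Hypothetical rule pool: each entry is a label plus a predicate on (a, b, c).
# These four rules are assumptions for illustration only.
RULES = [
    ("strictly increasing", lambda a, b, c: a < b < c),
    ("all even", lambda a, b, c: a % 2 == 0 and b % 2 == 0 and c % 2 == 0),
    ("arithmetic progression", lambda a, b, c: b - a == c - b),
    ("sum is even", lambda a, b, c: (a + b + c) % 2 == 0),
]

def make_puzzle(rng=random):
    """Pick a random rule, then generate a seed triple that satisfies it."""
    name, rule = rng.choice(RULES)
    while True:  # rejection-sample until the triple fits the chosen rule
        triple = tuple(rng.randint(1, 20) for _ in range(3))
        if rule(*triple):
            return name, rule, triple
```

Passing a seeded `random.Random` instance makes a run reproducible, which you'd want anyway for auditing which rule each participant actually got.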
So the puzzle in that article was not about science, it was about providing positive feedback that confirmation bias is a thing. Which isn't science, it's confirmation bias!
I've given a few minutes' thought to how to present it such that 50% of initial guesses return a yes and 50% a no. That would provide a much better data set, since you could follow up on how many people think they've nailed it after a first string of yeses versus how many keep going until they get a no. Limiting it to numbers is itself going to cause problems. A lot of us have had quite a bit of math training, so with simple rules that first guess may tend to come out a yes most of the time regardless.
So, yeah. The data from the puzzle is total garbage; it illustrates nothing like what the article is claiming. The article's claims happen to be supported by other, independent, superior data.
There is an outside possibility, and I'm just throwing it out there, that confirmation bias isn't necessarily a bad thing. It's not good either. It's just there, a thing to be aware of. I think that was the point of the article. So it was deeply amusing to me how unaware the article was of how crappy its own puzzle was. They missed an opportunity to be meta.