From that great thread:
psteinx wrote:
"I'd be interested in seeing a huge grid of analysis - overlaying many lag periods and different buffer sizes. Then I'd like to see the results for different markets, time frames, and perhaps different assumptions about transaction costs and taxes. Results would include simple returns, risk-adjusted returns, number of trades, and perhaps a few other things.
I'm sure if you backtest enough strategies and tinker with the variables enough, some would look good.
And in fact, it's possible that there's a bit of merit to these kinds of strategies, provided your costs are low enough, because from various things I've read, there is a bit of momentum to the stock market, short term.
But I wonder how robust these strategies are, on the whole. If we backtest and find that a 200 DMA with a 1% boundary looks good, but that the same thing with a 2% boundary does not, or that using a 180 DMA yields substantially different results, then the whole exercise becomes a bit suspicious. In theory, by varying a number of the key parameters, you could come up with thousands or perhaps even millions of potential strategies. If 70% of strategies with reasonable parameters work, then I'd be somewhat interested. If only 15% of them work, and somebody is cherry picking the most successful from within that 15% to advocate the strategy, then I'd be very skeptical. Of course, many folks might have different thresholds/standards for saying something "works", as well."
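The grid psteinx describes is easy to sketch. Below is a minimal, hypothetical illustration (not anyone's actual strategy): a toy `backtest` function that goes long when price closes above the DMA by more than a buffer and goes to cash when it closes below by more than the buffer, swept over a few windows and buffer sizes. The price series is a synthetic random walk with slight upward drift, standing in for real market data; transaction costs and taxes are ignored.

```python
import random

def backtest(prices, window, buffer):
    """Toy DMA strategy: long if price > DMA*(1+buffer),
    cash if price < DMA*(1-buffer), else hold prior position.
    Returns (total return, number of position changes)."""
    position = 0      # 0 = cash, 1 = long
    trades = 0
    equity = 1.0
    for i in range(window, len(prices)):
        # earn today's return on yesterday's position
        if position == 1:
            equity *= prices[i] / prices[i - 1]
        dma = sum(prices[i - window + 1:i + 1]) / window
        new_pos = position
        if prices[i] > dma * (1 + buffer):
            new_pos = 1
        elif prices[i] < dma * (1 - buffer):
            new_pos = 0
        if new_pos != position:
            trades += 1
        position = new_pos
    return equity - 1.0, trades

# Synthetic random walk with a small positive drift (stand-in for an index).
random.seed(0)
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

# Sweep a small grid of windows and buffer sizes.
results = {}
for window in (150, 180, 200, 220):
    for buffer in (0.0, 0.01, 0.02):
        results[(window, buffer)] = backtest(prices, window, buffer)

for (window, buffer), (ret, trades) in sorted(results.items()):
    print(f"DMA {window:3d}, buffer {buffer:.0%}: "
          f"return {ret:+.1%}, trades {trades}")
```

On random data like this, some cells of the grid will look good by pure luck, which is exactly the cherry-picking concern in the quote: the question is whether neighboring parameter choices (180 vs. 200 days, 1% vs. 2%) tell a consistent story.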
That pretty much sums it up for me. The stock market is *random*, folks. You can't "beat" random with a formula. Luckily, it trends up, so if in doubt, invest.
-W