Yes. It's like switching from C to F in how you report temperatures and complaining that it's far hotter since the change. ;)
That's a great way of putting it. If you don't mind I might borrow that, as these CAPE discussions seem doomed to continue to come up.
Looking before and after 1997, the modal CAPE value has gone from ~18 to ~25.

Including dividends, I make out the average S&P return to be about 10.1%/yr historically.
The four failure years had a CAPE of 21+.
Historically years with CAPEs under 21 would experience an average return of 11.1%/yr for the following ten years.
But for years with CAPEs over 21 the average return would only be 5.4%/yr for the following ten years.
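For anyone who wants to reproduce that split, it's nothing fancier than grouping forward ten-year returns by starting CAPE. A minimal sketch with made-up placeholder numbers (not the actual series used here):

```python
import pandas as pd

# Placeholder data: one row per starting year, with that year's CAPE and the
# annualized nominal return (dividends included) over the following ten years.
df = pd.DataFrame({
    "cape": [14.0, 19.5, 23.1, 27.8],            # made-up values
    "fwd_10yr_return": [0.12, 0.10, 0.06, 0.04]  # made-up values
})

# Split the starting years at CAPE = 21 and average the forward returns.
above_21 = df["cape"] > 21
print(df.loc[~above_21, "fwd_10yr_return"].mean())  # "cheap" starting years
print(df.loc[above_21, "fwd_10yr_return"].mean())   # "expensive" starting years
```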
It'll make things a lot simpler for all of us if we use CAGR instead of average returns. Ideally also inflation-adjusted, but if you'd rather correct for that separately it can be made to work. "Average" returns are incredibly misleading.* The long-term inflation-adjusted CAGR of the stock market is about 6.9% (or 9.1% without inflation).
*Year 1: -50%, year 2: 100%. Average return 25%/year, actual return over two years 0%.
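To make the footnote concrete, here is the same two-year example worked out in a few lines (using only the numbers already in the footnote):

```python
# Footnote example: -50% in year 1, +100% in year 2.
returns = [-0.50, 1.00]

arithmetic_avg = sum(returns) / len(returns)   # 0.25 -> the "25%/yr average"
growth = 1.0
for r in returns:
    growth *= 1 + r                            # 0.5 * 2.0 = 1.0
cagr = growth ** (1 / len(returns)) - 1        # 0.0 -> the actual 0%/yr

print(f"average: {arithmetic_avg:.1%}, CAGR: {cagr:.1%}")
# average: 25.0%, CAGR: 0.0%
```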
So if the CAPE today is 21+ and history tells us to expect a 5.4%/yr return for the next ten years, we might severely understate the failure rate by using a simulator that draws on all historical return sequences and thereby gives the portfolio a 10.1%/yr average return over that decade.
Since it is impossible to know what will happen over the next 10 years, it becomes difficult to come to conclusions about 30-yr periods. I think it would be better to simulate underperformance for the next ten years followed by 20 years of historical returns.
I think this might be one of the fundamental points that you're misunderstanding about historical backtesting and withdrawal rate strategies generally: volatility and sequence-of-returns risk are the major drivers of portfolio failures, not lower overall returns. This may also be why you're essentially talking past a lot of folks on the board.
I'm going to illustrate this, but I want to preface it with a disclaimer: what I'm about to do is NOT a good way to calculate the risk of retirement strategies, and I'm only doing it to illustrate some properties of the math involved in these calculations.
Let's consider two scenarios:
1) Regular historical backtesting, starting with a $1.2M portfolio invested entirely in stocks and taking out $4k every month (equivalent to a 4% withdrawal rate).
2) Same as above, but investments earn a flat (and low) rate of return for the first 10 years, then switch over to historical data.
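For anyone who wants to poke at this themselves, here is a minimal sketch of the structure. It is not the exact code behind the numbers below: `historical_monthly` is a placeholder list of monthly total returns, and it assumes the historical record in scenario 2 resumes ten years after the FIRE date.

```python
def survives(returns_seq, start_balance=1_200_000, monthly_withdrawal=4_000):
    """True if the balance stays positive through the whole return sequence."""
    balance = start_balance
    for r in returns_seq:
        balance = balance * (1 + r) - monthly_withdrawal
        if balance <= 0:
            return False
    return True

def scenario_1(historical_monthly, start, months=360):
    # Plain historical backtest: 30 years of actual monthly returns from `start`.
    return survives(historical_monthly[start:start + months])

def scenario_2(historical_monthly, start, flat_annual=0.028, months=360):
    # Flat low return for the first 120 months, then the historical record
    # picks up ten years after the FIRE date for the remaining 20 years.
    flat_monthly = (1 + flat_annual) ** (1 / 12) - 1
    seq = [flat_monthly] * 120 + historical_monthly[start + 120:start + months]
    return survives(seq)
```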

So you'll notice a couple of things. First, failures are still associated with the same two historical time frames (although the actual FIRE dates that fail are 10 years earlier than they were under normal historical scenarios). Second, a decade of low returns reduces the best-case outcomes by tens of millions of dollars. Finally, I experimented with the value of the fixed low rate in the second scenario. The first scenario with monthly data gave a failure rate of 2.2%. To get close to the same failure rate (2.0%) in the second scenario, I had to set the fixed rate of return for the first decade at 2.8%/year.
Taking this to its logical extreme, as I believe someone else already pointed out above, if the return is a fixed low rate for the full 30 years, you only need about 1.3% per year to avoid running out of cash.
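That 1.3% figure is just the flat rate at which $1.2M paying out $4k/month runs dry right around the 30-year mark, which is easy to sanity-check (simple monthly compounding assumed):

```python
def years_until_broke(annual_return, start_balance=1_200_000, monthly_withdrawal=4_000):
    """How long a flat-return portfolio survives a fixed monthly withdrawal."""
    monthly_return = (1 + annual_return) ** (1 / 12) - 1
    balance, months = start_balance, 0
    while balance > 0 and months < 1200:   # cap at 100 years
        balance = balance * (1 + monthly_return) - monthly_withdrawal
        months += 1
    return months / 12

print(years_until_broke(0.013))  # a bit over 30 years
print(years_until_broke(0.0))    # exactly 25 years with zero growth
```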
Also, because this comes up a lot when people start thinking about this: no, it doesn't work to just subtract a fixed percentage of annual return from each year for the first ten years. If you'd like to discuss why that is we can, but this post is already quite long.
TL;DR: Hard coding low fixed returns early in retirement actually skews predictions of FIRE success to be more optimistic than the real historical return data suggests. This produces misleadingly optimistic forecasts, which is why anyone reading this thread for its intended purpose and not as part of the current mess should skip over this whole post.