I suppose to the extent that your yearly expenses are attributable to historically-known factors, those factors should be incorporated into the simulation for ultimate accuracy. We just enter "$20,000" for our yearly expenses because it's easy. "100 lbs. of beef, 500 gallons of gasoline, 100GB of mobile data, interest payments on a $200k loan, etc" would actually be better, since our desire is not really to make a particular amount of dollars disappear, but to acquire products and services we need in our lives.
I disagree. We are merely interested in determining the likelihood of success of a plan to make a particular amount of dollars disappear, taking into account the overall inflation rate but not the inflation rate of any particular product or service. When I run a cFIREsim simulation, I want to find out the historical success rate of withdrawing $20k per year, and
not the historical success rate of withdrawing an amount of money sufficient to purchase $20k worth of beef as of the date I run the simulation. If the price of beef for whatever reason fluctuates in a non-correlated way with the overall inflation rate, I might decide to substitute more chicken for beef, or vice versa.
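The distinction above can be made concrete with a toy sketch. All the price indices below are made-up illustrative numbers, not real CPI or beef data; the point is only to show how the two indexing approaches diverge when a product's price moves independently of overall inflation.

```python
# Hypothetical sketch of the two withdrawal-indexing approaches discussed above.
# The price levels are invented for illustration; they are not real data.

def real_withdrawals(base, index):
    """Scale a base-year withdrawal by a price index (index[0] is the base year)."""
    return [base * level / index[0] for level in index]

cpi = [100, 103, 106, 110]          # overall inflation path (hypothetical)
beef_index = [100, 112, 108, 125]   # product-specific prices, moving independently

print(real_withdrawals(20_000, cpi))        # what cFIREsim models: CPI-adjusted $20k
print(real_withdrawals(20_000, beef_index)) # the "basket of beef" alternative
```

In the second case the withdrawal jumps around with beef prices, which is exactly the behavior a retiree could dodge by substituting chicken.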
Take the hypothetical example of someone whose entire living expenditures do, and always will, consist of nothing but purchases of large quantities of pocket calculators. When this person uses cFIREsim to plan his retirement, should cFIREsim take into account the actual historical cost of pocket calculators (which, I assume, was extraordinarily higher in today's dollars in 1965 than it is today)?
However, in the special case of leveraged-investing-via-mortgage, 100% of the expenses are easily attributable to a knowable factor, and furthermore, that factor is much more likely to be correlated with other cFIREsim factors than the price of beef is.
I don't see leveraged-investing-via-mortgage as a special case. Yes, the mortgage's amortization schedule is what it is because of the interest rate environment at the period start date when the mortgage was obtained. But that doesn't change the fact that the annual withdrawals, although in fact used to service the mortgage, could just as easily be used to purchase beef, or pocket calculators.
I would say more that I'm offering the hypothesis that prevailing mortgage rates have a correlation with future market returns, and plugging that data into cFIREsim would be an informative exploration of that hypothesis.
I mean, this is the reason why we (I think?) like historical simulators like cFIREsim vs. Monte Carlo simulations, because Monte Carlo simulations ignore the possibility of equity returns, bond returns, interest rates, and inflation being interrelated. The Fed (these days) directly connects inflation to interest rates, interest rates directly affect bond returns, and bond returns theoretically and indirectly affect equity returns. None of those links are ironclad, and the gross effect is pretty unpredictable, but the ability to at least recognize any interrelation is one big thing that differentiates cFIREsim from Monte Carlo simulations.
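That difference can be shown in a few lines. The sketch below uses invented (equity return, inflation) pairs; the point is that a historical replay keeps each year's return and inflation together, while a naive Monte Carlo that draws them independently destroys that pairing, so a crash year can no longer drag its high-inflation companion along with it.

```python
import random

# Toy contrast between historical-sequence simulation and independent-draw
# Monte Carlo. All numbers are made up for illustration.

history = [  # (equity_return, inflation) pairs, in historical order
    (0.10, 0.02), (-0.20, 0.08), (0.05, 0.06), (0.15, 0.02), (-0.05, 0.09),
]

def historical_run(start, years, portfolio, spend):
    """Replay consecutive historical years, keeping return and inflation paired."""
    for ret, infl in history[start:start + years]:
        portfolio = portfolio * (1 + ret) - spend
        spend *= 1 + infl  # withdrawal grows with that same year's inflation
    return portfolio

def monte_carlo_run(years, portfolio, spend, rng):
    """Draw return and inflation independently, discarding their pairing."""
    for _ in range(years):
        ret, _ = rng.choice(history)   # one year's return...
        _, infl = rng.choice(history)  # ...a possibly different year's inflation
        portfolio = portfolio * (1 + ret) - spend
        spend *= 1 + infl
    return portfolio
```

In `historical_run`, the -20%-return year always arrives with its 8% inflation; in `monte_carlo_run`, those two shocks are decoupled, which is the interrelation-blindness described above.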
Say 4%-or-lower mortgage rates actually existed in only 20% of cFIREsim's start years. If cFIREsim showed that the success rate of the leveraged-investing-via-4%-mortgage plan was 50% when looking at only those years, vs. the 96% success rate when you use 4% for all years, would you still say that the leveraged-investing-via-4%-mortgage plan has a 96% chance of beating the pay-off-your-mortgage plan? (For the record, if historical mortgage data were used, my random-ass guess is that the cFIREsim success rate would only be reduced to 80% or something; I'm not at all saying that the leveraged-investing-via-4%-mortgage plan is the wrong choice, just that "there's a 96% chance of it being the right choice" could be overstating it a bit.)
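The conditioning step being proposed is simple to express. The sketch below uses entirely hypothetical start-year mortgage rates and pass/fail outcomes (stand-ins for real cFIREsim output) just to show the mechanics: filter the start years by prevailing mortgage rate, then compare the conditional success rate to the overall one.

```python
# Hypothetical stand-in data: prevailing mortgage rate at each start year,
# and whether the leveraged-investing plan succeeded from that start year.
start_year_rate = {1960: 0.059, 1965: 0.057, 1971: 0.074, 1975: 0.090,
                   2003: 0.058, 2012: 0.037, 2016: 0.039, 2020: 0.031}
plan_succeeded = {1960: True, 1965: True, 1971: True, 1975: True,
                  2003: True, 2012: True, 2016: False, 2020: True}

def success_rate(years):
    years = list(years)
    return sum(plan_succeeded[y] for y in years) / len(years)

overall = success_rate(start_year_rate)  # success rate across all start years

# Restrict to start years whose mortgage rate was at or below the 4% threshold.
low_rate_years = [y for y, r in start_year_rate.items() if r <= 0.04]
conditional = success_rate(low_rate_years)

print(f"overall: {overall:.0%}, low-rate-only: {conditional:.0%}")
```

With these invented numbers the conditional rate comes out lower than the overall one, matching the hypothesis above, but real data could of course go either way.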
I'm sure it's true that the success rate of leveraged-investing-via-mortgage among only the subset of historical cases that started in a low-interest-rate environment is lower than 96% (bo knows, please bring cFIREsim back online so we can stop hypothesizing!). But that calls into question not just the approach of using the overall historical leveraged-investing-via-mortgage success rate to predict one's current likelihood of success with that plan, but the entire approach of using overall historical success rates to predict one's current likelihood of success, which we have been doing for some time (in this thread, for example:
http://forum.mrmoneymustache.com/ask-a-mustachian/firecalc-and-cfiresim-both-lie/), and which is why you borrowed from Churchill to astutely observe that it's the worst approach we have, but better than all the rest.
And yes, this is similar to the recent thread on CAPE and SWRs where I made the possibly-heretical admission that I feel the odds of success for someone retiring at today's high CAPE-levels may be slightly lower than the overall cFIREsim success rate indicates.
Most of us (myself included) share your schizophrenia about simultaneously believing in the inability to "time the market," or even "time your retirement," while also worrying about current pessimistic market indicators. That is the reason we, on the one hand, speak about the absurdly ridiculous margins of safety built into our retirement plans (judging by historical SWR analysis) and, on the other hand, continue to worry that signs point to our own situation being like one of the historical 5%, or 2%, or 0.01% failure cases. If we truly, honestly believed that the actual odds of success of our own retirement plans, with absolutely no corrective action or other external margin of safety needed, were greater than 95 out of 100, would any of us really have any legitimate concerns about the safety of the approach, or succumb to OMY syndrome?