Hi all,
I wanted to start a new thread for this. In another thread people are discussing FIRE failures and asking if there is a rule of thumb to determine if one is in trouble.
I have been wondering about this as well, and an idea I had ended up producing a simple rule of thumb for my specific circumstances that might generalize. I'll describe how I arrived at it and then state the rule itself.
1. I started with the FIREcalc graph that plots the portfolio balances over time - year of retirement (1 through n) on the x axis, and portfolio value on the y axis, with one line for each starting retirement year. (I like cfiresim better, but I couldn't find a way for it to give me all of the raw data I wanted, so I used FIREcalc instead.)
2. For the particular set of numbers I entered, there were 90 data series, of which 36 ended up failing (falling below 0).
3. I then imagined the graph from step 1 with only these 36 failing sequences on it.
4. I then imagined a new data series equal to the highest portfolio value across all of the failing series for each year of retirement (1 through n). In other words, for this particular FIREcalc run, an unlucky retiree whose balance in a given year was at or below this line could still ultimately fail. Call this the "unlucky line".
5. For each year of retirement (1 through n), I then counted how many of the 54 successful data series were below the "unlucky line". These are situations where the retiree looks like they could be in trouble but their lucky future returns ended up rescuing them. Call this the "lucky count".
6. As you might expect, in the first few years of FIRE, the "lucky count" is pretty high because all of the series are jumbled together, but towards the end it is quite low because the series start to separate themselves. What was surprising to me was that for pretty much the last half of this FIREcalc data, the "lucky count" was basically zero.
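For anyone who wants to reproduce steps 2 through 6, here is a sketch in Python/NumPy. It assumes the raw per-series balances have been loaded into an array called `balances` (FIREcalc doesn't export the data in this shape directly, so the loading step is omitted, and the function name is mine):

```python
# Sketch of steps 2 through 6, assuming `balances` is a NumPy array of
# shape (n_series, n_years) holding each starting year's balance path.
import numpy as np

def lucky_counts(balances):
    """Count, per retirement year, the successful series sitting below the
    highest-valued ultimately-failing series (the "unlucky line")."""
    # Step 2: a series fails if its balance ever drops below zero.
    failed = (balances < 0).any(axis=1)
    failing = balances[failed]
    succeeding = balances[~failed]
    # Step 4: the unlucky line is the year-by-year max over failing series.
    unlucky_line = failing.max(axis=0)
    # Step 5: count successful series below the unlucky line each year.
    return (succeeding < unlucky_line).sum(axis=0)
```

Plotting the result against the year of retirement should show the pattern described in step 6: a high count early on and essentially zero in the back half.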
What this means is that -- for the particular dataset I looked at -- for a 40 year FIRE period, if at the 20 year point I had less than my starting amount then I had over a 90% historical probability of failure AND if at the 20 year point I had more than my starting amount then I had a 100% historical probability of success.
I like the symmetry that the picture becomes clear halfway through the FIRE period, and also that the unlucky line pretty much happens to pass through the original portfolio amount at that halfway point.
The rule is not particularly sensitive to the year or the amount. So, for example, at 16 years the failure and success percentages are 90% and about 95% respectively. At 23 years, using 75% of the initial amount as the threshold instead, they are both 100%.
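The halfway-point check itself is easy to compute from the same hypothetical `balances` array used above (again just a sketch; the function name and signature are mine, not anything FIREcalc provides):

```python
# Sketch of the midpoint rule of thumb: conditional historical frequencies of
# failure and success, given where the balance sits at a chosen year relative
# to the starting amount. `balances` is a hypothetical (n_series, n_years)
# NumPy array of per-series balance paths.
import numpy as np

def midpoint_rule_stats(balances, start, year):
    # A series fails if its balance ever drops below zero.
    failed = (balances < 0).any(axis=1)
    # Which series are below the starting amount at the chosen year?
    below = balances[:, year - 1] < start
    p_fail_given_below = failed[below].mean()
    p_success_given_above = (~failed)[~below].mean()
    return p_fail_given_below, p_success_given_above
```

Sweeping `year` and the threshold over a grid would show how sensitive (or insensitive) the rule is for a given dataset.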
It does look like it is still quite possible to be lucky up through 15 years into the FIRE period: at year 15 there were 17 successful sequences below the "unlucky line" that eventually recovered (i.e., the "lucky count" was 17).
All of the above analysis is based on my particular numbers and my particular FIREcalc inputs (a 40-year retirement, 85% equities, a 0.05% expense ratio, and a few years of higher-than-4% spending followed by ~37 years of ~4% spending). I am not sure to what degree it can be generalized, but I thought I would throw it out there for discussion.
2Cor521
People like Pfau have attempted to mathematically analyze it, so reading his stuff would be a good place to start, if you want to get beyond a gut-level "hmm, maybe I should pare back spending" feeling.
I'm not aware of any guidelines from the research that help early retirees gauge whether their portfolio performance is on a successful trajectory. People like Pfau are generally focused on determining the withdrawal rate that can be expected to be sustainable in all but the worst-case and near-worst-case scenarios, or on developing more dynamic withdrawal strategies that protect against downside risk and/or let retirees take advantage of upside outcomes. Has anyone put forward a history-based rule of thumb (akin to the 4% rule) that tries to quantify the "gut feeling" -- something like: if you're using a 4% WR and your portfolio's average performance over the first W years is less than X%, then you need to drop to a Y% WR until the portfolio recovers by Z%?