I guess each company can design their MC simulations using whatever rules they want, but I always assumed they just took the average rate of return and the standard deviation and randomly drew a return for each year in the simulation time frame.
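To make that concrete, here's a minimal sketch of the approach I had assumed, in Python. The 7% mean and 15% standard deviation are placeholder numbers I made up, not anyone's actual capital-market assumptions:

```python
import numpy as np

# Sketch of the "mean and standard deviation" approach: draw each year's
# return from a normal distribution and compound a $1 starting balance.
# The 7% mean and 15% standard deviation are made-up placeholders.
rng = np.random.default_rng(42)

mean_return = 0.07
stdev_return = 0.15
years = 30
n_sims = 10_000

annual_returns = rng.normal(mean_return, stdev_return, size=(n_sims, years))
ending_wealth = np.prod(1.0 + annual_returns, axis=1)  # growth of $1

print(f"median growth of $1 over {years} years: {np.median(ending_wealth):.2f}x")
```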
But in this context, Monte Carlo simulations are reshuffled resamplings of existing data pools. They are not randomly generated data based on statistical descriptions like a mean and a standard deviation.
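The difference in code is small, but the philosophy is different: the simulation never invents a return that isn't already in the pool. Drawing from the pool with replacement is one common flavor; the tiny return pool below is a made-up stand-in for real market history:

```python
import numpy as np

# Sketch of resampling an existing data pool: each simulated history is
# built only from returns the pool already contains, drawn again (with
# replacement) in a new order. The pool below is a made-up stand-in,
# not real market data.
rng = np.random.default_rng(0)

historical_returns = np.array([0.04, 0.08, 0.10, -0.12, 0.22, 0.05, -0.03])
years = 30
n_sims = 10_000

resampled = rng.choice(historical_returns, size=(n_sims, years), replace=True)
ending_wealth = np.prod(1.0 + resampled, axis=1)

print(f"5th / 95th percentile growth of $1: "
      f"{np.percentile(ending_wealth, 5):.2f}x / {np.percentile(ending_wealth, 95):.2f}x")
```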
If you see someone talk about Markov Chain Monte Carlo simulations, then your interpretation is much closer to what they're doing. Markov Chains sample defined probability distributions rather than data pools. They're often billed as an "advancement" on Monte Carlo sims, but they're really a much simplified version, designed to answer a different sort of question. You commonly see Markov Chain MC sims used when the ordering of the results is less interesting than just generating an answer space, for example in purely theoretical math problems where people really have no clue what's going on behind the scenes.
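For contrast, here's a bare-bones Metropolis sampler, one common Markov Chain MC recipe. The standard-normal target is an arbitrary choice for illustration; the point to notice is that you have to write that target distribution down before you start:

```python
import numpy as np

# Bare-bones Metropolis sampler: the chain wanders through a probability
# distribution that must be specified up front. The standard normal
# target here is an arbitrary choice for illustration.
rng = np.random.default_rng(1)

def log_target(x):
    # log-density of the target distribution, up to an additive constant
    return -0.5 * x**2

samples = []
x = 0.0
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)      # random-walk proposal
    log_accept = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_accept:    # accept the move or stay put
        x = proposal
    samples.append(x)

samples = np.array(samples)
print(f"sample mean {samples.mean():.3f}, sample std {samples.std():.3f}")  # ~0 and ~1
```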
I'm a physical scientist, and I spent many years learning about and then implementing various types of models of the physical world. I rarely used Monte Carlo simulations, because they're numerically inefficient if your goal is to find the right answer rather than to understand why the answers look a certain way, and I never once had to implement a Markov Chain MC simulation as part of my professional modeling career. In my world, if you already know the probability distribution required to set up a Markov Chain, then you either already have an answer you believe in or you're using circular logic to pretend the answer you want is the correct one.
For government work, subsampling your parameter space never flies. You churn through every combination and publish the giant matrices, or, if you can't get the computer time for that, you find a way to reparameterize your problem into something you CAN solve completely. We usually considered Monte Carlo sims a toy for people who like to poke around in interesting problems, rather than a tool for setting policy. I'm sure I just offended an academic somewhere.
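By "churn through every combination" I mean a full factorial sweep rather than random draws from the parameter space, schematically something like this. The parameter grids and the toy_model function are placeholders, not any real model:

```python
import itertools

# Schematic full sweep: evaluate every combination in the parameter grid
# and keep the whole matrix, rather than randomly subsampling it.
# toy_model and the grids below are placeholders, not a real model.
def toy_model(a, b, c):
    return a * b - c  # stand-in for an expensive physical simulation

a_values = [0.1, 0.2, 0.3]
b_values = [1, 2, 5, 10]
c_values = [0, 1]

results = {
    (a, b, c): toy_model(a, b, c)
    for a, b, c in itertools.product(a_values, b_values, c_values)
}
# `results` holds all 3 * 4 * 2 = 24 combinations: the "giant matrix" you publish in full.
```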
In the context of Vanguard's market simulations, Monte Carlo is a gimmick used to fabricate false confidence. The entire data pool of stock market history is less than 100k days long, and all they're doing is shuffling those days (or months, or years) into a few thousand other possible orders to see what they look like. But market histories are a well-linearized problem, so it's not like it matters all that much whether your hypothetical history goes 4%, 8%, 10% or 10%, 8%, 4%. Over the entire history of the market, the answer is not going to be off by a million percent at the end. As we've previously discussed in this thread, there are much larger uncertainties about the future of our stock market than can possibly be captured by just shuffling the historical monthly returns to see how they might have added up differently. Their Monte Carlo simulations don't account for the 2057 robot uprising, or the 2083 asteroid impact that wipes out Earth and leaves only Mars to shoulder the burden of paying your dividends.
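For a fixed lump sum with no contributions or withdrawals along the way, that order-insensitivity is trivial to check, since the growth factors just multiply:

```python
# Compound a lump sum through the same three returns in two different orders.
forward = 1.00 * 1.04 * 1.08 * 1.10
reverse = 1.00 * 1.10 * 1.08 * 1.04
print(forward, reverse)  # both ~1.2355, i.e. the same ending balance
```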