The analysis doesn't really work that way. Obviously fewer sequences and parameters run faster than more, but you really just need to test things on your own data.
I usually try 10 million generations, sampling every 1,000. The chain may burn in faster than that, but with 10 million generations you can usually (though not always) tell whether it has converged. You should also run the analysis multiple times with different starting seeds.
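If it helps anyone scripting this, here is a minimal sketch of launching several independent BEAST2 runs with different seeds from Python. It assumes the `beast` executable is on your PATH and accepts a `-seed` option (check `beast -help` on your install), that the loggers in your XML use relative file names, and the XML file name here is made up.

```python
# Sketch: launch independent BEAST2 chains, one per seed, each in its own
# directory so the .log/.trees outputs never overwrite each other.
# Assumptions: `beast` is on PATH, it accepts -seed, and logger file names
# in the XML are relative paths. "my_analysis.xml" is a placeholder name.
import pathlib
import subprocess

xml_file = pathlib.Path("my_analysis.xml").resolve()  # hypothetical input file
seeds = [101, 202, 303]                               # one independent run per seed

for seed in seeds:
    run_dir = pathlib.Path(f"run_seed{seed}")
    run_dir.mkdir(exist_ok=True)
    subprocess.run(["beast", "-seed", str(seed), str(xml_file)],
                   cwd=run_dir, check=True)
```

Afterwards you can load the resulting .log files into Tracer together and check that the independent runs agree.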
In reality, if you don't have many gaps, 5 million generations is probably safe if you're really hurting for CPU power, but BEAST2 runs quite fast even on my average laptop.
Basically you can use model testing to pick the best partitioning scheme and substitution models, but you still have to run the analysis and see how each individual dataset behaves.
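For what it's worth, the model-testing step usually boils down to comparing information criteria. Here is a toy sketch of ranking candidate substitution models by BIC; it assumes you already have a maximized log-likelihood and free-parameter count for each model (in practice from something like jModelTest or ModelTest-NG), and the numbers below are made-up placeholders, not real results.

```python
# Toy BIC comparison of substitution models.
# lnL and parameter counts below are placeholders, not real estimates.
import math

n_sites = 1200  # alignment length (number of sites) -- placeholder

candidates = {
    # model: (maximized lnL, number of free parameters) -- placeholders
    "JC":    (-5410.2, 0),
    "HKY+G": (-5287.6, 5),
    "GTR+G": (-5281.9, 9),
}

def bic(lnL, k, n):
    # BIC = k * ln(n) - 2 * lnL; smaller is better
    return k * math.log(n) - 2.0 * lnL

for model, (lnL, k) in sorted(candidates.items(),
                              key=lambda item: bic(*item[1], n_sites)):
    print(f"{model:8s} BIC = {bic(lnL, k, n_sites):10.1f}")
```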
Hey Brian, thanks for your answer. I have actually been working on this for weeks, and just today found that what you say is indeed painfully correct. I was able to produce a very nice tree for 49 haplotypes with an MCMC chain of 80 million generations, sampling every 80,000. That was after 3 runs and confirming the ESS values in Tracer. So if anyone is following along: yes, trial and error. Lots and lots of trial and error.
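If anyone wants to sanity-check ESS outside Tracer, here is a rough Tracer-style sketch in Python. It assumes a tab-separated BEAST2 .log file with "#" comment lines and a header row containing a "posterior" column; the file path is hypothetical, and the cutoff rule here is a common approximation, not Tracer's exact algorithm.

```python
# Rough ESS check on a BEAST2 trace log (file path is a placeholder).
import numpy as np

def effective_sample_size(x):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Autocorrelation estimates for lags 0..n-1
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for lag in range(1, n):
        if acf[lag] <= 0:        # stop at the first non-positive autocorrelation
            break
        tau += 2.0 * acf[lag]
    return n / tau

# Load the trace, drop the first 10% as burn-in, report ESS for the posterior.
log = np.genfromtxt("run_seed101/my_analysis.log", comments="#",
                    names=True, delimiter="\t")
posterior = log["posterior"]
samples = posterior[len(posterior) // 10:]
print(f"posterior ESS ~ {effective_sample_size(samples):.0f}")  # aim for > 200
```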
Are you just building a tree, or a molecular-dating tree? You might try MrBayes, or even RAxML for an ML tree. I had a dataset with ~100 haplotypes (180 sequences) that used to take a massive number of generations to burn in, but the newer MrBayes typically burns in very fast (10-20 million generations).