It is generally accepted that the existing limit theory, which deals exclusively with "infinitesimal number forms", may operate in one of the following two ways (a short sketch of both ways follows item (2)):
(1) During the whole quantitative cognizing process applied to infinitesimal number forms, no one dares to say "let them be zero, or take the limit, or take the standard number"... So the infinitesimals in the calculating operations are never so small as to be put out of the quantitative cognizing process, and the quantitative cognizing process is carried on forever. This situation has existed in mathematics since antiquity; it was Zeno who bravely and sharply criticized this unscientific phenomenon by creating Zeno's Paradox. As we know, all cases of such operations have formed a huge suspended Zeno's Paradox Family (including the newly discovered "Harmonic Series Paradox").
(2) During the whole quantitative cognizing process applied to infinitesimal number forms, someone suddenly cries "let them be zero, or take the limit, or take the standard number"... So, all of a sudden, the infinitesimals in the calculations become too small to stay in the quantitative cognizing process; they must disappear from (be put out of) any related quantitative cognizing process immediately. This situation has existed in mathematics since antiquity; it was Berkeley who bravely and sharply criticized this unscientific phenomenon by creating Berkeley's Paradox. As we know, all cases of such operations have formed a huge suspended Berkeley's Paradox Family.
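To make the two modes concrete, the following is a minimal sketch using the textbook computations most commonly associated with the two criticisms; the particular examples (the dichotomy series for Zeno's Paradox, the derivative of x^2 for Berkeley's Paradox) are illustrative choices, not drawn from the passage above.

% Mode (1): the infinitesimal remainder is never dismissed, so the process never ends
% (the situation targeted by Zeno's Paradox, e.g. the dichotomy series):
\[
  \sum_{k=1}^{n} \frac{1}{2^{k}} \;=\; 1 - \frac{1}{2^{n}},
  \qquad \frac{1}{2^{n}} > 0 \ \text{for every finite } n .
\]

% Mode (2): the increment is treated as nonzero while one divides by it, then suddenly
% declared to be zero (the move targeted by Berkeley's Paradox):
\[
  \frac{(x+\Delta x)^{2}-x^{2}}{\Delta x} \;=\; 2x + \Delta x
  \;\longrightarrow\; 2x
  \qquad \text{once one ``lets $\Delta x$ be zero''.}
\]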
When applying the above first method of cognition to the "whether 0.9999999… equals 1" argument, one feels assured and fully justified in concluding that 0.9999999… equals 1.
When applying the above second method of cognition to the "whether 0.9999999… equals 1" argument, one feels equally assured and fully justified in concluding that 0.9999999… does not equal 1.
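For reference, here is a minimal worked sketch of the computation both camps start from, assuming the usual reading of 0.9999999… as a geometric series; which conclusion one is entitled to draw from it is precisely the point at issue above.

% After n digits the expansion is a finite geometric sum:
\[
  \underbrace{0.99\ldots 9}_{n\ \text{nines}}
  \;=\; \sum_{k=1}^{n} \frac{9}{10^{k}}
  \;=\; 1 - 10^{-n}.
\]

% One reading passes to the limit and discards the remainder; the other insists that the
% remainder never vanishes at any finite stage:
\[
  \lim_{n\to\infty}\bigl(1 - 10^{-n}\bigr) \;=\; 1,
  \qquad\text{while}\qquad
  1 - 10^{-n} \;<\; 1 \ \text{for every finite } n .
\]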
So the debate over "whether 0.9999999… equals 1" can go on endlessly and fruitlessly, with nothing but a variety of wordplay.