Is there any method/framework/guideline for determining a cut-off threshold when we try to eliminate low-importance terms based on the weights generated by tf-idf?
I advise you to take a look at the Zipf curve; it is used in the IR field to indicate which terms should be eliminated during indexing (both the very rare terms and the very frequent terms are dropped).
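A rough sketch of that idea in Python, assuming plain tokenized documents; the cut-off fractions `head_frac` and `tail_frac` are illustrative assumptions to tune on your own data, since the Zipf curve itself does not prescribe exact values:

```python
from collections import Counter

def zipf_filter(tokenized_docs, head_frac=0.01, tail_frac=0.20):
    """Rank terms by collection frequency and drop both tails of the Zipf curve."""
    freq = Counter(term for doc in tokenized_docs for term in doc)
    ranked = [t for t, _ in freq.most_common()]      # rank 1 = most frequent
    n = len(ranked)
    head_cut = int(n * head_frac)                    # very frequent, stop-word-like terms
    tail_cut = int(n * tail_frac)                    # very rare terms (hapaxes, noise)
    return set(ranked[head_cut:n - tail_cut])        # keep the middle of the curve

docs = [["the", "cat", "sat", "down"], ["the", "dog", "sat", "quietly"]]
print(zipf_filter(docs, head_frac=0.2, tail_frac=0.2))
```

Plotting rank against frequency on a log-log scale makes it easier to see where the head and tail actually start for your corpus before fixing the two fractions.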
If you are working on a domain-specific dataset, it would be a good idea to use the Zipf curve, as suggested by Mhamed, or go further and use the Vergne method (Découverte locale des mots vides dans des corpus bruts de langues inconnues, sans aucune ressource, Vergne 2004), as I did for a terminology extraction task. In that case, I combined frequency and tf-idf rankings and considered the top 30% of each list as good candidates (see the sketch below). I evaluated this on two comparable corpora, in chemistry and telecommunications, in French and English. The frequency of informative words gave an average precision of 81.5% in French and 51% in English; tf-idf gave 75.5% in French and 70.5% in English.
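A minimal sketch of the "top 30% of each ranked list" step, assuming Python with scikit-learn; the toy corpus and the choice of intersecting the two candidate lists are my own illustrative assumptions, not part of the original evaluation:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def top_fraction(scores, terms, frac=0.30):
    """Return the terms whose scores fall in the top `frac` of the ranking."""
    order = np.argsort(scores)[::-1]                 # highest score first
    k = max(1, int(len(terms) * frac))
    return {terms[i] for i in order[:k]}

corpus = ["optical fibre transmission loss",
          "fibre amplifier gain measurement",
          "transmission loss in optical networks"]

# Ranking 1: raw collection frequency of each term.
count_vec = CountVectorizer()
freq = np.asarray(count_vec.fit_transform(corpus).sum(axis=0)).ravel()
freq_terms = count_vec.get_feature_names_out()

# Ranking 2: tf-idf weight summed over the documents.
tfidf_vec = TfidfVectorizer()
tfidf = np.asarray(tfidf_vec.fit_transform(corpus).sum(axis=0)).ravel()
tfidf_terms = tfidf_vec.get_feature_names_out()

# Keep terms that appear in the top 30% of both rankings.
candidates = top_fraction(freq, freq_terms) & top_fraction(tfidf, tfidf_terms)
print(candidates)
```

Taking the union instead of the intersection would give higher recall at the cost of precision; which combination works best depends on the domain and language, as the French/English figures above suggest.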
If you work on a multi-domain or general-language dataset, it is trickier, and I'd follow Chris' advice.