Do you mean the proportion of missing data that is acceptable for a study? I think you first need to determine the type of missing data. There are three types of missing data:
1-Missing Completely at Random (MCAR)
2-Missing at Random (MAR)
3-Missing Not at Random (MNAR)
You may check the following reference:
Dong Y, Peng CY. Principled missing data methods for researchers. Springerplus. 2013 May 14;2(1):222. doi: 10.1186/2193-1801-2-222. PMID: 23853744; PMCID: PMC3701793.
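One quick, hedged way to get a feel for the mechanism is to check whether the missingness of one variable is related to observed values of another; if it is, MCAR is implausible. A minimal sketch below, assuming a hypothetical dataset with 'income' and 'age' columns (these names are illustrative, not from the reference):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")            # hypothetical dataset
miss = df["income"].isna()                # indicator: is income missing?

# Compare the age distribution of rows with vs without missing income.
age_missing = df.loc[miss, "age"].dropna()
age_observed = df.loc[~miss, "age"].dropna()
t, p = stats.ttest_ind(age_missing, age_observed, equal_var=False)
print(f"Missing rate: {miss.mean():.1%}, Welch t-test p-value: {p:.3f}")
# A small p-value suggests missingness depends on age, i.e. not MCAR.
# Note: no test on the observed data alone can separate MAR from MNAR.
```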
Hey – hold on a second. Before you throw away data, investigate.
Think of missingness as interestingness! Rather than just regarding missingness as a nuisance (which it may be) you also need to ask yourself why a lot of data are missing. For example, no data on smoking in a patient's chart raises the questions of what kind of doctor doesn't ask about or record smoking, and what kind of patient gets overlooked. Missing patient chart data on sexual side effects of medications can reveal a lot about our biases: it tends to be missing in older patients, women, patients with mental health problems, and patients with learning disabilities. So sometimes the reasons for the missingness may be more interesting than the actual data themselves.
Missing data on rating scales can uncover blocks of questions that simply don't apply to a significant number of people, such as questions that assume the respondent is employed. I encountered the amusing case of a scale meant to evaluate the burden of hernias on the person's daily life. It had an item about finding it difficult to climb stairs because of the hernia. You can imagine how that item applied to patients in rural Malawi!
So before you write off the missing data, try to find out why it's missing. There may be a story in that.
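If you want to chase that story systematically, a simple tabulation of missingness rates by subgroup is often enough to show where the gaps cluster. A minimal sketch, assuming a hypothetical chart extract with 'smoking_status', 'sex', and 'age_group' columns (none of these names come from the thread):

```python
import pandas as pd

charts = pd.read_csv("patient_charts.csv")        # hypothetical chart extract
charts["smoking_missing"] = charts["smoking_status"].isna()

# Missingness rate by subgroup: large differences hint at a story behind the gaps.
rates = (charts
         .groupby(["sex", "age_group"])["smoking_missing"]
         .mean()
         .sort_values(ascending=False))
print(rates.to_string(float_format="{:.1%}".format))
```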
If you have a lot of missing data and you intend to impute it, then you have a lot of reading to do. As Abubakr Al-shoaibi points out, you need to understand the mechanism(s) by which your data went missing.
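If imputation turns out to be justified, one common starting point (an option, not a prescription from this thread) is model-based iterative imputation, which is generally defensible under an MAR assumption. A hedged sketch with scikit-learn, using placeholder column names:

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("study_data.csv")                  # hypothetical dataset
numeric = df[["age", "bmi", "blood_pressure"]]      # hypothetical columns

# Each variable with gaps is modelled from the others, iteratively.
imputer = IterativeImputer(random_state=0, max_iter=10)
imputed = pd.DataFrame(imputer.fit_transform(numeric), columns=numeric.columns)
# For inference, prefer proper multiple imputation (several imputed datasets
# pooled with Rubin's rules) rather than a single filled-in dataset.
```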