It follows from basic statistical theory that larger samples tend to yield more 'accurate' results, in the sense of estimates closer to the true population mean. That said, a sample of 1,000 is generally acceptable.
Everything depends on the type of analysis we want to carry out: institutional, of an individual, of a journal, of a service, of a single period or a group of years, and so on. The sample must be chosen according to what we want to demonstrate or obtain as a result and the objective to be met, that is, whether we want to highlight the value of a service, show the impact of an institution, or measure the visibility of a researcher. Note also that we do not have to combine databases for these studies: if we use only Web of Science, Scopus, Google Scholar, or Dimensions, we simply carry out the analysis within each system separately and then, if we wish, make comparisons between them.
The minimum sample needed is at least 5 years of analysis and 70% of the production to be evaluated, or all of the users in the case of a service, or all of the resources retrieved in the case of a historical bibliometric study.
As such, there is no fixed guide or formula for sample size. If the sample is very small, below 50 publications, I would suggest doing a literature review of the topic; if it is larger than that, you can go for a bibliometric or scientometric study.
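The rules of thumb in these answers (at least 50 publications for a bibliometric study, at least 5 years of coverage, and at least 70% of the production under evaluation) can be sketched as simple checks. This is only an illustrative sketch; the function names are my own, and the thresholds are the ones suggested above, not a formal standard.

```python
def recommend_study_type(n_publications: int) -> str:
    """Rule of thumb: below 50 publications, a literature review
    is more appropriate than a bibliometric study."""
    if n_publications < 50:
        return "literature review"
    return "bibliometric/scientometric study"

def sample_is_sufficient(years_covered: int, sampled: int, total_production: int) -> bool:
    """Suggested minimum: at least 5 years of analysis and
    at least 70% of the production to be evaluated."""
    return years_covered >= 5 and sampled >= 0.7 * total_production

# Example: a corpus of 42 papers is too small for bibliometrics,
# while 800 of 1,000 papers over 6 years meets the suggested minimum.
print(recommend_study_type(42))            # literature review
print(sample_is_sufficient(6, 800, 1000))  # True
```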