What sample size are we talking about? With a very large sample, even trivial effects easily come out statistically significant, and if you run many tests you have to make a Bonferroni correction. On the other hand, too small a sample gives you falsely non-significant results. We were taught ages ago that at n = 30 the sampling distribution is approximately normal, and we in Finland used to calculate with smaller sample sizes; in Sweden, where they had more money for research, they recruited big samples, but the research was not necessarily better. It depends on your research question: if you are studying a rare disease you cannot expect to have many cases. You can calculate the sample size you need with free web calculators.
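As an illustration, the kind of calculation those web calculators do can be sketched in a few lines of Python. This is a simplified sketch using the standard normal-approximation formula for estimating a proportion; the margin of error e, the conservative p = 0.5, and z = 1.96 are example inputs, not values from the answer above.

```python
import math

def sample_size_for_proportion(e, p=0.5, z=1.96):
    """Required n to estimate a proportion within margin of error e at
    roughly 95% confidence (z = 1.96), assuming simple random sampling
    from a large population. p = 0.5 is the conservative worst case."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# For a 5% margin of error with the conservative p = 0.5:
print(sample_size_for_proportion(0.05))  # 385
```

Note that this ignores nonresponse and design effects, which real surveys must also budget for.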
In finite populations, a finite population correction (fpc) factor is often used in design-based estimation (with an equivalent adjustment, via the summations used, in model-based estimation), so that as you approach a census, the relative standard error of any estimated total approaches zero. This just means that the error due to sampling approaches zero. However, nonsampling error, such as measurement error and frame error, can be much bigger than sampling error. That is why samples can sometimes be more accurate than censuses, especially when data are collected on a frequent basis, where more measurement error may occur. In establishment surveys, the smallest respondents often provide the least reliable data.
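The effect of the fpc can be seen in a minimal Python sketch of the standard variance formula for the sample mean under simple random sampling without replacement; the values of s and N below are hypothetical:

```python
import math

def se_mean_with_fpc(s, n, N):
    """Standard error of the sample mean under simple random sampling
    without replacement from a finite population of size N, using the
    finite population correction sqrt((N - n) / (N - 1))."""
    return (s / math.sqrt(n)) * math.sqrt((N - n) / (N - 1))

# As n approaches N (a census), the error due to sampling vanishes:
s, N = 10.0, 1000
for n in (100, 500, 1000):
    print(n, se_mean_with_fpc(s, n, N))
```

At n = N the standard error is exactly zero, which is the point made above: any remaining error in a census is nonsampling error.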
If some carefully, and less frequently, collected census data can be used as auxiliary data, then model-assisted design-based sampling can do very well. In a model, these auxiliary data are called 'regressor' data.
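One of the simplest model-assisted estimators that uses such regressor data is the ratio estimator. Here is a hedged sketch with made-up numbers: y is the survey variable, x is the auxiliary variable known for the sample, and X_total is its known census total.

```python
def ratio_estimate_total(y_sample, x_sample, X_total):
    """Ratio estimator of a population total: scale the known auxiliary
    (regressor) total X_total by the sample ratio sum(y)/sum(x).
    Works well when y is roughly proportional to x."""
    r = sum(y_sample) / sum(x_sample)
    return r * X_total

# Hypothetical survey: y = current values, x = values from a past census.
y = [120, 80, 200, 150]
x = [100, 90, 180, 160]
X_total = 5300  # known census total of x over the whole population
print(ratio_estimate_total(y, x, X_total))  # 5500.0
```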
If you are talking about a great many small samples from a great many relatively small populations, as can happen in official statistics, then effective tradeoffs in the sample sizes of each will require a great deal of experimentation.
"Finite populations mean a population of limited size. Sometimes a limited population is very large, so it may be treated as an infinite population for statistical inference. In statistics, the population size may not be known. The assumption of an infinite or finite population is important. If a survey is conducted in a completely random manner (sampling with replacement), the same person could be surveyed twice. The chance of this occurring diminishes as the population increases."
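The "surveyed twice" point can be made concrete with a short calculation, sketched below; the sample and population sizes are arbitrary examples.

```python
def prob_any_duplicate(n, N):
    """Probability that sampling n units with replacement from a
    population of size N selects at least one unit more than once."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (N - i) / N
    return 1.0 - p_all_distinct

# The chance of a repeat shrinks as the population grows:
for N in (100, 10_000, 1_000_000):
    print(N, prob_any_duplicate(30, N))
```

For a very large N the probability is near zero, which is why a large finite population can be treated as effectively infinite.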
The question can be answered only when you specify "how small?". In finite sampling, various techniques exist for drawing samples, depending on the nature of the population (whether it is homogeneous or heterogeneous). One can always decide on an optimal size. In field surveys or social surveys, if the sample size is very large, non-sampling errors will usually be more frequent, resulting in spurious results.
Luis, I think the answer to your question will depend on the kind of analysis you will make. If you are going to do descriptive analysis, what you get is exactly what you have (from your sample, no matter how small or large it is).
If you are going to perform inferential analysis, smaller sample sizes make non-significant results more likely even when a real effect exists in the population, because statistical power is low.
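This power argument can be illustrated with a quick Monte Carlo sketch; the effect size, trial count, and the normal critical value 1.96 below are all illustrative assumptions of mine, not from the answer above.

```python
import random
import statistics

def sim_power(n, effect=0.5, trials=2000, seed=1):
    """Monte Carlo sketch: fraction of simulated studies reaching
    significance when the true mean is `effect` (sd = 1). Uses the
    normal critical value 1.96 as a rough approximation."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(effect, 1.0) for _ in range(n)]
        t = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
        if abs(t) > 1.96:
            hits += 1
    return hits / trials

# Larger n gives higher power to detect the same real effect:
print(sim_power(10), sim_power(50))
```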
One relatively easy to obtain, somewhat old but excellent book, covering much of finite population statistics (but little on prediction, except around page 158, I think), is the following:
Cochran, W.G. (1977), Sampling Techniques, 3rd ed., John Wiley & Sons.
More recently (second edition in 2010), Sharon Lohr's Sampling: Design and Analysis is an excellent, easy-to-read book.
Regarding my mention of the fpc in an answer above, attached is a link.
Basically, totals, etc., can be estimated for a finite population with varying degrees of accuracy, depending upon stratification, the sample and population sizes within strata, inherent variance, and bias, which often relates to nonsampling error as much as or more than to sampling error.
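As a toy illustration of how stratification enters such estimates, here is a minimal stratified expansion estimator of a total; the strata sizes and sample values are invented, and sampling error within strata is ignored in the comments.

```python
def stratified_total(strata):
    """Design-based estimate of a population total under stratified
    simple random sampling: each stratum's sample mean is expanded by
    its population size N_h. `strata` is a list of (N_h, sample_values)
    pairs with hypothetical data."""
    total = 0.0
    for N_h, sample in strata:
        total += N_h * (sum(sample) / len(sample))
    return total

# Two strata: 900 small units and 100 large units.
strata = [(900, [2.0, 3.0, 1.0]), (100, [50.0, 70.0])]
print(stratified_total(strata))  # 7800.0
```

Separating the large units into their own stratum is one standard way to control the variance the answer mentions.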
If you want to know more about the mechanisms that produced your finite population, then you get into the concept of a "superpopulation."
Note that most or all 'formulas' you will likely find for estimating these totals, variances, etc., implicitly assume only sampling error, which of course does not reflect reality.
So ... from above ... in answer to "Do small samples affect the accuracy of the data in finite populations?" the answer is "Yes, but there are other factors to consider, notably variance." I assume you mean: does a small n generally lower the accuracy of the resulting aggregate data to be reported (say, in official statistics)? Generally, yes. But a smaller sample can improve accuracy if you obtain better-quality data by not trying to collect too much, especially on a frequent basis. Sometimes a relative standard error (RSE) can actually be improved by collecting less data and doing it well, whereas a textbook will generally say a larger sample always means a lower RSE. That does not always happen.
I do not know the precise sense of your question or its field of applicability. However, if we take as reference the fields of control and discrete dynamics, the answer is yes. If you want a discretized system to have a response close to that of the original continuous-time one, it is good to adapt the sampling period (namely, the time interval between consecutive samples): if the response (or solution) changes fast, the sampling period is decreased, while if the changes in the response are slow, the sampling period can be increased. It is also necessary to keep the time-varying sampling period within appropriate domains so as to respect stability and bandwidth constraints and, in some cases, to keep the (time-varying, adapted) sampling period around some appropriate nominal (constant) period suitable for the concrete application.
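A toy sketch of such an adaptive rule is below; this specific formula is my own illustrative assumption, not a standard algorithm from the control literature, but it shows the shape of the idea: shrink the period when the signal changes fast, and clamp to a stability/bandwidth window.

```python
def adapt_sampling_period(rate_of_change, T_nominal, T_min, T_max, k=1.0):
    """Crude adaptive-sampling rule (an illustrative sketch): shrink the
    sampling period when the signal changes fast, stay near the nominal
    period when it is slow, and clamp to [T_min, T_max] to respect
    stability and bandwidth constraints."""
    T = T_nominal / (1.0 + k * abs(rate_of_change))
    return min(max(T, T_min), T_max)

# Fast change -> period clamped to the lower bound; slow change -> nominal.
print(adapt_sampling_period(10.0, T_nominal=0.1, T_min=0.01, T_max=0.5))  # 0.01
print(adapt_sampling_period(0.0, T_nominal=0.1, T_min=0.01, T_max=0.5))   # 0.1
```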