First, the size of the search space grows exponentially. For each dimension d, there are two extremes, so there are 2^d "corners" (orthants) of the search space. For d = 1000, that is 2^1000 ~= 10^301 corners/quadrants. With a typical function evaluation budget of 10,000*d = 10^7, there aren't even enough function evaluations to place a single sample in each quadrant.
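As a quick sanity check on those numbers, here is a small Python sketch (the 10,000*d budget is just the convention mentioned above, not a fixed standard) showing how fast the corner count outruns the budget:

```python
import math

# Compare the 2^d corners/orthants of a d-dimensional box to a 10,000*d budget.
for d in (10, 30, 100, 1000):
    corners_exp = d * math.log10(2)   # 2^d == 10^(d * log10(2))
    budget = 10_000 * d
    print(f"d={d:5d}  corners ~ 10^{corners_exp:.0f}  budget = {budget:.1e}")
```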
Second, there are scaling issues. Assuming population size also grows with dimensionality, what is the convergence rate as a function of population size? General experimental evidence suggests the required effort grows super-linearly, but are there any more precise measurements of this? As a working estimate, call it quadratic to cubic growth.
Third, there are function evaluation issues. Non-separable benchmark functions that use rotation matrices cost O(d^2) per evaluation because of the matrix-vector product. This seems like a minimum for any interesting real-world problem.
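For concreteness, here is a minimal sketch of a rotated (non-separable) ellipsoid benchmark; the function name and the choice of ellipsoid are illustrative, but the O(d^2) matrix-vector product is the point:

```python
import numpy as np

def rotated_ellipsoid(x, R, cond=1e6):
    """Illustrative non-separable benchmark: f(x) = sum_i w_i * z_i^2 with z = R @ x.

    The rotation R couples the variables; the matrix-vector product R @ x
    makes every evaluation cost O(d^2).
    """
    z = R @ x                                   # O(d^2) -- dominates the evaluation
    d = len(x)
    weights = cond ** (np.arange(d) / max(d - 1, 1))
    return float(np.sum(weights * z * z))

# Usage: a random orthogonal rotation via QR decomposition (itself O(d^3), but done once).
d = 1000
R, _ = np.linalg.qr(np.random.standard_normal((d, d)))
x = np.random.uniform(-5, 5, d)
print(rotated_ellipsoid(x, R))
```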
The last two issues together mean that, for population-based metaheuristics, total cost grows like d^4 or worse, so of course population sizes and generation counts are kept small. However, these limits are imposed while the search space itself is growing exponentially. To get any convergence at all within the budget, we accept premature convergence in these large, complex search spaces.
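A back-of-the-envelope cost model under the assumptions above (population ~ d, generations growing super-linearly with it, O(d^2) per evaluation); the exponents are the rough estimates from the text, not measured values:

```python
# Rough cost model using the exponents assumed above (not measured values):
#   population size     ~ d
#   generations         ~ d .. d^2   (the "quadratic to cubic" estimate, combined with population)
#   cost per evaluation ~ d^2        (rotated, non-separable benchmark)
def total_cost(d, gen_exponent=1.0):
    population = d
    generations = d ** gen_exponent
    eval_cost = d ** 2
    return population * generations * eval_cost   # ~ d^4 .. d^5

for d in (100, 1000):
    print(d, f"{total_cost(d, 1.0):.1e}", f"{total_cost(d, 2.0):.1e}")
```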
Is there a conceptual limit on how much we can actually explore? Does a meaningful search require on the order of 2^d function evaluations (FE)? How much greater than 2^d does FE have to be to achieve a meaningful search/sampling of the space?