For example, we generally choose the swarm size in a trial-and-error fashion for a basic PSO. But if we move to higher problem dimensions, how can we find the optimal swarm size for that application, other than by judging from the output?
Most likely, there is no precise and universal answer. A minimal population size can be derived for classical Genetic Algorithms (see my paper "Biology, Physics, ..."), but PSO is different: there are no generations here, just calculation steps. Also, the danger of so-called premature convergence seems less important in PSO than in GA. Therefore the optimal population would be the one for which the number of steps multiplied by the population size squared is minimal. Why squared? Because you need to compute N(N-1)/2 ~ O(N²) interactions between agents (for N > 1, of course) to complete each step. This suggests keeping the population as small as possible. So the question is rather how small the population can be and still be called a swarm. I think that n < N < 2n, where n is the search-space dimension, is a safe choice. For n < 10 I would fix N at some value, say a dozen or more. However, I cannot offer any better foundation for this rather obvious hint.
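To make the quadratic-cost argument above concrete, here is a small numerical sketch. It is a sketch only: it assumes, as the answer does, that every pair of agents interacts once per step, and the sample values of N are arbitrary:

```python
def step_cost(N):
    """Pairwise interactions per step for a swarm of N agents: N*(N-1)/2."""
    return N * (N - 1) // 2

for N in (5, 10, 20, 40):
    print(N, step_cost(N))  # 10, 45, 190, 780 -- roughly 4x per doubling of N
```

At a fixed number of steps, doubling the swarm roughly quadruples the per-step work, which is why the answer argues for the smallest swarm that still behaves like one.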
We've found that PSO performs quite well with a population size held constant, regardless of the problem dimension. Also, because convergence times grow, it becomes computationally prohibitive to use a population size larger than the problem dimension for n > 50.
Unfortunately, there are no universal rules for setting the size, and only experimental results give an idea of the algorithm's behaviour, since it is problem dependent. I would recommend beginning with a small number of particles and then simply increasing the size and analysing the results (see the sketch after the reference below).
I also refer you to this paper:
de Melo, Vinícius Veloso; Botazzo Delbem, Alexandre Cláudio (2012). Investigating Smart Sampling as a population initialization method for Differential Evolution in continuous problems.
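Here is a minimal sketch of the "start small and grow" experiment recommended above. This is not any particular published variant: it is a bare-bones global-best PSO with assumed parameters (w = 0.7, c1 = c2 = 1.5), and the sphere function stands in for your objective. The sweep holds the total evaluation budget roughly fixed so that different swarm sizes are compared fairly:

```python
import numpy as np

def sphere(x):
    """Toy objective; replace with your own function."""
    return float(np.sum(x ** 2))

def pso(f, dim, swarm_size, steps, seed=0):
    """Bare-bones global-best PSO; returns the best value found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (swarm_size, dim))   # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()        # global best position
    for _ in range(steps):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return float(pbest_val.min())

# Sweep the swarm size at a fixed budget of ~4000 function evaluations.
dim = 10
for n_particles in (5, 10, 20, 40, 80):
    best = pso(sphere, dim, n_particles, steps=4000 // n_particles)
    print(f"swarm={n_particles:3d}  best={best:.3e}")
```

Fixing the evaluation budget (swarm_size × steps ≈ constant) is the fair way to run this sweep; otherwise larger swarms win simply because they evaluate more points.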
The question is really whether your problem has computationally expensive function evaluations or not. If so, try any of the parameter settings found to be effective for similar problems / setups. If not, use offline parameter tuning, and you'll probably see a significant performance improvement.
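As a sketch of what offline parameter tuning can look like: a brute-force grid with averaged seeded runs. Note that run_pso is a placeholder to be replaced by your own PSO, and the grid values are arbitrary assumptions:

```python
import itertools
import random
import statistics

def run_pso(swarm_size, w, c, seed):
    """Placeholder: swap in a real PSO run returning the best value found.
    (A random stand-in here, only to keep the sketch runnable.)"""
    random.seed(hash((swarm_size, w, c, seed)))
    return random.random()

grid = {
    "swarm_size": [10, 20, 40],
    "inertia":    [0.4, 0.7, 0.9],
    "c1_c2":      [1.5, 2.0],
}
best_setting, best_score = None, float("inf")
for swarm, w, c in itertools.product(*grid.values()):
    # Average a few seeds so one lucky run does not pick the setting.
    score = statistics.mean(run_pso(swarm, w, c, seed=s) for s in range(5))
    if score < best_score:
        best_setting, best_score = (swarm, w, c), score
print("best setting:", best_setting, "mean score:", best_score)
```

This is only worthwhile when function evaluations are cheap, as the answer says; with expensive evaluations the tuning loop itself becomes the bottleneck.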
It is a rule of thumb with heuristic optimization algorithms to start with 10 times the number of search variables in the case of small problem dimensions (I would say n < 10). However, it is a design problem that depends on the nature, nonlinearity, and complexity of your objective function and search space. It also depends on the inherent features of the particular PSO variant you use. In short, nothing but trial and error, starting with a swarm size of 10*n.
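Encoding the rough heuristics scattered through this thread as a single starting-point helper (the thresholds are judgment calls taken from the answers above, not established rules):

```python
def initial_swarm_size(n):
    """First guess at a PSO swarm size for an n-dimensional problem."""
    if n < 10:
        return max(12, 10 * n)  # 10*n rule of thumb, but at least a dozen
    if n <= 50:
        return 2 * n            # upper end of the n < N < 2n range suggested above
    return n                    # beyond n ~ 50, swarms larger than n get prohibitive
```

Treat the result as the starting point for the trial-and-error sweep described earlier, not as an optimum.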