A possible explanation is that variance components can be correlated. A model Y = G + E implies var(Y) = var(G) + var(E) + 2*cov(G, E). Covariance between G and E is usually negligible, but the more random effects you have (and the more unbalanced the data is), the more likely it is that some covariance exists among your terms.
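A quick numeric sketch of that identity (the variances and covariance below are toy values, not taken from any real analysis): when G and E are correlated, the covariance term is exactly what separates var(Y) from the naive sum var(G) + var(E).

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate correlated G and E effects (hypothetical toy values).
n = 100_000
cov = np.array([[1.0, 0.3],
                [0.3, 0.5]])  # var(G)=1.0, var(E)=0.5, cov(G,E)=0.3
G, E = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
Y = G + E

# Sample version of var(Y) = var(G) + var(E) + 2*cov(G, E)
lhs = Y.var(ddof=1)
rhs = G.var(ddof=1) + E.var(ddof=1) + 2 * np.cov(G, E)[0, 1]
print(lhs, rhs)  # the two quantities match
```

With cov(G, E) = 0 the cross term vanishes and the usual additive decomposition holds.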
The use of kernels will shrink var(G) even further, leaving room for that variance to be captured by other terms. That is, by using a G matrix you condition the genetic term to capture the additive genetics only, since G expresses the linear relationships among individuals. For that reason, the way you define your kernel also has a large impact on your variance components. For example, a Gaussian kernel will almost certainly capture more variance than a linear kernel.
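To make the linear-vs-Gaussian contrast concrete, here is a sketch of both kernels built from a toy marker matrix. The marker data, the VanRaden-style scaling, and the median-distance bandwidth heuristic are all assumptions for illustration, not something prescribed by ASReml:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy marker matrix: 20 individuals x 50 SNPs coded 0/1/2 (hypothetical data).
M = rng.integers(0, 3, size=(20, 50)).astype(float)
p = M.mean(axis=0) / 2.0                 # allele frequencies
Z = M - 2 * p                            # centered markers

# Linear (VanRaden-type) kernel: expresses additive relationships only.
G_lin = Z @ Z.T / (2 * np.sum(p * (1 - p)))

# Gaussian kernel: similarity decays with squared distance, so it also
# absorbs non-additive resemblance and tends to capture more variance.
D2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=2)
theta = np.median(D2[D2 > 0])            # bandwidth (one common heuristic)
G_gauss = np.exp(-D2 / theta)

print(G_lin.shape, G_gauss.shape)
```

Fitting the same model with each kernel and comparing the resulting variance components is a direct way to see how much the kernel choice moves them.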
---
Another (possible but less likely) explanation is algorithm convergence. ASReml searches for the variance components that maximize the REML likelihood using the average-information algorithm - a Newton-type method that can easily get trapped in a local maximum (sub-optimal variance components that yield a better likelihood than any values in the neighborhood, but are not necessarily close to the true values).
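The local-maximum problem is easy to demonstrate on a toy 1-D surface (a hypothetical curve with two peaks, not a real REML likelihood): the same Newton iteration lands on a different "converged" answer depending on where it starts.

```python
# Toy objective with two maxima; Newton's method finds whichever
# peak is nearest to the starting value (illustration only).
f   = lambda x: -(x**2 - 1)**2 + 0.5 * x   # two maxima, one higher than the other
df  = lambda x: -4 * x * (x**2 - 1) + 0.5  # first derivative
d2f = lambda x: -12 * x**2 + 4             # second derivative

def newton(x, steps=50):
    for _ in range(steps):
        x = x - df(x) / d2f(x)             # Newton update on the gradient
    return x

left  = newton(-1.5)   # converges to the lower (sub-optimal) peak
right = newton(1.5)    # converges to the higher (global) peak
print(left, right, f(left) < f(right))
```

Restarting the optimizer from several different starting values for the variance components is the standard practical check for this.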
---
If you are confident your model is correct, without terms that saturate the model or confounded effects, then ASReml itself may be the problem. You can try R packages other than ASReml. Outside R, consider trying REMLF90.
---
If you are not interested in the variance components but rather in the breeding values, you can treat your random effects (other than the genetic one) as fixed, or use a predefined lambda (say 0.1 for the environmental components and 1 for the genetic component, which corresponds to assuming h2 ~ 0.5).
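A minimal sketch of what "predefined lambda" means in practice, using the standard mixed-model equations; the data, the identity relationship matrix K, and the design Z below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy setup (hypothetical): y = Zu + e, with u ~ N(0, K * var_g).
n = 30
K = np.eye(n)                 # identity stands in for a pedigree/genomic K
Z = np.eye(n)                 # one record per individual
y = rng.normal(size=n)

# Fixing lambda = var(e)/var(g) in advance (lambda = 1 implies h2 ~ 0.5)
# removes the variance-component search; breeding values come directly from
#   (Z'Z + lambda * K^{-1}) u_hat = Z'y
lam = 1.0
u_hat = np.linalg.solve(Z.T @ Z + lam * np.linalg.inv(K), Z.T @ y)

# With Z = K = I this reduces to pure shrinkage y / (1 + lambda):
print(np.allclose(u_hat, y / (1 + lam)))  # True
```

The ranking of u_hat (which is what selection decisions use) is often quite robust to moderate misspecification of lambda, which is why this shortcut is defensible when only breeding values are needed.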