I did not really follow your explanation, so I'll just try to answer the question as written:
"What is standard error of prediction from linear regression, with known SE for y-values?"
That reads to me as if you probably meant that you have the standard deviation of y, called sigma. If instead you had the standard error of, say, a mean, that would be a function of sample size. You did mention a sample size in your explanation, so I am not certain, but even then knowledge of sigma would still be required.
The estimated standard error of a prediction error is based on a sigma, but not the sigma of the population of y: it is based on the residuals. For weighted least squares (WLS) regression, the estimated sigma is the estimated standard deviation of the random factors of the estimated residuals.
So my thought is that you have confused the sigma for the y-value population with the sigma for the residuals of a regression; the latter is what helps you find the standard errors of the prediction errors for y given x.
For more on the case of WLS regression, especially for regression through the origin for simple regression cases, see
Thanks, Jim. Your article is informative, but my regression line does not go through the origin, the dependent variable is normally distributed (by the Shapiro-Wilk test), and its variance is constant (r(variance, mean) = +0.251, F(1,30) = 2.02, p = 0.165).
I need to know which of the 32 values of the dependent variable is significantly larger or smaller than the value predicted from regression on the independent variable, which is also normally distributed. These "off-line" values (if any) are for interesting varieties of barley. Naturally I shall use a Bonferroni correction to avoid excessive optimism!
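One common way to carry out that kind of screening (not necessarily what you have in mind) is via externally studentized residuals, where each point is judged against a fit that excludes it, compared to a Bonferroni-adjusted t threshold. A sketch with simulated data standing in for the barley values; the data, seed, and the deliberately shifted point are my invention:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 32
x = rng.normal(50, 10, n)
y = 3.0 + 0.8 * x + rng.normal(0, 2.0, n)
y[5] += 15.0                              # plant one deliberately "off-line" variety

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages (hat-matrix diagonal)

# Externally studentized residuals: delete-one residual variance s2_i
p = 2
s2 = resid @ resid / (n - p)
s2_i = ((n - p) * s2 - resid**2 / (1 - h)) / (n - p - 1)
t_i = resid / np.sqrt(s2_i * (1 - h))

# Bonferroni: two-sided alpha = 0.05 shared across n simultaneous tests
alpha = 0.05
crit = stats.t.ppf(1 - alpha / (2 * n), df=n - p - 1)
flagged = np.where(np.abs(t_i) > crit)[0]
print(flagged)
```

The Bonferroni division of alpha by n is exactly the "avoid excessive optimism" adjustment mentioned above; with 32 tests the per-point critical value is noticeably larger than the usual two-sided t value.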
Unlike in conventional methods, the variance of the dependent variable has not been calculated from S(y,x). I hope the problem is of interest: if needed I can send further details.
The standard error of prediction using simple linear regression has up to now been taken to be the residual standard deviation, on the basis that this was an estimate of the standard deviation of the "error process" which produced deviations of individual points away from the line.
The square of sigma for the estimated residuals is only part of the estimated variance of the prediction error. You still need the variance from the model. See page 5 in https://www.researchgate.net/publication/261586154_Using_Prediction-Oriented_Software_for_Survey_Estimation/stats. An expression for the estimated variance of the prediction error for linear regression is shown there, and attached here.
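A minimal numpy sketch of that point, with simulated data (variable names are mine, not from the linked paper): for ordinary least squares, the estimated variance of the prediction error at a new point x0 is s^2 * (1 + x0' (X'X)^-1 x0), i.e. the residual variance plus the variance contributed by the estimated model coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)                # estimated residual variance

x0 = np.array([1.0, 5.0])                   # predict at x = 5
XtX_inv = np.linalg.inv(X.T @ X)
var_mean = s2 * (x0 @ XtX_inv @ x0)         # variance of the fitted mean at x0 (model part only)
var_pred = s2 * (1.0 + x0 @ XtX_inv @ x0)   # variance of the prediction error for a NEW y at x0

se_mean, se_pred = np.sqrt(var_mean), np.sqrt(var_pred)
print(se_mean, np.sqrt(s2), se_pred)
```

Note that se_pred always exceeds the residual standard deviation sqrt(s2), which in turn exceeds se_mean near the center of the data: using the residual standard deviation alone understates the prediction SE.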
Back to the original question, it says "What is standard error of prediction from linear regression, with known SE for y-values?" I think that the main point to consider here is that the predictions are for y given an x-value (in the simplest case). The unconditional distribution of y, for the y-data, is different from the conditional distribution of y|x (i.e., y given x). So you don't need to concentrate on the "...SE for y-values." That is for the unconditional distribution of the y-data. Predictions are for y conditioned on x. That has a different distribution. Often that distribution has larger sigma associated with larger predictions. (Conditioned on x is the simplest case. We are really looking at y or estimated residuals conditioned on the predicted y, as explained later.)
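The gap between the two distributions is easy to see in a simulation (my own toy example, not from the question): the unconditional SD of the y-data reflects the spread induced by x as well, while the conditional spread of y given x is just the residual sigma.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, n)   # conditional sigma of y|x is 1.0

sd_unconditional = y.std()                  # spread of the y-data, ignoring x

# Conditional spread: what remains after removing the dependence on x
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
sd_conditional = (y - X @ beta).std()

print(sd_unconditional, sd_conditional)
```

Here the unconditional SD is several times the conditional one, so an "SE for y-values" computed from the raw y-data says little about prediction error for y given x.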
I worked with a great deal of data whose distributions were highly skewed, both for x and for y. But the conditional distribution of y|x is not the same. The estimated residuals are of interest here. For a given x, the sigma of the estimated residuals shows how y varies for that x, under the given estimated model. You still need to account for the variance from the model's regression coefficients. The estimated residuals generally have larger sigma associated with larger predicted y, but you can factor the estimated residuals into a random factor, and a nonrandom factor which is used to arrive at the regression weights. The random factor often has a nearly 'normal' distribution, though this is not crucial. However, please note that highly skewed data can have a linear relationship with nearly 'normally' distributed random factors of estimated residuals. Thus the conditional and unconditional distributions of y can be extremely different.
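A sketch of that factoring, using simulated data where the error variance grows with x (the variance model, weights, and names are all my assumptions): dividing the estimated residuals by the nonrandom factor sqrt(x) leaves a random factor with roughly constant spread.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.uniform(1, 100, n)
eps = rng.normal(0, 1.0, n)            # the "random factor"
y = 5.0 * x + np.sqrt(x) * eps         # error sigma grows with x (and with predicted y)

# WLS ratio estimator through the origin, weights = 1 / error variance = 1/x
w = 1.0 / x
b_wls = np.sum(w * x * y) / np.sum(w * x * x)

resid = y - b_wls * x
random_factor = resid / np.sqrt(x)     # divide out the nonrandom factor sqrt(x)

# Raw residuals spread out as x grows; the random factor does not
lo, hi = x < 25, x > 75
print(resid[hi].std() / resid[lo].std())                   # well above 1
print(random_factor[hi].std() / random_factor[lo].std())   # near 1
```

This is the sense in which the WLS "estimated sigma" describes the random factor of the residuals rather than the residuals themselves.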
Actually, one should only look at estimated residuals given predicted y, not x, unless you have a ratio estimator, meaning a simple regression with no intercept term. (You should only use an intercept term, as with any additional predictor (independent variable), when justified. If you would expect y to be zero when the independent variables are all zero, do not use an intercept term.) In a graphical residual analysis, you plot predicted y on the x-axis, and either y or the estimated residuals on the y-axis. If you have a ratio estimator, where the form is y = bx + e, then you could put x on the x-axis, because putting the predicted y, i.e. bx, on the x-axis will not change the analysis, as b is a constant.
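A small numeric check of that last point, with simulated data of my own: for a ratio estimator the predicted y is bx, so (for b > 0) x and predicted y order the points identically, and either can serve as the horizontal axis of the residual plot.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(1, 10, n)
y = 3.0 * x + rng.normal(0, 1.0, n)

# Ratio estimator (no intercept): y = b*x + e
b = np.sum(x * y) / np.sum(x * x)
y_pred = b * x
resid = y - y_pred

# Since y_pred = b*x with b a positive constant, sorting by x
# and sorting by predicted y arrange the points the same way,
# so the residual plot looks the same against either axis.
same_order = np.array_equal(np.argsort(x), np.argsort(y_pred))
print(same_order)
```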
.....
So, the estimated standard error for the y-values is not the same as the square root of the estimated variance of the prediction error. I think the latter is what was really wanted in the original question, not the former.