That's correct. In any measurement or numerical evaluation there is always uncertainty associated with the result. This uncertainty arises from the variability of the measurement process or from approximating the data with a mathematical model, as in a least squares fit.
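For instance, here is a minimal sketch (synthetic data and numpy assumed, not part of the original discussion) of how a least squares fit carries an uncertainty for each estimated coefficient:

```python
# Illustrative sketch with made-up data: a least-squares fit returns not only
# coefficients but also a covariance matrix describing their uncertainty.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # noisy "measurements"

# cov=True returns the covariance matrix of the fitted coefficients
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
std_errs = np.sqrt(np.diag(cov))

print(f"slope     = {coeffs[0]:.3f} +/- {std_errs[0]:.3f}")
print(f"intercept = {coeffs[1]:.3f} +/- {std_errs[1]:.3f}")
```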
I think that "uncertainty of the estimate" is the more correct wording, since the uncertainty here relates to the result of the evaluation.
In this case "uncertainty" is used in a general (statistical) sense, not as "measurement uncertainty", so there is little sense in contrasting "uncertainty" with "confidence interval" as the heading does.
How to calculate the confidence interval for a specific probability distribution is a separate issue.
In any case, the confidence interval is an expression of uncertainty in estimation, so there is no need to set it against the uncertainty ("Uncertainty versus confidence intervals").
a) I agree that with "standard deviation" people typically(!) refer to the spread of data in a sample, and that this is not the same as the uncertainty of an estimate.
b) But just to keep it simple, I will assume all assumptions are met, the CLT holds, and so on, so that we may give an uncertainty, e.g. for the mean value (or the measurement of mu, to be more precise), i.e. the standard error of the mean. This uncertainty is often described as the "standard deviation of the sampling distribution" (which we do not have at hand itself, except in simulation studies, of course). Therefore, if the context is clear, a "standard deviation" may also be connected to uncertainty (see the short sketch after this list).
c) Now confidence intervals: since confidence intervals may incorporate the sampling distribution, and the sampling distribution in turn incorporates the standard deviation of the sample, there is at least an indirect relationship. But maybe this is what you already meant by "so there is no direct relationship between them". I would agree, but there IS a relationship.
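A minimal sketch of points b) and c), assuming the usual conditions hold; the sample values and the use of scipy are my own illustration, not from the discussion:

```python
# Sample SD (spread of the data) vs. standard error (uncertainty of the mean),
# and how the confidence interval uses the sample SD via the standard error.
import numpy as np
from scipy import stats

sample = np.array([9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4])  # made-up data
n = sample.size

sd = sample.std(ddof=1)       # spread of the data in the sample
sem = sd / np.sqrt(n)         # uncertainty of the estimated mean (standard error)

# 95% t-interval built from the standard error
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (sample.mean() - t_crit * sem, sample.mean() + t_crit * sem)

print(f"sample SD = {sd:.3f}, standard error = {sem:.3f}")
print(f"95% CI for mu: ({ci[0]:.3f}, {ci[1]:.3f})")
```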
The confidence interval does not reflect the uncertainty of a single numerical estimate but rather the uncertainty associated with the estimator as a random variable. An estimator is a mathematical function of the sample used to infer a population parameter. Since it depends on the sample, it is a random variable with its own probability distribution, which determines its variability. The confidence interval is constructed using the distribution of the estimator, not the distribution of a single point estimate. It measures the uncertainty in estimating the true population parameter and thus accounts for the variability of the estimator across different samples. Therefore, the uncertainty represented by the confidence interval stems from the distribution of the estimator, reflecting how the estimate would fluctuate with multiple samples.
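To make the "estimator as a random variable" point concrete, here is a small simulation sketch (an assumed normal population, with numbers chosen arbitrarily) that makes the sampling distribution of the mean visible:

```python
# Drawing many samples from an assumed population shows that the sample mean
# has its own distribution; the confidence interval is built on that distribution.
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, n, reps = 10.0, 1.0, 20, 10_000

# one estimate of the mean per simulated sample
means = rng.normal(true_mu, sigma, size=(reps, n)).mean(axis=1)

print(f"mean of the estimator's distribution:        {means.mean():.3f}")
print(f"SD of the estimator (empirical std. error):  {means.std(ddof=1):.3f}")
print(f"theoretical standard error sigma/sqrt(n):    {sigma / np.sqrt(n):.3f}")
```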
Certainly, the estimate is a number that represents our best approximation of reality. The difference between the true value and the obtained estimate is reflected in the uncertainty, which comes from the variability of the estimate across different samples. While the estimate itself is a fixed number, it only tells us how well it represents the true value when we pair it with an interval that accounts for the estimator's variability at a specified probability. This interval gives a likely range in which the true value lies, acknowledging that the estimator fluctuates across different samples.
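A hedged illustration of that last point, assuming normal data and a t-interval purely for the sake of the simulation: the point estimate moves from sample to sample, but the interval captures the true value at roughly the stated probability.

```python
# Coverage simulation: across repeated samples the point estimate fluctuates,
# while the 95% interval contains the true mean in about 95% of the samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mu, sigma, n, reps = 10.0, 1.0, 20, 5_000
t_crit = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mu, sigma, n)
    half = t_crit * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half) <= true_mu <= (sample.mean() + half)

print(f"empirical coverage of the nominal 95% interval: {covered / reps:.3f}")
```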