One of the problems with the K-S test is that, out of the box, it is often overly sensitive to departures from the specified distribution for many typical purposes, so it is less useful than it might appear unless your sample is quite large and the distributional assumptions are very closely met - more closely met than you would otherwise need to justify a given model or inference framework. Having said that, an alternative is straightforward: reconstruct an ideal distribution from your parameters and calculate how many of your total observations you would expect to fall in each of the 10 deciles of that ideal distribution. Make a 2 x 10 frequency table, where one row holds those expected counts and the other holds the observed frequency from your sample falling into each of the ideal deciles, then do a chi-squared test on that table of counts. If the p-value of the chi-squared statistic (with 9 d.o.f.) is less than 0.05, you have strong evidence that the data do not follow your ideal or target distribution. Beware, though, that this test is also quite sensitive - often too sensitive for the typical use it is put to. Done this way, you can reduce the sensitivity by collapsing deciles into, say, quintiles (5 bins instead of 10) and running a new chi-squared test.
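A minimal sketch of this decile-binning chi-squared check in Python; the Weibull parameters, sample, and sample size here are all hypothetical stand-ins, not anyone's actual data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = stats.weibull_min.rvs(1.5, scale=100.0, size=200, random_state=rng)  # hypothetical data

# the "ideal" distribution reconstructed from the (hypothetical) parameters
ideal = stats.weibull_min(1.5, scale=100.0)

# decile edges of the ideal distribution: ppf(0) = 0, ppf(1) = inf
edges = ideal.ppf(np.linspace(0.0, 1.0, 11))

observed, _ = np.histogram(sample, bins=edges)
expected = np.full(10, len(sample) / 10.0)  # n/10 expected in each decile

chi2, p = stats.chisquare(observed, f_exp=expected)  # 10 - 1 = 9 d.o.f.
print(f"deciles:   chi2={chi2:.2f}, p={p:.3f}")

# less sensitive version: collapse the 10 deciles into 5 quintiles
obs5 = observed.reshape(5, 2).sum(axis=1)
chi2_5, p_5 = stats.chisquare(obs5, f_exp=np.full(5, len(sample) / 5.0))
print(f"quintiles: chi2={chi2_5:.2f}, p={p_5:.3f}")
```

Note that if the parameters were themselves estimated from the same sample, the nominal 9 d.o.f. overstates the degrees of freedom somewhat.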
In my research we use samples from fatigue tests on materials. Sample size is limited because these tests have a high cost, which is why we only have n = 20. Searching the literature, I found:
Weibull Models / D.N. Prabhakar Murthy, Min Xie, Renyan Jiang (Wiley Series in Probability and Statistics), p. 91:
"The χ² test has the advantages of being easy to apply and being applicable even when parameters are unknown. However, it is not a very powerful test and is not of much use in small or sometimes even modest size samples."
Given that, do you think it would be better to use Kolmogorov-Smirnov?
Your question seems to be a combination of two questions:
1) Given two parameter sets, which one is better with respect to the KS test statistic? The first one gives a statistic of 0.0796, the second 0.0921; this is reflected in the higher p-value for the first (0.891) compared to the second (0.765) under the KS distribution, although the p-values are not actually needed for this comparison.
In this sense the first parameter set is the better one. This looks a bit puzzling at first, as the second set is the ML estimator, but keep in mind that different optimization criteria will lead to different parameter estimations.
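To illustrate point 1): with fully specified (not fitted) parameters, the comparison is just two KS statistics computed against the same data. The sample and the two (shape, scale) parameter sets below are hypothetical, not the data from the question:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.weibull_min.rvs(1.3, scale=200.0, size=20, random_state=rng)  # hypothetical sample

# two hypothetical candidate (shape, scale) parameter sets
for c, scale in [(1.3, 200.0), (1.6, 170.0)]:
    res = stats.kstest(data, stats.weibull_min(c, scale=scale).cdf)
    print(f"shape={c}, scale={scale}: D={res.statistic:.4f}, p={res.pvalue:.3f}")
```

The set with the smaller D is the better one in the KS sense; the printed p-values are only valid if the parameters were not estimated from this same sample.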
2) How to do a KS test, if the parameters are fitted to the distribution? The question addressed is whether the data is coming from a Weibull distribution. If one uses fitted parameter values, the standard p-values of the KS distribution do not apply and one needs to use simulations. The result will depend in principle on the parameter estimation procedure used (MLE with bias correction in the paper mentioned above).
Using LcKS (Lilliefors corrected KS) from the "KScorrect" package in R, one gets a p-value of 0.35, so the hypothesis that the data is coming from a Weibull distribution is not rejected.
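For readers without R, the Lilliefors-type correction can be sketched as a parametric bootstrap in Python. The data below are hypothetical and `n_sim` is kept small for speed; KScorrect's own simulation settings may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.weibull_min.rvs(1.4, scale=300.0, size=20, random_state=rng)  # hypothetical n=20 sample

def fitted_ks(x):
    """Fit shape and scale by MLE (location fixed at 0); return KS statistic and fit."""
    c, loc, scale = stats.weibull_min.fit(x, floc=0)
    d = stats.kstest(x, stats.weibull_min(c, loc, scale).cdf).statistic
    return d, c, scale

d_obs, c_hat, scale_hat = fitted_ks(data)

# parametric bootstrap: simulate from the fitted model and refit each replicate,
# so the null distribution of D accounts for the parameter estimation step
n_sim = 300
d_sim = np.array([
    fitted_ks(stats.weibull_min.rvs(c_hat, scale=scale_hat, size=len(data),
                                    random_state=rng))[0]
    for _ in range(n_sim)
])

p_value = (d_sim >= d_obs).mean()
print(f"D={d_obs:.4f}, simulated p={p_value:.3f}")
```

Refitting inside every replicate is the essential step; using the standard KS table with fitted parameters would make the test far too conservative.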
A final comment: given the uncertainties of the estimated parameters, I am not sure whether the difference between the two is of any relevance.
From my reading, I observed that to compare estimators of the same probability distribution, it is recommended to hold alpha and beta constant, run a Monte Carlo simulation, and analyze the bias and variance (MSE) of the estimators.
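That Monte Carlo comparison can be sketched as follows; the "true" (alpha, beta) values, sample size, and replicate count are all hypothetical choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true = np.array([1.5, 100.0])  # hypothetical fixed (shape alpha, scale beta)
n, reps = 20, 200

# re-estimate (shape, scale) by MLE on each simulated sample of size n
est = np.array([
    stats.weibull_min.fit(
        stats.weibull_min.rvs(true[0], scale=true[1], size=n, random_state=rng),
        floc=0,
    )[::2]  # fit returns (shape, loc, scale); keep (shape, scale)
    for _ in range(reps)
])

bias = est.mean(axis=0) - true
mse = ((est - true) ** 2).mean(axis=0)
print(f"bias (shape, scale): {bias}")
print(f"MSE  (shape, scale): {mse}")
```

The same loop can be repeated with a different estimator (e.g. method of moments) on identical simulated samples to compare bias and MSE head to head.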
Seeing his last comment, I see that it is not necessary to use Kolmogorov-Smirnov to compare fits of the same probability distribution when the estimates are this close.
The purpose of a goodness-of-fit comparison is to select the distribution or model with the smallest statistic value, which implies that this distribution fits the data better than the other competing distributions.