I mean computing the maximum likelihood estimates of the parameters of a new distribution, and then the bias and MSE of those estimates, after generating random samples by applying the new distribution's quantile function to uniform random numbers.
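For concreteness, a rough sketch of that procedure in Python, with a Weibull distribution standing in for the "new" distribution (the shape/scale values, the sample size of 35, and the 100 replicates are arbitrary choices for illustration, not anything fixed by the question):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
true_shape, true_scale = 1.5, 2.0   # assumed "true" parameter values
n, n_reps = 35, 100                 # sample size and number of replicates

def quantile(u, shape, scale):
    # Weibull quantile function: inverse-transform sampling from U(0,1)
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape)

def neg_loglik(theta, x):
    # negative Weibull log-likelihood, to be minimized
    shape, scale = theta
    if shape <= 0 or scale <= 0:
        return np.inf
    z = x / scale
    return -np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

estimates = []
for _ in range(n_reps):
    u = rng.uniform(size=n)                  # uniform random samples
    x = quantile(u, true_shape, true_scale)  # push through the quantile function
    fit = minimize(neg_loglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
    estimates.append(fit.x)

estimates = np.array(estimates)
truth = np.array([true_shape, true_scale])
print("bias:", estimates.mean(axis=0) - truth)
print("MSE: ", ((estimates - truth) ** 2).mean(axis=0))
```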
Well, the term "quantile function" defines the cumulative distribution curve, doesn't it? Then you have created (e.g.) 100 samples from this distribution, each with n = (e.g.) 35 simulated observations. This gives you 100 simulated studies with their means and SDs [if the cumulative curve is defined by spaced quantiles, such as ... q40%, q50% (the median), q60%, ..., then a little linear interpolation is needed]. You know the true mean etc. [again perhaps with a little imprecision due to need to interpolate]. This define the average deviation in the mean points across the 100 sample, and the average squared deviation. This is so simple that I still believe that I have misunderstood you question!?
NB: the words "maximum likelihood" and "parameter" don't make sense here, because you are not starting out with a parameterized MODEL but with a single known distribution, specified in terms of its quantiles. Again, I may have misunderstood your problem.
If I understand the question, you performed a simulation with a certain number of replicates and various sample sizes, trying to evaluate the performance of the maximum likelihood method for estimating the parameters of your new distribution, right?
The whole idea of estimation (including maximum likelihood) rests on having a model, i.e., a FAMILY of distributions indexed by one or more parameters over a parameter space. If you have only ONE SINGLE specified distribution on your agenda, there is nothing to estimate: there is no maximum likelihood formula to derive and no maximum likelihood estimator to be characterized with respect to bias and MSE.
For this reason, the question as raised is incomplete and requires some additional information to be answerable.
So you must be assuming a model, perhaps simply a shift family in which a single parameter specifies the location.
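For instance, a minimal sketch of such a location family, assuming (arbitrarily) a standard logistic baseline shifted by a single location parameter theta, which is then estimated by maximum likelihood and assessed for bias and MSE:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
true_theta, n, n_reps = 5.0, 35, 100

def neg_loglik(theta, x):
    # standard logistic log-density, shifted by the location parameter theta
    z = x - theta
    return -np.sum(-z - 2.0 * np.log1p(np.exp(-z)))

estimates = []
for _ in range(n_reps):
    u = rng.uniform(size=n)
    x = true_theta + np.log(u / (1.0 - u))   # logistic quantile function, shifted
    fit = minimize_scalar(neg_loglik, args=(x,), bounds=(0.0, 10.0), method="bounded")
    estimates.append(fit.x)

estimates = np.array(estimates)
print("bias:", estimates.mean() - true_theta)
print("MSE: ", ((estimates - true_theta) ** 2).mean())
```

The same template works for any assumed family: only the quantile function used for simulation and the log-likelihood being maximized need to change.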