On bias - I do not see how you can do this, as bias is the average difference between the true parameter and its estimate, and unless you have simulated the data you will not know the true value.
The same goes for the variance of an estimator - you need to know the true value being estimated.
If you are doing simulations, then material on the properties of estimators will explain this - the mean squared error (MSE) combines bias and precision (variance) into one overall measure.
see http://www.ms.uky.edu/~mai/sta321/mse.pdf
https://en.wikipedia.org/wiki/Mean_squared_error
But this is not a thing you can do with just a single fit to real observed data.
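The simulation route described above can be sketched as follows. This is a minimal illustration with made-up settings (normal data, true variance 4, sample size 10): because the true parameter is known by construction, bias, variance, and MSE of an estimator are all computable, here for the variance estimator that divides by n versus the unbiased one that divides by n-1.

```python
import numpy as np

# Hypothetical simulation setup: true variance is known, so bias is computable.
rng = np.random.default_rng(0)
true_var = 4.0
n, reps = 10, 20000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
est_biased = samples.var(axis=1, ddof=0)    # divides by n (biased)
est_unbiased = samples.var(axis=1, ddof=1)  # divides by n-1 (unbiased)

for name, est in [("ddof=0", est_biased), ("ddof=1", est_unbiased)]:
    bias = est.mean() - true_var            # only computable because true_var is known
    variance = est.var()                    # spread of the estimator itself
    mse = np.mean((est - true_var) ** 2)    # MSE = bias^2 + variance
    print(f"{name}: bias={bias:+.3f}  var={variance:.3f}  mse={mse:.3f}")
```

With a single fit to real data none of these quantities is available, which is the point made above.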
Here is the abstract, which will give you some ideas:
ABSTRACT
One of the most common questions about logistic regression is "How do I know if my model fits the data?" There are many approaches to answering this question, but they generally fall into two categories: measures of predictive power (like R-square) and goodness of fit tests (like the Pearson chi-square). This presentation looks first at R-square measures, arguing that the optional R-squares reported by PROC LOGISTIC might not be optimal. Measures proposed by McFadden and Tjur appear to be more attractive. As for goodness of fit, the popular Hosmer and Lemeshow test is shown to have some serious problems. Several alternatives are considered.
Article Mallow's Cp for Selecting Best Performing Logistic Regression Subsets
Have a look at the following software package, e.g., "MedCalc". It includes the statistics of a logistic regression model. Don't reinvent the wheel - I think the software offers what you need. The package was developed for application in biomedical research, biology, and the life sciences in general.
See the manual at: https://www.medcalc.org/download/medcalcmanual.pdf
The output of Medcalc for a logistic regression model is:
Sample size and cases with negative and positive outcome
First the program gives the sample size and the number and proportion of cases with a negative (Y=0) and a positive (Y=1) outcome.
Overall model fit.
The null model -2 Log Likelihood is given by -2 * ln(L0) where L0 is the likelihood of obtaining the observations if the independent variables had no effect on the outcome.
The full model -2 Log Likelihood is given by -2 * ln(L) where L is the likelihood of obtaining the observations with all independent variables incorporated in the model.
The difference of these two yields a chi-squared statistic, which is a measure of how well the independent variables predict the outcome (dependent) variable.
If the P-value for the overall model fit statistic is less than the conventional 0.05, then there is evidence that at least one of the independent variables contributes to the prediction of the outcome.
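The likelihood-ratio test just described can be sketched in a few lines. This is an illustration on simulated data (all names and values are made up), fitting the null and full logistic models by direct maximization of the log-likelihood rather than with any particular package:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

# Simulated data with one predictor (hypothetical true coefficients).
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)

def neg_log_lik(beta, X, y):
    # Negative Bernoulli/logit log-likelihood; logaddexp is numerically stable.
    eta = X @ beta
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

X_full = np.column_stack([np.ones(n), x])   # intercept + predictor
X_null = np.ones((n, 1))                    # intercept only

ll_full = -minimize(neg_log_lik, np.zeros(2), args=(X_full, y)).fun
ll_null = -minimize(neg_log_lik, np.zeros(1), args=(X_null, y)).fun

# Difference of the two -2 log L values, compared to chi-squared
# with df = number of added predictors.
lr_stat = (-2 * ll_null) - (-2 * ll_full)
p_value = chi2.sf(lr_stat, df=1)
print(f"LR chi-square = {lr_stat:.2f}, P = {p_value:.4g}")
```

A small P-value here says only that at least one predictor helps, not that the model fits well overall.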
Cox & Snell R2 and Nagelkerke R2 are other goodness-of-fit measures known as pseudo R-squareds. Note that Cox & Snell's pseudo R-squared has a maximum value that is not 1. Nagelkerke's R2 adjusts Cox & Snell's so that the range of possible values extends to 1.
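Both pseudo R-squareds follow directly from the two log-likelihoods described above. A minimal sketch, where the numeric inputs (ll_null, ll_full, n) are made-up placeholders for an actual fit:

```python
import numpy as np

def pseudo_r2(ll_null, ll_full, n):
    # Cox & Snell: 1 - exp(2 (lnL0 - lnL) / n); its upper bound is below 1.
    cox_snell = 1.0 - np.exp(2.0 * (ll_null - ll_full) / n)
    max_cs = 1.0 - np.exp(2.0 * ll_null / n)
    # Nagelkerke rescales Cox & Snell by that upper bound so 1 is reachable.
    nagelkerke = cox_snell / max_cs
    return cox_snell, nagelkerke

# Hypothetical log-likelihoods from a fit on n = 200 cases.
cs, nk = pseudo_r2(ll_null=-135.0, ll_full=-110.0, n=200)
print(f"Cox & Snell R2 = {cs:.3f}, Nagelkerke R2 = {nk:.3f}")
```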
There is a probit regression model as well (for dose-response analysis). The probit regression procedure fits a probit sigmoid dose-response curve and calculates values (with 95% CI) of the dose variable that correspond to a series of probabilities. For example, the ED50 (median effective dose) or LD50 (median lethal dose) is the value corresponding to a probability of 0.50, and the limit of detection (CLSI, 2012) is the value corresponding to a probability of 0.95.
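Once a probit curve probability = Phi(a + b * dose) has been fitted, those dose values are obtained by inverting the probit link. A short sketch with hypothetical fitted coefficients (a and b below are made up, not MedCalc output):

```python
from scipy.stats import norm

# Hypothetical fitted probit coefficients: P(response) = Phi(a + b * dose).
a, b = -2.0, 0.8

def dose_for_prob(p):
    # Invert the probit link: dose such that Phi(a + b * dose) = p.
    return (norm.ppf(p) - a) / b

ed50 = dose_for_prob(0.50)   # median effective dose
lod = dose_for_prob(0.95)    # limit-of-detection convention described above
print(f"ED50 = {ed50:.2f}, dose at P=0.95 = {lod:.2f}")
```

Confidence intervals for these doses require the covariance of the fitted coefficients (e.g., via Fieller's method), which the package computes for you.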