In a Bayesian network, what is the common model performance metric (is it Kullback-Leibler divergence)?
In a Bayesian network, is it advisable to use (non-ordered) categorical variables in their original categorical form or in one-hot encoded form?
I am preparing a simple report on Bayesian network results. What is commonly included in BN results reports? The DAG, the conditional probability table for each node, and the Kullback-Leibler divergence result?
Do random forest and XGBoost require scaling or standardising the data?
Does a Bayesian network require scaling or standardising the data?
Shaima Mohammed Alghamdi

(1) The KL divergence score is used to evaluate how close the learned BN parameters are to the true parameters, which assumes you already know what the true parameters are. For the BN structure (DAG), you can use metrics such as the Hamming distance to evaluate correctness, assuming the ground-truth DAG is known. For the overall predictive performance of the BN model, you can use cross-validation (see the sketches below).

(2) I am not sure what type of encoding you have in mind, but categorical variables can be used as-is in a BN.

(3) It depends on the situation, but the DAG and the CPTs, which together constitute the BN model itself, should be included. If the full CPTs are too large, a few interesting probabilities can be reported instead, and a cross-validation score can be added to show the predictive capacity of your model. Some explanation of the relationships depicted in the DAG is also useful to help the audience understand the model.

(4) & (5) I don't think it hurts to scale or standardize the data before training any algorithm in most cases.
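As an illustration of point (1), assuming the true and learned parameters are available as discrete probability vectors and the two DAGs as edge sets, a minimal Python sketch of the two metrics could look like this (the function names and example numbers are purely illustrative, not tied to any particular BN package):

```python
import numpy as np

def kl_divergence(p_true, p_learned):
    """KL divergence D(p_true || p_learned) between two discrete distributions
    given as probability vectors that each sum to 1."""
    p = np.asarray(p_true, dtype=float)
    q = np.asarray(p_learned, dtype=float)
    mask = p > 0  # terms with p(x) = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def hamming_distance(edges_true, edges_learned):
    """A simple structural distance: the number of directed edges present in one
    DAG but not the other. Each DAG is a set of (parent, child) tuples."""
    return len(set(edges_true) ^ set(edges_learned))

# Illustrative example: one CPT column and two small three-node DAGs.
print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))   # ~0.027 nats
print(hamming_distance({("A", "B"), ("B", "C")},
                       {("A", "B"), ("A", "C")}))          # 2
```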
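For the cross-validation score mentioned in (1) and (3), the exact calls depend on the BN package you use, so the sketch below only shows a generic k-fold loop built on scikit-learn; `fit_bn` and `predict_bn` are hypothetical placeholders standing in for whatever structure/parameter learning and inference routines your library provides:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate_bn(data, target_column, fit_bn, predict_bn, n_splits=5, seed=0):
    """Generic k-fold cross-validation for a discrete BN used as a classifier.

    `data` is a pandas DataFrame of categorical columns. `fit_bn(train_df)` and
    `predict_bn(model, test_df, target_column)` are hypothetical callables:
    plug in your BN library's learning and inference here.
    Returns the mean classification accuracy on the held-out target column."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in kf.split(data):
        train_df = data.iloc[train_idx]
        test_df = data.iloc[test_idx]
        model = fit_bn(train_df)  # learn structure and/or parameters on the fold
        predictions = predict_bn(model,
                                 test_df.drop(columns=[target_column]),
                                 target_column)
        accuracies.append(np.mean(predictions == test_df[target_column].to_numpy()))
    return float(np.mean(accuracies))
```

The resulting mean accuracy (or whichever score you prefer) is the kind of number that would go into the report next to the DAG and the CPTs.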