For a classification problem, consider two steps: training and testing.

Assume that a Bayesian network classifier has to be designed from data.

At first, a uniform distribution is assigned to each node, because there is no prior information. Then, in the training step, the parameters of each node are determined by maximum likelihood estimation.
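
For concreteness, here is a minimal sketch of what I mean by the training step, assuming the simplest Bayesian network classifier (a discrete naive Bayes, where every feature depends only on the class node); the function name `train_mle` and the variable names are only illustrative:

```python
import numpy as np
from collections import defaultdict

def train_mle(X, y):
    """Maximum likelihood estimation for a discrete naive Bayes model:
    the class prior P(C) and each conditional P(X_j | C) are estimated
    as relative frequencies in the training data."""
    classes = np.unique(y)
    _, n_features = X.shape
    prior = {c: np.mean(y == c) for c in classes}  # P(C = c)
    cond = defaultdict(dict)  # cond[c][j] maps value v -> P(X_j = v | C = c)
    for c in classes:
        Xc = X[y == c]  # training rows belonging to class c
        for j in range(n_features):
            values, counts = np.unique(Xc[:, j], return_counts=True)
            cond[c][j] = dict(zip(values, counts / counts.sum()))
    return prior, cond
```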

Is it correct to say that the learned parameters define the posterior distribution?

The next step is testing: P(Class | evidence) is calculated for each feature vector, i.e., the posterior probability is evaluated for each state of the class node, and I would assign the feature vector to the class with the highest posterior probability.
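
As a sketch of that testing step, under the same naive Bayes assumption as above (`classify_map` and the `eps` fallback for feature values never seen in training are my own additions; log-probabilities are used only to avoid numerical underflow):

```python
import numpy as np

def classify_map(x, prior, cond, eps=1e-12):
    """Return the class maximizing the posterior P(C | x).
    Since the evidence P(x) is the same for every class, maximizing
    P(C) * prod_j P(x_j | C) picks the same winner."""
    best_class, best_logp = None, -np.inf
    for c, p_c in prior.items():
        logp = np.log(p_c)
        for j, v in enumerate(x):
            logp += np.log(cond[c][j].get(v, eps))  # unseen value -> tiny probability
        if logp > best_logp:
            best_class, best_logp = c, logp
    return best_class
```

Together with `train_mle` above, the whole pipeline would be `prior, cond = train_mle(X_train, y_train)` followed by `predictions = [classify_map(x, prior, cond) for x in X_test]`.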

Is this called maximum a posteriori (MAP) classification?
