Hello,

I've built a Conv1D model to classify items into 6 categories (labeled 0 to 5); this is a simplified example.

I therefore use a CrossEntropy loss, which penalizes confident wrong predictions as well as low probabilities assigned to the correct class. That is exactly what I was looking for.
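For reference, my current setup looks roughly like this (a minimal sketch assuming PyTorch; the tensors are just placeholders standing in for my model's outputs):

```python
import torch
import torch.nn as nn

# Standard cross-entropy: every wrong class costs the same,
# no matter how "far" it is from the true class.
criterion = nn.CrossEntropyLoss()

logits = torch.randn(8, 6)            # dummy batch of 8 items, 6 classes (0..5)
targets = torch.randint(0, 6, (8,))   # dummy true class indices
loss = criterion(logits, targets)
```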

But CrossEntropy treats the classes as independent, whereas I'd like to reduce the loss when the model predicts 0 (cat) and the true class is 1 (dog), and increase it when the model predicts 0 (cat) and the true class is 5 (fish).

Is there a way to embed 'class proximity' in the loss computation of a classification problem, so that an error between classes 0 and 1 is penalized less than an error between classes 2 and 5?
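To make the idea concrete, here is a rough sketch of the kind of penalty I have in mind (my own unvalidated attempt; the function name and the |i - j| distance are just illustrative choices): weight the predicted probability of each class by its distance to the true class, so mass on nearby classes costs less than mass on distant ones.

```python
import torch
import torch.nn.functional as F

def distance_weighted_loss(logits, targets, num_classes=6):
    # Pairwise |i - j| distances between the class labels, shape (C, C).
    classes = torch.arange(num_classes, dtype=torch.float32,
                           device=logits.device)
    dist = (classes.unsqueeze(0) - classes.unsqueeze(1)).abs()

    # Predicted class probabilities, shape (B, C).
    probs = F.softmax(logits, dim=1)

    # dist[targets] picks, for each sample, the row of distances from its
    # true class; the loss is then the expected distance to the truth.
    per_sample = (probs * dist[targets]).sum(dim=1)
    return per_sample.mean()
```

With something like this, putting probability on class 1 when the target is 0 would contribute a distance of 1, while the same probability on class 5 would contribute 5. Is this a sensible direction, or is there an established loss (ordinal regression, distance-aware label smoothing, ...) that does this properly?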

Many thanks in advance for any help!
