Hello, my fellow researchers,

I am dealing with highly imbalanced data in a supervised deep learning (DL) classification problem. The model I am currently running takes acceleration signals collected from sensors, with the aim of assessing damage in civil engineering structural systems.

As is well known, most DL algorithms work best when the number of samples in each class is about equal. In that case, accuracy is an adequate metric for assessing the model's performance. With imbalanced data, however, the F1-score (the harmonic mean of precision and recall) is a more appropriate metric for assessing how well the model classifies the data points, as the sketch below illustrates.
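To make the difference concrete, here is a minimal sketch using scikit-learn on a hypothetical 90/10 class split (the labels and error counts are illustrative only, not from my model):

```python
# Minimal sketch: accuracy vs. F1 on an imbalanced two-class problem.
# Assumes scikit-learn; the label arrays below are purely illustrative.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# 90 "undamaged" (0) samples vs. 10 "damaged" (1) samples
y_true = [0] * 90 + [1] * 10
# A model that raises 3 false alarms and finds only 2 of the 10 damaged cases
y_pred = [0] * 87 + [1] * 3 + [0] * 8 + [1] * 2

precision = precision_score(y_true, y_pred)          # TP/(TP+FP) = 2/5  = 0.40
recall = recall_score(y_true, y_pred)                # TP/(TP+FN) = 2/10 = 0.20
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean ≈ 0.27

print(accuracy_score(y_true, y_pred))  # 0.89 -- looks fine
print(f1, f1_score(y_true, y_pred))    # ≈ 0.27 -- reveals the problem
```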

I have some results now, but my F1-scores are quite low (about 35%), even though the accuracy of the model is very good (over 90%). These values may be acceptable for thesis submission, but I would like to know how scientific journals deal with this kind of situation.
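As far as I understand, journals typically expect the per-class breakdown rather than a single accuracy figure. A hedged sketch of that kind of report, again with scikit-learn and the illustrative labels from above:

```python
# Sketch of a per-class report, as typically expected alongside accuracy.
# Assumes scikit-learn; y_true / y_pred are the same illustrative arrays.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0] * 90 + [1] * 10
y_pred = [0] * 87 + [1] * 3 + [0] * 8 + [1] * 2

print(confusion_matrix(y_true, y_pred))
# [[87  3]
#  [ 8  2]]

# Precision, recall, and F1 for each class, plus macro (unweighted)
# and support-weighted averages across classes.
print(classification_report(y_true, y_pred,
                            target_names=["undamaged", "damaged"]))
```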

Generally, for balanced data, the reported accuracy is above 90%. For the F1-score, is there a minimum value that journals expect to see reported?

I look forward to hearing from you.

Thanks for your time and consideration.

Majdi
