In AI/ML applications, data transformation/normalization is performed during the data preprocessing stage. Normalization aims to rescale features/values measured on different scales so that they lie on a common scale. At first glance this is an innocent step. In practice, Min-Max and Z-score are the two transformation techniques most frequently chosen for normalization. Although the conventional advice is that "the choice of normalization method may vary depending on the characteristics of the dataset and the context of the application", one of these two methods is usually selected automatically, without questioning. Yet alternative approaches exist, such as Vector, Max, Logarithmic, and Sum normalization, so why are these two considered sufficient?
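
For concreteness, here is a minimal sketch (assuming NumPy and a one-dimensional, strictly positive feature vector) of the formulas usually associated with these names; the function names and the example data are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def min_max(x):
    # Rescale to [0, 1]: (x - min) / (max - min)
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    # Center to mean 0 and scale to unit standard deviation
    return (x - x.mean()) / x.std()

def vector_norm(x):
    # Divide by the Euclidean (L2) norm of the feature vector
    return x / np.linalg.norm(x)

def max_norm(x):
    # Divide by the maximum absolute value
    return x / np.abs(x).max()

def log_norm(x):
    # Logarithmic scaling (assumes strictly positive values)
    return np.log(x)

def sum_norm(x):
    # Divide by the sum so the values add up to 1
    return x / x.sum()

if __name__ == "__main__":
    feature = np.array([2.0, 10.0, 50.0, 200.0, 1000.0])
    for name, fn in [("min-max", min_max), ("z-score", z_score),
                     ("vector", vector_norm), ("max", max_norm),
                     ("log", log_norm), ("sum", sum_norm)]:
        print(f"{name:8s}: {np.round(fn(feature), 4)}")
```

Running the snippet on a skewed feature like the one above makes the practical difference visible: Min-Max, Max, Vector, and Sum are linear rescalings that preserve the skew, while the logarithmic transform compresses the large values, which is part of why the "right" choice depends on the data.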
