If you mean the importance of each feature dimension, you can sort the weight vector for each category. For example, with a feed-forward network (FNN) whose output layer is linear, after training we have yi = wiTx, where yi is the output for the i-th category and wi is the corresponding weight vector. To see which feature dimensions matter most for that category, sort the entries of wi by magnitude: dimensions with larger absolute weights contribute more to that category's score.
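A minimal sketch of that idea in NumPy, assuming a hypothetical trained weight matrix `W` of shape (n_classes, n_features) where row i plays the role of wi (here `W` is random just to make the snippet runnable):

```python
import numpy as np

# Stand-in for trained linear-output weights: W[i] corresponds to w_i,
# so the class-i score is y_i = W[i] @ x.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))  # hypothetical: 3 classes, 10 feature dims

i = 0  # class of interest
importance = np.abs(W[i])               # |weight| per feature dimension
ranking = np.argsort(importance)[::-1]  # most important dimension first
print(ranking)
```

`ranking[0]` is then the feature dimension with the largest absolute weight for class `i`, i.e. the one this heuristic calls most important.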
Also take a look at the papers that use pruning for weight-size reduction (compression) at inference time. They showed good results in fully connected layers by keeping only the most important weights, which is essentially your question. You can start with the papers by Song Han ( https://songhan.mit.edu/ ).
Other approaches, such as random forests, already have built-in ways to measure feature importance, in case you are interested: https://blog.datadive.net/selecting-good-features-part-iii-random-forests/
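For comparison, here is a short sketch (assuming scikit-learn is available) of the random-forest route: the fitted model exposes per-feature importances directly through `feature_importances_`, and the dataset here is synthetic just for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 8 features, of which only 3 are actually informative.
X, y = make_classification(n_samples=200, n_features=8,
                           n_informative=3, random_state=0)

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X, y)

# Mean-decrease-in-impurity importances, normalized to sum to 1.
ranking = np.argsort(forest.feature_importances_)[::-1]
print(ranking)  # feature indices, most important first
```

Unlike sorting a single weight vector, these importances aggregate over all trees and all classes, so they give a global rather than per-category view.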