The weights are typically initialized with random values (often drawn from a specific distribution). This randomness helps the network avoid getting stuck in bad local minima during training. During training, the network adjusts these random weights based on the training data and the chosen loss function. Through a process called backpropagation, the weights are fine-tuned to extract specific features from the input data. If you need further guidance, connect with me on https://wa.me/+923440907874.
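As a minimal sketch of what that means in code (assuming PyTorch, which this thread doesn't actually specify), the kernel weights start as random draws from a chosen distribution and are then nudged by backpropagation toward whatever the loss function rewards:

```python
# Minimal sketch (PyTorch assumed): random kernel initialization,
# then one gradient step that adjusts those same random weights.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# Re-initialize the kernel weights from a specific distribution
# (here Kaiming/He normal, a common choice for ReLU networks).
nn.init.kaiming_normal_(conv.weight, nonlinearity='relu')

# One illustrative training step with dummy data.
x = torch.randn(8, 3, 32, 32)           # stand-in batch of images
target = torch.randn(8, 16, 32, 32)     # stand-in target feature maps
loss = nn.functional.mse_loss(conv(x), target)
loss.backward()                         # gradients w.r.t. the kernel weights
with torch.no_grad():
    conv.weight -= 0.01 * conv.weight.grad   # plain SGD update (weights only)
```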
If, as you said, we can neither pre-design a convolution kernel as an initialization, nor control the learning process toward the feature maps we need,
then it all comes down to one thing: are we shooting with our eyes closed? Hit or miss, only God knows?
I think the convolution kernel weights form a distribution. If the distribution of the kernel weights is matched to the distribution of the input data, then the feature extraction is error-free, and therefore the convolution kernel for feature extraction is pre-designable. If not, the feature extraction is not error-free, and therefore the convolution kernel for feature extraction is not pre-designable; in other words, in that case the convolution kernel must be trained to adapt to the input data, as people usually do.
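One simple way to experiment with this "matched distribution" idea, sketched below under the assumption of PyTorch and a hypothetical `images` tensor, is to initialize the kernels directly from patches of the input data, so that their distribution follows the data rather than a generic random prior. Whether that makes the extraction error-free is exactly the open question here.

```python
# Sketch of a data-dependent initialization: draw kernels directly from
# patches of the input images so their distribution follows the data.
import torch
import torch.nn as nn

def patches_as_kernels(images, out_channels=16, k=3):
    """Sample k x k patches from `images` (N, C, H, W) and use them,
    mean-subtracted and L2-normalized, as convolution kernels."""
    n, c, h, w = images.shape
    kernels = []
    for _ in range(out_channels):
        i = torch.randint(0, n, (1,)).item()
        y = torch.randint(0, h - k + 1, (1,)).item()
        x = torch.randint(0, w - k + 1, (1,)).item()
        patch = images[i, :, y:y + k, x:x + k].clone()
        patch -= patch.mean()
        patch /= patch.norm() + 1e-8
        kernels.append(patch)
    return torch.stack(kernels)             # (out_channels, C, k, k)

images = torch.randn(100, 3, 32, 32)         # stand-in for real input data
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(patches_as_kernels(images))
```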
Yaozhi Jiang, perfectly pre-designing convolutional kernel weights isn't possible. But it's not a random guessing game either. We can influence the learning process. By carefully choosing an initialization strategy for the weights (like random values with a specific spread), we nudge the network away from getting stuck during training. The network architecture itself, with its kernel sizes, filter numbers, and activation functions, also guides what features it learns. Most importantly, the quality and variety of your training data steers the network towards the right features. Finally, tuning hyperparameters like the learning rate and the optimizer further controls how the network learns. While we can't directly pick what each filter learns, these techniques give us significant control over the process. Imagine training a dog to recognize fire hydrants: we can't show it the exact image, but by showing it many pictures and rewarding it for picking the right ones, we can shape its understanding. It's a guided learning process, not a random shot in the dark.
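To make those levers concrete, here is a rough sketch (PyTorch assumed; the layer sizes and hyperparameters are illustrative, not prescriptive) of where each choice enters: the architecture, the initialization strategy, and the optimizer settings all shape what the kernels end up learning, even though we never specify the learned features directly.

```python
# Sketch of the "control levers" discussed above (PyTorch assumed).
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(                              # architecture: kernel sizes,
    nn.Conv2d(3, 32, kernel_size=3, padding=1),     # filter counts,
    nn.ReLU(),                                      # activation functions
    nn.Conv2d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),
)

# Initialization strategy: random values with a carefully chosen spread.
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
        nn.init.zeros_(m.bias)

# Hyperparameters: the optimizer and learning rate control how learning proceeds.
optimizer = optim.Adam(model.parameters(), lr=1e-3)
```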
I am reading a book about meta-learning; my goal is few-shot learning, to avoid computing over very big data. The convolutional neural network tells us many stories, one by one, but its practical effect leaves much to be desired. After consuming a lot of computing power, a CNN gives us so many results that we do not even know what they are. I want some improvements to the CNN, and the most likely direction is the convolution kernel. My view is that if the convolution kernel distribution matches the distribution of the input data, the feature extraction will be error-free and the convolution kernel will be pre-designable; the pre-design might even be done automatically by a machine.
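One existing line of work in roughly this direction lets the machine derive fixed filters from the statistics of the input data itself, for example by taking the principal components of image patches (in the spirit of PCANet-style methods). A rough sketch, assuming PyTorch and a stand-in `images` tensor:

```python
# Rough sketch of "pre-designing" kernels automatically from the data
# distribution: use the leading principal components of image patches
# as fixed convolution filters. Illustrative only.
import torch
import torch.nn as nn

def pca_kernels(images, out_channels=8, k=5):
    """Leading principal components of all k x k patches of `images`
    (N, C, H, W), reshaped into (out_channels, C, k, k) filters."""
    patches = nn.functional.unfold(images, kernel_size=k)    # (N, C*k*k, L)
    d = patches.shape[1]                                      # C*k*k
    patches = patches.permute(0, 2, 1).reshape(-1, d)         # one row per patch
    _, _, v = torch.pca_lowrank(patches, q=out_channels)      # v: (d, out_channels)
    return v.T.reshape(out_channels, images.shape[1], k, k)

images = torch.randn(64, 3, 32, 32)           # stand-in for real input data
conv = nn.Conv2d(3, 8, kernel_size=5, bias=False)
with torch.no_grad():
    conv.weight.copy_(pca_kernels(images))
conv.weight.requires_grad_(False)             # filters stay fixed: no training
```

Freezing the filters this way trades learned flexibility for a cheap, data-matched design, which is why it is at best a starting point for the few-shot setting rather than a full replacement for training.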