MLP is the classical type of neural network. It consists of one or more hidden layers (depending on the level of abstraction required, as in deep learning). Each layer computes a dot product between its inputs and weights and applies a monotonic activation function such as the sigmoid or ReLU. In an MLP, the fine-tuning of the weights (the training) is usually done through backpropagation across all layers.
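As a minimal sketch of that forward pass (not part of the original answer; the layer sizes and random weights are purely illustrative, and backpropagation training is omitted):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, weights, biases):
    """MLP forward pass: dot product + monotonic activation, layer by layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)               # hidden layers: affine map + ReLU
    return a @ weights[-1] + biases[-1]   # linear output layer

# Toy network: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
Ws = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
bs = [np.zeros(4), np.zeros(1)]
print(mlp_forward(rng.normal(size=3), Ws, bs))
```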
RBF is a neural network consisting of just one hidden layer. For each input, the hidden layer first computes the distance between the input and the weights, which can be viewed as centers, and then applies an activation function, usually a Gaussian, to the calculated radial distance. This is why they are called "Radial Basis Function Networks". Since the Gaussian function peaks at zero, an RBF neuron reaches its maximum activation when the input equals the weights (centers), i.e., when the distance is zero. The training of an RBF network can be done either through backpropagation or through RBF hybrid learning. An RBF network also typically learns faster than an MLP and is less sensitive to the order in which the training data are presented.
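A short sketch of the RBF hidden layer may help make this concrete (again not from the original answer; the centers, widths, and output weights are illustrative values):

```python
import numpy as np

def rbf_forward(x, centers, widths, output_weights):
    """RBF forward pass: radial distance to each center, then a Gaussian."""
    # Distance between the input and each center (the hidden layer's "weights")
    dists = np.linalg.norm(centers - x, axis=1)
    # Gaussian activation: maximal (1.0) when the distance is zero, i.e. x == center
    phi = np.exp(-(dists ** 2) / (2 * widths ** 2))
    return phi @ output_weights  # linear combination in the output layer

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([0.5, 0.5])
w_out = np.array([1.0, -1.0])
# Input equals the first center, so that neuron fires maximally
print(rbf_forward(np.array([0.0, 0.0]), centers, widths, w_out))
```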
GRNN belongs to the group of Bayesian neural networks, which are feed-forward networks that do not use backpropagation. The architecture of the GRNN is similar to that of the radial basis network, but it has a slightly different second layer. GRNN is often used for function approximation and has several advantages: single-pass learning (backpropagation is not required), the use of Gaussian functions, which yields high accuracy in the estimation, and the ability to converge to the underlying function of the data even with only a few training samples available.
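To illustrate the single-pass idea (a sketch I am adding, not part of the original answer; the smoothing parameter sigma and the toy data are assumptions), a GRNN prediction is a Gaussian-kernel-weighted average of the stored training targets, so "training" amounts to storing the data once:

```python
import numpy as np

def grnn_predict(x, train_X, train_y, sigma=0.2):
    """GRNN prediction: kernel-weighted average of the training targets.
    Learning is a single pass that simply stores (train_X, train_y)."""
    dists2 = np.sum((train_X - x) ** 2, axis=1)
    weights = np.exp(-dists2 / (2 * sigma ** 2))        # one pattern unit per sample
    return np.sum(weights * train_y) / np.sum(weights)  # summation/division layer

# Approximate y = x^2 from only five samples -- no backpropagation involved
X = np.linspace(-1, 1, 5).reshape(-1, 1)
y = (X ** 2).ravel()
print(grnn_predict(np.array([0.3]), X, y))  # smoothed estimate of y = x^2 at x = 0.3
```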
I agree with the excellent explanation by Fereshteh Hassani; however, for more details, I would like to suggest the attached related books for your future reading.