The measurements and states can be vectors, in which case the Kalman gain is a matrix that maps each measurement onto each state estimate. It’s easiest to understand the meaning of the Kalman gain in the scalar case and generalize from there:
x[n+1|n+1] = x[n+1|n] + K*(y[n+1] - H*x[n+1|n])
In words, the state estimate after the measurement is the prediction from the last state (propagated by the model, e.g. using the last estimated velocity) plus the Kalman gain (K) times the innovation. The innovation is the difference between the measurement y[n+1] and where we expected the measurement to be: the propagated state passed through the measurement matrix H (remember: y = Hx).
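As a minimal sketch of that update in the scalar case (pure Python; the variable names are my own, not from any particular library):

```python
def kalman_update(x_pred, P_pred, y, h, r):
    """Scalar Kalman measurement update.

    x_pred: predicted state x[n+1|n]
    P_pred: variance of the predicted state
    y:      new measurement y[n+1]
    h:      scalar measurement model (y = h*x + noise)
    r:      measurement noise variance (R)
    """
    innovation = y - h * x_pred       # difference from the expected measurement
    s = h * P_pred * h + r            # innovation variance
    K = P_pred * h / s                # Kalman gain
    x_est = x_pred + K * innovation   # x[n+1|n+1]
    P_est = (1 - K * h) * P_pred      # variance after the update
    return x_est, P_est, K
```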
If the gain is zero, the next state estimate is just the propagated last state, x[n+1|n].
If the gain is H^-1 (H being a scalar h in this case), the next state estimate is the measurement mapped back through the measurement model: y[n+1] / h.
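A quick numeric check of those two limiting cases, with made-up values for the prediction and measurement:

```python
x_pred, y, h = 2.0, 9.0, 3.0

for K in (0.0, 1.0 / h):  # the two limiting gains
    x_est = x_pred + K * (y - h * x_pred)
    print(f"K = {K:.3f} -> x_est = {x_est}")
# K = 0.000 -> x_est = 2.0   (the prediction is kept unchanged)
# K = 0.333 -> x_est = 3.0   (equals y / h: the measurement alone)
```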
If confidence in the measurement is high relative to the covariance of the propagated state, the gain will be high; otherwise it will be low. Therefore, as the filter converges, the gains get smaller, settling into a steady state determined by the process noise covariance (Q) and the measurement noise covariance (R).
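You can watch that convergence by iterating just the variance/gain recursion (in the linear filter the gain doesn't depend on the data, so the states and measurements can be left out). Here is a sketch for a hypothetical scalar random-walk model, measured directly:

```python
q, r, h = 0.01, 1.0, 1.0  # process noise, measurement noise, measurement model
P = 100.0                 # start with a very uncertain state

for n in range(10):
    P_pred = P + q                            # variance after propagation
    K = P_pred * h / (h * P_pred * h + r)     # Kalman gain
    P = (1 - K * h) * P_pred                  # variance after the update
    print(f"step {n}: K = {K:.4f}")
# K starts near 1 (trust the measurement over the uncertain state) and
# decays toward a steady-state value set by q and r alone,
# independent of the initial P.
```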
For further reading, I suggest the chapter "Tuning of the Kalman Filter Using Constant Gains".
The gain also depends on the measurement matrix, which may be a function of time. In that case the measurement matrix has no constant steady-state value, and neither does the gain.
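To illustrate, here is the same recursion with a made-up time-varying h; the gain keeps following h instead of settling to a constant:

```python
import math

q, r = 0.01, 1.0
P = 1.0

for n in range(8):
    h = 1.0 + 0.5 * math.sin(0.5 * n)  # hypothetical time-varying measurement matrix
    P_pred = P + q
    K = P_pred * h / (h * P_pred * h + r)
    P = (1 - K * h) * P_pred
    print(f"step {n}: h = {h:.3f}, K = {K:.4f}")
# K keeps changing with h rather than converging to a single value.
```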