Standardized variables are coded variables. If their values are equidistant, standardized variables often take integral values. With manual computation, standardized variables simplify the arithmetic; with a computer, however, there is no computational advantage to standardization. Dummy variables may still be used alongside standardized variables.
Using standardized variables gives you the so-called beta coefficients, which allow easier comparison of the variables' explanatory power. The idea behind standardization is to transform all variables to the same scale, making comparison easier. However, scale frequently matters, so standardizing all the time is not really advised. But you can still report and interpret the beta coefficients.
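To make this concrete, here is a minimal sketch using NumPy (the variable names and data are invented for illustration). It fits an ordinary least squares model on raw data and on z-scored data, and checks the standard identity linking the two sets of coefficients: beta_j = b_j * sd(x_j) / sd(y).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two predictors on very different scales (names are illustrative)
income = rng.normal(50_000, 15_000, n)   # dollars
age = rng.normal(40, 10, n)              # years
y = 0.0001 * income + 0.05 * age + rng.normal(0, 1, n)

def ols(X, y):
    """Ordinary least squares slopes, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[1:]  # drop the intercept

X = np.column_stack([income, age])

# Raw coefficients: scale-dependent, hard to compare directly
b_raw = ols(X, y)

# Beta coefficients: z-score both predictors and response first
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
beta = ols(Xz, yz)

# The two are linked exactly by beta_j = b_j * sd(x_j) / sd(y)
beta_check = b_raw * X.std(axis=0) / y.std()
print(np.allclose(beta, beta_check))  # True
```

The beta coefficients are unitless, so their magnitudes can be compared across predictors, whereas the raw coefficients carry the units of each variable.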
Sometimes standardisation lets you look at "across series" differences without the difficulty of the series being on different scales.
The question of similarity (i.e. "are these truly analogous series?") can be investigated once the scales are standardised.
Software tends to hide these issues from the researcher.
Doing the regression with and without standardisation can be illuminating. It can even raise the question "should I be doing this multiple regression at all, and if so, with which series?"
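A short sketch of why running it both ways is illuminating (NumPy only; the data are invented for illustration). Standardising the predictors leaves the fit itself, e.g. R squared, unchanged, but it can flip which predictor looks "bigger", which is exactly the kind of discrepancy worth investigating.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x1 = rng.normal(0, 100, n)   # a large-scale series
x2 = rng.normal(0, 1, n)     # a small-scale series
y = 0.02 * x1 + 1.5 * x2 + rng.normal(0, 1, n)

def fit(X, y):
    """OLS slopes plus R squared, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return coef[1:], 1 - resid.var() / y.var()

X = np.column_stack([x1, x2])
b_raw, r2_raw = fit(X, y)

Xz = (X - X.mean(axis=0)) / X.std(axis=0)
b_std, r2_std = fit(Xz, y)

# The fit is unchanged: standardisation is a linear rescaling
print(np.isclose(r2_raw, r2_std))  # True

# ...but the apparent importance ranking flips: on the raw scale
# x2's coefficient (1.5) dwarfs x1's (0.02); after standardising,
# x1 contributes about 0.02 * 100 = 2 standard deviations vs 1.5
print(abs(b_raw[0]) < abs(b_raw[1]), abs(b_std[0]) > abs(b_std[1]))
```

Seeing the ranking change between the two runs is one concrete way the software's default output can hide a scale issue from the researcher.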
I suspect this answer is not as helpful as you might hope, but it is an important issue.