If I remove all the outliers first, then I am saying that my data collection method was so poor that it generated many mistakes, in which case I would ask how we (the scientific community) know that any of the data are good.
If you remove all of the data that do not conform to what you think the answer is, then you will bias the data to give the answer you expect.
In general I am very much against the practice of removing outliers just because we don't like them. If there is a specific reason why a data point is invalid, then it is OK to delete the affected values, but that means deleting all affected values, not just the ones that happen to be outliers.
If you want an automated way to "remove" outliers (after considering the valuable advice of Timothy A Ebert), you could change the loss function to a more robust measure that does not weight points far from their predicted values as heavily (or, if you want to remove their influence entirely, use something like the 20% trimmed statistics discussed in several of Rand Wilcox's textbooks). I don't have SPSS on my current machine, but it used to let you define the loss function for some of its regression procedures.
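For concreteness, here is a minimal sketch of the loss-function swap in R rather than SPSS (the data, seed, and outlier positions are made up purely for illustration). MASS::rlm fits by iteratively re-weighted least squares, and its Huber and Tukey-bisquare psi functions down-weight points with large residuals instead of deleting them:

```r
library(MASS)

set.seed(1)
x <- 1:50
y <- 2 + 0.5 * x + rnorm(50)
y[c(10, 40)] <- y[c(10, 40)] + 15            # plant two gross outliers

fit_ols   <- lm(y ~ x)                       # squared loss: pulled by the outliers
fit_huber <- rlm(y ~ x)                      # Huber loss (rlm's default)
fit_tukey <- rlm(y ~ x, psi = psi.bisquare)  # redescending (bisquare) loss

coef(fit_ols)
coef(fit_tukey)
round(fit_tukey$w, 2)   # final weights: close to 0 for the planted outliers
```

The weights in the last line show the "soft removal" at work: nothing is deleted, but the two contaminated points contribute almost nothing to the fit.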
P.S. If you just remove, say, the extreme 10% on each side, you cannot simply go and do your analyses as if you hadn't done this. See the Wilcox textbooks for how to get appropriate standard errors and p-values.
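In R, a sketch of what that looks like, using the WRS2 package (Mair and Wilcox's port of the functions from Wilcox's books; the data and group labels here are invented). yuen() compares 20% trimmed means using a standard error based on the Winsorized variance, rather than the naive one from the trimmed sample:

```r
library(WRS2)

set.seed(2)
dat <- data.frame(
  score = c(rnorm(30), rnorm(30, mean = 0.8) + c(10, 10, rep(0, 28))),
  group = rep(c("a", "b"), each = 30)   # two contaminated points in group "b"
)

mean(dat$score[dat$group == "b"], trim = 0.2)  # 20% trimmed mean (base R)

yuen(score ~ group, data = dat, tr = 0.2)      # test with corrected SE and df
```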
Maybe you would want to use something like Least Median of Squares (LMS). It can detect whether groups of data points are outliers. It should be available in R; years ago I used a DOS program from (I assume) Rousseeuw. LMS regression has a breakdown point of 50%, least sum of squares one of 0%, which means a single observation can invalidate the results of a least-squares regression analysis. Single-outlier detection can therefore help, but with least squares, groups of outliers are hard to identify (see Rousseeuw and Leroy (1987), Robust Regression and Outlier Detection).

There is a new book by Maronna et al. (2019), Robust Statistics: Theory and Methods (with R). It shows breakdown points for a lot of analysis methods, including multilevel models (not easy to read, by the way). I have only just seen it and have not yet checked whether LMS is described in Maronna (2019) (it should be, but I am not sure). LMS might give a clue to the reasons why points are outliers. In the book by Rousseeuw there is a nice example of light emission by type of star: without LMS, the regression line sits at roughly a 90 degree angle to the 'true' regression lines for the two types of stars. But as previous authors indicate: be very careful about what you do and why.
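LMS is indeed in R, via MASS::lqs, and if I remember right the star example from Rousseeuw and Leroy (1987) ships as the starsCYG data set in the robustbase package (Hertzsprung-Russell data for the cluster CYG OB1); the column names in this sketch come from that package:

```r
library(MASS)
data(starsCYG, package = "robustbase")

fit_ols <- lm(log.light ~ log.Te, data = starsCYG)
fit_lms <- lqs(log.light ~ log.Te, data = starsCYG, method = "lms")

coef(fit_ols)  # slope dragged around by the four giant stars
coef(fit_lms)  # follows the main sequence of stars instead

## Flag the outlying group: large residuals relative to the robust
## scale estimate (2.5 is the usual cutoff in Rousseeuw's writing).
which(abs(residuals(fit_lms)) > 2.5 * fit_lms$scale[1])
```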
Peter Moorer's argument from influence plots and breakdown points is the more common math-y way to argue for loss functions more robust than OLS, but for a good argument without equations I like these two papers by Galton (Google should let you find both):
Galton, F. (1907a). One vote, one value. Nature, 75, 414. doi: 10.1038/075414a0
Galton, F. (1907b). Vox populi. Nature, 75, 450–451. doi: 10.1038/075450a0