I think the first check would be to consider your data in the context of PCA's assumptions. Some of these are basic, such as having multiple continuous variables. (Note: ordinal variables are often used in practice; think Likert scales.)
Another assumption is approximate linearity between the variables. If you have few variables, you might be able to investigate this with a matrix scatterplot; if you are using R, I prefer psych::pairs.panels() or PerformanceAnalytics::chart.Correlation() for this.
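If it helps, here's a minimal sketch; `dat` is just a placeholder name for a numeric data frame of your variables:

```r
# install.packages(c("psych", "PerformanceAnalytics"))  # if needed
library(psych)
library(PerformanceAnalytics)

# Matrix scatterplot with histograms, correlations, and smoothed fits
pairs.panels(dat)

# Similar view: scatterplots below the diagonal, correlations above
chart.Correlation(dat)
```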
Sampling adequacy. A number of rules of thumb have been suggested, but the Kaiser-Meyer-Olkin (KMO) measure is common. You can compute it for the overall data set and for individual variables.
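For example, with the psych package (again, `dat` is a placeholder for your data):

```r
library(psych)

kmo <- KMO(dat)   # accepts raw data or a correlation matrix
kmo$MSA           # overall measure of sampling adequacy
kmo$MSAi          # per-variable values (values around .5-.6 are usually taken as a minimum)
```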
Your data should also be suitable for reduction, i.e., the variables should be correlated enough that they can be reduced to a smaller set of components. You'll get some insight into this from the matrix scatterplot, but you might also use a test of sphericity.
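Bartlett's test of sphericity is the usual choice; a quick sketch with psych (same placeholder `dat`):

```r
library(psych)

# H0: the correlation matrix is an identity matrix,
# i.e., the variables are uncorrelated and not worth reducing
cortest.bartlett(cor(dat), n = nrow(dat))
```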
You should also check for erroneous data entry and outliers. A data point that is way off can influence your results and lead to an incorrect interpretation. Again, if you are an R user you might look into the vegan package; there's a set of diagnostic tools to check goodness of fit, linear dependencies, and (I think) influence functions.
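Setting vegan aside, here's a quick base-R screen I sometimes use; this is just a Mahalanobis-distance check (not the vegan tooling), and `dat` is a placeholder:

```r
# Distance of each row from the multivariate centroid
md <- mahalanobis(dat, center = colMeans(dat), cov = cov(dat))

# Flag rows beyond a chi-square cutoff (df = number of variables)
cutoff <- qchisq(0.999, df = ncol(dat))
which(md > cutoff)

# Simple univariate screen for data-entry errors
summary(dat)
```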
I'd also check for normality in the residuals. I should mention that chapter 17 of Discovering Statistics Using R (Field, Miles, & Field 2012) is a great reference and guide. (https://books.google.com/books/about/Discovering_Statistics_Using_R.html?id=wd2K2zC3swIC)
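A rough sketch of what I mean by residuals here, using prcomp; the choice of two retained components is arbitrary and just for illustration:

```r
pca   <- prcomp(dat, center = TRUE, scale. = TRUE)
k     <- 2                                        # retained components (assumed)
recon <- pca$x[, 1:k] %*% t(pca$rotation[, 1:k])  # reconstruction from k PCs
res   <- scale(dat) - recon                       # reconstruction residuals

qqnorm(as.vector(res)); qqline(as.vector(res))    # visual normality check
```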
There's some more detail on PCA diagnostics in this article: https://www.jstor.org/stable/2348133?seq=1#page_scan_tab_contents. (If you search that title on Google Scholar and click "related articles" you'll find lots more!)
Hopefully this is a start for you and others can give more and better advice!
I initially made sure that those five necessary assumptions were not violated. However, I was curious whether I could double-check this from the output (e.g., a scree plot, etc.).
I see, sorry for any redundancy then. I think the Field et al. (2012) reference will be most helpful. Check outlying observations (not sure if I was clear on that above) using a plot of Q residuals vs. Hotelling's T². Also check the reproduced correlations (communalities) and that the data/residuals are normally distributed.
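Roughly how you could compute those two quantities by hand with prcomp; the name `dat` and the choice of k = 2 are just for illustration:

```r
pca <- prcomp(dat, center = TRUE, scale. = TRUE)
k   <- 2                                        # retained components (assumed)

scores   <- pca$x[, 1:k, drop = FALSE]
loadings <- pca$rotation[, 1:k, drop = FALSE]

# Hotelling's T2: squared scores, each scaled by that component's standard deviation
T2 <- rowSums(sweep(scores, 2, pca$sdev[1:k], "/")^2)

# Q residuals (squared prediction error): what the retained PCs fail to reconstruct
E <- scale(dat) - scores %*% t(loadings)
Q <- rowSums(E^2)

plot(T2, Q, xlab = "Hotelling's T2", ylab = "Q residuals (SPE)")
```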
You might check model performance and the importance of individual PCs with cross-validation. There are a few other bits you can find here: https://learnche.org/pid/latent-variable-modelling/principal-component-analysis/index.
That's about as far as my knowledge goes, and even then it's from more of an applied perspective. Maybe other, more knowledgeable users can add to the thread.