To some folks, "normalized" may mean that scores are constrained to fall within a certain, known range. Fabio Lobabto's answer appears to be based on that interpretation. I learned "normalize" as meaning that the scores are adjusted so that, treated as a vector, they have length 1.
However, if by "normalized" you mean converting scores to follow a more nearly normal distribution, then there is no one blanket answer to fit all possible cases. For unimodal distributions with mild positive skew, a square root transformation may be helpful; for strong positive skew, a logarithm transformation may help. Finally, for profound positive skew, a reciprocal transformation may work. If you have unimodal negative skew, then first reflect the scores (e.g., by subtracting each score from 1 + the highest value in the set), then choose among the positive-skew options given above.
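Here is a minimal sketch in Python (NumPy/SciPy) of those transformations; the array `scores` and its values are hypothetical, and it assumes all scores are positive so that the log and reciprocal are defined.

```python
import numpy as np
from scipy.stats import skew

# Hypothetical positively skewed scores (all > 0, so log and reciprocal are defined)
scores = np.array([1, 1, 1, 2, 2, 3, 4, 5, 8, 13], dtype=float)

sqrt_scores  = np.sqrt(scores)   # mild positive skew
log_scores   = np.log(scores)    # strong positive skew
recip_scores = 1.0 / scores      # profound positive skew

# For negative skew, reflect first, then apply one of the transformations above
reflected = (1 + scores.max()) - scores

for name, x in [("raw", scores), ("sqrt", sqrt_scores),
                ("log", log_scores), ("reciprocal", recip_scores)]:
    print(f"{name:10s} skewness = {skew(x):+.2f}")
```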
Reflecting your data will reverse the direction of the scores (the lowest in the data set becomes the highest, and vice versa), so bear that in mind when you interpret results. Likewise, a reciprocal transformation will reverse the direction of scores. To avoid this reversal, you could use the negative reciprocal of the scores (e.g., new score = -1/old score). Also, reciprocals often yield very small values, so you may need to scale them up by multiplying all results by some constant (e.g., 1000).
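A small illustration (with hypothetical values) of how the negative reciprocal preserves the original ordering while a plain reciprocal reverses it, and how a constant multiplier rescales the very small values:

```python
import numpy as np

old = np.array([2.0, 5.0, 10.0, 50.0])   # hypothetical scores, ascending order

recip     = 1.0 / old         # ordering reversed: largest old score becomes smallest
neg_recip = -1.0 / old        # ordering preserved (all values are negative)
scaled    = 1000 * neg_recip  # same ordering, more convenient magnitudes

print(recip)      # [0.5   0.2   0.1   0.02]
print(neg_recip)  # [-0.5  -0.2  -0.1  -0.02]
print(scaled)     # [-500. -200. -100.  -20.]
```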
Of course, it is quite possible that no transformation will make your scores behave like a normally distributed variable (especially if you're working with individual, bounded scores such as a single Likert-type item; generally you'll get better results if you work with scores representing sums across a set of related Likert-type items).
I have data on circular economy readiness where 1 is strongly disagree and 5 is strongly agree. The total data set is 235 responses, and I found that the data are non-normal, because of which I am not able to proceed further with my research analysis.
Probably the simplest path is to use a method that recognizes ordinal scores without making distributional assumptions. You didn't mention your research question, so here are some generic examples (a short code sketch follows the list):
1. To compare groups on the variable, you could use the Mann-Whitney-Wilcoxon test (it assumes homogeneity of distribution shape, but not normality; two groups). For more than two groups, use the Kruskal-Wallis one-way ANOVA by ranks.
2. For correlations, use the Spearman rank-order correlation.
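As a minimal sketch, assuming the responses are stored in NumPy arrays (the group labels and simulated data below are hypothetical), these tests are available in SciPy:

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal, spearmanr

rng = np.random.default_rng(0)
group_a    = rng.integers(1, 6, size=40)  # hypothetical Likert responses (1-5), group A
group_b    = rng.integers(1, 6, size=40)  # group B
group_c    = rng.integers(1, 6, size=40)  # group C
other_item = rng.integers(1, 6, size=40)  # a second ordinal variable for the correlation

# 1. Two groups: Mann-Whitney-Wilcoxon
u_stat, u_p = mannwhitneyu(group_a, group_b)

# 1b. More than two groups: Kruskal-Wallis one-way ANOVA by ranks
h_stat, h_p = kruskal(group_a, group_b, group_c)

# 2. Correlation between two ordinal variables: Spearman rank-order correlation
rho, rho_p = spearmanr(group_a, other_item)

print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {h_p:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {rho_p:.3f}")
```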
Regarding your point: "To some folks, 'normalized' may mean that scores are constrained to fall within a certain, known range. Fabio Lobabto's answer appears to be based on that interpretation." She mentioned that it is Likert-scale data, so I infer that she knows the range.
In this case, using more complex methods would go against the principle of Occam's razor; in other words, it is completely unnecessary.