In general, the answer is no, because testing moderation is not any different from other statistical tests.
One possible alternative: if you are using Structural Equation Modeling (SEM), you can test whether specific parameters in your model differ, so that you do not have to specify moderation for every effect in the model.
Thank you @David L Morgan. I am running a straightforward moderation analysis in R using LME. Some p-values are 0.08, which is close to 0.05. Not using SEM.
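For readers wondering what such a model looks like, here is a minimal sketch, assuming the lme4/lmerTest packages and placeholder variable names (y, x, m, subject) rather than the actual data:

# Minimal moderation sketch with a linear mixed-effects model.
# 'dat' and the variable names y, x, m, subject are placeholders.
library(lme4)
library(lmerTest)  # adds p-values for the fixed effects

# x * m expands to x + m + x:m; the x:m term carries the moderation effect
fit <- lmer(y ~ x * m + (1 | subject), data = dat)
summary(fit)  # check the p-value on the x:m row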
.05 is not sacred; a Google search will turn up many citations on that point. If the effect size of the moderation is substantively meaningful (especially if the sample size is only modest), then a p-value of .08 can be taken as indicating that moderation in the population is quite plausible. If the sign of the X-Y relationship is different at different observed values of the moderator (as opposed to merely different in magnitude, with the sign unchanged), that would generally be substantively meaningful. "Significant" moderation can be hard to obtain (in part because the interaction term tends to be rather highly correlated with the additive terms). If the direction of the moderation effect is consistent with what you would have predicted theoretically (preferably, with what you actually did predict), then you could argue that a one-tailed test is appropriate, and hence divide the obtained p-value by 2 (a two-tailed p of .08 becomes a one-tailed p of .04).
Interactions can sometimes be difficult to detect due to low reliability. I'm a psychologist and so am used to using measures that are not error free, that is, whose reliability is less than 1. We know that failing to account for measurement error attenuates observed associations with other variables. The problem with interactions is that their reliability is the product of the reliabilities of the variables that make up the interaction. So if you are using the interaction of X and Y as a predictor, it will be fine if each has high reliability: if each has a reliability of .9, the interaction has a respectable reliability of .81 (.9 x .9). However, if either, or both, variables have lower reliability, then the reliability of the interaction can be very low; for example, if each variable has a reliability of .7, then the interaction has a reliability of only .49, and this will make it difficult to detect an effect.
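To make the attenuation concrete, here is a small illustrative simulation in R (all variable names and parameter values are my own invention, chosen only for demonstration): two predictors measured with reliability .7 each, so the estimated interaction shrinks roughly by the .49 factor described above.

# Illustrative simulation: measurement error attenuates an interaction.
# All names and parameter values here are arbitrary placeholders.
set.seed(1)
n <- 5000
x <- rnorm(n); z <- rnorm(n)
y <- 0.3 * x + 0.3 * z + 0.3 * x * z + rnorm(n)

# Observed scores with reliability .7: for a true score with variance 1,
# reliability = 1 / (1 + error variance), so error sd = sqrt(1/0.7 - 1)
x_obs <- x + rnorm(n, sd = sqrt(1 / 0.7 - 1))
z_obs <- z + rnorm(n, sd = sqrt(1 / 0.7 - 1))

coef(lm(y ~ x * z))          # interaction estimate close to 0.3
coef(lm(y ~ x_obs * z_obs))  # interaction estimate markedly attenuated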
Mplus has some very nice features for modeling latent variable interactions, thereby helping to deal with the issue of measurement error.
Burke and Mark gave excellent answers. The convention of using a fixed alpha belongs to the "null ritual" (Gigerenzer, 2004; Cohen, 1994), as do simple yes-no decisions based on that fixed level. This has been criticized for decades (see some of the literature cited here and below). Instead, decisions should be based on a trade-off between the costs (or disadvantages) of keeping the null and those of assuming an effect.
In addition, as Mark pointed out, you have little chance of finding an interaction effect because of the low power and the downward bias of the product term (according to Frazier et al., 2004, power is typically between .20 and .34!). Hence, it can be acceptable to accept a larger p-value (I have even seen papers that applied a .20 threshold, which overstretches this argument). So be modest and discuss the pros and cons carefully.
Actually, there are a number of authors (in the psychological field) who have recommended using larger p-values when testing interaction effects (Jaccard & Wan, 1995; Judd et al., 1995; McClelland & Judd, 1993).
In addition to the comments by Mark about Mplus, I would like to point out that latent interaction modeling can also be done in other software, e.g., the lavaan package in the open-source software R (www.lavaan.org). In the following paper (if you allow me some self-advertisement), we recommended the residual centering approach.
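As a sketch of what that can look like in lavaan (the indicator names x1-x3, z1-z3, y1-y3 and the data frame dat are placeholders; the residual-centered product indicators are built with the indProd helper from the semTools package):

# Sketch: latent interaction via residual-centered product indicators.
# Indicator names and the data frame 'dat' are placeholders.
library(lavaan)
library(semTools)

# Build matched, residual-centered product indicators (named e.g. x1.z1)
dat2 <- indProd(dat, var1 = c("x1", "x2", "x3"), var2 = c("z1", "z2", "z3"),
                match = TRUE, meanC = FALSE, residualC = TRUE, doubleMC = FALSE)

model <- '
  X  =~ x1 + x2 + x3
  Z  =~ z1 + z2 + z3
  XZ =~ x1.z1 + x2.z2 + x3.z3   # latent interaction term
  Y  =~ y1 + y2 + y3
  Y  ~ X + Z + XZ
'
fit <- sem(model, data = dat2)
summary(fit, standardized = TRUE)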
Steinmetz, H., Davidov, E., & Schmidt, P. (2011). Three approaches to estimate latent interaction effects: Intention and perceived behavioral control in the theory of planned behavior. Methodological Innovations Online, 6(1), 95-110.