I have found that items formatted consistently (e.g., requiring responses with the same number of response options, presented in the same order) are generally easier for respondents to follow and answer (thereby reducing the likelihood of inconsistent response patterns in the instrument). However, this may depend on the nature of the construct and the characteristics of the respondents.
That said, I am not aware of any "rules of thumb".
This depends on the constructs you are using. One can certainly find arguments for adopting a consistent format (comparability, ease for respondents, ...).
In principle, however, your scales should be operationalised with the objective of measuring your constructs in the most valid way. That objective is normally independent of the other constructs.
Not necessarily. For example, many measurement instruments are validated with 7-point Likert scales, while others are validated with 5-point scales. Is your question whether any proposed model consisting of a mix of independently validated scales (of both types) would be infeasible to test? Your inference is that the ordering may affect the interaction/construct. This is of course possible, but it can be tested and measured using a range of techniques, such as structural equation modelling while observing AGFI/GFI fit indicators. I have personally supervised and examined researchers' work, on many occasions, in which mixtures of validated 5-, 6-, 7- and 9-point scale instruments were used in explanatory models. Provided there is appropriate reliability, validity and testing of the constructs/models in place, the use of mixed scales is widely accepted.
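To illustrate the reliability part of that answer, here is a minimal sketch (my own illustration, not from the answer above) of computing Cronbach's alpha in Python for a respondents-by-items matrix; each construct's items would be checked this way regardless of whether its scale has 5, 7 or 9 points:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses to a 3-item construct on a 5-point scale
data = np.array([[4, 5, 4],
                 [2, 2, 3],
                 [5, 5, 5],
                 [3, 2, 2],
                 [4, 4, 5]], dtype=float)
print(round(cronbach_alpha(data), 3))
```

A conventional rule of thumb treats alpha >= 0.7 as acceptable, though the threshold is debated.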
Interesting. If the constructs are to be cross-compared with other studies, the scale should mirror those surveys. The scale choice also depends on the level of detail required in the survey. I find that beyond 5 points, respondents lose track of the differences between elements of the scale, albeit this issue is more common in social themes.
A further issue concerns the type of analysis one wants to run on the empirical data. For example, if the data are used in an econometric specification, one may use a standard 5-point Likert scale (e.g. with an ordered logit), which can also be reduced to a dichotomous variable (i.e. 1 if the response is 4 or 5 on the Likert scale, 0 otherwise) for probabilistic modelling. Besides, homogeneous scales are easier for respondents to grasp.
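The dichotomisation described above is a one-liner; as a sketch (my own illustration, with made-up responses), recoding a 5-point Likert variable to 1 for the top two categories and 0 otherwise:

```python
import numpy as np

# Hypothetical 5-point Likert responses from six respondents
likert = np.array([1, 3, 4, 5, 2, 4])

# 1 if the response is 4 or 5, 0 otherwise (for e.g. a logit/probit model)
binary = (likert >= 4).astype(int)
print(binary.tolist())  # → [0, 0, 1, 1, 0, 1]
```

The resulting 0/1 variable can then serve as the outcome in a binary probabilistic model instead of the ordered one.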