I know there is a push toward interpreting effect sizes over p-values. Does low effect size always mean there's nothing interesting going on with the data even though there is significance?
Sometimes people distinguish between what is "substantively" significant versus statistically significant, but that is always a matter of personal interpretation and judgement.
I can think of at least two reasons why small effects might be statistically significant. One is a large sample size, because that lowers the standard errors of all your coefficients. The other is how the independent variable is scaled -- such as income measured in dollars rather than thousands of dollars.
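To make the scaling point concrete, here is a small simulated sketch (made-up variable names, not anyone's real data): rescaling income from dollars to thousands of dollars multiplies the coefficient by 1000 but leaves the p-value untouched.

```python
# Sketch: rescaling an IV changes the coefficient size, not the p-value.
# Simulated data; 'income' and 'outcome' are hypothetical names.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
income_dollars = rng.normal(50_000, 15_000, n)
outcome = 0.00002 * income_dollars + rng.normal(0, 1, n)  # tiny per-dollar effect

for scale, label in [(1, "dollars"), (1000, "thousands of dollars")]:
    x = sm.add_constant(income_dollars / scale)
    fit = sm.OLS(outcome, x).fit()
    print(f"income in {label:>22}: coef={fit.params[1]:.6f}, p={fit.pvalues[1]:.4g}")
# The coefficient grows by a factor of 1000; the p-value is identical.
```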
Can you tell us more about the specific case you are working on?
A large sample size can often lead to significant results for small effects. A small effect can still be important, depending upon the context and other possible explanatory factors.
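As a rough illustration (simulated data, not your study): the same tiny effect that is nowhere near significant at n = 100 becomes highly significant at n = 100,000, simply because the standard error shrinks.

```python
# Sketch: a fixed, tiny effect becomes "significant" once n is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_diff = 0.05  # a very small standardized mean difference

for n in (100, 100_000):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_diff, 1.0, n)
    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"n={n:>7}: Cohen's d ≈ {d:.3f}, p = {p:.4g}")
```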
David, I ran a multiple logistic regression where my IVs were cognitive ability subtests (specifically verbal, nonverbal, and quantitative) and the criterion was ability profile on the Strong Interest Inventory (categorical, 6 different profiles). The sample size was 134 students.
There is yet another possibility: if you have a large number of observed variables and no theory, you will also get significant results in small samples! For example, with 20 random variables you would expect about 1 of them to come out significant at the 5% level by chance alone - although with a very low effect size. http://io9.gizmodo.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800
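If you want to see this for yourself, here is a small simulation sketch (pure random numbers, no real data): regress a purely random outcome on 20 unrelated random predictors and count how many come out "significant" at the 5% level.

```python
# Sketch: regress a random outcome on 20 unrelated random predictors.
# With a 5% threshold, roughly one predictor per run is "significant" by chance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, k = 200, 20
X = rng.normal(size=(n, k))
y = rng.normal(size=n)                        # outcome is pure noise, unrelated to X

fit = sm.OLS(y, sm.add_constant(X)).fit()
false_hits = (fit.pvalues[1:] < 0.05).sum()   # skip the intercept
print(f"'significant' predictors out of {k}: {false_hits}")
```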
"statistically significant results" are results for which a significance test of a null hypothesis gives a "p-value" that small enough to make you deciding to take your experimental factor "serious" as being useful w.r.t to some explanation or prediction.
Note that this decision is not dictated by anything. It is your judgement and remains your judgement. And your decision does not actually depend only on the p-value. Your decision should simply be "reasonable" in light of the data, the experimental conditions, and the scientific background. Calculating a p-value may give you some clue, some help, especially in cases where the (functional, practical) relation of the data to the hypothesis is hard to understand - but its absolute value is not a strict criterion for a decision.
The p-value is a "statistical signal-to-noise ratio". In principle, it tells you how good one can recognize the "statistical signal" (i.e. a deviation of a model that fits the data against a similar model with the restriction on the null hypothesis) relative to the "statistical noise" (i.e. the expexted variability of this deviation).
So you can look at it from a rather non-statistical perspective: you measure some effect. However, replicates of your measurement show considerable variation. You would take your mean effect seriously when you find that it is considerably larger than the variability between your replicate measurements.
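A minimal numerical sketch of that signal-to-noise idea (made-up replicate measurements): the one-sample t statistic is literally the mean effect divided by the standard error of the mean.

```python
# Sketch of the "signal-to-noise" view: t = mean effect (signal) / SE of the mean (noise).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
replicates = rng.normal(0.4, 1.0, 30)   # hypothetical replicate measurements

signal = replicates.mean()
noise = replicates.std(ddof=1) / np.sqrt(len(replicates))
t_manual = signal / noise
t_scipy, p = stats.ttest_1samp(replicates, 0.0)
print(f"signal/noise = {t_manual:.3f}, scipy t = {t_scipy:.3f}, p = {p:.4g}")
```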
Thank you all for your responses. They've given me some good things to look into. As of right now my results are showing significance just for nonverbal cognitive ability and not the other two subtypes. Would the effect size change at all if I took the other two subtypes out of the model? Or is that something that would be unaffected?
You can try that, as the subtests of IQ tests most likely correlate quite highly, which means they take away a certain amount of explained variance from each other. But, as others have already said: statistical number crunching means nothing without theoretical explanations :)
You should examine the basic correlation matrix to find out how strongly the three independent variables are related to each other and to the dependent variable. If the three IVs have reasonably strong inter-correlations, then you can certainly expect the size of the coefficients to change, depending on which ones you enter into the analysis. This is known as multicollinearity.
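If you work in Python, a sketch along these lines might look as follows. I'm assuming a pandas data frame df with placeholder column names for your three IVs; adjust to whatever your data actually uses.

```python
# Sketch: inspect inter-correlations and variance inflation factors (VIFs)
# for the three cognitive-ability IVs. Column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# df = pd.read_csv("your_data.csv")  # hypothetical path; df is assumed to hold the IVs
iv_cols = ["verbal", "nonverbal", "quantitative"]
print(df[iv_cols].corr())                       # pairwise correlations among the IVs

X = sm.add_constant(df[iv_cols])
for i, col in enumerate(iv_cols, start=1):      # index 0 is the constant, so skip it
    vif = variance_inflation_factor(X.values, i)
    print(f"VIF({col}) = {vif:.2f}")            # VIFs well above ~5 suggest collinearity
```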