Were you doing a significance test or a hypothesis test? In any case, the answer to your question is: no. But for different reasons. "Non-significant" in a significance test means that the data are not sufficient to support any conclusion. Discussing a "trend" would be rather stupid then. "Non-significant" in a hypothesis test means accepting the null hypothesis with the confidence specified by the power. You should then act as if there was no relevant slope. Talking about a "trend" in this case is stupid, too.
Please see the attached file. This is a latitudinal gradient of biodiversity. Solid line: polynomial trendline (non-significant, p = 0.12); dashed lines: the 95% confidence interval. Does it indicate that the gradient is not simple and that maximal values occur around the tropics (about 25 degrees in the northern and southern hemispheres)? Or do I have insufficient data to discuss it?
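The attached data aren't available here, but the kind of fit being discussed (a quadratic trend in diversity vs. latitude, with a significance test on the curvature term) can be sketched with synthetic stand-in values. Everything below - the latitudes, diversity values, and coefficients - is invented for illustration, not taken from the actual samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in data: latitudes spanning roughly the sampled
# transect and diversity values with a mild hump (all invented)
lat = rng.uniform(-50, 48, 60)
div = 3.0 - 0.001 * lat**2 + rng.normal(0, 0.5, 60)

# Design matrix for the quadratic model: intercept, lat, lat^2
X = np.column_stack([np.ones_like(lat), lat, lat**2])
beta, *_ = np.linalg.lstsq(X, div, rcond=None)

n, k = X.shape
resid = div - X @ beta
s2 = resid @ resid / (n - k)                   # residual variance
cov = s2 * np.linalg.inv(X.T @ X)              # coefficient covariance matrix
se = np.sqrt(np.diag(cov))
t_stat = beta / se
p = 2 * stats.t.sf(np.abs(t_stat), df=n - k)   # two-sided p-values

print("quadratic coefficient:", beta[2], "p-value:", p[2])
```

The sign and p-value of the lat^2 coefficient are what carry the "hump around the tropics" claim; whether a p of 0.12 for that term counts as evidence is exactly the question under discussion below.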
Just looking at your data in the plot, there is not much to go on. You fit a quadratic model to your data. What lines of latitude and longitude did you take the data from?
I studied planktonic ciliates in the Atlantic Ocean. Samples were taken between the Falkland Islands (= Malvinas, 50S 61W) and the entrance of the English Channel (48N 5W).
Sorry, but I repeat my statement (although already downvoted): if you do a test, you had particular assumptions and requirements. If the test was done according to Neyman (which I doubt, since it looks underpowered), you have a defined power specifying the confidence with which you then have to accept the null hypothesis (so there is no association between latitude and the diversity index) when the result is "not significant". Full stop. If the test was done according to Fisher: would a "1 in 8" finding surprise you (and others) enough to go on exploring the association in question? (I don't think so, and you already said "no" - so this is it). Full stop.
You are now discussing your data as if you had never tested. So you deny that you had particular assumptions and requirements. You simply wish to ignore this, and this is quite inconsistent. Your behaviour fits much better to an explorative analysis (which I think it actually was!). So describe your data. Argue with expertise. Why would it be interesting to consider a higher diversity index around 25° (N/S) (about 1 point higher than at around 0° and 50°; is this difference relevant?)?
Testing is about "inductive behavior". If you go down this line, you have to behave according to the rules, otherwise you lose any control.
Is there a theoretical/physical reason to expect a quadratic model? - I see confidence intervals as more useful than significance. - Perhaps you could use more data, and an additional regressor, though only if it really helps.
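The remark that confidence intervals are more useful than significance can be made concrete: a 95% CI for the quadratic coefficient reports the magnitude and uncertainty of the curvature rather than a bare verdict. The estimate, standard error, and degrees of freedom below are hypothetical, chosen only so that the implied two-sided p-value comes out near the reported 0.12:

```python
from scipy import stats

# Hypothetical numbers: estimated quadratic coefficient, its standard
# error, and residual degrees of freedom (all invented so that the
# implied two-sided p-value is close to the reported p = 0.12).
beta_hat = -0.0010
se = 0.00063
df = 57

t_crit = stats.t.ppf(0.975, df)                       # 95% two-sided quantile
ci = (beta_hat - t_crit * se, beta_hat + t_crit * se)
p = 2 * stats.t.sf(abs(beta_hat / se), df)

print("95% CI:", ci)        # interval straddles zero
print("p-value:", round(p, 3))
```

An interval straddling zero says the same thing as "non-significant", but it additionally shows how large a (negative or positive) curvature is still compatible with the data.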
Jochen Wilhelm – sorry for the late response. I applied Fisher's approach. Is it possible that Neyman's approach would demonstrate significance?
James Knaub – many groups of marine organisms display a pattern of increase from the poles toward the equator. Some groups also show a decrease around the equator. So I tried a quadratic model. Unfortunately, some groups display the gradient in the northern hemisphere only, and it seems that I have insufficient data.
Andrew Ekstrom – all samples were taken from surface waters.
Krzysztof, you seem to have misunderstood my post...
Fisher and Neyman were of considerably different opinions about the whole testing story. There is no logically sane compromise, and switching from one to the other is like converting to a different religion.
Unfortunately, the word "significance" is used in both approaches. But it only has a sensible meaning in Fisher's interpretation. Fisher is testing "significance", whereas Neyman denies the existence (or estimability) of "significance" - he is testing hypotheses to justify an (inductive) behaviour (given desired maximum rates for wrong decisions).
Neyman's strategy is to reject H0 when the test statistic falls into the rejection region. The test statistic is seen as a random variable with a known distribution under H0; any particular observed value is neither interesting nor interpretable. Only the frequentist property that (under H0) a particular fraction will accidentally fall into the rejection region (in the long run) counts. The unfortunate confusion comes from the bad habit of calling this (the statistic falling into the rejection region) a "significant" result.
Actually, however, "significance" only matters in the Fisherian setup. But there, "significance" is not present or absent - it is a continuous metric, from non- over barely... slightly... moderately... quite... highly... very highly... to extremely significant. Nothing about a "demonstration of significance". The point is just to *calculate* a metric for the significance (the p-value) and then judge whether it is "significant enough" for the purpose.
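This graded, Fisherian reading of the p-value can be illustrated directly: the same test yields a continuum of evidence, not a yes/no answer. The t statistics and degrees of freedom below are hypothetical (the middle value is chosen to match a p near 0.12):

```python
from scipy import stats

# Hypothetical t statistics on 57 degrees of freedom, from weak to
# strong evidence. The p-value shrinks continuously; there is no
# built-in cutoff in Fisher's reading.
t_values = (0.5, 1.58, 2.5, 4.0)
ps = [2 * stats.t.sf(t, df=57) for t in t_values]

for t, p in zip(t_values, ps):
    print(f"t = {t}: p = {p:.4f}")
```

Whether any of these values is "significant enough" is a judgment for the given purpose, not a property of the number itself.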
Also following Neyman's strategy there is nothing like a "demonstration of significance" as you requested. Either you reject H0 or you accept it, depending on whether or not the calculated test statistic falls into the rejection region. The rejection region and the sample size have to be given in advance in order to control the error rates for false decisions. If you have not defined these error rates (and, thus, not calculated how large a sample you have to take!), the Neyman approach is not an option anyway.
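A Neyman-style decision can be sketched in a few lines: fix alpha before looking at the data, derive the rejection region from it, and act on the binary outcome. The numbers here are hypothetical (df = 57, with an observed t chosen to correspond to a p near the reported 0.12):

```python
from scipy import stats

# Neyman-style decision rule: fix the error rate alpha *before* looking
# at the data, derive the rejection region, then act on the binary
# outcome. All numbers are hypothetical (df = 57; t_obs chosen to
# correspond to p ~ 0.12).
alpha = 0.05
df = 57
t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value

t_obs = 1.58                              # hypothetical observed statistic
reject_h0 = abs(t_obs) > t_crit
print("reject H0:", reject_h0)
```

Since 1.58 falls short of the critical value, the rule says accept H0 and act as if there were no association - with no "trend" left to discuss, which is exactly the point made above.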