I don't understand what you mean by "how to set the parameter to get the down-regulated genes". You measure gene expression under two conditions. One of these conditions is selected as the "reference", "baseline" or "control" condition and is compared to the "treatment" or "experimental" condition. One would call a gene "down-regulated" (in response to the treatment) when its expression under the treatment condition is lower than under the control condition.
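As a minimal sketch of that definition (in Python; the expression values and the use of a log2 fold change are only illustrative assumptions, not part of any particular tool):

```python
import numpy as np

# Hypothetical normalized expression values for ONE gene, three replicates each
control = np.array([100.0, 110.0, 95.0])    # reference / baseline / control condition
treatment = np.array([40.0, 55.0, 48.0])    # treatment / experimental condition

# A common way to express the direction of regulation is the log2 fold change
log2_fc = np.log2(treatment.mean() / control.mean())
direction = "down-regulated" if log2_fc < 0 else "up-regulated"
print(f"log2 fold change = {log2_fc:.2f} -> {direction}")
```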
Regarding the p-value: a p-value is the probability of obtaining test statistics more extreme than the one computed from your sample, within a particular statistical model. You can see this as a kind of "statistical signal-to-noise ratio": the lower the p-value (the closer it is to zero), the more clearly the "statistical signal" stands out above the "statistical noise" in your data. The signal is evaluated with respect to a particular statistical hypothesis. The p-value is thus a measure of how surprised one should be to observe the sample data under the assumption that the statistical model and hypothesis are actually correct. P-values are calculated as the result of significance tests that aim to reject the tested hypothesis when the observed data turns out to be very unexpected under this hypothesis.

In the context of gene regulation, one may set up a statistical model predicting the gene expression under two conditions. Such a model might include a term expressing the expected difference in gene expression between these conditions. A testable hypothesis within this model is that this expected difference is zero. The p-value for this hypothesis gives the probability of "more extreme data" in a model assuming that the expected difference is zero. If the observed data is very unexpected under that assumption, one may conclude that the model without this restriction (i.e. a model that allows a non-zero expected difference in expression) is a better model.
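To make this concrete, here is a rough sketch of such a test in Python with scipy (the simulated log-scale expression values, the group means and the choice of a plain two-sample t-test are all assumptions made for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log-scale expression of one gene, 3 replicates per condition;
# a true difference of -1 between the conditions is built into the simulation
control = rng.normal(loc=8.0, scale=0.3, size=3)
treatment = rng.normal(loc=7.0, scale=0.3, size=3)

# Two-sample t-test of the hypothesis "the expected difference is zero"
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value means: data this extreme would be surprising
# if the expected difference really were zero.
```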
As mentioned above, a p-value is a kind of "surprise index" for the observed results under the assumption of a particular statistical model and hypothesis. This value is usually interpreted as the single outcome of a single, planned comparison. In such a particular comparison, we would be quite surprised to get a p-value as small as 0.01, say, if the model and the tested hypothesis were actually correct. However, if we do many comparisons, we would not be very surprised to get one or even several small p-values. If you know that a p-value of 0.01 is the smallest p-value obtained from a series of many tests, you should not be as surprised. You may find it illustrative to think about winning a lottery: if you ask one person to play the lottery, it is unexpected that this person will win when they are just guessing. The observation that the person actually has won is surprising under the hypothesis that they were just guessing. You can calculate a p-value for this case and it will be tiny. However, if you let thousands of people play, some of them will almost surely win - that's not unexpected, not surprising. The p-value for each win is still the same tiny one for each winner, so now you have a considerable discrepancy between the meaning and the value of "p". A solution to this problem is to "adjust" or "correct" the p-values so that they are again a sensible "surprise index". A frequently used correction changes the question from "how likely is it to observe more extreme data in this particular case?" to "how likely is it to observe more extreme data in at least one of the considered cases?"
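A small simulation may illustrate this (Python again; the 1000 "genes", the sample sizes and the Bonferroni-style adjustment are arbitrary choices made for the sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests = 1000

# 1000 "genes" for which the tested hypothesis is true in every single case:
# both conditions have the same expected expression
control = rng.normal(size=(n_tests, 3))
treatment = rng.normal(size=(n_tests, 3))
p_values = stats.ttest_ind(treatment, control, axis=1).pvalue

print("smallest raw p-value:", p_values.min())        # often well below 0.01, purely by chance
print("raw p-values below 0.01:", int((p_values < 0.01).sum()))

# Bonferroni-style adjustment: answers "how likely is at least one result this
# extreme among all n_tests comparisons?" (adjusted values are capped at 1)
p_adjusted = np.minimum(p_values * n_tests, 1.0)
print("smallest adjusted p-value:", p_adjusted.min())  # typically no longer "surprising"
```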
The over-/under-represented threshold may be defined by you. Some authors use a fold change of about 1.5 to 3 times the expression in the control condition. If you have 3 replicates from one condition, you can obtain a p-value using a simple t-test comparing them with the 3 replicates from your control condition. The adjusted p-value can then be obtained using something like a Bonferroni correction.
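Putting these pieces together, a rough sketch of that workflow could look like this (everything here - the simulated log2-expression matrix, the 1.5-fold cut-off and the 0.05 significance level - is an assumption for illustration, not a recommendation for your data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes = 200

# Hypothetical log2-expression matrix: 3 control and 3 treatment replicates per gene
control = rng.normal(loc=8.0, scale=0.3, size=(n_genes, 3))
treatment = rng.normal(loc=8.0, scale=0.3, size=(n_genes, 3))
treatment[:20] -= 4.0                      # make the first 20 genes strongly down-regulated

log2_fc = treatment.mean(axis=1) - control.mean(axis=1)        # difference of means on the log2 scale
p_values = stats.ttest_ind(treatment, control, axis=1).pvalue  # simple per-gene t-test
p_bonferroni = np.minimum(p_values * n_genes, 1.0)             # Bonferroni correction

# e.g. "at least 1.5-fold lower than control" is a log2 fold change below -log2(1.5)
down = (log2_fc < -np.log2(1.5)) & (p_bonferroni < 0.05)
print("genes called down-regulated:", np.where(down)[0])
```

Note that with only 3 replicates per group a Bonferroni correction is quite conservative, so not every truly changed gene will necessarily pass; less conservative adjustments (e.g. Benjamini-Hochberg) are commonly used in practice.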