If you already know all the primes below P, then an optimized implementation of the sieve of Eratosthenes is quite possibly the fastest known method in any case, I think.
The answer may depend on what you mean by "the next prime number": do you mean that you know with certainty that the number is prime, or are you satisfied with the result of a test which fails only rarely, suggesting that the number is prime? (The latter is often called "probably prime", a misleading phrase, since the number is certain to be either prime or composite.) If you are satisfied with probabilistic tests, this will speed things up a bit.
In either case, the following is true: by the prime number theorem, the average distance between primes near n is about log(n), that is, proportional to the size (number of digits) of n. Hence you will typically have to examine about log(n) numbers on average before identifying the next prime. Most of these can be ruled out, as Esa Sakkinen suggests, by sieving by small primes (is the candidate even, divisible by 3, 5, etc.).
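To see the log(n) heuristic in action, here is a minimal sketch (assuming SymPy is installed; the choice of n = 10^12 is just an example of mine):

```python
import math
from sympy import nextprime  # assumed available, used only for the demo

# The prime number theorem suggests the gap to the next prime near n
# is about log(n) on average; compare one actual gap against log(n).
n = 10**12
p = nextprime(n)
print(p - n, round(math.log(n), 1))  # one gap vs. log(10^12) ~ 27.6
```

Individual gaps fluctuate a lot, of course; the log(n) figure is only an average.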
Since Agrawal, Kayal and Saxena showed that primality testing (deterministic, not "probabilistic") is polynomial in the size of the input candidate, this means that for a randomly chosen n, finding the next prime after n will typically take time polynomial in log(n).
The fly in the ointment, however, is that we don't have good upper bounds on the distance between consecutive primes. It is still open, for example, to show that for large enough n, there is always a prime between n^2 and (n+1)^2.
If we believe that Cramér's conjecture is true, then we are better off (and since you said you were interested in heuristics, let us believe it for now): roughly speaking, it says that the gaps near n are of size at most roughly c log(n)^2. Hence your algorithm, using, say, AKS to determine primality, would have running time at most about log(n)^2 times the cost of one AKS test, which is still polynomial in log(n) (the AKS cost dominates).
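Putting rough numbers on that (a heuristic sketch only: Cramér's bound for the gap, and the Lenstra-Pomerance variant of AKS at about Õ(log^6 n) per test):

```latex
T(n) \;\le\; \underbrace{c\,(\log n)^{2}}_{\text{candidates, by Cram\'er}}
  \cdot \underbrace{\tilde O\!\left((\log n)^{6}\right)}_{\text{one AKS test}}
  \;=\; \tilde O\!\left((\log n)^{8}\right)
```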
Following Neil Calkin's thread, may I call your attention to the recent advances in the study of "prime gaps"? Define G(X) = max (p_{n+1} - p_n) for two consecutive primes p_n and p_{n+1} which are less than X. The recent results (improving a classical lower bound of Rankin) show that G(X) > c.logX.(log2X.log4X)/log3X for some constant c, where logk means log composed k times with itself. For further details and developments, see e.g. T. Tao's blog.
Thong Nguyen Quang Do points out the beautiful and important recent work on prime gaps --- very well worth reading. Unfortunately, it doesn't give any insight into upper bounds on the size of long gaps. The lower bounds of just a little more than log(x) he cites show that one will often have to eliminate that many composite numbers. An interesting question, to which I don't immediately know the answer, is this: most of these composites will be trivial to eliminate, but what fraction of them will be as hard to eliminate as showing a comparably sized number is prime?
What you said implies that there is a deterministic function that computes how many prime numbers lie between two given numbers. But all studies say that the behavior of the primes is pseudorandom. For instance, F(12,34) = 6.
Thong's answer is correct --- what do you mean by a "deterministic function"? If we define q(x,y) = pi(y) - pi(x), is that a deterministic function? It is certainly determined precisely for any pair (x, y).
Since I can't tell from the original question whether this was a theoretical or practical-code question, I'll answer from a practical point of view, assuming you actually want fast ways to find the next prime, rather than asymptotic but not-for-practical-use discussions. There are lots of heuristics related to performance. We've got two issues that can be separated.
(1) how to quickly find the next probable prime.
(2) proving primality of the result if desired.
Note as well that the *size* of the input makes an enormous difference in what the fastest solution is. The method used for small numbers (e.g. 64-bit) is quite different than what's best for thousand-digit numbers, and there are different libraries that are faster once we get into the 5000+ digit range.
For tiny numbers, say under 10000, we can use a table or a wheel-sieve bitmask so only ~330 bytes are needed.
If you have a sieve handy and you're doing multiple calls or have another use for small primes, this can be extended to 1M quite easily. It's not worth it if you're sieving for every input, and the memory use makes larger amounts unpalatable.
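A minimal sketch of this small-range approach in Python (plain sieve stored as a bytearray, sized to the 1M figure above; a real implementation would pack bits and use a wheel):

```python
LIMIT = 1_000_000
is_comp = bytearray(LIMIT + 1)   # 0 = prime candidate, 1 = composite
is_comp[0] = is_comp[1] = 1
for i in range(2, int(LIMIT**0.5) + 1):
    if not is_comp[i]:
        is_comp[i*i::i] = b"\x01" * len(is_comp[i*i::i])

def next_prime_small(n):
    """Next prime after n; valid while that prime is still <= LIMIT."""
    k = n + 1
    while is_comp[k]:
        k += 1
    return k

print(next_prime_small(9973))  # -> 10007
```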
For inputs larger than these small sizes and less than some threshold (120 bits with my code, your mileage may vary), I'd do a simple wheel plus a fast primality test (see #2). It's trivial to skip evens or just check the 2 admissible residues mod 6, and IMO worth it to look at an 8/30 wheel. If you're using bigints, you can get some speedups by taking the input mod 23#, so your wheel and some pretests run on a native 64-bit value.
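A sketch of the wheel idea (the 8 residues mod 30 coprime to 2, 3, 5), with sympy.isprime standing in for the fast probable-prime test of #2; a real implementation would add the native-word tricks described above:

```python
from sympy import isprime  # stand-in for a fast M-R/BPSW test (see #2)

# The 8 residue classes mod 30 not divisible by 2, 3 or 5; everything
# else is composite (or is 2, 3 or 5 itself) and is skipped outright.
WHEEL = (1, 7, 11, 13, 17, 19, 23, 29)

def next_prime_wheel(n):
    if n < 29:                      # tiny cases: just scan
        return next(k for k in range(n + 1, 32) if isprime(k))
    base = (n // 30) * 30
    while True:
        for r in WHEEL:
            c = base + r
            if c > n and isprime(c):
                return c
        base += 30

print(next_prime_wheel(10**18))
```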
At some point (e.g. more than 120 bits) we can improve things by using a partial sieve out to, say, 30 merits (a merit here is the gap size divided by log(n), so this is a window of about 30 log(n)). We then test the surviving candidates using our fast primality test (#2) without pretests, since we already know they have no small factors. In case we hit a huge gap, we repeat this as necessary (30+ merit gaps are rare, but of course they exist).
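A sketch of that partial sieve (a window of ~30 merits sieved by primes below an arbitrary cutoff of my choosing, survivors handed to the primality test; intended for n well above the cutoff):

```python
import math
from sympy import isprime, primerange  # isprime stands in for the test in #2

def next_prime_sieved(n, merits=30, cutoff=100_000):
    span = int(merits * math.log(n))            # window of ~30 merits
    while True:
        comp = bytearray(span)                  # offset k <-> candidate n+1+k
        for p in primerange(2, cutoff):
            first = (-(n + 1)) % p              # first offset divisible by p
            comp[first::p] = b"\x01" * len(comp[first::p])
        for k in range(span):
            if not comp[k] and isprime(n + 1 + k):
                return n + 1 + k
        n += span                               # rare: gap exceeded the window
```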
#2 Primality Testing
Below 64-bit, there are small Miller-Rabin base sets that are deterministic. Additionally, BPSW has no counterexamples below 64-bit and can be faster. Either will be a few *millions* of times faster than AKS. Typically we want to precede these with a little trial division, then do the fast M-R or BPSW. Below 32-bit, hashed single-test M-R is attractive; 33 to 64-bit can be done with various hashed M-R methods, non-hashed deterministic M-R, or BPSW. There are a few implementations that take advantage of x86_64 instructions and Montgomery form to speed things up quite a bit.
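For reference, a compact Miller-Rabin in Python; the first 12 prime bases are known to be deterministic for all 64-bit inputs (the hashed single-base and Montgomery tricks mentioned above are omitted for clarity):

```python
def miller_rabin(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Deterministic for all n < 2^64 with the default base set."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:               # write n-1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False            # base a witnesses that n is composite
    return True

print(miller_rabin(2**61 - 1))      # True (a Mersenne prime)
```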
Sorenson/Webster (2015) gives a deterministic M-R base set valid below ~2^81. Past that, BPSW is IMO the best choice for a probable-prime test. Hopefully you're using GMP. Once past 3000-5000 digits on x86_64, gwnum starts being faster, but coding with it is much harder.
Pretests (trial division) for larger numbers get interesting. On average this is quite useful; see Park (ISPEC 2005) or Menezes 4.45 for some discussion. How this is done depends on your bigint library. At some point remainder trees become worth using.
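One common bigint pretest is a single gcd against a primorial, which batches all the small trial divisions into one operation (a sketch; the 1000 cutoff is arbitrary and this assumes n is itself larger than the cutoff):

```python
import math
from sympy import primerange

# Product of all primes below 1000, computed once; one gcd then
# detects any prime factor below 1000 in a single bigint operation.
SMALL_PRIMORIAL = math.prod(primerange(2, 1000))

def has_small_factor(n):
    """For n >= 1000: True iff n has a prime factor below 1000."""
    return math.gcd(n, SMALL_PRIMORIAL) != 1

print(has_small_factor(10**100 + 1))
```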
If our result is less than 64-bit, we're done. If above, we have a few options:
1) BPSW has had no counterexamples in 35 years. Call it good enough. This is what most math packages do.
2) BPSW is great and all, but add some extra tests such as a few random-base M-R tests. This is more or less what FIPS 186-4 discusses, and some more paranoid math packages do this (a minimal sketch follows after this list).
3) we're pedantic mathematicians so we want the result proven. At this point we have lots of choices, noting that we're looking at 20+ digit numbers. If we're past 30k digits then we're almost certainly out of luck -- general-form proof methods take months to complete on current multi-processor hardware for this size.
a) if below the Sorenson/Webster (2015) limit of ~2^81 we can do the 12 Miller-Rabin tests (13 if we didn't already do base 2).
b) trial division to sqrt(n). Way too slow.
c) AKS. Way too slow, impractical vs. other proof methods below. Great for discussing theory though.
d) Special forms are quite unlikely to apply in this case, but it's cheap to check for LLR and Proth forms.
e) BLS75 methods (from the seminal 1975 paper looking at partial factoring of n-1 and/or n+1). If you have half-decent factoring code, this will be practical up to ~80 digits, and is *very* fast for numbers under ~200 bits. It doesn't scale well for larger inputs, but it's cheap to try on smaller ones.
f) APR-CL. Fast in practice -- used by Pari/GP for instance. It's almost polynomial, in that the exponent is c*log(log(log(n))) which grows very slowly. In fact the exponent is lower than AKS's 6 until the input has millions of digits, so AKS doesn't even have a *theoretical* advantage for any numbers we're actually computing.
g) ECPP. A very commonly used method for large inputs. It produces a certificate which can be independently verified -- something the other methods here (apart from BLS75) don't offer. Has been used up to ~30k digits. Expected polynomial time with a lower exponent than AKS.
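Back to option 2 above, here is the promised sketch of the "extra random bases" step, reusing the miller_rabin function from earlier (this strengthens confidence but is still not a proof):

```python
import random

def extra_mr_rounds(n, rounds=8):
    """Follow a BPSW/base-2 test with a few random-base M-R rounds;
    each round lets a composite survive with probability < 1/4."""
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if not miller_rabin(n, bases=(a,)):
            return False
    return True
```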
Regarding a sieve being the fastest method: other than for amortized time on very small numbers, I do not believe this is the case. A partial sieve (that is, a sieve run with early termination) can be used to remove composite candidates, but that is just an optimization of the wheel + primality-test method.
This becomes very obvious at large sizes. For instance if P = 10^1000 (with the next prime being 10^1000+453), then how would we use a sieve to find the next prime? The outer loop would have over 10^496 iterations! It's only a quarter second using a partial sieve plus primality tests. APR-CL and ECPP prove it in about 7 minutes on a single core.
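That example is easy to reproduce with SymPy, which uses essentially this partial-sieve-plus-test strategy (timing varies by machine and can run to a few seconds in pure Python):

```python
from sympy import nextprime

n = 10**1000
print(nextprime(n) - n)   # prints the gap to the next prime (453, per the above)
```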
For small inputs, of course, we can use a sieve to generate a list of primes and then get very fast answers with a binary search, or store the primes in a bitmask and scan bits for the next prime. But if we have to sieve a new range, then we're effectively doing efficient trial division across the range, and trial division doesn't scale well. Note that at small sizes the average distance to the next prime is small, so few primality tests have to be done. As the size increases, sieving has much more work to do.
Empirically, even at 10^5 using a sieve is slower than a wheel + primality tests using my software. But there are *many* pieces of code involved, so the crossover point will vary depending on relative speeds. Pari/GP uses the same basic method, as does SymPy.
Addendum: a sieve is pretty easy to implement; just be careful to read the Wikipedia page or another source and do it right. For small inputs this should be fine. Scaling to larger numbers gets more complicated because you want to sieve from P+1 to P+delta, not from 2 to P+delta. Still not hugely complicated. For primality tests, RosettaCode has examples of Miller-Rabin in at least 45 languages, and you can look up deterministic tests for ranges past where a full sieve would work. I'm not sure it really is more complicated once your solution needs to work well beyond, say, 1 million.
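A sketch of that P+1 to P+delta sieve as a standalone function (a full segmented sieve, so it needs the base primes up to the square root of the window's top and is only meant for moderately sized P):

```python
def primes_in_window(lo, hi):
    """All primes in [lo, hi] (lo >= 2), sieving only that window."""
    root = int(hi ** 0.5) + 1
    small = bytearray(root + 1)                  # sieve for the base primes
    small[0] = small[1] = 1
    for i in range(2, int(root ** 0.5) + 1):
        if not small[i]:
            small[i*i::i] = b"\x01" * len(small[i*i::i])
    base_primes = [i for i in range(2, root + 1) if not small[i]]

    window = bytearray(hi - lo + 1)              # index k <-> number lo + k
    for p in base_primes:
        start = max(p * p, ((lo + p - 1) // p) * p)
        window[start - lo::p] = b"\x01" * len(window[start - lo::p])
    return [lo + k for k, c in enumerate(window) if not c]

print(primes_in_window(10**6 + 1, 10**6 + 100))
```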
I suggest you look up my answer from yesterday to a related question by Mr Kalayn Hasda on ResearchGate; I think it will cover your case. The only condition is that you know all the primes, plus one more, up to the square root of the given number (and with my method it is not necessary that the given number be prime).
Since you did not ask how long it might take to find the next prime, but rather what the fastest technique is, and assuming you do not have an array of prime numbers at the start, my short answer is this:
Select any odd numbered value to begin. Call this initial value "num."
The candidate value, num, is tested by dividing it by all odd numbers (3, 5, 7, 9, etc.) up to sqrt(num). If any quotient comes out an integer, num cannot be prime, and num is therefore incremented by 2 (remember, num was odd to begin with).
If dividing by all odd numbers, smallest to largest, up to sqrt(num) never produces an integer quotient, then num must be prime.
And you can continue the process, adding 2 to each number you determine to be prime to get the next candidate to test. This will generate a long list of prime numbers greater than the num you started with.
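That procedure, written out as straightforward Python (odd trial divisors up to the square root, exactly as described; fine for small inputs):

```python
def is_prime_trial(num):
    """Trial-divide an odd num > 2 by odd numbers up to sqrt(num)."""
    d = 3
    while d * d <= num:
        if num % d == 0:
            return False
        d += 2
    return True

def next_prime_trial(n):
    """Next prime after n >= 2, scanning odd candidates only."""
    num = n + 1 if n % 2 == 0 else n + 2
    while not is_prime_trial(num):
        num += 2
    return num

print(next_prime_trial(10**6))   # cost per test grows like sqrt(num)
```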
Obviously, this isn't the fastest possible method, but only because you do not go in already knowing a long list of prime numbers. If you did have a long initial array of primes, you would test those in sequence, smallest to largest, until you exhaust that list of pre-known primes. And then, if you still hadn't determined whether your number was prime or not, you would have to proceed with the algorithm described above.
With the slower computers of years ago, having an array of pre-determined primes to begin with made quite a big difference. I've found that these days you can hardly tell the difference, given that the initial array can only be so large anyway; every environment puts practical limits on array size.
Another fascinating topic is so-called "prime pairs" (better known as twin primes). These are prime numbers with only one even number between them. You know, like 3 and 5. You'd think that with enormous prime numbers, such pairs would become extremely rare. But they aren't all that rare. For example,
8,683,187 is a prime number, and so is 8,683,189. So sure, you say, but I'll bet the next prime pair is far down the road. Right? Not so: the next pair is 8,683,217 and 8,683,219. Amazing. Between those two prime pairs, there was only ONE prime number: 8,683,201.
How totally weird is that? And it goes on this way. This is by no means an anomaly.
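Anyone curious can scan for such pairs with a few lines (a sketch assuming SymPy; the range is mine, chosen to bracket the numbers quoted above):

```python
from sympy import isprime

# Print every twin pair (p, p+2) in a small interval.
lo, hi = 8_683_100, 8_683_300
for p in range(lo | 1, hi, 2):       # odd candidates only
    if isprime(p) and isprime(p + 2):
        print(p, p + 2)
```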
I was asking the same question a while ago and did some work on this problem. I came up with one possible solution, but it needs more work; I'm sharing it in case it gives you some ideas. https://www.researchgate.net/publication/265014334_ON_A_SIMPLIFIED_CONSTRUCTION_METHOD_OF_AN_ARITHMETIC_NUMBER_SERIES
The attached article includes an efficient method to find the next prime starting from any number. Also, you can compute the nth prime number without using any previous prime number.
The methods shown in that paper are just sieving with a mod-6 wheel. This is not efficient -- the times reported are many orders of magnitude slower than modern methods used in math packages. E.g. nextprime for numbers in the 10^15 range can be done in a couple _micro_seconds, not hundreds of seconds. The nth prime can be found using fast prime-counting methods, so the 10^12th prime can be exactly determined in under a second.
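Both claims are easy to check at the small end with SymPy (the 10^12th-prime-in-a-second figure needs dedicated fast prime-counting code such as the primecount library, which I am only naming here, not showing):

```python
from sympy import nextprime, prime, primepi

print(nextprime(10**15))   # next prime past 10^15, effectively instant
print(prime(10**6))        # the 1,000,000th prime: 15485863
print(primepi(10**8))      # pi(10^8) = 5761455
```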
I agree; every day we have new results and new algorithms with better complexity. Sometimes you need to start at some level to show others how a primality test works using elementary number theory. The mentioned paper is useful at the undergraduate level, and there are hundreds of similar results available online. So, to complete your answer, as a computer person it is better to show an algorithm such as one based on the AKS result. For example, see "AKS Primality Test" on GeeksforGeeks (www.geeksforgeeks.org/aks-primality-test). Regards
The mentioned algorithm has high time complexity compared with the AKS algorithm, whose primality test has an upper-bound time complexity of only O(log(n)^10.5), and there are many newer modifications with better bounds.
To test whether a number is prime, first try dividing it by 2 and see if you get a whole number. If you do, it can't be a prime number. If you don't get a whole number, next try dividing it by the primes: 3, 5, 7, 11 (9 is divisible by 3), and so on, always dividing by a prime number. https://www.factmonster.com/math-science/mathematics/prime-numbers-facts-examples-table-of-all-up-to-1000
dCode uses an algorithm that performs a probabilistic primality test (the Miller-Rabin test) on each of the numbers greater than or equal to the requested number, then checks the result with a deterministic test. Example: the first prime number following 1000 is 1009. https://www.dcode.fr/next-prime-number