The question is pretty vague ("very large" is subjective), but I'm assuming based on the "RSA Cryptography" tag that you mean 1024 to 8192 bits and are interested in good probable-prime tests rather than proofs. Quick answer for RSA: multiple Miller-Rabin tests with random bases followed by a single strong Lucas test, with the FIPS 186-4 tables deciding how many M-R rounds to run. Quick answer for non-RSA: BPSW for non-proofs, APR-CL or ECPP for proofs.
Typically we first add a pretest looking for small factors, as this rejects the bulk of composites very quickly. This can be done by trial division against a list of small primes, or by a single gcd with a precomputed primorial, for example.
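For illustration, a minimal pretest in Python; the cutoff of 53, the primorial trick, and the function name are my own choices here (real implementations typically test against all primes below a few thousand):

```python
from math import gcd

# Product of the odd primes up to 53; one gcd against this catches any
# candidate divisible by one of them in a single operation.
_SMALL_ODD_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]
_PRIMORIAL = 1
for _p in _SMALL_ODD_PRIMES:
    _PRIMORIAL *= _p

def small_factor_pretest(n: int) -> bool:
    """Return True if n survives the cheap small-factor pretest."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    if n in _SMALL_ODD_PRIMES:          # tiny primes pass trivially
        return True
    return gcd(n, _PRIMORIAL) == 1      # any shared factor proves n composite
```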
Easiest: multiple Miller-Rabin tests with random bases. If this is for cryptographic use, see FIPS 186-4 for the suggested number of tests at given sizes and security levels. There are also known deterministic base sets that correctly classify all 64-bit numbers.
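A rough Python sketch of the random-base approach; the default of 40 rounds is just a placeholder, not a recommendation (for crypto use, take the round count from the FIPS 186-4 tables and draw the bases from a CSPRNG such as the `secrets` module):

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin with random bases: False = composite, True = probable prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)      # use a CSPRNG here for crypto work
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False                    # a witnesses that n is composite
    return True                             # probable prime for every base tried
```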
Better: add a Lucas test, preferably the strong version. It is anti-correlated with the M-R tests and costs about as much as 2 M-R tests, so it gives better results in practice. FIPS 186-4 also gives the number of M-R tests to run when combined with a Lucas test. For non-crypto work, a single base-2 M-R test plus a strong Lucas test makes up the BPSW primality test, which is quite popular for general probable-prime testing. It runs quickly, is deterministic, has been verified correct for all 64-bit numbers, and even the weakest variants have no known counterexamples for larger inputs (though we expect them to exist).
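A sketch of the strong Lucas test using Selfridge's "Method A" parameter selection, plus the BPSW combination; the helper names are mine, and a production version would also run the small-factor pretest first:

```python
import math

def _jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def _strong_prp_base2(n: int) -> bool:
    """Strong probable-prime test to base 2 (assumes odd n > 2)."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False

def strong_lucas_prp(n: int) -> bool:
    """Strong Lucas probable-prime test, Selfridge (Method A) parameters."""
    if n < 2 or n % 2 == 0:
        return n == 2
    r = math.isqrt(n)
    if r * r == n:
        return False                   # perfect squares would make the D search loop forever
    # Find D in 5, -7, 9, -11, ... with Jacobi(D, n) == -1.
    D = 5
    while _jacobi(D, n) != -1:
        D = -D - 2 if D > 0 else -D + 2
    P, Q = 1, (1 - D) // 4
    # Write n + 1 = d * 2^s with d odd.
    d, s = n + 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    def half(x):                       # division by 2 modulo the odd number n
        return ((x + n) // 2) % n if x % 2 else (x // 2) % n
    U, V, Qk = 1, P, Q                 # U_1, V_1, Q^1
    for bit in bin(d)[3:]:             # remaining bits of d, high to low
        U, V = (U * V) % n, (V * V - 2 * Qk) % n        # index doubling
        Qk = (Qk * Qk) % n
        if bit == '1':                                  # index + 1
            U, V = half(P * U + V), half(D * U + P * V)
            Qk = (Qk * Q) % n
    if U == 0 or V == 0:               # U_d == 0 or V_d == 0
        return True
    for _ in range(s - 1):             # V_{d*2^r} == 0 for some 0 < r < s
        V = (V * V - 2 * Qk) % n
        Qk = (Qk * Qk) % n
        if V == 0:
            return True
    return False

def bpsw(n: int) -> bool:
    """Baillie-PSW: small-prime check, base-2 strong PRP, then strong Lucas PRP."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    return _strong_prp_base2(n) and strong_lucas_prp(n)
```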
For primality proofs, there are various fast methods for special input forms, such as the Lucas-Lehmer test for Mersenne numbers. For tiny (64-bit) numbers we can just use BPSW, since it has been verified to have no counterexamples in that range. For somewhat larger numbers, the BLS75 methods work well if you have integer factoring code. APR-CL and ECPP are the recommended methods for large general inputs, with ECPP having done a few numbers in the 18k to 25k digit range. ECPP produces a certificate, unlike APR-CL or AKS. AKS is far too slow to be useful at any size.
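As a tiny example of a special-form proof, here is the Lucas-Lehmer test for M_p = 2^p - 1 in Python (it assumes the exponent p is itself an odd prime, which the caller has to check separately):

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer: for prime p, returns True iff the Mersenne number 2^p - 1 is prime."""
    if p == 2:
        return True                # M_2 = 3 is prime
    m = (1 << p) - 1               # the Mersenne number M_p
    s = 4
    for _ in range(p - 2):         # iterate s -> s^2 - 2 (mod M_p), p - 2 times
        s = (s * s - 2) % m
    return s == 0                  # M_p is prime iff the final term is 0
```

For example, `[p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)]` returns `[3, 5, 7, 13]`, correctly rejecting M_11 = 2047 = 23 · 89.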