I have large (~8-10 GB) trimmed paired-end datasets (split into two separate files). I have installed BLAST locally on a Linux machine and plan to run nucleotide alignments against a locally curated database. I want to catch distantly related species, which is why I intend to use blastn (rather than megablast). However, blastn is very resource-intensive and time-consuming.
Can I instead run megablast with altered parameters (e.g., a smaller word size, a relaxed percent-identity cutoff)? Would it be faster than blastn while still being sensitive enough to pick up divergent hits?
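For reference, these are the kinds of invocations I am weighing (file names, database name, and thread count are placeholders; the word size and identity cutoff are just example values I am considering, not tested settings):

```shell
# Option 1: blastn task - more sensitive (default word_size 11), but slow on large inputs
blastn -task blastn -query reads_1.fasta -db my_local_db \
       -outfmt 6 -num_threads 8 -out blastn_hits.tsv

# Option 2: megablast task with a smaller word size (default is 28) and a
# relaxed identity cutoff, hoping to recover sensitivity at megablast speed
blastn -task megablast -query reads_1.fasta -db my_local_db \
       -word_size 16 -perc_identity 70 \
       -outfmt 6 -num_threads 8 -out megablast_hits.tsv
```

Is the second approach a reasonable middle ground, or does lowering megablast's word size erase most of its speed advantage?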