It seems you are misinterpreting the term "draft". Genomes that cannot be closed are submitted as drafts, meaning the contigs cannot be joined end to end because of ambiguous regions. That is the main reason a genome remains at the draft stage. Sequencing long reads together with short reads can solve this problem; it just takes time, effort, and money. But there is no basis for saying that a draft genome is full of errors. That is simply a wrong interpretation.
You cannot directly compare raw data from different sequencing technologies. For any kind of comparison, the data must first be analysed, and in my understanding the final, assembled data are highly comparable. To understand this better, please read more about the different sequencing technologies.
I understand your point: you are unsure about the quality of the draft.
Trust me, if you carry out the assembly with high-quality FASTQ data, your drafts are good enough to analyse or to submit. For NCBI submission, you have to delete some contigs having
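As a sketch of the pre-submission cleanup step, here is how one might filter out short contigs from a FASTA file before submission. The `MIN_LEN` value of 200 is an illustrative assumption, not an official NCBI threshold; check the current submission guidelines for the real criteria.

```python
# Sketch: drop short contigs from a FASTA file before submission.
# MIN_LEN = 200 is an assumed placeholder, not an official NCBI value.
MIN_LEN = 200

def read_fasta(lines):
    """Yield (header, sequence) pairs from FASTA-formatted lines."""
    header, seq = None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line, []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def filter_contigs(lines, min_len=MIN_LEN):
    """Keep only contigs at least min_len bases long."""
    return [(h, s) for h, s in read_fasta(lines) if len(s) >= min_len]

# Toy example: a 400 bp contig and a 40 bp contig.
fasta = [">contig1", "ACGT" * 100, ">contig2", "ACGT" * 10]
kept = filter_contigs(fasta)
print([h for h, _ in kept])  # only the 400 bp contig survives
```

In practice you would stream a real multi-FASTA file line by line instead of a list, but the filtering logic is the same.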
On one hand, you can look at assembly statistics (N50, etc.), but those won't tell you whether the nucleotides are correct. On the other hand, you can check gene content and gene function content. For example, if your assembly is prokaryotic and you ran a gene prediction tool, check the number and length distribution of the predicted genes: an unusually high number of short genes likely indicates something is wrong. For eukaryotes, BUSCO seems popular for assessing functional completeness.
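The two checks above can be sketched in a few lines of Python: compute N50 from contig lengths, and flag an assembly whose predicted genes include an unusually large fraction of short ones. The 300 bp "short gene" cutoff and the 30% warning threshold are illustrative assumptions, not established standards.

```python
# Sketch: N50 from contig lengths, plus a short-gene sanity check.
# The short-gene cutoff (300 bp) and warning fraction (0.3) are assumptions.

def n50(lengths):
    """Smallest length L such that contigs >= L cover half the assembly."""
    lengths = sorted(lengths, reverse=True)
    half = sum(lengths) / 2
    total = 0
    for L in lengths:
        total += L
        if total >= half:
            return L
    return 0

def short_gene_fraction(gene_lengths, short=300):
    """Fraction of predicted genes shorter than `short` bp."""
    return sum(1 for g in gene_lengths if g < short) / len(gene_lengths)

# Toy contig lengths (bp):
contigs = [100000, 50000, 40000, 10000, 5000]
print("N50:", n50(contigs))  # -> 50000

# Toy predicted-gene lengths (bp):
genes = [900, 1200, 150, 200, 1500, 100, 250, 800]
if short_gene_fraction(genes) > 0.3:
    print("warning: many short genes; the assembly may be fragmented")
```

For real work, dedicated tools (e.g. QUAST for assembly statistics, BUSCO for completeness) do this far more thoroughly; the sketch only shows what the numbers mean.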