I have a huge oncoarray_genotype_XXXX.txt file (11.7 GB). I would like to convert this file into files suitable for analysis in the PLINK software, but I could not find out how to do this in R.
Is the format of your file somewhat similar to a multi-FASTA? If yes, you can use the software snp-sites (https://github.com/sanger-pathogens/snp-sites) to generate a .vcf and then convert it to PLINK format via PGDSpider (http://www.cmpg.unibe.ch/software/PGDSpider/).
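As a rough sketch of that pipeline, assuming the alignment is in aln.fa and that both tools are installed (the file names and the conversion.spid settings file, which PGDSpider requires you to create beforehand in its GUI, are placeholders):

```shell
# Call variant sites from a multi-FASTA alignment and write them as a VCF
snp-sites -v -o variants.vcf aln.fa

# Convert the VCF to PLINK PED with PGDSpider's command-line interface;
# conversion.spid is a settings file exported from the PGDSpider GUI
java -Xmx4g -jar PGDSpider2-cli.jar \
  -inputfile variants.vcf  -inputformat VCF \
  -outputfile variants.ped -outputformat PED \
  -spid conversion.spid
```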
I don't think it is a multi-FASTA file; it should be a list of SNPs, one per line.
In any case, if your problem is not memory, you might either split the file into chunks and load them separately (laborious; https://stackoverflow.com/questions/6144285/open-large-files-with-r) or use a memory-efficient package like data.table (steeper learning curve; https://cran.r-project.org/web/packages/data.table/index.html)
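A minimal R sketch of the chunked approach, assuming a tab-delimited file with one SNP per line (the file name and chunk size are placeholders to adapt to your data and RAM):

```r
# Read a huge text file in fixed-size chunks through a connection,
# so that only one chunk is held in memory at a time
con <- file("oncoarray_genotype_XXXX.txt", open = "r")
chunk_size <- 100000L  # lines per chunk; tune to your available memory

repeat {
  lines <- readLines(con, n = chunk_size)
  if (length(lines) == 0L) break  # end of file reached
  # parse this chunk into a data frame (assumes tab-separated fields)
  chunk <- read.table(text = lines, sep = "\t", stringsAsFactors = FALSE)
  # ... process or append results for this chunk here ...
}
close(con)
```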
If your problem is memory, your only chance is to process your big file line by line, and you can do this only if your desired output files (PED, MAP, etc.) do not contain aggregated information.
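A line-by-line sketch in R, assuming each input line holds one SNP and that each output line can be derived from its input line alone, i.e. no aggregation across lines is needed (the file names and the transform_line() helper are hypothetical placeholders for your actual parsing logic):

```r
# Stream the file one line at a time: read a line, transform it,
# write it out immediately, so memory use stays constant
transform_line <- function(line) {
  fields <- strsplit(line, "\t", fixed = TRUE)[[1]]
  fields[1]  # hypothetical: keep only the SNP identifier; replace with real logic
}

infile  <- file("oncoarray_genotype_XXXX.txt", open = "r")
outfile <- file("oncoarray_genotype.map", open = "w")

while (length(line <- readLines(infile, n = 1L)) > 0L) {
  writeLines(transform_line(line), outfile)
}

close(infile)
close(outfile)
```

Note this only works because each output record depends on a single input record; anything that needs totals or cross-line statistics would require a second pass or an aggregation step.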