For some time, a number of software packages have been developed to handle data sets larger than what can reside in memory. This is typically accomplished by: (a) using updating algorithms for summary statistics and covariances/correlations; (b) writing intermediate results to disk; and/or (c) using disk storage as virtual memory (sometimes managed by the OS, sometimes by the software itself). The main downside of (c) is that it makes processing noticeably slower, although SSDs reduce the lag compared with traditional hard drives.
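To make (a) concrete, here is a minimal R sketch, assuming an all-numeric CSV whose path and chunk size are purely illustrative: it accumulates column sums and cross-products one chunk at a time, so the full data set is never in memory at once.

chunked_cov <- function(path, chunk_size = 10000) {
  con <- file(path, open = "r")
  on.exit(close(con))
  readLines(con, n = 1)                          # discard the header line
  n <- 0; col_sums <- NULL; cross <- NULL
  repeat {
    chunk <- tryCatch(
      read.csv(con, header = FALSE, nrows = chunk_size, colClasses = "numeric"),
      error = function(e) NULL                   # read.csv errors at end of file
    )
    if (is.null(chunk) || nrow(chunk) == 0) break
    x <- as.matrix(chunk)
    if (is.null(cross)) {                        # initialise accumulators on first chunk
      col_sums <- numeric(ncol(x))
      cross    <- matrix(0, ncol(x), ncol(x))
    }
    n        <- n + nrow(x)
    col_sums <- col_sums + colSums(x)
    cross    <- cross + crossprod(x)             # running t(x) %*% x
  }
  means <- col_sums / n
  (cross - n * tcrossprod(means)) / (n - 1)      # sample covariance from the running sums
}

The resulting covariance matrix can then be handed to factanal() through its covmat argument (together with n.obs), so the factoring step itself never needs the raw data in memory.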
As an example, Stata offers versions that can handle up to 120,000 variables and 20 billion cases, yet require considerably less memory (e.g., 4 GB) to function satisfactorily (https://www.stata.com/products/).
Some of the R libraries for this have very large capacity, though if the largest object (for you, your data frame) exceeds about 35% of physical memory, virtual memory will likely be used during processing.
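As a rough back-of-the-envelope check, and assuming for illustration 5 million rows, 200 numeric columns, and 8 GB of installed RAM, you can estimate the footprint before building the data frame and compare it with that ~35% rule of thumb:

n_rows <- 5e6                                 # illustrative assumptions, not your actual data
n_cols <- 200
ram_gb <- 8                                   # assumed physical memory
approx_gb <- n_rows * n_cols * 8 / 1024^3     # 8 bytes per double, roughly 7.5 GB here
cat(sprintf("Estimated size: %.1f GB (about %.0f%% of %g GB of RAM)\n",
            approx_gb, 100 * approx_gb / ram_gb, ram_gb))

Anything well past that 35% mark is a sign the run will start paging to disk.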
One final point: for factor analysis, the data frame is two-dimensional, with the objects to be factored along one dimension and the replications along the other. So I wasn't fully clear on what the third dimension of your 3-D data set represents.
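If the third dimension is something like repeated measurement occasions, one way to get back to the two-dimensional form factanal() expects is to stack the slices so that each subject-by-occasion combination becomes a row; the dimensions below are only my guess at what the 3-D structure might be.

set.seed(1)
subjects <- 1000; variables <- 20; occasions <- 5
arr <- array(rnorm(subjects * variables * occasions),
             dim = c(subjects, variables, occasions))    # illustrative 3-D array
flat <- do.call(rbind, lapply(seq_len(occasions), function(k) arr[, , k]))
dim(flat)                                                # 5000 x 20, i.e. 2-D again
fa <- factanal(flat, factors = 3)

Whether stacking like this makes sense depends on what that third dimension actually is, which is what I'd need clarified.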