If you mean in your job script, one way is to use underpopulated nodes. For example, suppose you request 12 nodes with 16 cores each but run only 12 tasks on each node; the memory of the remaining 4 cores on each node is then available to the 12 cores you are using. You can do this with -N (nodes) and -n (total tasks): here -N 12 -n 144 gives 12 tasks per node. It also depends on the computer system you are using for your calculations.
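As a rough sketch, a SLURM job script for this underpopulation scheme might look like the following (partition, module, and executable names are placeholders; adapt them to your cluster):

```shell
#!/bin/bash
# Hypothetical SLURM script: request 12 nodes of 16 cores each,
# but launch only 12 MPI tasks per node, so each task can use
# the memory share of the 4 idle cores as well.
#SBATCH -N 12                  # 12 nodes
#SBATCH -n 144                 # 144 total tasks -> 12 per node
#SBATCH --ntasks-per-node=12   # state the per-node count explicitly

module load vasp               # placeholder module name for your system
srun vasp_std                  # launch VASP on the 144 allocated tasks
```

Whether you need `--ntasks-per-node` in addition to -N and -n depends on the scheduler configuration; on some systems -N and -n alone are enough to spread the tasks evenly.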
Some supercomputers have designated high-memory nodes, and the way you request them depends on how that particular system is configured. Depending on the system you are calculating on, setting the ISYM = 2 tag in the INCAR can also help reduce the memory requirements of your calculation.
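For the INCAR side, the change is just one line (shown here as a minimal fragment, not a complete INCAR):

```
ISYM = 2    ! memory-conserving symmetrization
```

Check the effect on your specific calculation, since changing the symmetry treatment can interact with other settings.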