In Fortran you have basically two options:
- static arrays (when you know their size at compile time)
--> e.g.:
INTEGER :: arr(1:50)
REAL :: arr2D(1:3,1:100)
- dynamic arrays (when their size is not known at compile time)
--> e.g.:
INTEGER, ALLOCATABLE :: darr(:)
REAL, ALLOCATABLE :: darr3D(:,:,:)
and later:
allocate(darr(1:n))
allocate(darr3D(x,y,z))
where n, x, y, z are the sizes of the array in the given directions/dimensions. Note that the last statement is equivalent to allocate(darr3D(1:x,1:y,1:z)).
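For example, a minimal sketch of the whole life cycle of such an array (the name darr3D and the sizes are just for illustration):
PROGRAM demo_alloc
  IMPLICIT NONE
  REAL, ALLOCATABLE :: darr3D(:,:,:)
  INTEGER :: x, y, z, ierr
  x = 10; y = 20; z = 30               ! sizes known only at run time
  ALLOCATE(darr3D(x,y,z), STAT=ierr)   ! same as ALLOCATE(darr3D(1:x,1:y,1:z))
  IF (ierr /= 0) STOP 'allocation failed'
  darr3D = 0.0                         ! use the array like any other
  DEALLOCATE(darr3D)                   ! free the memory when no longer needed
END PROGRAM demo_alloc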
With regard to dynamic arrays there are two ways of doing this: with the ALLOCATABLE keyword or with the POINTER keyword. The main difference between the two is that an allocatable array always occupies a contiguous piece of memory, while for a pointer this is not necessarily true, which in theory makes the allocatable version faster when looping through the array.
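A short sketch of the two flavours (names and sizes are illustrative only); note that a pointer can also be associated with a non-contiguous slice of existing data, which an allocatable cannot:
PROGRAM alloc_vs_pointer
  IMPLICIT NONE
  REAL, ALLOCATABLE :: a(:)          ! always contiguous; freed automatically when it goes out of scope
  REAL, POINTER     :: p(:)          ! may alias existing data, possibly non-contiguous
  REAL, TARGET      :: grid(100,100)
  grid = 1.0
  ALLOCATE(a(1000))                  ! allocatable: plain, contiguous allocation
  p => grid(1,:)                     ! pointer: associated with a strided (non-contiguous) row of grid
  PRINT *, SIZE(a), SIZE(p)
  DEALLOCATE(a)
END PROGRAM alloc_vs_pointer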
Depending on your application, you may consider moving to Fortran 2003 modules and making the array an object, including type-bound procedures that work on the array itself.
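A rough sketch of that object-oriented approach (the type and procedure names here are made up for illustration):
MODULE vector_mod
  IMPLICIT NONE
  TYPE :: vector_t
    REAL, ALLOCATABLE :: data(:)
  CONTAINS
    PROCEDURE :: total => vector_total   ! type-bound procedure operating on the array component
  END TYPE vector_t
CONTAINS
  FUNCTION vector_total(self) RESULT(s)
    CLASS(vector_t), INTENT(IN) :: self
    REAL :: s
    s = SUM(self%data)
  END FUNCTION vector_total
END MODULE vector_mod
A caller would then do something like: TYPE(vector_t) :: v; ALLOCATE(v%data(100)); v%data = 1.0; PRINT *, v%total().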
There exist many good manuals for Fortran 95 (and even some for Fortran 2003):
--> http://www.uv.es/dogarcar/man/IntrFortran90.pdf (I learned Fortran 90/95 programming with this one, and still have it on my desk, though most of my coding is object-oriented Fortran 2003 nowadays)
In general, it's a bad idea to organize your code in a way that requires you to store large arrays of data inside your program. Why not separate data and code? Of course, you can define the data yourself and do all the work with it inside your program, but in the long run it's easier to use a universal data format for this. Check, for example, the netCDF library and its code examples - http://www.unidata.ucar.edu/software/netcdf/examples/programs/.
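As a rough sketch of what reading a 2-D variable with the netCDF Fortran 90 interface looks like (file name, variable name and array shape are placeholders; see the linked examples for complete programs):
PROGRAM read_netcdf
  USE netcdf
  IMPLICIT NONE
  INTEGER :: ncid, varid, status
  REAL :: temperature(100, 50)                          ! shape must match the variable in the file
  status = nf90_open('data.nc', NF90_NOWRITE, ncid)     ! open an existing dataset read-only
  status = nf90_inq_varid(ncid, 'temperature', varid)   ! look up the variable by name
  status = nf90_get_var(ncid, varid, temperature)       ! read the whole variable into the array
  status = nf90_close(ncid)                             ! (error checking of status omitted in this sketch)
END PROGRAM read_netcdf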
I usually use a dynamic declaration of the arrays inside a module. This approach has advantages. If we declare the array inside the main program (or inside a subroutine), the array will have its values assigned every time the procedure that uses it is called, which is not good when the array is used by several procedures in the program. On the other hand, if the values of the array are read from a file every time they are needed, we create a dependency on the speed of the hard disk (which limits the speed of the program itself). By using a dynamic declaration inside a module we avoid both problems, because the array has its values assigned once and those values are kept in memory.
The procedure is as follows (just as an example):
MODULE MODULENAME
  IMPLICIT NONE
  REAL*8, ALLOCATABLE, DIMENSION(:,:) :: C   ! module-level array: allocated and filled once, then shared
CONTAINS
  SUBROUTINE NAME()
    INTEGER :: i, j
    ALLOCATE(C(0:1000,0:1000))
    OPEN(2, FILE = 'filename.dat', STATUS = 'old')       ! here the values of the array are read from a file
    READ(2,*) ((C(i,j), j = 0, 1000), i = 0, 1000)       ! assuming whitespace-separated values
    CLOSE(2)
  END SUBROUTINE NAME
END MODULE MODULENAME
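Any other program unit can then access the same array through the module, e.g. (sketch):
PROGRAM MAIN
  USE MODULENAME        ! the array C is visible here once NAME() has filled it
  IMPLICIT NONE
  CALL NAME()
  PRINT *, C(0,0)
END PROGRAM MAIN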
I would prefer Vasil'ev's suggestion: when you have large array data, it is better to use READ and WRITE statements on an opened file, where the WRITE and READ use the same FORMAT.
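For example, a small round-trip sketch with one shared FORMAT (unit number, file name and edit descriptor are arbitrary choices):
PROGRAM same_format
  IMPLICIT NONE
  REAL*8 :: a(1000), b(1000)
  INTEGER :: i
  a = (/ (DBLE(i), i = 1, 1000) /)
100 FORMAT(5ES25.16)                         ! the same FORMAT is used for both WRITE and READ
  OPEN(10, FILE = 'array.dat', STATUS = 'replace')
  WRITE(10, 100) a
  CLOSE(10)
  OPEN(10, FILE = 'array.dat', STATUS = 'old')
  READ(10, 100) b
  CLOSE(10)
  PRINT *, MAXVAL(ABS(a - b))                ! should be ~0 if the round trip worked
END PROGRAM same_format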
Bejo, I think the preference hugely depends on what is understood by data, and what is done with it. If it is a large set of parameters/data that your program needs, you may decide to store it externally instead of hard-coding it. If it is input data from another program (e.g. a large grid) on which you are doing operations (e.g. a set of coefficients for functions, etc.), I think keeping the data in memory as long as possible is the most efficient way to go. Only when your data set gets so large that you have to move to I/O should you do this, because I/O is extremely slow (e.g. full CI calculations for a large system, etc.). For small systems, however, let's say 10 MB of data in an array should not be an issue, and if many operations on this data are required (e.g. 10^9 operations, where fetching the data from disk each time tends to be no fun), keeping it internally in an array is the fastest way to go.
I would use a "flat-file" unformatted output. Each line is a single datum (i.e., integer, character, real, or double) and sequential "read" statements input the data from the file. You don't even have to know how many lines are in the file, just choose an end-of-file datum such as "EOF", read each line in an infinite loop as a DUMMY character variable, and If (DUMMY.EQ."EOF") then kick it out of the read loop. Otherwise, rewind the file one line, and re-read the datum as the type it was intended to be. True, you have to read each line twice, but it lets you record output as unformatted flat-files for read-in later without knowing the number of data to be read in.