I would like to know how I can improve the speed of a CCSD(T) calculation. Should I add more CPUs or increase the RAM, as in an MP2 calculation? What is the best approach? Thank you very much.
Could you please tell us which program or package you are using for the CCSD calculation? The requirements normally depend on the cluster size. If you are calculating small clusters (bare clusters of up to 2 atoms), then 3-4 GB is more than enough. If your clusters have ligands attached, they require more memory, and if you are working with complexes containing transition-metal clusters, your memory requirement will be higher still.
Thank you for the answer. I use Gaussian 09 on a cluster with 12 to 48 CPUs and up to 2.6 TB of RAM. For now I am trying to implement CCSD(T)/CBS, and I started with a water molecule. My current job uses 24 CPUs and 200 GB, but it has already been running all day just for H2O; later, once I manage to understand the extrapolation, I will move on to tetrahedral metal compounds. My goal is to reduce the time, because I think more than 24 hours is too much, but I really don't know what a normal time is, for example for TiCl4 (opt+freq).
To be short: use a lot of memory and check how the job scales with the number of processors. With more than 12 CPUs you may lose time on communication between nodes (I don't know your exact configuration, but CCSD typically does a lot of disk I/O). 24 h for a "production" run of CCSD(T) with a large basis set and a moderately sized molecule is nothing. I have personally done CCSD(T) calculations that took over a month (single point, no gradients, etc.), and those are still considered "routine". Just be patient :)
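Since you mention Gaussian 09: the memory and processor counts are set with Link 0 commands at the top of the input file. A minimal header sketch (the values here are placeholders, not recommendations; %NProcShared uses shared-memory parallelism, i.e. all cores on one node, which avoids the internode communication problem described above):

%NProcShared=24
%Mem=200GB
# CCSD(T)/cc-pVQZ MaxDisk=500GB

MaxDisk tells Gaussian how much scratch space it may use, which matters here because the CCSD step is I/O-heavy.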
One other comment: I agree with Debapriya that memory is a major concern in any CC calculation. However, do not be surprised that your calculations take time, especially since you are asking for gradients and second derivatives. Modern DFT functionals typically provide good structures and harmonic frequencies, and for the latter the accuracy can be systematically improved by adding anharmonic corrections. Be aware that such corrections should also be applied to CCSD(T) frequencies if high accuracy is desired, and that quickly becomes too expensive.
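If you want to try the DFT-plus-anharmonic route for your TiCl4 case, a minimal Gaussian 09 sketch might look like the following. The functional/basis and the coordinates are illustrative placeholders only (approximate Td geometry with r(Ti-Cl) around 2.17 A), and whether VPT2 anharmonics behave well for a molecule this symmetric is something to check for your system:

%NProcShared=12
%Mem=16GB
# B3LYP/cc-pVTZ Opt Freq=Anharmonic

TiCl4 opt + anharmonic frequencies (illustrative input, approximate geometry)

0 1
Ti   0.000000   0.000000   0.000000
Cl   1.253000   1.253000   1.253000
Cl  -1.253000  -1.253000   1.253000
Cl  -1.253000   1.253000  -1.253000
Cl   1.253000  -1.253000  -1.253000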
I completely agree with Oleg and Adam. H2O is a very simple structure, and if your calculation terminates successfully within 24 hrs that is reasonable. Trust me, I work with transition-metal clusters all day and have been running EOM-CCSD (a post-CCSD step) for the last three years. I have calculated structures with up to four centers and never needed more than 10 GB of memory. There are routines (accessible via IOps) that let you change the diagonalization technique. You can also change the cutoffs and use a looser convergence criterion, but you pay a price for that: the accuracy of the calculation is compromised. As I said before, as the cluster size increases, the time required for the CCSD calculation increases. I have worked with clusters of 100 atoms that took more than a month to converge. It also depends on the number of users on the server; if too many users submit jobs with high memory requirements, the calculations slow down terribly. Just be patient; as long as the job terminates successfully, don't worry.
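To illustrate the loosened-convergence idea in Gaussian 09: the CCSD keyword accepts options in parentheses, so a route along the lines of the one below requests (T) with a looser amplitude convergence and a higher iteration cap. Treat the exact option names (Conver, MaxCyc) as assumptions to verify against the Gaussian manual, and remember the accuracy penalty mentioned above:

# CCSD(T,Conver=6,MaxCyc=100)/cc-pVTZ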
I confused things by writing water; my calculation was actually on CCl4. With DZV it took 13 h, and the (T,Q)ZV jobs are already running (35 h and 33 h so far).
I ask all of this because these are the first calculations of this kind that I have run and I have no reference points. I also have to take the time into account, because if I end up using CBS for my study I must finish everything before May.
You can try one more technique. If you find your calculation is progressing very slowly, try a smaller basis set; with bigger basis sets the calculation slows down considerably. Normally a CCSD calculation needs at least a day with a moderate basis set. Make sure you don't submit too many Gaussian jobs at the same time. You should also monitor the convergence in the CCSD step (each CCSD iteration in the Gaussian log prints the change in the correlation energy, which is what to watch): if it decreases steadily, you are on the right track; if the convergence behavior is oscillatory, try changing some of the keywords. I have faced this situation a couple of times.
You should first generate a good geometry, for example at the MP2 level, and then perform single-point calculations with a suitable basis set for the other properties. For a single-point energy calculation, just put CCSD(T)/(basis set specification) in the route line. Make sure you have enough memory allocated.
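A minimal Gaussian 09 sketch of that two-step workflow, using water as a stand-in (the basis sets and geometry are just examples). Step 1, MP2 optimization and frequencies, saving the result in a checkpoint file:

%NProcShared=12
%Mem=16GB
%Chk=h2o.chk
# MP2/cc-pVTZ Opt Freq

H2O MP2 optimization (illustrative starting geometry)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

Step 2, a CCSD(T) single point that reads the optimized geometry back from the checkpoint (with Geom=Checkpoint you still supply the charge/multiplicity line but no coordinates):

%NProcShared=12
%Mem=64GB
%Chk=h2o.chk
# CCSD(T)/cc-pVQZ Geom=Checkpoint

CCSD(T) single point at the MP2 geometry

0 1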
Thank you very much. I am now searching for the method that gives the geometry in best agreement with experiment, and then I will do the single-point calculations for the CBS extrapolation.
Some packages have DF-CCSD(T) implemented, which speeds up the integral evaluation and removes some of the I/O overhead. However, the (T) step is rate-limiting and, unless some extra progress has been made, that part typically doesn't see any improvement from density fitting.
My preferred (perhaps biased) option is to run CCSD(T)-F12 calculations (in Molpro, Turbomole, ORCA, etc.), where the basis-set dependence is greatly reduced; QZ-quality results with a DZ basis are typical.
As you know, CCSD(T) is very time-demanding. I think it would be better to first submit a plain CCSD calculation and find out how long one CCSD iteration takes; once you have that, multiply the time by the number of occupied orbitals to get a rough estimate of how long the (T) step will take with the memory and nproc you have specified (the heuristic works because the (T) step costs roughly one CCSD iteration times the number of occupied orbitals). For correlated calculations you need to increase both RAM and the number of CPUs. If you are running on a cluster, requesting more CPUs usually also gives you more memory, and I suggest keeping all the CPUs on the same node.
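To make that estimate concrete, a purely hypothetical example: if the log shows one CCSD iteration taking about 0.5 h and the molecule has 16 correlated occupied orbitals, the (T) step should take very roughly 16 x 0.5 h = 8 h, on top of the CCSD iterations themselves (say 20 iterations x 0.5 h = 10 h), so about 18 h in total. The numbers here are invented purely for illustration; measure your own iteration time from the log.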