Good afternoon everyone,
I'm currently running bedpostx on a set of HYDI data with 144 diffusion directions, collected using a 64-channel head coil. In my experience, bedpostx should take no more than 48 hours to complete for a sample of my size (48 subjects) on a high-performance computing cluster or comparable machine. However, after 6 days of running my scripts, only one subject's data has successfully completed bedpostx.
I'm running the scripts on a Linux machine with 46 processors. I launch 10 scripts concurrently, each looping over 4 or 5 subjects, and each script occupies 100% of one processor. In other words, the 10 concurrent scripts use 10 full processors between them.
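For context, this is roughly how I launch them (the script and log names here are illustrative, not my actual filenames):

# Launch each batch script in the background, detached from the terminal,
# so all 10 run concurrently; each one contains a loop like the examples below.
nohup bash run_batch_01.sh > batch_01.log 2>&1 &
nohup bash run_batch_02.sh > batch_02.log 2>&1 &
# ...and so on, up to run_batch_10.sh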
The scripts I'm running look like the following:
#!/bin/bash
# Run bedpostx serially for subjects 1-5; --rician switches the noise
# model from Gaussian to Rician.
for n in 1 2 3 4 5
do
    bedpostx /data/projects/dti/${n}/bedpostx_input --rician
done
....
for n in 44 45 46 47 48
do
    bedpostx /data/projects/dti/${n}/bedpostx_input --rician
done
Please note that even after I killed 9 of my 10 jobs, leaving only one script running, bedpostx is still taking an exorbitant amount of time.
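To gauge progress on the remaining job, I've been counting the per-slice outputs, assuming my FSL version writes them under the .bedpostX/diff_slices directory as older releases do (please correct me if that layout is wrong):

# Count how many slices have finished for subject 1 (the directory
# layout is an assumption; adjust the path to your FSL version's output).
ls -d /data/projects/dti/1/bedpostx_input.bedpostX/diff_slices/data_slice_* 2>/dev/null | wc -l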
Does anyone have any insights into why this might be happening? Is there a way to optimize my bedpostx scripts so that we use our resources more efficiently? For instance, I've wondered whether scheduling one bedpostx per subject with GNU parallel (sketched below) would behave any differently from my hand-split loops, or whether bedpostx itself is the bottleneck.
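Here is the kind of thing I had in mind (a sketch only; I haven't tried it, and it assumes GNU parallel is installed):

# Keep 10 bedpostx jobs running at once, one per subject, instead of
# 10 hand-split loop scripts; -j 10 caps the number of concurrent jobs.
parallel -j 10 bedpostx /data/projects/dti/{}/bedpostx_input --rician ::: $(seq 1 48)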
Thank you all in advance for your help!
Kind regards,
Linda