Is there a tool on Linux (server) that can stop/continue a transfer, download all the addresses contained in a text file, and let me manually change an address to continue the download after the remote server updates its address?
It is not 100% clear what you would like to achieve.
You might want to check the tools "wget" or "curl" if you meant HTTP/FTP downloads. For example, with wget you can supply a list of URLs in a text file using the "-i" option, something like this:
wget -i url_list.txt
Or if you want to save the progress output of wget into a file (instead of the console), you can use the "-o" option too:
wget -i url_list.txt -o logfile.txt
If you add the "-c" (--continue) option and wget finds an already existing file in the current directory whose size is smaller than the one on the remote server, it will continue downloading the file where it left off.
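For example, to restart the same list and pick up any partially downloaded files (url_list.txt and logfile.txt are just the example names from above):
wget -c -i url_list.txt -o logfile.txt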
To stop the download (wget itself) you can just use "kill $(pidof wget)", but be careful if you have more than one wget process running at the same time. If you want to pause/resume wget, you can try "kill -SIGSTOP $(pidof wget)" to pause it, then "kill -SIGCONT $(pidof wget)" to resume the same process.
Also you can then run wget periodically from cron.
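A sketch of such a crontab entry (the directory, file names and the hourly schedule are just placeholders, adjust them to your setup); "-c" resumes partial files and "-a" appends to the log instead of overwriting it:
0 * * * * cd /path/to/downloads && wget -c -i url_list.txt -a cron_wget.log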
If you have a Linux machine connected to the internet, you can always run curl "Link" or wget "Link" to download. If you want to transfer something between machines, you can use the scp command. Or if you are looking for free software, you can use the FileZilla or WinSCP tools for the same.
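If you go the scp route, a minimal example looks like this (the host name and paths are only placeholders):
scp /local/path/file.tar.gz user@remote.example.com:/remote/path/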
Amit Kumar Thakur David Vincze Thank you for your replies. However, sometimes the links might change after I start downloading files. For example this one: https://qiita.ucsd.edu/download_raw_data/12675
I can download it with IDM, but it fails with wget.
My command: wget https://qiita.ucsd.edu/download_raw_data/12675 --no-check-certificate
The file should be more than 100 GB, but the output of the above command is a 10 KB file.
I'm not familiar with that site; maybe you first have to log in before you can download data files. When you use a browser, some session data is stored locally after login, probably in a cookie. When you use wget or curl, they do not know about this (session) cookie, therefore from the server's viewpoint you're not logged in. You can try copying the corresponding cookies from your browser to the machine where you use wget, and use that cookie with wget... It's a bit of a hack, but maybe worth a try.
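For example, if you export the site's cookies from your browser into a Netscape-format cookies.txt file (several browser extensions can do this; the file name is just an example), you can pass it to wget like this:
wget --load-cookies cookies.txt --no-check-certificate https://qiita.ucsd.edu/download_raw_data/12675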
Or try to do the login with wget before the actual download to get the cookie, something like this:
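(This is only a sketch: the login URL and the form field names "username" and "password" below are guesses, you have to check the site's actual login form.)
wget --save-cookies cookies.txt --keep-session-cookies --post-data 'username=YOUR_USER&password=YOUR_PASSWORD' --no-check-certificate https://qiita.ucsd.edu/auth/login
wget --load-cookies cookies.txt --no-check-certificate https://qiita.ucsd.edu/download_raw_data/12675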
Yes, you can use the wget command-line tool in Linux to download files from a list of addresses contained in a text file and have the ability to stop and continue downloads. You can also manually change the address in the text file to continue the download from a different source if needed.
Here's how you can use wget for this purpose:
Create a text file (e.g., download_list.txt) containing the list of URLs you want to download. Each URL should be on a separate line.
Open a terminal and navigate to the directory where you want to download the files.
Use the following wget command to start downloading the files from the list:
wget -i download_list.txt
This command tells wget to read the list of URLs from the download_list.txt file and start downloading them one by one.
To stop a download in progress, you can press Ctrl + C in the terminal. wget will terminate, but any partially downloaded files stay on disk, so you can continue them later.
If you need to change a URL in the list, you can edit the download_list.txt file to replace the URL you want to change.
To continue downloading after changing a URL, run the same wget command as in step 3, adding the -c option so that partially downloaded files are resumed instead of being downloaded again; wget will pick up existing files where it left off and fetch the new URL if it has been updated (see the example after these steps).
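For example, resuming the whole list after editing it (using the download_list.txt file from step 1):
wget -c -i download_list.txt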
This approach allows you to manage downloads from a list of URLs, stop and resume downloads, and manually modify the list as needed.