I am working under Windows 7, and I want to download all new files from a Windows server directory over HTTPS using wget. In addition, I want to resume the download of large files in case of connection loss during transfer.
When I run
wget.exe --continue --recursive https://<server>:<port>/<path>/pdf.dll
everything works fine.
But using
wget.exe --continue --no-clobber --recursive https://<server>:<port>/<path>/pdf.dll
the download is not resumed after a connection loss; the incomplete file remains on my local file system, and wget reports:
File '<server>/<path>/pdf.dll' already there; not retrieving.
(We want to use the --no-clobber option in order to avoid sending HEAD requests for all files that are already transferred.)
Does this mean that --continue does not work well together with --no-clobber?
Answer
That is because you are combining two options (--no-clobber and --continue) that work against each other:
--continue: Continue getting a partially-downloaded file, i.e. resume an interrupted transfer by requesting only the missing bytes.
--no-clobber: Do not retrieve a file that already exists locally; the existing copy is left untouched and no request is made for it (see the example below).
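To make the conflict concrete, here is roughly what each flag does on its own for a partially downloaded pdf.dll of which, say, the first 1048576 bytes are already on disk (the URL placeholders are the ones from the question):
wget.exe --continue https://<server>:<port>/<path>/pdf.dll
(sends a GET with the header "Range: bytes=1048576-" and appends the missing bytes to the local file)
wget.exe --no-clobber https://<server>:<port>/<path>/pdf.dll
(sees pdf.dll on disk, prints "File 'pdf.dll' already there; not retrieving." and sends no request for it)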
As you can see, these two options ask Wget to do opposite things with a partially downloaded file, so they cannot both be honored. In your case --no-clobber wins: the partial file is already on disk, so Wget skips it instead of resuming it, which is exactly the message you see. Do not combine them. You can read about the download options in detail in the Wget manual.
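If the goal is both resuming interrupted transfers and avoiding extra requests, one possible approach (sketched here with the placeholder URL from the question) is to run with --continue alone until the transfer is complete, and only then use --no-clobber for later runs that should merely pick up new files:
wget.exe --continue --recursive https://<server>:<port>/<path>/pdf.dll
wget.exe --no-clobber --recursive https://<server>:<port>/<path>/pdf.dll
With --continue alone, Wget still contacts the server for every file, but files that are already complete are not downloaded again; once nothing is left partial, --no-clobber alone skips existing files without sending any request for them.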