wget on Linux is a very useful tool for downloading files from the web. By default it downloads individual files, but downloading all files from an HTTP directory can be achieved with the following command:

# wget -r -nH -nd -np -R "index.html*" http://example.com/directory/
The arguments in the command above have the following meanings:
-r
Recursive download
-nH
Prevents the creation of a directory named after the host (e.g. example.com/)
-nd
Ensures that all files are saved in the current directory instead of recreating the remote directory tree
-np
Don't ascend to the parent directory when the -r option is given
-R "index.html*"
Reject list; excludes the auto-generated directory index pages from the download
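A common variation is to fetch only files of a certain type rather than the whole directory. The sketch below uses the same recursive flags together with -A (the accept list, the counterpart of -R) to grab only PDF files; the URL is the same placeholder as above, so substitute your own:

```shell
# Download only the .pdf files from the directory (placeholder URL),
# keeping the same flat-download flags as the main command:
#   -r   recursive, -nH no host directory, -nd no subdirectories,
#   -np  don't ascend to the parent, -A accept only matching names
wget -r -nH -nd -np -A "*.pdf" http://example.com/directory/
```

Quoting the patterns passed to -A and -R matters: without quotes the shell may expand "*.pdf" against files in the current directory before wget ever sees it.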