Next, I used the following command to change to the directory where I wanted the files saved:
cd C:\Documents and Settings\User Name Here\Desktop\Folder Name
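(Quoting a path that contains spaces is probably the safer habit in cmd.exe, although plain cd seems to tolerate them unquoted:)
cd "C:\Documents and Settings\User Name Here\Desktop\Folder Name"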
Then I entered the following command to begin fetching the files from the web server:
wget -A pdf,jpg,zip,rar,jpeg,gif -r -l 10 -R "*index.html*" -c http://www.website-name-here.com/parent/child
However, the resume switch (-c) does not seem to pick up from where I last left off downloading. The site I'm downloading from has folders at least 3 levels deeper than the path in my wget command: the "child" folder contains at least 28 folders, each of those contains at least 60 more, and each of those holds 4-5 files. I'm aware that -c resumes single-file downloads. Is there a way to resume a recursive download like this after I shut down my computer, or do I have to leave the machine on until everything has finished downloading?
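From my reading of the wget manual, -c only continues files whose partial copies are still sitting on disk, so re-running the identical command after a reboot should, in theory, continue the interrupted file and skip files that are already complete (wget still has to re-crawl the HTML pages to rediscover the links). This is the sketch I would try:
wget -A pdf,jpg,zip,rar,jpeg,gif -r -l 10 -R "*index.html*" -c http://www.website-name-here.com/parent/child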
Also, is there a switch that prevents wget from downloading files with certain extensions in the first place? Right now, my command downloads all the files and then deletes any file named index.html after the fact. Is there a way to skip downloading the index.html files entirely and fetch only the file types I need? That would save a lot of time.
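From what I can tell from the manual, wget has to download the HTML pages in order to parse them for links, so the accept/reject options only delete them after parsing rather than skipping the transfer. If that's right, an accept list like the one below is about as close as wget gets, and the explicit -R "*index.html*" may even be redundant, since anything not on the accept list is removed after being scanned:
wget -A pdf,jpg,zip,rar,jpeg,gif -r -l 10 -c http://www.website-name-here.com/parent/child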