I need to download all the PDF files on a site. The trouble is, they aren't listed on any one page, so I need something (a program? a framework?) that can crawl the site and download the files, or at least produce a list of them. I tried WinHTTrack, but I couldn't get it to work, and DownThemAll for Firefox doesn't crawl multiple pages or entire sites. I know there must be a solution out there; I can't possibly be the first person to run into this problem. What would you recommend?
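In case it helps clarify what I'm after, here is a minimal sketch of the kind of crawl I mean, assuming Python with the third-party requests and beautifulsoup4 packages; the start URL and output directory are placeholders. I'd much rather use a ready-made tool, but this is roughly the fallback behavior I'd expect it to have:

    # Rough sketch only: crawl one domain, saving every PDF it finds.
    # START_URL and OUT_DIR are hypothetical placeholders.
    import os
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://example.com/"  # placeholder site root
    OUT_DIR = "pdfs"

    def crawl(start_url, out_dir):
        domain = urlparse(start_url).netloc
        os.makedirs(out_dir, exist_ok=True)
        seen, queue = set(), [start_url]
        while queue:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            ctype = resp.headers.get("Content-Type", "")
            if "pdf" in ctype or url.lower().endswith(".pdf"):
                # Save the PDF under its basename (collisions not handled).
                name = os.path.basename(urlparse(url).path) or "index.pdf"
                with open(os.path.join(out_dir, name), "wb") as f:
                    f.write(resp.content)
                continue
            if "html" not in ctype:
                continue
            # Queue every same-domain link found on the page.
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urljoin(url, a["href"]).split("#")[0]
                if urlparse(link).netloc == domain:
                    queue.append(link)

    if __name__ == "__main__":
        crawl(START_URL, OUT_DIR)

A proper tool would also handle robots.txt, rate limiting, and retries, which is exactly why I'd rather not roll my own.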