I need to download all of the PDF files on a site. The trouble is, they aren't listed on any one page, so I need something (a program? a framework?) that can crawl the site and download the files, or at least produce a list of them. I tried WinHTTrack, but I couldn't get it to work. DownThemAll for Firefox doesn't crawl multiple pages or entire sites. I know there is a solution out there, since I can't possibly be the first person to run into this problem. What would you recommend?
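
If no off-the-shelf tool works out, a small script might do the job. Below is a minimal sketch of a same-domain crawler in Python, standard library only, that walks the site by following links and collects every URL ending in .pdf. START_URL / example.com are placeholders, and it assumes the site is plain static HTML (no JavaScript-generated links, no robots.txt handling, no rate limiting):

```python
import urllib.request
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser

START_URL = "https://example.com/"  # placeholder: the site to crawl

class LinkParser(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url):
    domain = urlparse(start_url).netloc
    seen, queue, pdfs = set(), [start_url], set()
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url) as resp:
                # Only parse HTML pages; skip images, archives, etc.
                if "text/html" not in resp.headers.get("Content-Type", ""):
                    continue
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable or broken pages
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href).split("#")[0]  # resolve relative URLs, drop fragments
            if urlparse(link).netloc != domain:
                continue  # stay on the same site
            if link.lower().endswith(".pdf"):
                pdfs.add(link)
            elif link not in seen:
                queue.append(link)
    return pdfs

if __name__ == "__main__":
    for pdf in sorted(crawl(START_URL)):
        print(pdf)  # or download with urllib.request.urlretrieve(pdf, filename)
```

For a ready-made alternative along the same lines, wget's recursive mode (`wget -r -A pdf --no-parent <site>`) crawls a site and keeps only files matching the accepted suffix, which is essentially the same idea from the command line.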