I need to download all the PDF files present on a site. The trouble is, they aren't listed on any one page, so I need something (a program? a framework?) to crawl the site and download the files, or at least get a list of them. I tried WinHTTrack, but I couldn't get it to work, and DownThemAll for Firefox does not crawl multiple pages or entire sites. I know there is a solution out there, as I can't possibly be the first person to run into this problem. What would you recommend? A rough sketch of the behavior I'm after is below.
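To illustrate the kind of crawler I mean, here is a minimal sketch in Python using the third-party `requests` and `beautifulsoup4` packages. `START_URL` and `OUT_DIR` are placeholders, and this assumes the site is plain static HTML on a single domain with no login required; a real tool would also need to handle robots.txt, rate limiting, and JavaScript-generated links.

```python
import os
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/"  # placeholder: the site to crawl
OUT_DIR = "pdfs"                    # placeholder: where PDFs are saved


def crawl_and_download(start_url, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    domain = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])

    while queue:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that fail to load

        # Only parse HTML pages for further links
        if "html" not in resp.headers.get("Content-Type", ""):
            continue

        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc != domain or link in seen:
                continue  # stay on the same site, visit each URL once
            seen.add(link)

            if link.lower().endswith(".pdf"):
                # Stream the PDF to disk instead of crawling it
                name = os.path.basename(urlparse(link).path) or "file.pdf"
                with requests.get(link, stream=True, timeout=30) as pdf:
                    with open(os.path.join(out_dir, name), "wb") as f:
                        for chunk in pdf.iter_content(8192):
                            f.write(chunk)
            else:
                queue.append(link)  # enqueue other pages for crawling


if __name__ == "__main__":
    crawl_and_download(START_URL, OUT_DIR)
```

I'd rather use an existing tool than maintain a script like this, but that is essentially the job: crawl every page on the domain, follow links, and save anything ending in .pdf.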