@morganni
I've used HTTrack, and yes, it will snag EVERYTHING connected to the site you are scraping for data.
And usually, with the default settings, all of the JavaScript and HTML pages get saved, along with the "overscript" handling the page layout and an offline copy of the site's master index. You can then browse the mirror as if you were actually online, as long as you don't try to follow offsite links while you're actually offline.
That said, given enough time, it will snag everything: pages, subpages, and source code pages. Thankfully, this means any page still on the server but hidden behind those hideous "we don't want to talk about it" curtains will be accessible, though it may require tweaking the page code manually and removing the offending filter page, since HTTrack rebuilds the code base offline as close as possible to the form it had online.
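For the manual tweaking part, a minimal sketch of what that cleanup can look like (the page content and the meta-refresh "filter" redirect here are hypothetical examples, not from any real mirror):

```python
import re

# Hypothetical mirrored page: HTTrack saved the interstitial redirect
# that bounces visitors to a "filter" page before the real content.
saved_page = """<html><head>
<meta http-equiv="refresh" content="0; url=filter-warning.html">
</head><body><p>Archived chapter text.</p></body></html>"""

def strip_refresh(html: str) -> str:
    """Remove any meta-refresh tag so the mirrored page no longer
    bounces to the filter page when opened offline."""
    return re.sub(
        r'<meta[^>]*http-equiv=["\']refresh["\'][^>]*>\s*',
        "",
        html,
        flags=re.IGNORECASE,
    )

cleaned = strip_refresh(saved_page)
print(cleaned)
```

Running something like this over the saved `.html` files (or just deleting the offending tag by hand in a text editor) leaves the actual page content reachable in the offline copy.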