I've used HTTrack many times before but never had this issue. When I try to
download a website, HTTrack runs for about two seconds, says it's done, but
only downloads the index.html file.
I've tried this on many websites, including my own. These sites have no
robots.txt, and I double-checked by turning off robots under Settings >
Spider, but that doesn't help either. I've already reset the options back to
defaults, but it still doesn't work on any website I try.
Here's how it scans every website:
[domain.com]
[domain.com/]
[domain.com/robots.txt] (whether it exists or not)
And then it just stops.
Please advise.