Ok, so if 200 means everything went well, then httrack should know that all
those PDF files are already there on my hard drive. So why isn't it seeing
them? Please answer this question.
>>Nothing. The action will default to Update on a previously completed mirror.
It can't default to a previously completed mirror, WHRoeder, because there
isn't one. There is no completed mirror; httrack won't resume downloading the
partially completed one, and that's exactly the problem. The "continue
interrupted download" option is NOT working the way it's supposed to. I
included some new.txt and new.lst snippets so you could tell me whether
anything in them is preventing httrack from resuming. Do you understand now?
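
For reference, my understanding from the httrack documentation is that the
GUI's "continue interrupted download" action corresponds to the --continue
(-i) option, so I would expect something like the following to pick up where
the partial mirror left off (the exact URL form and project path here are my
guesses; my real command line is in my earlier post):

    httrack "http://www.chesscafe.com/" --continue -O "C:\My Web Sites\chesscafe"

If that's not what the GUI actually runs, please tell me.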
>>1) Always post the ACTUAL command line used (or log file line two) so we know what the site is, what ALL your settings are, etc.
Ok, I think I already did that.
>>2) Always post the URLs you're not getting and from what URL it is referenced.
I'm getting all the URLs I want to get. The problem is that httrack doesn't
check if they're already there when I tell it to resume.
>>3) Always post anything USEFUL from the log file.
The log file is very short and I included it in my previous post. It's
doit.log.
>>4) If you want everything use the near flag (get non-html files related) not filters.
Yes, I want to download chesscafe.com in its entirety. I'm not familiar with
the near flag, though. Can you give me an example of how to use it?
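
To make sure we're talking about the same thing: from the option list I gather
the near flag is -n (--near), which fetches non-html files located near an
html file, so my guess is that it would be used something like this (again,
the URL form and project path are assumptions on my part):

    httrack "http://www.chesscafe.com/" -O "C:\My Web Sites\chesscafe" --near

Is that the flag you mean, and is that roughly how to use it?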
>>5) I always run with A) No External Pages so I know where the mirror ends. With B) browser ID=msie 6 pulldown as some sites don't like a HTT one. With C) Attempt to detect all links (for JS/CSS.) With D) Timeout=60, retry=9 to avoid temporary network interruptions from deleting files.
I need help with this. Where do I specify No External Pages and Browser ID?
Under Scan Rules?
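
In case it helps you spot what I'm doing wrong, here is my guess at how your
A) through D) map onto command-line options, building on the command above.
The MSIE 6 user-agent string and the idea that No External Pages is -x
(replace external html links by error pages) are assumptions on my part, so
please correct me:

    httrack "http://www.chesscafe.com/" -O "C:\My Web Sites\chesscafe" ^
        --near -%P -F "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" -T60 -R9 -x

As I read it, -F sets the browser ID, -%P is the "attempt to detect all links"
option, and -T60 -R9 are the timeout and retries.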
Thank you for your replies, WHRoeder. I hope I'm explaining myself clearly and
that you understand what I'm struggling with.