Dear HTTrack Developers!
Please explain to me why your program, when I try to copy a site, downloads
all the Wikimedia resources from Wikipedia. I am not trying to copy Wikipedia,
but the program does it anyway. As Wikipedia is huge, the process is endless;
it seems no disk is large enough to make a mirror of even a small site. My
attempts to mirror numerous sites have all ended in copying Wikipedia or
Wikimedia resources. Please explain the reason for this strange behaviour of
your program.
As an example, you can try to copy this site:
<http://homepage.divms.uiowa.edu/~jones/>
To get a proper mirror, it seems one must specify two starting addresses:
<http://homepage.divms.uiowa.edu/~jones/>
<http://www.cs.uiowa.edu/~jones/>
and then you get a copy of... Wikipedia.
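For reference, the kind of invocation I would expect to work is a command with
both starting addresses plus scan rules excluding the Wikipedia and Wikimedia
domains (a sketch only; './jones-mirror' is just an example output path, and I
am assuming the default scan rules are what follow the external links):

    # restrict the crawl to the ~jones pages; exclude Wikipedia/Wikimedia
    httrack "http://homepage.divms.uiowa.edu/~jones/" \
            "http://www.cs.uiowa.edu/~jones/" \
            -O ./jones-mirror \
            "+*.uiowa.edu/~jones/*" \
            "-*wikipedia.org/*" "-*wikimedia.org/*"

Here '-O' names the output directory, and the '+'/'-' patterns are scan rules
that allow the uiowa.edu pages while excluding anything under wikipedia.org or
wikimedia.org. Whether this actually stops the unwanted crawl, I cannot say.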