Hello guys,
I am trying to download a wiki (not Wikipedia itself). Anyway, I have turned off
robots.txt in the spider options.
At some point it simply stops downloading: bandwidth and connections drop to zero
and the status reads "parsing html", but nothing happens. Aborting and restarting
the download doesn't help; it stops again at the same point.
Before I turned off robots.txt, it piled up a lot of wiki template files that
were marked with an error. Since turning it off, it skips over those template
files (still flagged with an error) and moves on to a lot of images, which pile
up marked as ready. But from that point on nothing happens anymore.
Another thing: the content at that point in the download no longer seems to be
related to the wiki I am trying to download; it comes from the
commons.wikipedia.com website (a different domain), even though I didn't specify
any other domains in the scan rules. I'm using the default settings, apart from
the spider option.
Any suggestions on what might be stopping the download, and on what's going on
with the unrelated content?
Greetings,
Sebastian