HTTrack Website Copier
Free software offline browser - FORUM
Subject: Ripping a websites database / links scanned limit?
Author: Miles
Date: 10/06/2014 04:38
 
Hello all,

I am trying to do a site rip of Animemusicvideos.org.  The site appears to
mainly be a search engine allowing access to the thousands of video files
there.  I have tried several different configurations but cannot seem to get
much video content (I have selected to allow all the video formats I can think
of, with filters like +*.MKV, etc.).


Starting at the top level (http://www.animemusicvideos.org/), HTTrack will
scan and complete in about 30 seconds, so I have been ripping from the first
page of the super search results instead.

The attempts seem to scan links, taking 12 hours or so to eventually reach
"links scanned:  some number / 99XXX."  I never see it reach 100,000; it stops
on its own there.  When I attempt to resume, I get the option to continue an
interrupted download, but it basically does the same thing: hours spent
scanning links, then it quits again after finding 99,xxx links.
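My best guess is that I am hitting HTTrack's default cap on the number of links it will test, which I believe is 100,000 and adjustable under Set options > Limits > Maximum number of links (or -#L on the command line). A sketch of what I think the invocation would look like, assuming -#L behaves as documented and that the output path and filter list below fit your setup:

```shell
# Hypothetical invocation -- assumes HTTrack's -#L option ("maximum number
# of links that can be tested") and standard +/- scan filters.
httrack "http://www.animemusicvideos.org/" \
    -O ./amv-rip \
    '-#L1000000' \
    "+*.mkv" "+*.mp4" "+*.avi" "+*.flv"
```

If the stall at 99,xxx really is this limit, raising it should at least let the scan run past 100,000 links, though a site-wide crawl of a search-driven database may still take a very long time.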

Any suggestions on how to overcome this issue, either with better settings or
a better starting point for the search, would be appreciated.
 