HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: Capture specific pages
Author: Gadrin
Date: 10/13/2007 20:34
 
Well I started a new project...

used
<http://www.ukdfd.co.uk/ukdfddata/showcat.php?cat=all&page=1&what=allfields&name=Cheryl%20Hodgson&name=Cheryl%20Hodgson&mcats=all>

as the starting URL...

and added these scan rules:

+*.png +*.gif +*.jpg +*.css +*.js -ad.doubleclick.net/*
-mime:application/foobar
+*/*name=Cheryl%20Hodgson*

The last rule matches URLs containing her name (meaning the 16 links across the
bottom of the results page).
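In case it helps, the same project can also be sketched as a single httrack command line, with the filters passed as trailing arguments. The output directory (-O) below is my own placeholder, not part of the project:

```shell
# Sketch of the same project as one HTTrack command line.
# The output directory (-O ./ukdfd-mirror) is an assumption;
# the filters are the scan rules quoted above.
httrack "http://www.ukdfd.co.uk/ukdfddata/showcat.php?cat=all&page=1&what=allfields&name=Cheryl%20Hodgson&name=Cheryl%20Hodgson&mcats=all" \
  -O ./ukdfd-mirror \
  "+*.png" "+*.gif" "+*.jpg" "+*.css" "+*.js" \
  "-ad.doubleclick.net/*" \
  "-mime:application/foobar" \
  "+*/*name=Cheryl%20Hodgson*"
```

Quoting each filter keeps the shell from expanding the * wildcards before httrack sees them.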

I stopped it after about 10 MB or so and it seemed to have done its job. It
even got the description pages for the items on display.

You may need to add more scan rules to exclude any undesirable pages, or
widen them so you get a mini-site type of mirror.
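Before widening or tightening the rules, you can roughly preview which links a name rule would keep by testing the glob against sample URLs with shell case-matching. HTTrack's matcher isn't identical to a shell glob, and the URLs below are made up for illustration:

```shell
# Approximate the +*/*name=Cheryl%20Hodgson* scan rule with a shell glob.
# This is only a rough local stand-in for HTTrack's pattern matching.
pattern='*/*name=Cheryl%20Hodgson*'

for url in \
  'http://www.ukdfd.co.uk/ukdfddata/showrecords.php?product=1&name=Cheryl%20Hodgson' \
  'http://www.ukdfd.co.uk/ukdfddata/showcat.php?cat=all&page=2'
do
  case "$url" in
    $pattern) echo "keep: $url" ;;   # URL contains the name parameter
    *)        echo "skip: $url" ;;   # URL would not match the rule
  esac
done
```

The first URL is kept because it contains name=Cheryl%20Hodgson; the second is skipped.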

But give that a try and let us know.
 



Created with FORUM 2.0.11