HTTrack Website Copier
Free software offline browser - FORUM
Subject: Re: First page of the URL
Author: William Roeder
Date: 12/18/2008 00:10
> I used the following statement to crawl the webpage.
> I need only html and images used in the initial
> page.   But the below statement crawl, all the links
> used in the web page. Please advice  me on this
> httrack -O
> \\home\\test\\httrack-3.43.1\\data\\websec-1.9.0\\1\\8
> -q -Q -N 20081217100027\\%n.%t -o0 -X0 -T30
> -R1 -I0 -%F "" -F "Mozilla/5.0 Firefox/3.0.3" -%h
> -* +*.jpg +*.jpeg +*.css +*.js +*.gif +*.bmp +*.tif*
> +*.png +*.swf -*.exe -*.pdf -*.doc -*.zip

Per the documentation, -%h doesn't take an argument.

If you're running under Windows you probably need quotes, not apostrophes, and
if you're using a .bat file all percent signs need to be doubled.

Your output directory should not contain the hts-cache directory, as that is created by HTTrack itself.
Finally, the default action is to mirror (-w or --mirror). You want to just get the page
(-g or --get-files) or set the maximum depth (-r1 or --depth=1).
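Putting the fixes above together, a corrected invocation might look like the sketch below. The URL is a placeholder (the original post didn't show it), the path and filter list are taken from your command, -%h is dropped, and -g replaces the default mirroring. This is an untested sketch, not a verified command.

```shell
:: Windows cmd sketch; ^ continues the line.
:: In a .bat file, double every percent: -N "20081217100027\%%n.%%t"
httrack "http://example.com/" ^
  -O "\home\test\httrack-3.43.1\data\websec-1.9.0\1\8" ^
  -q -Q -N "20081217100027\%n.%t" -o0 -X0 -T30 -R1 -I0 ^
  -%F "" -F "Mozilla/5.0 Firefox/3.0.3" ^
  -g ^
  "-*" "+*.jpg" "+*.jpeg" "+*.css" "+*.js" "+*.gif" "+*.bmp" ^
  "+*.tif*" "+*.png" "+*.swf" "-*.exe" "-*.pdf" "-*.doc" "-*.zip"
```

With -g, HTTrack fetches the requested page and the files matching the + filters without following links, which is what you asked for; using -r1 instead would follow links one level deep.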