This discussion seems not to be finished...
I am attempting to download a page and the result does not look like the
original online page. The head section of the page contains some links as
follows:
<link rel="alternate stylesheet" media="screen" type="text/css"
      href="/includes/stylesheets/tekstgrootte2.css" title="maat2">
After setting the spider option "no robots.txt rules", these CSS files do exist
in the offline cache. However, it looks as if they are not applied to the page.
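For comparison, here is a rough sketch of how such link tags are usually
arranged in a head section; only the tekstgrootte2.css line is taken from the
actual page, the other filenames and the "maat1" title are invented for
illustration:

<!-- persistent stylesheet (no title): always applied -->
<link rel="stylesheet" media="screen" type="text/css"
      href="/includes/stylesheets/basis.css">
<!-- preferred stylesheet (rel="stylesheet" plus a title): applied by default -->
<link rel="stylesheet" media="screen" type="text/css"
      href="/includes/stylesheets/tekstgrootte1.css" title="maat1">
<!-- alternate stylesheet: only applied when the reader selects the "maat2"
     style, e.g. via View > Page Style in Firefox -->
<link rel="alternate stylesheet" media="screen" type="text/css"
      href="/includes/stylesheets/tekstgrootte2.css" title="maat2">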
The command generated by the GUI is:
HTTrack3.40-2+swf launched on do, 18 jan 2007 10:29:13 at www.minvws.nl +*.png
+*.gif +*.jpg +*.css +*.js -ad.doubleclick.net/* -mime:application/foobar
(winhttrack -qir1%e0C2%Ps0u1%s%uN0%I0p3DaK0H0%kf2A25000%f#f -F "Mozilla/4.5
(compatible; HTTrack 3.0x; Windows 98)" -%F "<!-- Mirrored from %s%s by
HTTrack Website Copier/3.x [XR&CO'2006], %s -->" -%l "en, en, *" www.minvws.nl
-O1 "C:\Data\My Web Sites\VWS mirror" +*.png +*.gif +*.jpg +*.css +*.js
-ad.doubleclick.net/* -mime:application/foobar )
(Copied from the log; proxy settings omitted.)
Does this have to do with the copying mechanism, or is there something wrong
with the way the rel link tags are written in the HTML?