> I'd like to use HTTrack to download a site (html,
> gif, etc), WITHOUT saving it to the local hard disk.
> The purpose would be to fill our proxy server
> overnight with pages we're interested in.
> Can it do this? Or is there anything else I should
> use?
Well, HTTrack can only scan the HTML structure without
saving it (the --spider mode); it can't do the same for
other file types.
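For instance, a link-testing run that stores nothing
would look like this (the URL is a placeholder):

httrack --spider www.foobar.com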
The best workaround would be a batch file (script) that
downloads into a temporary location and deletes it
afterwards; see the sketch below.
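A minimal sketch of that approach, assuming a Unix
shell; the URL and proxy address are placeholders for
your own setup:

#!/bin/sh
# Mirror into a scratch directory so the proxy cache
# gets populated, then discard the local copy.
tmp=$(mktemp -d)
httrack www.foobar.com -O "$tmp" -P proxy.example.com:3128
rm -rf "$tmp"

-O sets the mirror/cache output path, and -P routes the
transfer through your proxy so the pages end up cached
there.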
If you don't want to waste any space, even temporarily,
use the -V option to specify a command to execute after
each downloaded file (example for Linux/Unix):
httrack www.foobar.com -V "rm \$0"
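HTTrack replaces $0 with the name of each downloaded
file, so every file is deleted as soon as it has been
fetched; the backslash stops the shell from expanding
$0 itself before httrack sees it.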