I want to use httrack as a spider to retrieve a list of URLs, not the files themselves.
For example:
<http://blabla/1.html>
<http://blabla/2.asp>
<http://blabla/contact>
...
I do not want to download the files, so I use the --spider option.
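Roughly what I run (a minimal sketch; "http://blabla/" is a placeholder for the real site, and -O just points the output/log directory somewhere):

```
# spider mode: scan/test pages instead of mirroring them;
# -O sets where httrack writes its output and logs
httrack "http://blabla/" --spider -O /tmp/crawl
```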
However, it saves temporary files to disk:
...
tmpfile1075.tmp
tmpfile1076.tmp
tmpfile1077.tmp
They take a lot of disk space. How can I avoid creating these temporary files?
Also, I just want a list of URLs (one per line) without having to parse the
new.txt/new.lst file. Is there a way to save the URL list in a simple format?
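For now I extract the URLs by post-processing the log; a rough workaround, assuming new.txt contains the full URLs somewhere on each line:

```
# pull out anything that looks like a URL and de-duplicate;
# adjust the pattern if new.txt stores URLs differently
grep -oE 'https?://[^[:space:]]+' new.txt | sort -u > urls.txt
```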
It's odd that such a powerful tool lacks such basic features!
:)