So far this program looks great! Thanks to those who have
created it. I need to download a "cafe" sub web on a site.
The site requires a login. I've used the capture URL
feature to successfully log in and initiate the capture,
but I'm getting the entire huge website. The URL for the
"sub web" that I want looks like:
<http://cafe.empas.com/wallflower>
The actual sub web isn't wallflower, it's wallstreet; I'm
just munging it a bit here. Anyway, even though that is
the URL captured by this cool program, the entire titanic
website starts to download.
Am I missing something? Or do I just have to let the
download complete and discard what I don't need?