> I'm just asking when we get an improved text parser
> to ''break'' those bloody web pages that javascript
> protects so well?
This is not as simple as it looks - you do not just need an "improved parser",
you need a real parser with static analysis of the code behind the page. Executing
the javascript is already a complex task, and even that would not be sufficient
(not all execution paths might be run, and you still have to solve the link
rewriting issue).
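To make the static-analysis point concrete, here is a small sketch (the script and helper names are hypothetical, not from any real page) of why a "smarter" text parser is not enough: the link target only exists as string fragments until the script runs, so a scanner looking for literal URLs finds nothing, and even a crude static pass only recovers pieces of the final link:

```python
import re

# Hypothetical page script: the link is assembled at click time,
# so it never appears as a complete URL anywhere in the source.
js = '''
function go(section) {
    window.location = "/archive/" + section + "/index.html";
}
'''

# A naive "improved parser" scanning for literal absolute URLs finds nothing:
naive = re.findall(r'https?://[^\s"\']+', js)
print(naive)   # []

# A crude static pass can only recover the string fragments,
# not the rewritten links themselves:
parts = re.findall(r'"([^"]*)"', js)
print(parts)   # ['/archive/', '/index.html']
```

Reconstructing the actual URLs from those fragments means tracking how the values flow through the code - that is the static analysis, and it gets arbitrarily hard as the scripts get more convoluted.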
I'm afraid "improving" the parser will be extremely hard - I'm aware of the
limits of the current one, but unfortunately I do not have any magic solution
yet.