Our biggest GUI webspider, Teleport VLX, has the intuitive Teleport interface and its easy-to-use project-based approach -- but it can also scan up to 40 million addresses in a single project. Like Teleport Pro and Teleport Ultra, Teleport VLX can handle multiple servers in a single project, dramatically improving throughput. Its much larger database, however, lets it handle enormous websites and large numbers of servers, so you can download far more information in a single project.
Teleport VLX is also an enhanced-capability webspider, with exploration, filtering, and rewriting features that Teleport Pro lacks. Its enhanced abilities let you:
- Use regular expressions to specify included as well as excluded areas to crawl
- Specify domain aliases for crawling servers with multiple names
- Borrow the browser's cookie cache, letting you perform complex authentication in your browser and then crawl with Teleport
- Inject custom HTTP headers into server requests
- Synchronize your offline copy so that old files and orphans are automatically removed
- Control HTML markup and inject meta tags with original URL and retrieval date/time stamps
- Use customizable messages when rewriting links to unretrieved files
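The include/exclude filtering described above works on regular expressions matched against candidate URLs. Teleport's internal rule syntax isn't documented here, so the sketch below is only illustrative -- the pattern lists, the `should_crawl` helper, and the example URLs are all hypothetical:

```python
import re

# Hypothetical rules: crawl only the /docs/ area of one site,
# but skip large binary downloads.
INCLUDE = [re.compile(r"^https?://example\.com/docs/")]
EXCLUDE = [re.compile(r"\.(zip|exe|iso)$")]

def should_crawl(url):
    """A URL is fetched if it matches an include rule and no exclude rule."""
    return (any(p.search(url) for p in INCLUDE)
            and not any(p.search(url) for p in EXCLUDE))
```

With rules like these, `https://example.com/docs/index.html` would be crawled, while `https://example.com/docs/setup.zip` (excluded extension) and any URL outside `/docs/` would be skipped.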
In addition, Teleport VLX is now able to crawl HTTPS (secure server) sites.
Teleport VLX requires a lot of memory to run efficiently: we recommend 128MB of free memory for every 1 million addresses scanned. Machines with enough RAM will run Teleport VLX at maximum speed and efficiency. The program can, however, run with less RAM -- it will just run more slowly.
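That guideline is simple arithmetic. As a sketch (the 128MB-per-million figure comes from the recommendation above; the helper name is ours, not part of the product):

```python
def recommended_ram_mb(addresses):
    # 128 MB of free RAM per 1 million addresses scanned
    return 128 * addresses / 1_000_000

# A maximum-size 40-million-address project would call for
# recommended_ram_mb(40_000_000) -> 5120.0 MB, i.e. about 5 GB free.
```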
What's new in version 1.72?
- Improved parser; handles strings in scripts better
- Removed known problem scripts (jquery, addthis) from the rewrite process
- Updated company contact information