Get Urls Pro

by maged120

About Get Urls Pro

This Apify actor crawls websites, extracts links, and builds them into a hierarchy, allowing you to visualize the structure of a website. The crawler can be configured to use either standard HTTP requests with BeautifulSoup (fast HTML parsing) or Selenium (for JavaScript-heavy pages).
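To make that trade-off concrete, here is a minimal, illustrative sketch (not the actor's actual source) of the two fetch strategies: a plain HTTP request parsed with BeautifulSoup versus a headless Chrome session driven by Selenium.

```python
# Illustrative only: two ways to turn a URL into parsed HTML.
import requests
from bs4 import BeautifulSoup
from selenium import webdriver

def fetch_static(url: str) -> BeautifulSoup:
    # Fast path: one HTTP request, fine for server-rendered pages
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return BeautifulSoup(resp.text, "html.parser")

def fetch_rendered(url: str) -> BeautifulSoup:
    # Slow path: a real browser renders JavaScript before we parse
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return BeautifulSoup(driver.page_source, "html.parser")
    finally:
        driver.quit()
```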

What does this actor do?

Get Urls Pro is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications (see the sketch after this list)
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation
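As a concrete example of the API access mentioned above, here is a minimal sketch using the official apify-client Python package. The actor ID "maged120/get-urls-pro" and the token placeholder are assumptions for illustration; copy the exact ID and your API token from the Apify console.

```python
# Minimal sketch: start a run and read its results via the Apify API.
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")  # personal API token from the Apify console

# Actor ID assumed for illustration; use the ID shown on the actor's page.
run = client.actor("maged120/get-urls-pro").call(run_input={
    "startUrl": "https://jamesclear.com/five-step-creative-process",
    "useSelenium": False,
    "maxDepth": 2,
    "maxChildrenPerLink": 5,
})

# Each dataset item is one discovered URL with its depth and parent link.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["depth"], item["url"])
```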

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
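After step 4, the dataset can be exported as JSON from the Apify console. The sketch below assumes the export was saved as a local file named results.json (a hypothetical name) and rebuilds the link hierarchy from the depth and parentUrl fields documented in the next section.

```python
# Sketch: rebuild the crawl hierarchy from the actor's flat output.
# Assumes the dataset was exported to "results.json" (a hypothetical filename).
import json
from collections import defaultdict

with open("results.json", encoding="utf-8") as f:
    items = json.load(f)

children = defaultdict(list)
roots = []
for item in items:
    if item["parentUrl"] is None:
        roots.append(item)  # depth-0 start URL(s)
    else:
        children[item["parentUrl"]].append(item)

def print_tree(item, indent=0):
    # Indent each URL by its position in the hierarchy
    print("  " * indent + item["url"])
    for child in children[item["url"]]:
        print_tree(child, indent + 1)

for root in roots:
    print_tree(root)
```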

Documentation

Website Crawler

This Apify actor crawls websites, extracts links, and builds them into a hierarchy, allowing you to visualize the structure of a website. The crawler can be configured to use either standard HTTP requests with BeautifulSoup (fast HTML parsing) or Selenium (for JavaScript-heavy pages).

## Features

- Crawl any website starting from a specified URL
- Control crawl depth and number of links per page
- Filter out specific file extensions
- Option to use Selenium for JavaScript-heavy websites
- Prevent duplicate URLs in the output
- Proxy support (via Apify Proxy)

## Input Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| startUrl | String | The starting URL to crawl (e.g., https://jamesclear.com/five-step-creative-process) |
| useSelenium | Boolean | Use Selenium for JavaScript-heavy pages |
| allowDuplicates | Boolean | Allow duplicate URLs in the output |
| maxDepth | Integer | Maximum depth of link recursion (1-30) |
| maxChildrenPerLink | Integer | Maximum number of children per parent link (1-100) |
| sameDomainOnly | Boolean | Only crawl URLs on the same domain as the start URL (default: true) |
| ignoredExtensions | Array | File extensions to ignore when crawling |

## Output

The actor outputs a JSON array of link records with the following structure:

```json
[
  {
    "url": "https://jamesclear.com/five-step-creative-process",
    "name": null,
    "query": "",
    "depth": 0,
    "parentUrl": null
  },
  {
    "url": "https://jamesclear.com/",
    "name": null,
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/books",
    "name": "Books",
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/articles",
    "name": "Articles",
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/3-2-1",
    "name": "Newsletter",
    "query": "",
    "depth": 2,
    "parentUrl": "https://jamesclear.com/"
  },
  {
    "url": "https://jamesclear.com/events?g=4",
    "name": "Speaking",
    "query": "g=4",
    "depth": 2,
    "parentUrl": "https://jamesclear.com/"
  }
]
```

## Example Usage

### Basic Crawl

To create a basic map of a website with default settings:

```json
{
  "startUrl": "https://google.com",
  "useSelenium": false,
  "maxDepth": 2,
  "maxChildrenPerLink": 5
}
```

### Deep Crawl with Selenium

For a deeper crawl of a JavaScript-heavy website:

```json
{
  "startUrl": "https://jamesclear.com/five-step-creative-process",
  "useSelenium": true,
  "maxDepth": 2,
  "maxChildrenPerLink": 5,
  "allowDuplicates": false,
  "ignoredExtensions": ["gif", "jpg", "png", "css", "jpeg", "pdf", "doc", "docx"]
}
```

## Implementation Details

This actor is built with:

- Apify Python SDK
- BeautifulSoup for standard HTML parsing
- Selenium with Chrome WebDriver for JavaScript-heavy pages
- Asynchronous processing for better performance

## Notes

- JavaScript-heavy pages may require the useSelenium option to be enabled
- For very large websites, use lower maxDepth and maxChildrenPerLink values to avoid hitting memory limits or very long run times
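For readers who want to see how the input parameters interact, here is a simplified, synchronous sketch of a breadth-first crawl that enforces maxDepth, maxChildrenPerLink, sameDomainOnly, and ignoredExtensions. It is an illustration under those assumptions, not the actor's actual (asynchronous, Selenium-capable) implementation.

```python
# Simplified, synchronous illustration of the crawl logic described above.
# This is NOT the actor's source code; it only shows how the parameters interact.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_depth=2, max_children_per_link=5,
          same_domain_only=True,
          ignored_extensions=("gif", "jpg", "png", "css", "pdf")):
    start_domain = urlparse(start_url).netloc
    seen = {start_url}
    results = [{"url": start_url, "depth": 0, "parentUrl": None}]
    queue = deque([(start_url, 0)])

    while queue:
        url, depth = queue.popleft()
        if depth >= max_depth:
            continue  # maxDepth bounds the recursion
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        added = 0
        for anchor in soup.find_all("a", href=True):
            if added >= max_children_per_link:
                break  # maxChildrenPerLink caps the children per parent
            child = urljoin(url, anchor["href"]).split("#")[0]
            if child in seen:
                continue  # allowDuplicates=false behaviour
            if same_domain_only and urlparse(child).netloc != start_domain:
                continue  # sameDomainOnly keeps the crawl on one site
            if child.rsplit(".", 1)[-1].lower() in ignored_extensions:
                continue  # ignoredExtensions filters static assets
            seen.add(child)
            results.append({"url": child, "depth": depth + 1, "parentUrl": url})
            queue.append((child, depth + 1))
            added += 1
    return results

if __name__ == "__main__":
    for item in crawl("https://jamesclear.com/five-step-creative-process"):
        print(item)
```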

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Get Urls Pro now on Apify. Free tier available with no credit card required.


Actor Information

  • Developer: maged120
  • Pricing: Paid
  • Total Runs: 3,353
  • Active Users: 32

Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
