Get Urls Pro
by maged120
About Get Urls Pro
This Apify actor crawls websites and builds a hierarchy of the links it extracts, allowing you to visualize a website's structure. The crawler can be configured to use either standard HTTP requests with BeautifulSoup (fast HTML parsing) or Selenium (for JavaScript-heavy pages).
What does this actor do?
Get Urls Pro is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results (or call it from your own code, as sketched below)
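If you prefer to run the actor programmatically, a minimal sketch using the official apify-client Python package could look like the following. The actor slug `maged120/get-urls-pro`, the token placeholder, and the input values are assumptions for illustration; check the actor page for the exact ID, and the parameter table in the Documentation section below for the full input schema.

```python
# Minimal sketch: run the actor via the Apify API and print its results.
# Assumptions: the actor is published under the slug "maged120/get-urls-pro"
# and <YOUR_APIFY_TOKEN> is replaced with a real Apify API token.
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Start a run with a small example input (see the parameter table below).
run = client.actor("maged120/get-urls-pro").call(
    run_input={
        "startUrl": "https://example.com",
        "useSelenium": False,
        "maxDepth": 2,
        "maxChildrenPerLink": 5,
    }
)

# Read the crawled link records from the run's default dataset,
# indenting each URL by its crawl depth.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print("  " * item["depth"] + item["url"])
```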
Documentation
# Website Crawler

This Apify actor crawls websites and builds a hierarchy of the links it extracts, allowing you to visualize a website's structure. The crawler can be configured to use either standard HTTP requests with BeautifulSoup (fast HTML parsing) or Selenium (for JavaScript-heavy pages).

## Features

- Crawl any website starting from a specified URL
- Control crawl depth and number of links per page
- Filter out specific file extensions
- Option to use Selenium for JavaScript-heavy websites
- Prevent duplicate URLs in the output
- Proxy support (via Apify Proxy)

## Input Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| startUrl | String | The starting URL to crawl (e.g., https://jamesclear.com/five-step-creative-process) |
| useSelenium | Boolean | Use Selenium for JavaScript-heavy pages |
| allowDuplicates | Boolean | Allow duplicate URLs in the output |
| maxDepth | Integer | Maximum depth of link recursion (1-30) |
| maxChildrenPerLink | Integer | Maximum number of children per parent link (1-100) |
| sameDomainOnly | Boolean | Only crawl URLs on the same domain as the start URL (default: true) |
| ignoredExtensions | Array | File extensions to ignore when crawling |

## Output

The actor outputs a JSON array of link records with the following structure:

```json
[
  {
    "url": "https://jamesclear.com/five-step-creative-process",
    "name": null,
    "query": "",
    "depth": 0,
    "parentUrl": null
  },
  {
    "url": "https://jamesclear.com/",
    "name": null,
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/books",
    "name": "Books",
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/articles",
    "name": "Articles",
    "query": "",
    "depth": 1,
    "parentUrl": "https://jamesclear.com/five-step-creative-process"
  },
  {
    "url": "https://jamesclear.com/3-2-1",
    "name": "Newsletter",
    "query": "",
    "depth": 2,
    "parentUrl": "https://jamesclear.com/"
  },
  {
    "url": "https://jamesclear.com/events?g=4",
    "name": "Speaking",
    "query": "g=4",
    "depth": 2,
    "parentUrl": "https://jamesclear.com/"
  }
]
```

## Example Usage

### Basic Crawl

To create a basic map of a website with default settings:

```json
{
  "startUrl": "https://google.com",
  "useSelenium": false,
  "maxDepth": 2,
  "maxChildrenPerLink": 5
}
```

### Deep Crawl with Selenium

For a deeper crawl of a JavaScript-heavy website:

```json
{
  "startUrl": "https://jamesclear.com/five-step-creative-process",
  "useSelenium": true,
  "maxDepth": 2,
  "maxChildrenPerLink": 5,
  "allowDuplicates": false,
  "ignoredExtensions": ["gif", "jpg", "png", "css", "jpeg", "pdf", "doc", "docx"]
}
```

## Implementation Details

This actor is built with:

- Apify Python SDK
- BeautifulSoup for standard HTML parsing
- Selenium with Chrome WebDriver for JavaScript-heavy pages
- Asynchronous processing for better performance

## Notes

- JavaScript-heavy pages may require the useSelenium option to be enabled
- For very large websites, use lower maxDepth and maxChildrenPerLink values to avoid hitting memory limits or very long run times
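Because the dataset is flat, with each record pointing at its parent through `parentUrl`, you can fold it back into a nested tree for visualization. The sketch below is an assumption-laden illustration, not part of the actor: it relies only on the output fields shown above, assumes `allowDuplicates` is false (so URLs are unique), and the `children` key and `dataset.json` filename are introduced purely for the example.

```python
# Minimal sketch: rebuild the link hierarchy from the flat dataset records.
# Assumes allowDuplicates is false, so each URL appears at most once.
import json

def build_tree(records):
    # Index records by URL and give each one an (illustrative) "children" list.
    nodes = {r["url"]: {**r, "children": []} for r in records}
    roots = []
    for node in nodes.values():
        parent = nodes.get(node["parentUrl"])
        if parent is not None:
            parent["children"].append(node)
        else:
            # The depth-0 start URL (parentUrl is null), or a record whose
            # parent was not included in the dataset.
            roots.append(node)
    return roots

# "dataset.json" is an illustrative filename for the dataset exported as JSON.
with open("dataset.json") as f:
    records = json.load(f)

for root in build_tree(records):
    print(json.dumps(root, indent=2))
```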
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try Get Urls Pro now on Apify. Free tier available with no credit card required.
Actor Information
- Developer: maged120
- Pricing: Paid
- Total Runs: 3,353
- Active Users: 32
Related Actors
Video Transcript Scraper: Youtube, X, Facebook, Tiktok, etc.
by invideoiq
Linkedin Profile Details Scraper + EMAIL (No Cookies Required)
by apimaestro
Twitter (X.com) Scraper Unlimited: No Limits
by apidojo
Content Checker
by jakubbalada