Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)
by lucrateresults
A production-ready Apify actor for extracting emails and metadata from websites. Uses Puppeteer for JavaScript rendering and is optimized for parallel crawling.
About Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)
Need to pull contact info and page details from websites at scale? I built this Apify actor to handle just that. It's a production-ready scraper using PuppeteerCrawler from the Crawlee library, which means it properly renders JavaScript-heavy sites. I've configured it for efficient parallel crawling, so it can process multiple pages simultaneously to save you time. It also integrates with Apify's proxy rotation to help avoid IP blocks during larger runs.

You get structured data with emails and key metadata from public pages. I use it for building lead lists, researching competitors, and gathering public contact information for outreach. Just point it at your starting URLs, and it handles the rest. Remember: always check a website's terms of service and only scrape publicly available data. It's a straightforward tool that does one job well.
What does this actor do?
Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee) is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
Documentation
Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)
Overview
This Actor is a production-ready template for building web crawlers that require JavaScript execution. It uses Crawlee's PuppeteerCrawler with headless Chrome to scrape websites, making it suitable for extracting data like emails and metadata from dynamic pages.
Key Features
- Puppeteer Crawler: Parallel crawling using headless Chrome via Puppeteer to handle JavaScript-rendered content.
- Proxy Management: Built-in configurable proxy support using Actor.createProxyConfiguration() to circumvent IP blocking. Works with Apify Proxy or custom proxy URLs.
- Structured Output: Stores scraped data in an Apify Dataset, where each item has consistent attributes.
- Input Validation: Uses an input schema (INPUT_SCHEMA.json) to define and validate run parameters.
- Enhanced Performance: Configured for deeper crawls (up to ~500 requests), higher concurrency (25-50 workers), and longer timeouts (90s).
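As a rough sketch of the proxy feature above (assuming the standard Apify SDK v3 API, not this actor's exact source), both modes might be set up like this:

```javascript
// Sketch only: assumes the Apify SDK v3 (`apify` package) is installed.
import { Actor } from 'apify';

await Actor.init();

// Option 1: Apify Proxy with automatic IP rotation.
const apifyProxy = await Actor.createProxyConfiguration();

// Option 2: your own proxy servers (placeholder URL, not a real endpoint).
const customProxy = await Actor.createProxyConfiguration({
    proxyUrls: ['http://user:pass@proxy1.example.com:8000'],
});
```

Either configuration object can then be passed to the crawler, which rotates through the available IPs between requests.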
How to Use
The Actor's logic is centered around a PuppeteerCrawler instance and a custom request router.
- Input & Setup: The Actor starts by fetching input (like start URLs) from INPUT.json. A proxy configuration is created to manage IP rotation.
- Crawler Initialization: A PuppeteerCrawler is instantiated with the proxy config and a requestHandler. The handler uses a router defined in routes.js to process pages.
- Request Handling: In routes.js, you define handlers for different URL patterns using createPuppeteerRouter().
  - Use router.addDefaultHandler() for general page processing.
  - Use router.addHandler(label, ...) for specific page types (e.g., product detail pages).
- Data Extraction: Inside your handlers, use Puppeteer's page object to interact with the page (e.g., page.title(), page.$eval()). Push structured results to the dataset with Dataset.pushData().
- Execution: Start the crawl with crawler.run(startUrls).
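Putting the steps above together, a minimal main.js might look like the following. This is a sketch assuming Apify SDK v3 and Crawlee, not the actor's actual source; the option values mirror the limits mentioned in the feature list.

```javascript
// Sketch: assumes the `apify` and `crawlee` v3 packages are installed.
import { Actor } from 'apify';
import { PuppeteerCrawler } from 'crawlee';
import { router } from './routes.js'; // router built with createPuppeteerRouter()

await Actor.init();

const { startUrls = [] } = (await Actor.getInput()) ?? {};
const proxyConfiguration = await Actor.createProxyConfiguration();

const crawler = new PuppeteerCrawler({
    proxyConfiguration,
    requestHandler: router,
    maxRequestsPerCrawl: 500,      // "deeper crawls" limit from the feature list
    maxConcurrency: 50,            // upper end of the 25-50 worker range
    requestHandlerTimeoutSecs: 90, // longer timeout mentioned above
});

await crawler.run(startUrls);
await Actor.exit();
```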
Example Handler:
import { Dataset } from 'crawlee';

router.addHandler('detail', async ({ request, page, log }) => {
    const title = await page.title();
    log.info(`Scraping ${request.loadedUrl}`);
    // Add your custom scraping logic here
    await Dataset.pushData({
        url: request.loadedUrl,
        title,
    });
});
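Since emails are this actor's headline output, a handler also needs some way to pull addresses out of the rendered page. The helper below is illustrative only (it is not part of the actor's published source): a plain function that extracts de-duplicated email addresses from raw HTML with a simple regex.

```javascript
// Hedged sketch: illustrative helper, not the actor's actual extraction code.
// Returns unique email addresses found anywhere in an HTML string.
function extractEmails(html) {
    const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
    return [...new Set(html.match(EMAIL_RE) ?? [])];
}
```

Inside a handler you could then call something like `extractEmails(await page.content())` and include the result in the object passed to `Dataset.pushData()`.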
Input/Output
- Input: Configured via INPUT.json. Typical parameters include an array of startUrls and proxy settings. The exact schema is defined in INPUT_SCHEMA.json.
- Output: The Actor stores results in an Apify Dataset. Each item is a JSON object containing the scraped fields you define (e.g., URL, page title, extracted emails, metadata).
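For illustration only (the authoritative field names live in INPUT_SCHEMA.json, which isn't reproduced here), an INPUT.json for a run might look like this, following the common Apify conventions for start URL lists and proxy settings:

```json
{
    "startUrls": [
        { "url": "https://example.com" }
    ],
    "proxyConfiguration": {
        "useApifyProxy": true
    }
}
```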
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee) now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- lucrateresults
- Pricing
- Paid
- Total Runs
- 269
- Active Users
- 19
Related Actors
Video Transcript Scraper: Youtube, X, Facebook, Tiktok, etc.
by invideoiq
Linkedin Profile Details Scraper + EMAIL (No Cookies Required)
by apimaestro
Twitter (X.com) Scraper Unlimited: No Limits
by apidojo
Content Checker
by jakubbalada
Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.