Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)


by lucrateresults

A production-ready Apify actor for extracting emails and metadata from websites. Uses Puppeteer for JavaScript rendering and is optimized for parallel crawling.


About Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)

Need to pull contact info and page details from websites at scale? I built this Apify actor to handle just that. It's a production-ready scraper using PuppeteerCrawler from the Crawlee library, which means it properly renders JavaScript-heavy sites. I've configured it for efficient parallel crawling, so it processes multiple pages simultaneously to save you time, and it integrates with Apify's proxy rotation to help avoid IP blocks during larger runs. You get structured data with emails and key metadata from public pages.

I use it for building lead lists, researching competitors, and gathering public contact information for outreach. Just point it at your starting URLs and it handles the rest. Remember, always check a website's terms of service and only scrape publicly available data. It's a straightforward tool that does one job well.

What does this actor do?

Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee) is a web scraping tool available on the Apify platform. It extracts email addresses and page metadata from websites and runs entirely in the cloud, so you can collect structured data without maintaining your own scraping infrastructure.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results

Documentation

Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee)

Overview

This Actor is a production-ready template for building web crawlers that require JavaScript execution. It uses Crawlee's PuppeteerCrawler with headless Chrome to scrape websites, making it suitable for extracting data like emails and metadata from dynamic pages.

Key Features

  • Puppeteer Crawler: Parallel crawling using headless Chrome via Puppeteer to handle JavaScript-rendered content.
  • Proxy Management: Built-in configurable proxy support using Actor.createProxyConfiguration() to circumvent IP blocking. Works with Apify Proxy or custom proxy URLs.
  • Structured Output: Stores scraped data in an Apify Dataset, where each item has consistent attributes.
  • Input Validation: Uses an input schema (INPUT_SCHEMA.json) to define and validate run parameters.
  • Enhanced Performance: Configured for deeper crawls (up to ~500 requests), higher concurrency (25-50 workers), and longer timeouts (90s).
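
For reference, an INPUT_SCHEMA.json for this kind of actor might look roughly like the fragment below. The field names here are illustrative assumptions, not the actor's exact schema:

```json
{
    "title": "Universal Email & Metadata Scraper input",
    "type": "object",
    "schemaVersion": 1,
    "properties": {
        "startUrls": {
            "title": "Start URLs",
            "type": "array",
            "editor": "requestListSources"
        },
        "maxRequestsPerCrawl": {
            "title": "Max requests per crawl",
            "type": "integer",
            "default": 500
        },
        "proxyConfiguration": {
            "title": "Proxy configuration",
            "type": "object",
            "editor": "proxy"
        }
    },
    "required": ["startUrls"]
}
```

The schema both documents the inputs in the Apify console and validates them before a run starts.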

How to Use

The Actor's logic is centered around a PuppeteerCrawler instance and a custom request router.

  1. Input & Setup: The Actor starts by fetching input (like start URLs) from INPUT.json. A proxy configuration is created to manage IP rotation.
  2. Crawler Initialization: A PuppeteerCrawler is instantiated with the proxy config and a requestHandler. The handler uses a router defined in routes.js to process pages.
  3. Request Handling: In routes.js, you define handlers for different URL patterns using createPuppeteerRouter().
    • Use router.addDefaultHandler() for general page processing.
    • Use router.addHandler(label, ...) for specific page types (e.g., product detail pages).
  4. Data Extraction: Inside your handlers, use Puppeteer's page object to interact with the page (e.g., page.title(), page.$eval()). Push structured results to the dataset with Dataset.pushData().
  5. Execution: Start the crawl with crawler.run(startUrls).
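
Since this is an email scraper, the extraction in step 4 usually boils down to running a regex over the rendered page text. A minimal sketch, assuming a hypothetical helper named extractEmails (not part of the template's code):

```javascript
// A simple email pattern; catches common addresses but is not RFC 5322-complete.
const EMAIL_RE = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;

// Hypothetical helper: return unique, lowercased emails found in a text blob.
function extractEmails(text) {
    const matches = text.match(EMAIL_RE) ?? [];
    return [...new Set(matches.map((email) => email.toLowerCase()))];
}

// In a handler you might feed it the visible page text, e.g.:
// const emails = extractEmails(await page.evaluate(() => document.body.innerText));
```

De-duplicating and lowercasing matters because the same address often appears several times on one page (header, footer, mailto links).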

Example Handler:

import { createPuppeteerRouter, Dataset } from 'crawlee';

export const router = createPuppeteerRouter();

router.addHandler('detail', async ({ request, page, log }) => {
    // Extract the rendered page title via Puppeteer.
    const title = await page.title();
    log.info(`Scraping ${request.loadedUrl}`);
    // Add your custom scraping logic here
    await Dataset.pushData({
        url: request.loadedUrl,
        title,
    });
});
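
For the metadata half of the job, handlers often collect <meta> tag contents as well. One way to do that, sketched under the assumption that you pass in the HTML from page.content() (extractMetaTags is an illustrative name, and the regex deliberately handles only the name/property-before-content attribute order):

```javascript
// Hypothetical helper: collect <meta name="..."> and <meta property="...">
// tag contents from an HTML string, e.g. the result of await page.content().
// A regex is enough for a sketch; in a real handler, page.$$eval() on
// 'meta[name], meta[property]' would be more robust.
function extractMetaTags(html) {
    const meta = {};
    const re = /<meta\s+(?:name|property)=["']([^"']+)["']\s+content=["']([^"']*)["']/gi;
    let match;
    while ((match = re.exec(html)) !== null) {
        meta[match[1]] = match[2];
    }
    return meta;
}
```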

Input/Output

  • Input: Configured via INPUT.json. Typical parameters include an array of startUrls and proxy settings. The exact schema is defined in INPUT_SCHEMA.json.
  • Output: The Actor stores results in an Apify Dataset. Each item is a JSON object containing the scraped fields you define (e.g., URL, page title, extracted emails, metadata).
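
As a concrete illustration, a single dataset item from a run could look like the following; the exact fields depend on what your handlers push, so treat these names as assumptions:

```json
{
    "url": "https://example.com/contact",
    "title": "Contact Us - Example",
    "emails": ["info@example.com", "sales@example.com"],
    "metadata": {
        "description": "Get in touch with the Example team."
    }
}
```

Because every item shares the same shape, the dataset exports cleanly to JSON, CSV, or Excel from the Apify console or API.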


Common Use Cases

  • Market Research - Gather competitive intelligence and market data
  • Lead Generation - Extract contact information for sales outreach
  • Price Monitoring - Track competitor pricing and product changes
  • Content Aggregation - Collect and organize content from multiple sources

Ready to Get Started?

Try Universal Apify Email & Metadata Scraper (Puppeteer + Crawlee) now on Apify. Free tier available with no credit card required.


Actor Information

  • Developer: lucrateresults
  • Pricing: Paid
  • Total Runs: 269
  • Active Users: 19
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

