Website Content Extractor

by fastidious_drawer

592 runs
140 users

About Website Content Extractor

This extractor lets you extract content from any website with a single or multiple URLs. Use selectors to choose specific sections like the body and exclude elements like headers or navigation. It also extracts images and links, providing data in JSON and DataTable formats for easy processing.

What does this actor do?

Website Content Extractor is a web scraping and automation tool available on the Apify platform. It crawls the URLs you provide, extracts the content you select, and returns structured results you can download or fetch via API, all running in the cloud.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
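
The steps above can also be scripted instead of done through the Apify UI. The sketch below only assembles the actor's JSON input as described in the Documentation section; the actor ID and the commented-out `apify-client` call are assumptions, so verify both against the actor's Apify page before running.

```python
# Hypothetical actor ID -- check the real ID on the actor's Apify page.
ACTOR_ID = "fastidious_drawer/website-content-extractor"

def build_run_input(urls, selectors=None, exclude_selectors=None,
                    extract_images=False, extract_links=False, max_pages=1):
    """Assemble the JSON input the actor expects (field names per the docs)."""
    run_input = {"urls": list(urls), "maxPages": max_pages}
    if selectors:
        run_input["selectors"] = selectors
    if exclude_selectors:
        run_input["excludeSelectors"] = exclude_selectors
    if extract_images:
        run_input["extractImages"] = True
    if extract_links:
        run_input["extractLinks"] = True
    return run_input

# To actually start a run (requires the apify-client package and an API token):
# from apify_client import ApifyClient
# client = ApifyClient("YOUR_APIFY_TOKEN")
# run = client.actor(ACTOR_ID).call(
#     run_input=build_run_input(["https://example.com"], selectors=["p", "h1"]))
# items = client.dataset(run["defaultDatasetId"]).list_items().items
```

Keeping input construction separate from the API call makes the configuration easy to validate and reuse across scheduled runs.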

Documentation

The Website Content Extractor is a web scraping tool designed to extract text, images, metadata, and links from specified websites using Playwright and Crawlee. It allows users to define target URLs, CSS selectors for content extraction, and exclusion rules.

## Features

- Extract Text: Extracts visible text from the website based on CSS selectors.
- Extract Metadata: Extracts metadata including canonical URL, title, description, and Open Graph data.
- Extract Images: Optionally extract all images from the page.
- Extract Links: Optionally extract all links from the page.
- Exclude Selectors: Excludes certain page elements (e.g., header, footer, nav) from the extraction.
- Crawl Multiple Pages: Crawl and extract content from multiple pages if needed.

## Input

The input is a JSON configuration that specifies the settings for the extraction process.

### Fields

- `urls` (required): Array of URLs — List of website URLs to extract content from.
- `selectors`: Array of CSS selectors — Specifies which elements to extract content from.
- `excludeSelectors`: Array of CSS selectors — Specifies elements to exclude from extraction (e.g., header, nav, footer).
- `extractImages`: Boolean — If set to true, images from the page will be extracted.
- `extractLinks`: Boolean — If set to true, links from the page will be extracted.
- `maxPages`: Integer — Limits the number of pages to crawl. Defaults to 1 if not set.

### Example Input

```json
{
  "urls": ["https://example.com"],
  "selectors": ["p", "h1"],
  "excludeSelectors": ["header", "footer"],
  "extractImages": true,
  "extractLinks": true,
  "maxPages": 3
}
```

## Output

The output consists of extracted data for each URL, including:

- Text: All text content extracted from the specified selectors.
- Markdown: Converted Markdown format of the extracted text.
- Metadata: Metadata such as canonical URL, title, description, and Open Graph data.
- Images: List of image URLs extracted from the page (if enabled).
- Links: List of all links found on the page (if enabled).
- Crawl Information: Includes the URL, loading time, HTTP status code, and crawl depth.

### Example Output

```json
{
  "url": "https://example.com",
  "crawl": {
    "loadedUrl": "https://example.com",
    "loadedTime": "2025-03-10T10:00:00Z",
    "depth": 0,
    "httpStatusCode": 200
  },
  "text": "Extracted text content...",
  "markdown": "**Extracted Text:**\n\nExtracted text content...",
  "metadata": {
    "canonicalUrl": "https://example.com/canonical",
    "title": "Page Title",
    "description": "Page description here",
    "openGraph": [
      { "property": "og:title", "content": "Page Title" },
      { "property": "og:description", "content": "Page description here" }
    ],
    "jsonLd": []
  },
  "images": ["https://example.com/image1.jpg", "https://example.com/image2.jpg"],
  "links": ["https://example.com/page1", "https://example.com/page2"]
}
```

## Notes

- CSS Selectors: Use valid CSS selectors to extract the specific content you need from the web pages.
- Limitations: Depending on the website, some content may be loaded dynamically via JavaScript. In such cases, make sure to enable Playwright's capabilities to handle dynamic content.
- Crawl Depth: Set `maxPages` to crawl more pages from the same domain, but be mindful of rate limits and page load times.
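
Once a run finishes, each dataset item follows the output shape documented above. The sketch below flattens one such item into a compact row for tabular processing; the field names are taken from the example output, and any item not matching that shape would need adjusted accessors.

```python
import json

# Example item shaped like the documented output (abridged).
item = {
    "url": "https://example.com",
    "crawl": {"httpStatusCode": 200, "depth": 0},
    "text": "Extracted text content...",
    "metadata": {"title": "Page Title", "description": "Page description here"},
    "images": ["https://example.com/image1.jpg"],
    "links": ["https://example.com/page1", "https://example.com/page2"],
}

def summarize_item(item):
    """Flatten one result item into a compact row (missing fields tolerated)."""
    meta = item.get("metadata", {})
    return {
        "url": item["url"],
        "status": item.get("crawl", {}).get("httpStatusCode"),
        "title": meta.get("title", ""),
        "n_images": len(item.get("images", [])),
        "n_links": len(item.get("links", [])),
    }

row = summarize_item(item)
print(json.dumps(row))
```

Rows like this drop straight into a CSV writer or a DataFrame for downstream analysis.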

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Website Content Extractor now on Apify. Free tier available with no credit card required.

Actor Information

Developer: fastidious_drawer
Pricing: Paid
Total Runs: 592
Active Users: 140
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
