Intelligent Website Scrapper

by happitap

9,107 runs
147 users

About Intelligent Website Scrapper

An intelligent website scraper that uses LangChain and LLM to extract and process content based on high-level goals like summarization, product extraction, service extraction, and FAQ extraction.

What does this actor do?

Intelligent Website Scrapper is a web scraping and automation tool available on the Apify platform. It loads pages in the cloud and uses LangChain with an LLM to extract and process content according to a high-level goal, such as summarization or product, service, and FAQ extraction.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
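Before step 4, the input you configure follows the parameter table in the README below (startUrls required; taskType, maxDepth, and followInternalLinks optional with documented defaults). As a sketch only, here is a hypothetical helper (not part of the actor itself) that applies those documented defaults and rejects obviously invalid input before a run:

```javascript
// Hypothetical helper, not part of the actor's codebase. It mirrors the
// documented input schema: taskType defaults to "summarize", maxDepth to 1,
// and followInternalLinks to false; startUrls is required.
const TASK_TYPES = ['summarize', 'extractProducts', 'extractServices', 'extractFAQs'];

function prepareInput({ startUrls, taskType = 'summarize', maxDepth = 1, followInternalLinks = false }) {
  if (!Array.isArray(startUrls) || startUrls.length === 0) {
    throw new Error('startUrls is required and must be a non-empty array of { url } objects');
  }
  if (!TASK_TYPES.includes(taskType)) {
    throw new Error(`Unsupported taskType: ${taskType}`);
  }
  return { startUrls, taskType, maxDepth, followInternalLinks };
}

const input = prepareInput({ startUrls: [{ url: 'https://example.com' }], taskType: 'extractFAQs' });
console.log(input.maxDepth); // 1 (the documented default)
```

Passing this object as the actor's JSON input on Apify should then produce one dataset record per processed URL.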

Documentation

Intelligent Website Scraper

An Apify actor that uses LangChain and LLM to intelligently scrape and process website content based on high-level goals.

## Features

- Universal Website Scraping: Works on any website, not limited to specific platforms
- Intelligent Content Processing: Uses LangChain + OpenAI to extract and summarize content
- Multiple Task Types: Support for summarization, product extraction, service extraction, and FAQ extraction
- Configurable Crawling: Optional internal link following with depth control
- Clean Content Extraction: Removes scripts, styles, and irrelevant content

## Input

The actor accepts the following input format:

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "taskType": "extractServices",
  "maxDepth": 1,
  "followInternalLinks": false
}
```

### Input Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| startUrls | Array | Yes | - | Array of objects with a `url` property |
| taskType | String | No | summarize | Type of content processing task |
| maxDepth | Number | No | 1 | Maximum depth for internal link following |
| followInternalLinks | Boolean | No | false | Whether to follow internal links |

### Supported Task Types

| Task Type | Description |
|-----------|-------------|
| summarize | Summarize entire site content |
| extractProducts | Identify and extract product-related sections |
| extractServices | Extract service listings or offerings |
| extractFAQs | Pull FAQ-like content from the page |

## Output

The actor outputs structured data for each processed URL:

```json
{
  "url": "https://example.com",
  "title": "Example Website",
  "taskType": "extractServices",
  "processedContent": "AI-processed content based on task type...",
  "rawContent": "First 1000 characters of raw content...",
  "scrapedAt": "2024-01-01T00:00:00.000Z",
  "metadata": {
    "wordCount": 1500,
    "linksFound": 25,
    "imagesFound": 10
  }
}
```

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| OPENAI_API_KEY | Yes | Your OpenAI API key for LangChain integration |

## Example Usage

### Basic Summarization

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "taskType": "summarize"
}
```

### Extract Services with Internal Link Following

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "taskType": "extractServices",
  "followInternalLinks": true,
  "maxDepth": 2
}
```

### Extract Products from Multiple URLs

```json
{
  "startUrls": [{ "url": "https://shop1.com" }, { "url": "https://shop2.com" }],
  "taskType": "extractProducts"
}
```

## How It Works

1. Content Extraction: Uses Puppeteer to load pages and Cheerio to extract clean content
2. Intelligent Processing: LangChain processes content based on the specified task type
3. Structured Output: Returns processed content with metadata and the original URL
4. Optional Crawling: Can follow internal links to gather more comprehensive data

## Installation

1. Clone this repository
2. Install dependencies: `npm install`
3. Set your `OPENAI_API_KEY` environment variable
4. Run the actor: `npm start`

## Development

- `npm start` - Run the actor
- `npm run format` - Format code with Prettier
- `npm run lint` - Run ESLint
- `npm run lint:fix` - Fix ESLint issues

## Architecture

- `src/main.js` - Main entry point and input validation
- `src/routes.js` - Request routing
- `src/handlers/websiteScraper.js` - Main scraping logic
- `src/services/langchainService.js` - LangChain integration and task processing
- `src/puppeteerLauncher.js` - Puppeteer browser configuration
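The "Clean Content Extraction" step above (step 1 of How It Works) can be illustrated with a deliberately simplified sketch. The actual actor uses Puppeteer and Cheerio in src/handlers/websiteScraper.js; the plain-string version here only shows the idea of dropping script/style blocks and tags before the text reaches the LLM, and the function name is illustrative, not from the actor's code:

```javascript
// Simplified illustration only: the real actor uses Cheerio on
// Puppeteer-rendered HTML. The idea is the same -- remove <script>/<style>
// blocks entirely, strip remaining tags, and collapse whitespace.
function extractCleanText(html) {
  return html
    .replace(/<(script|style)[^>]*>[\s\S]*?<\/\1>/gi, ' ') // drop script/style blocks with their contents
    .replace(/<[^>]+>/g, ' ')                              // strip remaining tags
    .replace(/\s+/g, ' ')                                  // collapse runs of whitespace
    .trim();
}

const html = '<html><head><style>p{color:red}</style></head>' +
  '<body><p>Our services</p><script>track()</script></body></html>';
console.log(extractCleanText(html)); // "Our services"
```

Only the cleaned text (plus counts like word count and links found) would then be handed to the LangChain task for summarization or extraction.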

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Intelligent Website Scrapper now on Apify. Free tier available with no credit card required.

Actor Information

Developer
happitap
Pricing
Paid
Total Runs
9,107
Active Users
147
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
