No-BS Content Crawler 🖕
by successful_nonagon
About No-BS Content Crawler 🖕
Fast web crawler that extracts clean text from websites. Returns readable content, headings, and links. Perfect for content aggregation, SEO research, and data collection.
What does this actor do?
No-BS Content Crawler 🖕 is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
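The API access mentioned above can be sketched with Apify's standard `run-sync-get-dataset-items` endpoint, which starts a run and returns its dataset items in one request. This is a minimal sketch, not an official client; the actor ID `successful_nonagon~no-bs-content-crawler`, the token placeholder, and the input shape are assumptions to substitute with your own values.

```typescript
// Minimal sketch of calling an Actor through the Apify REST API (v2).
// Actor ID and token below are placeholders - substitute your own.
function buildRunSyncUrl(actorId: string, token: string): string {
  // run-sync-get-dataset-items starts a run and returns the dataset items.
  return `https://api.apify.com/v2/acts/${encodeURIComponent(actorId)}/run-sync-get-dataset-items?token=${encodeURIComponent(token)}`;
}

async function runActor(actorId: string, token: string, input: object) {
  const res = await fetch(buildRunSyncUrl(actorId, token), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(input), // input must match the actor's input schema
  });
  return res.json(); // array of dataset items
}

console.log(buildRunSyncUrl('successful_nonagon~no-bs-content-crawler', 'MY_TOKEN'));
```

The same call works from any language or from integration tools; the token comes from your Apify account settings.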
How to Use
1. Click "Try This Actor" to open it on Apify.
2. Create a free Apify account if you don't have one.
3. Configure the input parameters as needed.
4. Run the actor and download your results.
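The input fields are defined by the actor's input schema. The field names below (`startUrls`, `maxPagesPerCrawl`) come from the Crawlee template described in the Documentation section; the URL and page limit are placeholder values, not defaults of this actor.

```json
{
  "startUrls": [{ "url": "https://example.com" }],
  "maxPagesPerCrawl": 50
}
```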
Documentation
TypeScript Crawlee & CheerioCrawler Actor Template

This template was built with Crawlee to scrape data from a website using Cheerio wrapped in CheerioCrawler.

## Quick Start

Once you've installed the dependencies, start the Actor:

```bash
apify run
```

Once your Actor is ready, you can push it to the Apify Console:

```bash
apify login  # first, you need to log in if you haven't already done so
apify push
```

## Project Structure

```text
.actor/
├── actor.json           # Actor config: name, version, env vars, runtime settings
├── dataset_schema.json  # Structure and representation of data produced by the Actor
├── input_schema.json    # Input validation & Console form definition
└── output_schema.json   # Specifies where an Actor stores its output
src/
└── main.ts              # Actor entry point and orchestrator
storage/                 # Local storage (mirrors Cloud during development)
├── datasets/            # Output items (JSON objects)
├── key_value_stores/    # Files, config, INPUT
└── request_queues/      # Pending crawl requests
Dockerfile               # Container image definition
```

For more information, see the Actor definition documentation.

## How it works

This code is a TypeScript script that uses Cheerio to scrape data from a website, then stores the page titles in a dataset.

- The crawler starts with the URLs provided in the `startUrls` input field defined by the input schema. The number of scraped pages is limited by the `maxPagesPerCrawl` input field.
- For each URL, the crawler's `requestHandler` extracts data from the page with the Cheerio library and saves the title and URL of each page to the dataset. It also logs each result as it is saved.
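The `requestHandler` extraction step described above can be sketched without external packages. In the real template, Crawlee's CheerioCrawler passes a Cheerio instance `$` to the handler, which typically calls `$('title').text()`; here a regex stands in for Cheerio so the snippet is self-contained.

```typescript
// Dependency-free stand-in for the template's requestHandler extraction step.
// In the real Actor, Crawlee's CheerioCrawler supplies `$` (a Cheerio
// instance); a regex substitutes here so the sketch runs on its own.
interface DatasetItem { url: string; title: string }

function extractItem(url: string, html: string): DatasetItem {
  const m = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
  return { url, title: m ? m[1].trim() : '' };
}

const html = '<html><head><title>Example Domain</title></head><body></body></html>';
console.log(extractItem('https://example.com', html));
```

In the dataset, each item would then carry the same two attributes, `url` and `title`, matching the "same attributes per object" convention noted under "What's included".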
## What's included

- Apify SDK - toolkit for building Actors
- Crawlee - web scraping and browser automation library
- Input schema - define and easily validate a schema for your Actor's input
- Dataset - store structured data where each stored object has the same attributes
- Cheerio - a fast, flexible & elegant library for parsing and manipulating HTML and XML
- Proxy configuration - rotate IP addresses to prevent blocking

## Resources

- Quick Start guide for building your first Actor
- Video tutorial on building a scraper using CheerioCrawler
- Written tutorial on building a scraper using CheerioCrawler
- Web scraping with Cheerio in 2023
- How to scrape a dynamic page using Cheerio
- Integrations with Zapier, Make, Google Drive, and others
- Video guide on getting data using the Apify API

## Creating Actors with templates

How to create Apify Actors with web scraping code templates.

## Getting started

For complete information, see this article. In short, you will:

1. Build the Actor
2. Run the Actor

## Pull the Actor for local development

If you would like to develop locally, you can pull the existing Actor from the Apify Console using the Apify CLI:

1. Install `apify-cli`.

   Using Homebrew:

   ```bash
   brew install apify-cli
   ```

   Using NPM:

   ```bash
   npm -g install apify-cli
   ```

2. Pull the Actor by its unique `<ActorId>`, which is one of the following:

   - the unique name of the Actor to pull (e.g. "apify/hello-world")
   - the ID of the Actor to pull (e.g. "E2jjCZBezvAZnX8Rb")

   You can find both by clicking the Actor title at the top of the page, which opens a modal containing both the Actor's unique name and ID.

   This command copies the Actor into the current directory on your local machine:

   ```bash
   apify pull <ActorId>
   ```

## Documentation reference

To learn more about Apify and Actors, take a look at the following resources:

- Apify SDK for JavaScript documentation
- Apify SDK for Python documentation
- Apify Platform documentation
- Join our developer community on Discord
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try No-BS Content Crawler 🖕 now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- successful_nonagon
- Pricing
- Paid
- Total Runs
- 60
- Active Users
- 8
Related Actors
Google Search Results Scraper
by apify
Google Search Results (SERP) Scraper
by scraperlink
Google Search
by devisty
Bing Search Scraper
by tri_angle
Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.