Reddit Comments Scraper
by igview-owner
Scrape live Reddit comments into clean JSON with reliable pagination. Ideal for research, sentiment analysis, and building datasets directly on Apify.
About Reddit Comments Scraper
Need to pull live comments from Reddit for a project? This scraper is what I use. It grabs comments from any subreddit you point it at and structures everything into clean, ready-to-use JSON.

One of the best parts is the cursor-based pagination: it makes collecting large volumes of data reliable, so you don't miss anything or hit rate limits unexpectedly. I've fed this data directly into Apify datasets for analysis, which is super convenient.

It's perfect if you're tracking sentiment around a topic, gathering feedback on products, or building a dataset for machine learning. Whether you're a researcher, a marketer, or a developer setting up an ETL pipeline, this tool handles the messy part of data collection so you can focus on the insights. Just configure your target subreddit and let it run.
What does this actor do?
Reddit Comments Scraper is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
Documentation
Reddit Comments Scraper
Extract comments and discussions from Reddit subreddit streams. This Apify actor provides structured data with rich metadata, pagination, and filtering, suitable for data analysis, research, and monitoring.
Overview
The actor scrapes live comment feeds from specified subreddits. It handles pagination, extracts comprehensive metadata for each comment and its linked post, and outputs structured JSON. It's built for reliability with error handling and respectful rate limiting.
Key Features
- Comment Extraction: Scrapes live comment streams from any public subreddit.
- Pagination: Supports multi-page scraping with automatic cursor management for continuous data collection.
- Rich Metadata: Captures a detailed dataset for each comment.
- Structured Output: Returns clean JSON with comments organized in batches.
Extracted metadata includes:
* Comment Details: ID, author, body text, score, creation timestamp, and permalink.
* Post Context: Title, subreddit, and ID of the parent post.
* User Info: Author username and flair.
* Engagement: Upvote count and score.
* Thread Structure: Parent comment ID to map reply hierarchies.
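Because each comment carries a `parent_id` using Reddit's type-prefixed IDs (`t1_` for a comment, `t3_` for a post), reply hierarchies can be reconstructed from the flat output. A minimal sketch in Python, using the field names from the output format documented below:

```python
def build_reply_tree(comments):
    """Group comment IDs by their parent: top-level comments attach to the
    post (a t3_ parent), nested replies attach to another comment (t1_)."""
    children = {}
    for c in comments:
        children.setdefault(c["parent_id"], []).append(c["comment_id"])
    return children

# Illustrative data in the actor's output shape (IDs are made up):
comments = [
    {"comment_id": "abc123", "parent_id": "t3_xyz789"},  # top-level
    {"comment_id": "def456", "parent_id": "t1_abc123"},  # reply
    {"comment_id": "ghi789", "parent_id": "t1_abc123"},  # reply
]
tree = build_reply_tree(comments)
print(tree["t1_abc123"])  # ['def456', 'ghi789']
```

Walking the tree from the `t3_` keys downward recovers the full thread structure.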
How to Use
Configure the actor with input parameters via the Apify console, API, or a scheduled run. The core requirement is specifying a target subreddit.
Basic Configuration:
```json
{
  "subreddit": "technology",
  "maxPages": 5
}
```
This scrapes up to 5 pages of the latest comments from r/technology.
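Cursor-based pagination means each page of results returns a cursor that feeds the next request, and scraping stops at `maxPages` or when the cursor runs out. A simplified sketch of that loop in Python; `fetch_page` here is a stand-in with canned data, not the actor's real request logic:

```python
def fetch_page(subreddit, cursor):
    # Stand-in for an HTTP request to the subreddit's comment listing.
    # Returns (batch_of_comments, next_cursor); next_cursor is None
    # when there are no more pages. Real responses vary, of course.
    pages = {
        None: (["c1", "c2"], "cursor_a"),
        "cursor_a": (["c3", "c4"], "cursor_b"),
        "cursor_b": (["c5"], None),
    }
    return pages[cursor]

def scrape(subreddit, max_pages):
    comments, cursor = [], None
    for _ in range(max_pages):
        batch, cursor = fetch_page(subreddit, cursor)
        comments.extend(batch)
        if cursor is None:  # listing exhausted before max_pages
            break
    return comments

print(scrape("technology", 5))  # collects all three simulated pages
```

The cursor hand-off is what makes multi-page collection continuous: no page is fetched twice and none is skipped.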
Input Parameters
| Parameter | Type | Required | Description | Default |
|---|---|---|---|---|
| `subreddit` | String | Yes | Name of the subreddit to scrape (without the `r/` prefix). | - |
| `maxPages` | Integer | No | Number of pages of comments to scrape (1-50). | 1 |
Output Format
The actor outputs dataset items in JSON format. Each item represents a batch of comments.
Example Output Item:
```json
{
  "type": "comments_batch",
  "comments": [
    {
      "comment_id": "abc123",
      "author": "username",
      "content": "This is a comment text.",
      "score": 42,
      "created_utc": 1678886400,
      "permalink": "/r/technology/comments/...",
      "post_title": "The linked post title",
      "post_subreddit": "technology",
      "post_id": "xyz789",
      "upvotes": 42,
      "user_flair": null,
      "parent_id": "t1_def456"
    }
  ]
}
```
The dataset can be exported to JSON, CSV, or other formats via the Apify platform. For full documentation and to run the actor, visit: https://apify.com?fpr=python_automation
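If you post-process the dataset yourself instead of using the platform's built-in export, each `comments_batch` item needs to be flattened into one row per comment. A minimal Python sketch using only the standard library (the field selection is just an example):

```python
import csv
import io

def batches_to_csv(items, fields=("comment_id", "author", "score", "post_id")):
    """Flatten comments_batch dataset items into CSV text, one row per comment."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    for item in items:
        if item.get("type") != "comments_batch":
            continue  # skip any non-batch items defensively
        for comment in item.get("comments", []):
            writer.writerow({f: comment.get(f) for f in fields})
    return buf.getvalue()

# One batch item shaped like the example output above:
items = [{"type": "comments_batch", "comments": [
    {"comment_id": "abc123", "author": "username", "score": 42, "post_id": "xyz789"},
]}]
print(batches_to_csv(items))
```

`extrasaction="ignore"` keeps the writer from failing on the extra metadata fields each comment carries.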
Common Use Cases
- Market Research: gather competitive intelligence and market data
- Lead Generation: extract contact information for sales outreach
- Price Monitoring: track competitor pricing and product changes
- Content Aggregation: collect and organize content from multiple sources
Ready to Get Started?
Try Reddit Comments Scraper now on Apify. Free tier available with no credit card required.
Actor Information
- Developer: igview-owner
- Pricing: Paid
- Total Runs: 54
- Active Users: 9