Reddit Scraper (subreddits/users)
by agenscrape
About Reddit Scraper (subreddits/users)
Want to know what people are saying on Reddit? Monitor subreddits, track discussions, find trending topics, and discover what your audience is talking about. Perfect for social listening, market research, and content automation.
What does this actor do?
Reddit Scraper (subreddits/users) is a web scraping and automation tool on the Apify platform. It extracts posts and comments from public subreddits and Reddit user profiles, running entirely in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
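Runs can also be triggered from code via the Apify API. Below is a minimal sketch using the official `apify-client` Python package; the actor ID `agenscrape/reddit-scraper` and the token placeholder are assumptions, so copy the exact ID from the actor's API tab:

```python
def build_input(subreddit: str, max_results: int = 100) -> dict:
    """Build the actor input for a newest-first subreddit scrape."""
    return {
        "mode": "posts_subreddit",   # one of the four documented modes
        "subreddit": subreddit,      # without the r/ prefix
        "sort": "desc",              # newest first
        "maxResults": max_results,
    }

if __name__ == "__main__":
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient("<YOUR_APIFY_TOKEN>")
    # Actor ID is an assumption; use the one shown on the actor's API tab.
    run = client.actor("agenscrape/reddit-scraper").call(
        run_input=build_input("technology")
    )
    # Results land in the run's default dataset, pushed in batches.
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item["title"], item["score"])
```

The run blocks until the actor finishes; for long scrapes you can instead start the run and poll, or attach a webhook.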
Documentation
# Reddit Scraper

## 🤔 Questions This Answers

- 💬 **What are people saying about my brand?** Monitor brand mentions across Reddit
- 📊 **What's trending in my niche?** Track hot topics and discussions
- 🔍 **How do I find relevant discussions?** Search Reddit for specific keywords
- 👥 **What does my audience care about?** Discover popular topics in your subreddits
- 🚀 **What content performs well?** Analyze top posts and engagement

## 💼 Perfect For

### Social Media Managers

- Monitor brand mentions and sentiment
- Track competitor discussions
- Find user-generated content opportunities
- **ROI:** real-time brand monitoring for pennies

### Market Researchers

- Discover customer pain points
- Identify trending topics and interests
- Analyze competitor products and services
- **ROI:** deep market insights without expensive tools

### Content Creators

- Find trending topics for content ideas
- Discover what resonates with your audience
- Track content performance metrics
- **ROI:** data-driven content strategy

### n8n Automation Users

- Trigger workflows based on Reddit activity
- Auto-post to Slack/Discord when keywords appear
- Build custom monitoring dashboards
- **ROI:** automated social listening 24/7

## ✨ What You Get

### Complete Post Data (40+ Fields)

- ✅ **Basic info:** `id`, `name`, `title`, `author`, `author_fullname`
- ✅ **Subreddit data:** `subreddit`, `subreddit_id`, `subreddit_subscribers`
- ✅ **Engagement:** `score`, `upvote_ratio`, `ups`, `downs`, `num_comments`, `gilded`, `total_awards_received`
- ✅ **Content:** `selftext`, `selftext_html`, `url`, `permalink`, `full_link`, `domain`
- ✅ **Timestamps:** `created_utc`, `created` (ISO format)
- ✅ **Media:** `thumbnail`, `thumbnail_height`, `thumbnail_width`, `media`, `media_embed`, `preview`
- ✅ **Flags:** `is_self`, `is_video`, `is_original_content`, `over_18`, `spoiler`, `locked`, `stickied`
- ✅ **Flair:** `link_flair_text`, `link_flair_background_color`, `link_flair_css_class`, `author_flair_text`
- ✅ **Metadata:** `distinguished`, `num_crossposts`, `is_reddit_media_domain`

### Complete Comment Data (25+ Fields)

- ✅ **Basic info:** `id`, `name`, `author`, `author_fullname`
- ✅ **Content:** `body`, `body_html`
- ✅ **Engagement:** `score`, `ups`, `downs`, `gilded`, `total_awards_received`, `controversiality`
- ✅ **Context:** `subreddit`, `subreddit_id`, `link_id`, `link_title`, `link_permalink`, `parent_id`, `depth`
- ✅ **Timestamps:** `created_utc`, `created` (ISO format)
- ✅ **Links:** `permalink`, `full_link`
- ✅ **Flags:** `is_submitter`, `stickied`, `score_hidden`, `distinguished`
- ✅ **Flair:** `author_flair_text`

## 📊 Sample Output

### Post Data

```json
{
  "id": "1abc123",
  "name": "t3_1abc123",
  "title": "Just launched my SaaS product - here's what I learned",
  "author": "entrepreneur_joe",
  "author_fullname": "t2_abc123",
  "subreddit": "SaaS",
  "subreddit_id": "t5_2qh16",
  "subreddit_subscribers": 150000,
  "score": 247,
  "upvote_ratio": 0.95,
  "ups": 247,
  "downs": 0,
  "num_comments": 42,
  "created_utc": 1700000000,
  "created": "2023-11-14T22:13:20.000Z",
  "url": "https://example.com/article",
  "permalink": "/r/SaaS/comments/1abc123/just_launched_my_saas_product/",
  "full_link": "https://www.reddit.com/r/SaaS/comments/1abc123/just_launched_my_saas_product/",
  "domain": "example.com",
  "selftext": "After 6 months of development...",
  "selftext_html": "<div class=\"md\">After 6 months...</div>",
  "thumbnail": "https://b.thumbs.redditmedia.com/...",
  "thumbnail_height": 140,
  "thumbnail_width": 140,
  "is_self": false,
  "is_video": false,
  "is_original_content": true,
  "is_reddit_media_domain": false,
  "over_18": false,
  "spoiler": false,
  "locked": false,
  "stickied": false,
  "distinguished": null,
  "link_flair_text": "Success Story",
  "link_flair_background_color": "#0079d3",
  "link_flair_css_class": "success",
  "author_flair_text": "Founder",
  "gilded": 2,
  "total_awards_received": 5,
  "num_crossposts": 3,
  "media": null,
  "media_embed": {},
  "preview": null
}
```

### Comment Data

```json
{
  "id": "abc123",
  "name": "t1_abc123",
  "author": "helpful_user",
  "author_fullname": "t2_xyz789",
  "body": "Great insights! Thanks for sharing your journey.",
  "body_html": "<div class=\"md\">Great insights!...</div>",
  "score": 45,
  "ups": 45,
  "downs": 0,
  "subreddit": "SaaS",
  "subreddit_id": "t5_2qh16",
  "created_utc": 1700000100,
  "created": "2023-11-14T22:15:00.000Z",
  "permalink": "/r/SaaS/comments/1abc123/post_title/abc123/",
  "full_link": "https://www.reddit.com/r/SaaS/comments/1abc123/post_title/abc123/",
  "link_id": "t3_1abc123",
  "link_title": "Just launched my SaaS product",
  "link_permalink": "/r/SaaS/comments/1abc123/just_launched_my_saas_product/",
  "parent_id": "t3_1abc123",
  "depth": 0,
  "is_submitter": false,
  "stickied": false,
  "score_hidden": false,
  "distinguished": null,
  "author_flair_text": "Moderator",
  "gilded": 0,
  "total_awards_received": 1,
  "controversiality": 0
}
```

## 🎯 Input Parameters

### Required Parameters

| Parameter | Type | Description | Example |
|-----------|------|-------------|---------|
| `mode` | Select | What to scrape: Posts (Subreddit), Posts (Author), Comments (Subreddit), Comments (Author) | `"posts_subreddit"` |
| `subreddit` | String | Subreddit name (without r/); used for Subreddit modes | `"technology"` |
| `author` | String | Reddit username; used for Author modes | `"reddit"` |

### Optional Parameters

| Parameter | Type | Description | Default | Example |
|-----------|------|-------------|---------|---------|
| `sort` | Select | Sort order: `asc` (oldest first) or `desc` (newest first) | `"desc"` | `"desc"` |
| `after` | Number | Unix timestamp (seconds); return posts/comments AFTER this time. Use with "Oldest First" sort | `null` | `1700000000` |
| `before` | Number | Unix timestamp (seconds); return posts/comments BEFORE this time. Use with "Newest First" sort | `null` | `1700000000` |
| `maxResults` | Number | Maximum results to return (no hard cap) | `1000` | `5000` |

### Important Notes

- Choose the correct mode for subreddit or author scraping; they are separate
- `after` timestamp: use with ascending sort (oldest first) to get posts after a certain date
- `before` timestamp: use with descending sort (newest first) to get posts before a certain date
- `maxResults` has no limit: scrape as many as you need
- Results are pushed to the dataset in batches as they are scraped (real-time output)

---

## 🎯 How to Use

### Mode 1: Posts (Subreddit)

Get posts from a specific subreddit.

**Input:**
- Mode: "Posts (Subreddit)"
- Subreddit: "technology" (without r/)
- Sort: "Newest First" or "Oldest First"
- Max Results: 1000

**Use case:** monitor subreddit activity

### Mode 2: Posts (Author)

Get all posts by a specific user.

**Input:**
- Mode: "Posts (Author)"
- Author: "reddit" (username)
- Sort: "Newest First" or "Oldest First"
- Max Results: 500

**Use case:** track influencer or competitor posts

### Mode 3: Comments (Subreddit)

Get comments from a specific subreddit.

**Input:**
- Mode: "Comments (Subreddit)"
- Subreddit: "AskReddit" (without r/)
- Sort: "Newest First" or "Oldest First"
- Max Results: 1000

**Use case:** track discussions and sentiment

### Mode 4: Comments (Author)

Get all comments by a specific user.

**Input:**
- Mode: "Comments (Author)"
- Author: "reddit" (username)
- Sort: "Newest First" or "Oldest First"
- Max Results: 500

**Use case:** monitor a user's comment activity

### Using Timestamps for Date Filtering

#### Before Timestamp (use with "Newest First")

Get posts/comments BEFORE a specific date:

- Mode: "Posts (Subreddit)"
- Subreddit: "technology"
- Sort: "Newest First" (desc)
- Before: 1700000000 (Unix timestamp in seconds)

**Use case:** get historical data before a certain date

#### After Timestamp (use with "Oldest First")

Get posts/comments AFTER a specific date:

- Mode: "Posts (Subreddit)"
- Subreddit: "technology"
- Sort: "Oldest First" (asc)
- After: 1700000000 (Unix timestamp in seconds)

**Use case:** incremental scraping; fetch only posts that are new since the last run

**How to get a Unix timestamp (seconds):**

- JavaScript: `Math.floor(Date.now() / 1000)` or `Math.floor(new Date('2024-11-15').getTime() / 1000)`
- Python: `int(datetime(2024, 11, 15).timestamp())`
- Online: use epochconverter.com

### Sorting Options

- **Newest First (desc):** most recent first; use with the `before` timestamp
- **Oldest First (asc):** oldest first; use with the `after` timestamp

## 💰 Pricing & Value

### Usage-Based Pricing

Pay only for what you scrape:

- $0.01 per run (base cost)
- $0.001 per post ($1 per 1,000 posts)

#### Real Cost Examples

- 100 posts: $0.01 + (100 × $0.001) = $0.11
- 500 posts: $0.01 + (500 × $0.001) = $0.51
- 1,000 posts: $0.01 + (1,000 × $0.001) = $1.01

### Why This Pricing Wins

Compared to social listening tools:

- Brand24: $79/month (10,000 mentions)
- Mention: $41/month (limited searches)
- Brandwatch: $800+/month (enterprise only)
- This actor: $0.01-$2/day for most use cases

Better for everyone:

- Light users: monitor 5 subreddits daily for about $1.65/month vs $79/month
- Heavy users: track 100 keywords for about $15/month vs $800/month
- Savings: up to $785/month, or $9,420/year

## 💡 Use Cases & Examples

### Monitor Subreddit Posts

Track all posts from a subreddit:

```json
{
  "mode": "posts_subreddit",
  "subreddit": "technology",
  "sort": "desc",
  "maxResults": 1000
}
```

### Track User Posts

Monitor a specific user's posts:

```json
{
  "mode": "posts_author",
  "author": "reddit",
  "sort": "desc",
  "maxResults": 500
}
```

### Scrape Subreddit Comments

Get all comments from a subreddit:

```json
{
  "mode": "comments_subreddit",
  "subreddit": "AskReddit",
  "sort": "desc",
  "maxResults": 1000
}
```

### Track User Comments

Monitor what a user is commenting:

```json
{
  "mode": "comments_author",
  "author": "reddit",
  "sort": "desc",
  "maxResults": 500
}
```

### Get Historical Data (Before a Date)

Get posts before a specific timestamp (newest first):

```json
{
  "mode": "posts_subreddit",
  "subreddit": "technology",
  "sort": "desc",
  "before": 1700000000,
  "maxResults": 5000
}
```

### Incremental Scraping (After a Date)

Get posts after a specific timestamp (oldest first):

```json
{
  "mode": "posts_subreddit",
  "subreddit": "technology",
  "sort": "asc",
  "after": 1700000000,
  "maxResults": 5000
}
```

**Pro tip:** save the `created_utc` from your last result and use it as the `after` (for asc) or `before` (for desc) value in your next run to avoid duplicates.

## 🔧 n8n Integration Examples

### Subreddit Monitoring

1. Reddit Scraper → get posts from r/yourproduct
2. Filter → only posts with more than 10 upvotes
3. Slack → send a notification to the #social-media channel

### Influencer Tracking

1. Reddit Scraper → get posts by a specific author
2. Airtable → save to an influencer database
3. Email → weekly digest of their activity

### Comment Analysis

1. Reddit Scraper → get comments from r/yourproduct
2. Filter → sentiment analysis
3. Google Sheets → log insights

### User Activity Alerts

1. Reddit Scraper → track a specific user's posts/comments
2. Filter → new activity only
3. Discord → alert the team channel

## ⚡ What Makes This Different

- **No API keys:** uses public Reddit data; no authentication needed
- **Unlimited subreddits:** monitor as many communities as you want
- **Incremental scraping:** resume from any timestamp to avoid duplicates
- **Author tracking:** monitor specific users across Reddit
- **Smart pagination:** automatic retry on rate limits
- **Full post data:** title, content, comments, scores, and more
- **Fast and reliable:** built-in retry logic and error handling
- **n8n ready:** output format suited to workflow automation

## 📈 Key Insights You Can Get

### Engagement Metrics

- **Score trends:** which topics get the most upvotes
- **Comment activity:** which posts spark discussions
- **Time patterns:** when posts perform best
- **Subreddit comparison:** where your audience is most active

### Content Analysis

- **Popular topics:** what's trending right now
- **Question types:** what people are asking
- **Problem detection:** common pain points mentioned
- **Success stories:** what solutions work

### Audience Research

- **Language used:** how people talk about topics
- **Common concerns:** frequently mentioned issues
- **Popular solutions:** what products/services they recommend
- **Community size:** how active each subreddit is

## ❓ FAQ

### Q: Do I need a Reddit account?

A: No. This uses Reddit's public JSON API; no authentication is required.

### Q: How fast is it?

A: It scrapes 100 posts in 10-30 seconds, depending on subreddit size.

### Q: Can I get comments too?

A: Yes. Use the Comments (Subreddit) or Comments (Author) mode.

### Q: What if I hit rate limits?

A: Built-in retry logic automatically waits and retries.

### Q: Can I monitor private subreddits?

A: No, only public subreddits can be scraped.

### Q: How fresh is the data?

A: Real-time. Data comes directly from Reddit as it's posted.

### Q: Can I use this in n8n?

A: Absolutely. It's built for triggering workflows based on Reddit activity.

## 🎉 Success Metrics

- 100% public data (no auth required)
- 10-30 second scrape time (per 100 posts)
- 99%+ success rate (built-in retry logic)
- Pay per result ($0.001/post + $0.01/run)

## 🚀 Getting Started

### Step 1: Choose Your Mode

- Posts (Subreddit): get posts from a subreddit
- Posts (Author): get posts by a specific user
- Comments (Subreddit): get comments from a subreddit
- Comments (Author): get comments by a specific user

### Step 2: Configure Filters

- For Subreddit modes: enter the subreddit name (without r/)
- For Author modes: enter the Reddit username
- Choose a sort order (newest/oldest first)
- (Optional) Set a `before` or `after` timestamp for date filtering
- Set max results (no limit)

### Step 3: Run & Analyze

- Click "Start" to begin scraping
- Results appear in real time as batches are pushed
- Export to CSV or use in n8n workflows

### Step 4: Automate (Optional)

- Connect to n8n for automation
- Set up scheduled runs
- Use timestamps for incremental scraping

---

## 💪 Competitive Advantages

| Feature | This Actor | Brand24 | Mention | Manual |
|---------|-----------|---------|---------|--------|
| Reddit coverage | ✅ All public | ✅ | ✅ | ✅ |
| Real-time | ✅ | ✅ | ✅ | ✅ |
| No subscription | ✅ | ❌ | ❌ | ✅ |
| Pay per result | ✅ | ❌ | ❌ | N/A |
| Comment data | ✅ | ❌ | ❌ | ✅ |
| n8n integration | ✅ | ❌ | ❌ | ❌ |
| API access | ✅ | $$ | $$ | ❌ |
| Cost (1,000 posts/day) | $30/mo | $79/mo | $41/mo | Free |

---

Ready to monitor Reddit like a pro? Start tracking now!

Built for marketers, researchers, and automation enthusiasts 🚀
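The timestamp recipes and the incremental-scraping pro tip above can be sketched in a few lines of Python. The helper names and the one-day first-run fallback window are hypothetical choices, not part of the actor:

```python
import time
from datetime import datetime, timezone


def to_unix_seconds(dt: datetime) -> int:
    """Convert a datetime to the Unix-seconds value the actor expects."""
    return int(dt.timestamp())


def next_after(items: list[dict], fallback_days: int = 1) -> int:
    """Return the `after` value for the next ascending-sort run.

    Uses the newest created_utc already scraped so the next run only
    fetches newer posts; on a first run (no items yet) it falls back
    to `fallback_days` ago, which is an arbitrary starting window.
    """
    if items:
        return max(int(it["created_utc"]) for it in items)
    return int(time.time()) - fallback_days * 86400


# Example: scrape posts created since 15 Nov 2024 (UTC)
after = to_unix_seconds(datetime(2024, 11, 15, tzinfo=timezone.utc))
```

Feeding `next_after(...)` into the `after` input with `"sort": "asc"` gives duplicate-free incremental runs, since each run starts where the previous one left off.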
Ready to Get Started?
Try Reddit Scraper (subreddits/users) now on Apify. Free tier available with no credit card required.
Actor Information
- Developer: agenscrape
- Pricing: Paid
- Total runs: 207
- Active users: 11
Related Actors
Google Search Results Scraper
by apify
Google Search Results (SERP) Scraper
by scraperlink
Google Search
by devisty
Bing Search Scraper
by tri_angle