Reddit Scraper Pro
by kalirobot
About Reddit Scraper Pro
High-reliability Reddit scraper with no artificial limits. Extract posts, comments, users, and subreddit data at scale. Supports JSON API, OAuth, HTML, and Playwright modes. Full comment trees, working filters, NSFW support, multi-account rate-boost.
What does this actor do?
Reddit Scraper Pro is a web scraping and automation tool that runs on the Apify platform. It extracts posts, comments, user profiles, and subreddit data from Reddit at scale, entirely in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications (see the Python sketch after the steps below)
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
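If you prefer to drive the actor from code, the same configure-run-download flow works through the Apify API. Below is a minimal Python sketch using the official `apify-client` package and the input from Example 1 in the documentation; the actor ID string is an assumption based on this listing's name and author, so replace it with the ID shown on the actor's Apify page.

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token (Apify console -> Settings -> Integrations).
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Input mirrors Example 1 from the documentation below.
run_input = {
    "subreddits": ["python", "webdev", "machinelearning"],
    "scrapeOptions": {"posts": True, "comments": True, "subredditInfo": True},
    "limits": {"maxPosts": 50, "maxCommentsPerPost": 20},
    "filters": {"sortBy": "hot", "minUpvotes": 10},
}

# NOTE: the actor ID below is assumed from this listing's name and author;
# replace it with the actual ID shown on the actor's Apify page.
run = client.actor("kalirobot/reddit-scraper-pro").call(run_input=run_input)

# Download the results from the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("type"), item.get("subreddit"), item.get("title"))
```

`.call()` blocks until the run finishes; use `.start()` instead if you want to kick off a run and collect the results later.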
Documentation
Reddit Scraper Pro

Extract posts, comments, subreddits, and user data from Reddit without limitations.

## Features

- ✅ No artificial limits - Scrape 100K+ posts (competitors stop at 1000!)
- ✅ 98%+ Success Rate - Most reliable Reddit scraper
- ✅ All filters work - Date, upvotes, sort (competitors ignore them!)
- ✅ Nested comments - Full comment trees with unlimited depth
- ✅ 4 Scraping Modes - JSON API, OAuth API, HTML Scraping, Browser (Playwright)
- ✅ Multi-Account Support - Add multiple Reddit apps for 100 QPM each
- ✅ NSFW Support - Works without login (Modes 1, 2 & 4)

## Quick Start Examples

### Example 1: Scrape Multiple Subreddits

```json
{
  "subreddits": ["python", "webdev", "machinelearning"],
  "scrapeOptions": { "posts": true, "comments": true, "subredditInfo": true },
  "limits": { "maxPosts": 50, "maxCommentsPerPost": 20 },
  "filters": { "sortBy": "hot", "minUpvotes": 10 }
}
```

### Example 2: Scrape Specific URLs

```json
{
  "startUrls": [
    {"url": "https://www.reddit.com/r/python/"},
    {"url": "https://www.reddit.com/r/webdev/comments/abc123/title/"},
    {"url": "https://www.reddit.com/user/spez/"}
  ],
  "scrapeOptions": { "posts": true, "comments": true }
}
```

### Example 3: r/popular with Country Filter

```json
{
  "specialSubreddit": "popular",
  "countryCode": "US",
  "filters": { "sortBy": "top", "timeRange": "week" },
  "limits": { "maxPosts": 100 }
}
```

→ Scrapes: https://www.reddit.com/r/popular/top/?geo_filter=US&t=week

### Example 4: r/all - All of Reddit

```json
{
  "specialSubreddit": "all",
  "filters": { "sortBy": "controversial", "timeRange": "month", "minUpvotes": 100 }
}
```

→ Scrapes: https://www.reddit.com/r/all/controversial/?t=month

### Example 5: Search for Posts

```json
{
  "searchKeywords": ["GPT-4", "artificial intelligence"],
  "filters": { "timeRange": "week", "minUpvotes": 50, "sortBy": "top" },
  "limits": { "maxPosts": 200 }
}
```

### Example 6: Discover Subreddits (Browser Mode)

```json
{
  "scrapingMode": "browser",
  "searchSubreddits": ["webdev", "python", "datascience"]
}
```

→ Finds: r/webdev, r/web_design, r/WebdevTutorials, r/python, r/learnpython, etc.

### Example 7: Deep Comment Extraction

```json
{
  "subreddits": ["AskReddit"],
  "scrapeOptions": { "posts": true, "comments": true },
  "limits": { "maxPosts": 10, "maxCommentDepth": 20, "maxCommentsPerPost": 100 }
}
```

### Example 8: User Profile with Activity

```json
{
  "startUrls": [
    {"url": "https://www.reddit.com/user/AutoModerator/"}
  ],
  "scrapeOptions": { "users": true },
  "scrapingMode": "browser"
}
```

## Scraping Modes

### Mode 1: JSON API (Default - Recommended ⚡)

- No authentication needed
- ~60 requests/minute
- ✅ NSFW content supported
- Fast and reliable (1-2s per page)
- Best for most use cases

### Mode 2: Official Reddit API (Power Users 🔑)

- Requires Reddit App credentials (Create App)
- 100 requests/minute per account
- Add multiple accounts for higher limits!
- ✅ NSFW content supported
- OAuth-based authentication
- Highest rate limits

### Mode 3: HTML Scraping (Fallback 📄)

- No authentication needed
- No JavaScript rendering required
- Works for SFW content
- ⚠️ NSFW not available (Reddit limitation)
- Lightweight and efficient

### Mode 4: Browser (Playwright 🌐)

- Headless Chromium browser
- Full JavaScript rendering
- ✅ NSFW content supported
- ✅ Nested comments (unlimited depth!)
- ✅ User profiles with their posts/comments
- ✅ All media: Images, Videos, External URLs
- Slower (~2-5s per page) but most robust
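For large jobs the main practical difference between the API modes is throughput. As a rough sense of scale, here is a back-of-the-envelope Python estimate using the per-mode rate limits listed above; the figure of roughly 100 posts per request is an assumption, not a documented value, and will vary by mode and endpoint.

```python
import math

def estimated_minutes(total_posts: int, requests_per_minute: int,
                      posts_per_request: int = 100) -> float:
    """Rough lower bound on wall-clock time for a large listing scrape."""
    # posts_per_request=100 is an assumption, not a documented figure.
    requests_needed = math.ceil(total_posts / posts_per_request)
    return requests_needed / requests_per_minute

for label, qpm in [
    ("Mode 1: JSON API (~60 QPM)", 60),
    ("Mode 2: Official API, 1 app (100 QPM)", 100),
    ("Mode 2: Official API, 2 apps (200 QPM)", 200),
]:
    print(f"{label}: ~{estimated_minutes(100_000, qpm):.0f} min for 100K posts")
```

Browser mode is slower per page (~2-5 s), so treat these numbers as an API-mode ceiling rather than a guarantee.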
## What You Can Extract

Posts:
- Title, text, URL
- Author, score, comments count
- Timestamps, awards
- Image URLs (direct download links!)
- Video URLs
- Subreddit info

Comments:
- Full nested comment trees
- Author, body, score
- Timestamps
- Reply chains (unlimited depth!)

Subreddits:
- Subscriber count, description
- Banner & icon images
- Creation date

Users:
- Karma scores, account age
- Profile icons
- Post/comment history

## Input Configuration

- Subreddits: List of subreddit names (without r/)
- Search Keywords: Search Reddit for specific terms
- Start URLs: Direct URLs to posts, subreddits, or users

Filters:
- Time Range: hour, day, week, month, year, all
- Min/Max Upvotes
- Sort: hot, new, top, controversial, rising
- Exclude NSFW

Limits:
- Max Posts (null = unlimited!)
- Max Comments per Post
- Max Comment Depth

## Multi-Account Setup (Optional)

For higher rate limits, add multiple Reddit App credentials:

1. Create apps at https://www.reddit.com/prefs/apps
2. Choose "script" type
3. Add to input:

```json
{
  "scrapingMode": "official_api",
  "redditCredentials": [
    {"client_id": "...", "client_secret": "..."},
    {"client_id": "...", "client_secret": "..."}
  ]
}
```

Result: 100 QPM per account = 200 QPM total!

## Why This Scraper?

| Feature | Competitors | Reddit Scraper Pro |
|---------|-------------|--------------------|
| Post Limits | 1000-6000 | ✅ Unlimited |
| Filters | Often broken | ✅ Always work |
| Success Rate | ~94% | ✅ >98% |
| Comments | Limited | ✅ Full depth |
| Multi-Account | ❌ | ✅ Yes |
| Modes | 1 | ✅ 4 |

## Use Cases

- Research: Sentiment analysis, trend tracking
- Marketing: Brand monitoring, competitor analysis
- Data Science: Training data, NLP
- Journalism: News monitoring, fact checking

## Output Example

```json
{
  "id": "abc123",
  "type": "post",
  "subreddit": "Python",
  "title": "...",
  "author": "username",
  "score": 234,
  "num_comments": 89,
  "url": "https://i.redd.it/image.jpg",
  "created_at": "2025-11-03T10:00:00",
  "over_18": false,
  "is_video": false,
  "comments": [...]
}
```

## API Resources

- Reddit OAuth Guide
- Data API Wiki
- Create Reddit App

## Legal

- Scrapes only public data
- Users are responsible for Reddit ToS compliance
- Reddit Data API Terms

## Have any issues or feedback?

A crawler can only be as good as its feedback! If you encounter any problems or have feature requests, please don't hesitate to reach out. I'm committed to fixing bugs and implementing your suggestions to make this the best Reddit scraper possible. Thank you for your support! 🙏

## Version 0.1.0 - Initial Release
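Downstream processing is straightforward because every dataset item follows the shape shown in the Output Example above. Here is a small Python sketch that ranks SFW posts by score from an exported dataset file; the filename is illustrative, and the assumption that each record carries a `type` field comes from the example output rather than a published schema.

```python
import json

# Load a JSON dataset export downloaded from the Apify console or API;
# the filename here is just an example.
with open("reddit-scraper-pro-dataset.json", encoding="utf-8") as f:
    items = json.load(f)

# Keep SFW posts only, using the fields shown in the Output Example.
posts = [it for it in items if it.get("type") == "post" and not it.get("over_18")]

# Rank by score and print the ten most upvoted threads.
for post in sorted(posts, key=lambda p: p.get("score", 0), reverse=True)[:10]:
    print(f'{post.get("score", 0):>6}  r/{post.get("subreddit", "?")}  '
          f'{post.get("num_comments", 0)} comments  {post.get("title", "")}')
```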
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try Reddit Scraper Pro now on Apify. Free tier available with no credit card required.
Actor Information
- Developer: kalirobot
- Pricing: Paid
- Total Runs: 246
- Active Users: 13