GitHub Repository Scraper
by fresh_cliff
About GitHub Repository Scraper
This actor scrapes detailed information from GitHub repositories using reliable HTTP requests and HTML parsing. It extracts repository metadata including star counts, fork counts, topics/tags, license information, primary programming language, and last updated timestamps.
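The actor's actual extraction logic lives in apify_actor.py, but the approach can be sketched as follows. This is an illustrative example, not the actor's source; the CSS selectors (`#repo-stars-counter-star`, `#repo-network-counter`, `a.topic-tag`) are assumptions based on GitHub's current markup and may break if GitHub changes its HTML:

```python
import requests
from bs4 import BeautifulSoup


def parse_repo_page(html: str) -> dict:
    """Extract basic repository metadata from a GitHub repo page.

    NOTE: the selectors below are assumptions about GitHub's markup,
    not guaranteed stable identifiers.
    """
    soup = BeautifulSoup(html, "html.parser")
    stars = soup.select_one("#repo-stars-counter-star")
    forks = soup.select_one("#repo-network-counter")
    topics = [a.get_text(strip=True) for a in soup.select("a.topic-tag")]
    return {
        "stars": stars.get_text(strip=True) if stars else None,
        "forks": forks.get_text(strip=True) if forks else None,
        "topics": topics,
    }


def scrape_repo(url: str) -> dict:
    # A browser-like User-Agent reduces the chance of being served
    # a blocked or degraded page.
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    resp.raise_for_status()
    return parse_repo_page(resp.text)
```

Because the scraper parses static HTML rather than driving a headless browser, runs are fast and cheap, at the cost of being sensitive to GitHub markup changes.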
What does this actor do?
GitHub Repository Scraper fetches the public page of each GitHub repository you supply and parses out its key metadata: stars, forks, description, primary language, topics, license, and last-updated time. It runs on the Apify platform, so results land in a dataset you can download or query via API, with no local setup required.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
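Beyond the Console form, the same input (defined in INPUT_SCHEMA.json) can be submitted over Apify's REST API. A minimal sketch; the actor ID `fresh_cliff~github-repository-scraper` is an assumption here, so check the real ID in the Apify Console:

```python
import json

# Hypothetical actor ID; replace with the real one from the Apify Console.
ACTOR_ID = "fresh_cliff~github-repository-scraper"


def build_run_input(repo_urls, sleep_between_requests=3):
    """Build the JSON input the actor expects (see Input Options below)."""
    return {
        "repoUrls": list(repo_urls),
        "sleepBetweenRequests": sleep_between_requests,
    }


payload = build_run_input(["https://github.com/facebook/react"], 5)
print(json.dumps(payload))

# The run can then be started over HTTP, e.g. with requests:
#   requests.post(
#       f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs",
#       params={"token": "<YOUR_APIFY_TOKEN>"},
#       json=payload,
#   )
```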
Documentation
GitHub Repository Scraper for Apify

A Python-based Apify actor that scrapes GitHub repository information using requests and BeautifulSoup.

## Features

- Extracts repository information including:
  - Full name (owner/repo)
  - Star count
  - Description
  - Primary programming language
  - Topics/tags
  - Last updated time
  - License information
  - Fork count
- Written in Python using requests and BeautifulSoup for reliable scraping
- Built for the Apify platform

## Files

- apify_actor.py - The main actor code for Apify deployment
- requests_github_scraper.py - Standalone GitHub scraper (for local testing)
- INPUT_SCHEMA.json - Input schema for the Apify actor
- requirements.txt - Python dependencies
- package.json - Actor metadata for Apify

## Local Testing

1. Install dependencies: `pip install -r requirements.txt`
2. Run the local version: `python requests_github_scraper.py`
3. Check results in the apify_storage directory

## Deploying to Apify

### Prerequisites

1. Create an Apify account if you don't have one
2. Install the Apify CLI: `npm install -g apify-cli`
3. Log in to your Apify account: `apify login`

### Deployment Steps

1. Initialize your project folder (if you haven't already): `apify init github-scraper`
2. Modify the Dockerfile to use Python:

```dockerfile
FROM apify/actor-python:3.9

# Copy source code
COPY . ./

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Define how to run the actor
CMD ["python3", "apify_actor.py"]
```

3. Push your actor to Apify: `apify push`
4. After pushing, your actor will be available in the Apify Console.

### Running on Apify

1. Navigate to your actor in the Apify Console
2. Click "Run" in the top-right corner
3. Enter the GitHub repository URLs you want to scrape in the Input form
4. Click "Run" to start the actor
5. Access the results in the "Dataset" tab once the run is complete

## Input Options

- repoUrls (required): Array of GitHub repository URLs to scrape
- sleepBetweenRequests (optional): Delay between requests in seconds (default: 3)

## Example Input

```json
{
  "repoUrls": [
    "https://github.com/microsoft/playwright",
    "https://github.com/facebook/react",
    "https://github.com/tensorflow/tensorflow"
  ],
  "sleepBetweenRequests": 5
}
```

## Output Format

The actor provides clean, well-structured data for each GitHub repository in the following format:

```json
{
  "url": "https://github.com/microsoft/playwright",
  "name": "playwright",
  "owner": "microsoft",
  "fullName": "microsoft/playwright",
  "description": "Playwright is a framework for Web Testing and Automation. It allows testing Chromium, Firefox and WebKit with a single API.",
  "stats": {
    "stars": "71.2k",
    "forks": "4k"
  },
  "language": "TypeScript",
  "topics": [
    "electron", "javascript", "testing", "firefox", "chrome",
    "automation", "web", "test", "chromium", "test-automation",
    "testing-tools", "webkit", "end-to-end-testing", "e2e-testing",
    "playwright"
  ],
  "lastUpdated": "2025-03-17T17:00:47Z",
  "license": "Apache-2.0 license"
}
```

### Output Fields

| Field | Type | Description |
|-------|------|-------------|
| url | String | The full URL of the GitHub repository |
| name | String | Repository name (without owner) |
| owner | String | Username or organization that owns the repository |
| fullName | String | Complete repository identifier (owner/name) |
| description | String | Repository description |
| stats.stars | String | Number of stars the repository has |
| stats.forks | String | Number of forks the repository has |
| language | String | Primary programming language |
| topics | Array | List of topics/tags associated with the repository |
| lastUpdated | String | ISO timestamp of the last update |
| license | String | Repository license information |

This structured output format makes it easy to:

- Display repository cards in your applications
- Create data visualizations
- Filter and sort repositories by various attributes
- Export to other data formats
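Note that stars and forks arrive as GitHub's abbreviated display strings ("71.2k", "4k"), so sorting them as strings gives wrong results. A small helper (an illustration for consuming the dataset, not part of the actor) can normalize them to integers first:

```python
def parse_count(text: str) -> int:
    """Convert GitHub's abbreviated counts ("71.2k", "4k", "312") to ints."""
    text = text.strip().lower().replace(",", "")
    multipliers = {"k": 1_000, "m": 1_000_000}
    if text and text[-1] in multipliers:
        return int(float(text[:-1]) * multipliers[text[-1]])
    return int(text)


# Hypothetical dataset items in the actor's output shape.
repos = [
    {"fullName": "facebook/react", "stats": {"stars": "312"}},
    {"fullName": "microsoft/playwright", "stats": {"stars": "71.2k"}},
]

# Sort by numeric star count, descending.
repos.sort(key=lambda r: parse_count(r["stats"]["stars"]), reverse=True)
```

Keep in mind the abbreviated values are lossy: "71.2k" recovers only 71200, not the exact star count.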
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try GitHub Repository Scraper now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- fresh_cliff
- Pricing
- Paid
- Total Runs
- 258
- Active Users
- 27
Related Actors
Web Scraper
by apify
Cheerio Scraper
by apify
Website Content Crawler
by apify
Legacy PhantomJS Crawler
by apify
Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.