Gsoc Finder

by contributive_extraction

14 runs
2 users

About Gsoc Finder

What does this actor do?

Gsoc Finder is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
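
Once a run finishes, results can also be pulled programmatically through Apify's REST API instead of a manual download. A minimal sketch of building the "run actor and get dataset items" endpoint URL; the actor ID and token shown are placeholders, not real credentials:

```typescript
// Build the Apify run-sync-get-dataset-items endpoint URL for an actor.
// The Apify REST API writes actor IDs with "~" in place of "/".
function runSyncUrl(actorId: string, token: string): string {
  const id = actorId.replace("/", "~");
  return `https://api.apify.com/v2/acts/${id}/run-sync-get-dataset-items?token=${token}`;
}

// Usage (placeholder actor ID and token):
// fetch(runSyncUrl("contributive_extraction/gsoc-finder", "apify_api_..."))
//   .then((res) => res.json())
//   .then((orgs) => console.log(orgs.length));
```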

Documentation

πŸŽ“ GSoC Organization Crawler

An Apify Actor that crawls Google Summer of Code organizations and extracts detailed information to help students find the perfect organization to contribute to.

Features β€’ How It Works β€’ Quick Start β€’ Output β€’ Integration

---

## ✨ Features

| Feature | Description |
|---------|-------------|
| πŸ”„ Multi-Year Crawling | Scrapes organizations from multiple GSoC years (2022-2024+) |
| πŸ”§ Technology Detection | Identifies programming languages, frameworks, and tools |
| πŸ“Š Difficulty Assessment | Auto-determines difficulty (Beginner-Friendly, Intermediate, Advanced) |
| πŸ“ˆ Acceptance Estimation | Estimates acceptance rates based on historical data |
| 🏷️ Topic Categorization | Extracts and categorizes topics for each organization |
| πŸ”— Data Merging | Combines data when organizations appear in multiple years |

---

## πŸ”„ How It Works

```
                          CRAWLER WORKFLOW

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 1. INPUT      │───▢│ 2. CRAWL      │───▢│ 3. EXTRACT    β”‚
β”‚ Years to      β”‚    β”‚ Visit GSoC    β”‚    β”‚ Parse org     β”‚
β”‚ scrape        β”‚    β”‚ archive       β”‚    β”‚ details       β”‚
β”‚ [2024, 2023]  β”‚    β”‚ pages         β”‚    β”‚               β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
                                                  β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”
β”‚ 6. SAVE       │◀───│ 5. ENRICH     │◀───│ 4. MERGE      β”‚
β”‚ Push to       β”‚    β”‚ Add           β”‚    β”‚ Combine       β”‚
β”‚ Apify         β”‚    β”‚ difficulty &  β”‚    β”‚ multi-year    β”‚
β”‚ Dataset       β”‚    β”‚ acceptance    β”‚    β”‚ data          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

### Step-by-Step Process

1. πŸš€ Initialize - Actor starts with input parameters (years, max requests)
2. πŸ” Discover - Visits GSoC archive pages, waits for JavaScript to render
3. πŸ“„ Extract - Visits each organization page, extracts all details
4. πŸ”— Merge - Combines data for organizations appearing in multiple years
5. πŸ“Š Enrich - Determines difficulty level and acceptance rate
6. πŸ’Ύ Save - Pushes all organizations to Apify Dataset

---

## πŸš€ Quick Start

### Prerequisites

- Node.js 18+ installed
- Apify CLI installed (`npm install -g apify-cli`)
- Apify account (free tier available)

### Local Development

```bash
# 1. Navigate to the project
cd gsoc

# 2. Install dependencies
npm install

# 3. Run locally
npm start
```

### Deploy to Apify Cloud

```bash
# 1. Login to Apify (first time only)
apify login

# 2. Push to Apify Console
apify push

# 3. Run from Apify Console with input JSON
```

---

## πŸ“₯ Input Configuration

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| years | number[] | [2024, 2023, 2022] | GSoC years to scrape |
| maxRequestsPerCrawl | number | 500 | Maximum HTTP requests per run |

### Example Input

```json
{
  "years": [2024, 2023, 2022, 2021],
  "maxRequestsPerCrawl": 1000
}
```

> πŸ’‘ Tip: Start with fewer years (e.g., `[2024]`) for faster testing

---

## πŸ“€ Output Format

Each organization in the dataset includes:

```json
{
  "name": "TensorFlow",
  "description": "An end-to-end open source machine learning platform...",
  "url": "https://summerofcode.withgoogle.com/archive/2024/organizations/tensorflow",
  "technologies": ["Python", "C++", "JavaScript", "TensorFlow", "Keras"],
  "topics": ["Machine Learning", "Deep Learning", "AI"],
  "difficulty": "Advanced",
  "acceptanceRate": "Low",
  "years": [2024, 2023, 2022, 2021, 2020],
  "projectTypes": ["Library Development", "Documentation", "Testing"],
  "category": "Machine Learning",
  "ideaListUrl": "https://github.com/tensorflow/tensorflow/wiki/gsoc",
  "logoUrl": "https://..."
}
```

### Output Fields Explained

| Field | Type | Description |
|-------|------|-------------|
| name | string | Organization's display name |
| description | string | Full description from GSoC page |
| url | string | Direct link to organization's GSoC page |
| technologies | string[] | Programming languages, frameworks, tools |
| topics | string[] | Project categories and domains |
| difficulty | string | Auto-determined: Beginner-Friendly, Intermediate, Advanced |
| acceptanceRate | string | Estimated: High, Medium, Low |
| years | number[] | Years the organization participated in GSoC |
| projectTypes | string[] | Types of projects available |
| category | string | Primary category classification |
| ideaListUrl | string | Link to project ideas page |
| logoUrl | string | Organization's logo URL |

---

## πŸ“Š Difficulty Classification

The crawler automatically determines difficulty based on technology complexity:

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚               DIFFICULTY LEVELS                 β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚
β”‚ 🟒 BEGINNER-FRIENDLY
β”‚ β”œβ”€β”€ HTML, CSS, JavaScript
β”‚ β”œβ”€β”€ Python basics
β”‚ └── Documentation projects
β”‚
β”‚ 🟑 INTERMEDIATE
β”‚ β”œβ”€β”€ React, Vue, Angular
β”‚ β”œβ”€β”€ Node.js, Django, Flask
β”‚ └── Database work (PostgreSQL, MongoDB)
β”‚
β”‚ πŸ”΄ ADVANCED
β”‚ β”œβ”€β”€ C++, Rust, Go (systems programming)
β”‚ β”œβ”€β”€ Kubernetes, Docker (infrastructure)
β”‚ β”œβ”€β”€ TensorFlow, PyTorch (ML frameworks)
β”‚ └── Compiler/kernel development
β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
```

---

## πŸ”— Integration Guide

### Step 1: Deploy Actor

```bash
cd gsoc
apify login
apify push
```

### Step 2: Run Actor & Get Credentials

1. Go to Apify Console
2. Find your actor β†’ Click Start
3. Configure input JSON and run
4. After completion:
   - Actor ID: from the URL (e.g., `username/gsoc-crawler`)
   - Dataset ID: Storage tab β†’ Dataset β†’ Copy ID

### Step 3: Configure Frontend

Create/update `.env` in your app root:

```env
# Required for Apify integration
VITE_APIFY_API_TOKEN=apify_api_xxxxxxxxxxxx
VITE_APIFY_ACTOR_ID=your-username/gsoc-crawler
VITE_APIFY_DATASET_ID=xxxxxxxxxxxxxxxxxxxx
```

### Step 4: Verify Integration

```bash
# Run the frontend
npm run dev

# You should see:
# βœ… "Live Data" badge in the UI
# βœ… Last updated timestamp
# βœ… Organizations loaded from Apify
```

---

## πŸ“ Project Structure

```
gsoc/
β”œβ”€β”€ .actor/
β”‚   β”œβ”€β”€ actor.json           # Actor metadata & configuration
β”‚   β”œβ”€β”€ dataset_schema.json  # Output data structure definition
β”‚   β”œβ”€β”€ input_schema.json    # Input validation schema
β”‚   └── Dockerfile           # Playwright container config
β”œβ”€β”€ src/
β”‚   └── main.ts              # Main PlaywrightCrawler logic
β”œβ”€β”€ storage/                 # Local development storage
β”‚   β”œβ”€β”€ datasets/default/
β”‚   β”œβ”€β”€ key_value_stores/default/
β”‚   └── request_queues/default/
β”œβ”€β”€ package.json
β”œβ”€β”€ tsconfig.json
└── README.md
```

---

## πŸ–₯️ Console Output Preview

When the crawler runs, you'll see formatted output like this:

```
╔════════════════════════════════════════════════╗
β•‘           πŸŽ“ GSoC CRAWLER RESULTS              β•‘
╠════════════════════════════════════════════════╣
β•‘ Total Organizations: 248                       β•‘
β•‘ Years Crawled: 2024, 2023, 2022                β•‘
β•‘ Unique Technologies: 156                       β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

πŸ“Š DIFFICULTY BREAKDOWN
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Level              β”‚ Count β”‚ Percentage β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ 🟒 Beginner        β”‚ 45    β”‚ 18.1%      β”‚
β”‚ 🟑 Intermediate    β”‚ 142   β”‚ 57.3%      β”‚
β”‚ πŸ”΄ Advanced        β”‚ 61    β”‚ 24.6%      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ”§ TOP 10 TECHNOLOGIES
 1. Python ............... 187 orgs (75.4%)
 2. JavaScript ........... 134 orgs (54.0%)
 3. C++ .................. 89 orgs (35.9%)
 4. Java ................. 76 orgs (30.6%)
 5. TypeScript ........... 67 orgs (27.0%)
 ...

βœ… Crawling complete! Data saved to Apify Dataset.
```
---

## πŸ› οΈ Troubleshooting

### Common Issues

| Issue | Solution |
|-------|----------|
| 0 organizations found | GSoC uses JavaScript rendering - we use PlaywrightCrawler to handle this |
| Timeout errors | Increase maxRequestsPerCrawl or check network connectivity |
| Memory issues | Reduce the number of years in input |
| Rate limiting | Actor automatically handles retries with exponential backoff |

### Debug Mode

Enable detailed logging by setting an environment variable:

```bash
DEBUG=1 npm start
```

---

## πŸ“š Resources

### Apify Documentation

- Apify SDK - Actor development toolkit
- Crawlee - Web scraping library
- PlaywrightCrawler - Browser automation
- Input Schema - Configuration validation

### Tutorials

- Quick Start Guide
- Scraping Dynamic Pages
- Getting Data via API

### Integrations

- Zapier Integration
- Make (Integromat)
- Google Sheets

---

## πŸ“„ License

MIT License - Feel free to use, modify, and distribute.

---
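
The "exponential backoff" mentioned under rate limiting follows the standard pattern of doubling the wait between retries up to a cap. Crawlee handles this internally for the actor, so the following is illustration only, with assumed base and cap values:

```typescript
// Delay (ms) before 0-based retry `attempt`: doubles each attempt,
// capped at `maxMs`. Base 500 ms and cap 30 s are assumed defaults,
// not values taken from the actor's configuration.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// backoffDelayMs(0) β†’ 500, backoffDelayMs(3) β†’ 4000, backoffDelayMs(10) β†’ 30000
```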

Built with ❀️ for GSoC Students
Happy contributing! πŸš€

## Getting started

For complete information see this article. To run the Actor use the following command:

```bash
apify run
```

## Deploy to Apify

### Connect Git repository to Apify

If you've created a Git repository for the project, you can easily connect it to Apify:

1. Go to the Actor creation page
2. Click on the Link Git Repository button

### Push project on your local machine to Apify

You can also deploy a project from your local machine to Apify without a Git repository.

1. Log in to Apify. You will need to provide your Apify API Token to complete this action.

   ```bash
   apify login
   ```

2. Deploy your Actor. This command will deploy and build the Actor on the Apify Platform. You can find your newly created Actor under Actors β†’ My Actors.

   ```bash
   apify push
   ```

## Documentation reference

To learn more about Apify and Actors, take a look at the following resources:

- Apify SDK for JavaScript documentation
- Apify SDK for Python documentation
- Apify Platform documentation
- Join our developer community on Discord
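As a complement to the documentation links above: once a run has finished, its results are one REST call away via the dataset-items endpoint. A sketch with placeholder IDs; the dataset ID and token come from the Apify Console (Storage tab):

```typescript
// Build the public API URL for a dataset's items. `datasetId` and
// `token` here are placeholders, not real credentials.
function datasetItemsUrl(datasetId: string, token: string, limit = 100): string {
  return `https://api.apify.com/v2/datasets/${datasetId}/items` +
    `?token=${token}&format=json&limit=${limit}`;
}

// Usage:
// const res = await fetch(datasetItemsUrl("xxxxxxxx", "apify_api_..."));
// const orgs = await res.json();
```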


Actor Information

Developer
contributive_extraction
Pricing
Paid
Total Runs
14
Active Users
2
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
