Jobs Scraper 🔥

by webscrap18

About Jobs Scraper 🔥

A comprehensive job 💼 scraping actor for Apify that collects job listings from multiple platforms including LinkedIn, Glassdoor, Google Jobs, Bayt, and Naukri.

What does this actor do?

Jobs Scraper 🔥 is a web scraping actor on the Apify platform. It searches multiple job boards (LinkedIn, Glassdoor, Google Jobs, Bayt, and Naukri) in a single run and returns structured job listings, running entirely in the cloud with no local setup.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
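
Step 3 (configuring the input) can be sketched in plain Python. This is an illustration only: `build_run_input`, `SUPPORTED_SITES`, and the validation rules are our own, while the parameter names and defaults follow the actor's documented input schema.

```python
# Hypothetical helper for assembling a run input for this actor.
# Parameter names/defaults mirror the documented input schema;
# the validation logic is illustrative, not part of the actor.

SUPPORTED_SITES = {"linkedin", "glassdoor", "google", "bayt", "naukri"}

def build_run_input(search_term, location, site_names=None, results_wanted=20,
                    hours_old=72, is_remote=False, offset=0):
    """Assemble an input dict, filling in the documented defaults."""
    if not search_term or not location:
        raise ValueError("search_term and location are required")
    sites = site_names or ["linkedin", "glassdoor", "google"]
    unknown = set(sites) - SUPPORTED_SITES
    if unknown:
        raise ValueError(f"unsupported site(s): {sorted(unknown)}")
    return {
        "search_term": search_term,
        "location": location,
        "site_names": sites,
        "results_wanted": results_wanted,
        "hours_old": hours_old,
        "is_remote": is_remote,
        "offset": offset,
    }

run_input = build_run_input("data scientist", "New York, NY", is_remote=True)
```

The resulting dict is what you would paste into the actor's input form or pass to an API call.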

Documentation

# Job Scraper

A comprehensive job scraping actor for Apify that collects job listings from multiple platforms including LinkedIn, Glassdoor, Google Jobs, Bayt, and Naukri.

## Description

This Apify actor lets you search and collect job listings from multiple job sites in a single run. It provides detailed job information with intelligent data processing, making it useful for job market research, recruitment, and career exploration.

The actor implements robust error handling and retry mechanisms to deliver reliable results even against anti-scraping measures. Data is automatically processed, enriched, and categorized to provide actionable insight into the job market.

## Features

- Scrape job listings from multiple platforms in one run
- Customize search parameters (location, job type, remote, etc.)
- Retrieve detailed job information, including salary data when available
- Convert salaries to an annual format for easier comparison
- Proxy support for avoiding rate limits
- Resilient scraping with automatic retries
- Site-by-site approach, so you still get results even if one site fails
- Detailed error reporting and suggestions
- Automatic job categorization and data enrichment
- Clean data output with null/empty fields removed
- Job statistics and a summary included in the results

## Input Parameters

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| site_names | Array | List of job sites to scrape (supports "linkedin", "glassdoor", "google", "bayt", "naukri") | ["linkedin", "glassdoor", "google"] |
| search_term | String | Job search keywords (e.g., "software engineer", "data scientist") | Required |
| location | String | Job location (e.g., "San Francisco, CA", "New York, NY") | Required |
| results_wanted | Integer | Number of job listings to retrieve per site | 20 |
| hours_old | Integer | Only show jobs posted within this many hours | 72 |
| country | String | Country for Glassdoor searches (e.g., "USA", "UK", "Canada") | "USA" |
| distance | Integer | Maximum distance from the location in miles | 50 |
| job_type | String | Type of job to search for ("fulltime", "parttime", "internship", "contract") | null |
| is_remote | Boolean | Only show remote jobs | false |
| offset | Integer | Number of results to skip (useful for pagination) | 0 |
| proxies | Array | List of proxies to use for scraping (format: "user:pass@host:port" or "host:port") | [] |

## Output

The actor outputs a dataset of job listings with the following information:

- Job title
- Company name
- Location (city, state, country)
- Remote status
- Job type (full-time, part-time, etc.)
- Salary information (when available)
- Job URL
- Job description
- Date posted
- Job category (automatically derived from the title)
- Date scraped
- Search query used
- And more, depending on the job board

### Output Format Example

```json
{
  "jobs": [
    {
      "company": "Cartesia",
      "company_url": "https://www.linkedin.com/company/cartesia-ai",
      "title": "Software Engineer, India",
      "date_posted": "2025-07-27",
      "job_url": "https://www.linkedin.com/jobs/view/4227402416",
      "skills": ["Python", "React", "AWS", "Docker", "PostgreSQL"],
      "job_type": "Full-time",
      "experience_range": "2-4 years",
      "location": "Bengaluru, Karnataka, India",
      "id": "li-4227402416",
      "site": "linkedin",
      "interval": "yearly",
      "min_amount": 1200000,
      "max_amount": 1800000,
      "currency": "INR",
      "is_remote": false,
      "job_level": "Mid-level",
      "job_function": "Engineering",
      "company_industry": "Artificial Intelligence",
      "company_rating": 4.4,
      "work_from_home_type": "Hybrid",
      "category": "Software Engineering"
    }
  ]
}
```

## Advanced Features

### Data Enrichment

The actor provides additional data enrichment:

- Automatic job categorization based on title keywords
- Derivation of job type from the title when not explicitly provided
- Detection of remote jobs based on the title and description
- Default values for missing fields

### Clean Output

The clean_output parameter controls how the data is returned.

### Site-by-Site Scraping

The actor scrapes each site individually, which means:

- If one site fails, the others are still scraped
- More detailed error reporting for each site
- Random delays between sites to avoid detection

### Retry Mechanism

The actor includes an automatic retry system:

- Configurable number of retries per site
- Increasing delays between retries
- Proxy rotation between retries (if multiple proxies are provided)

### Error Handling

Improved error handling with:

- Detailed error messages for each site
- Specific suggestions for different error types
- Comprehensive logging for troubleshooting

## Usage Tips

1. Handling site blocking
   - Many job sites implement anti-scraping measures
   - Using proxies is highly recommended to avoid IP blocking
   - Spread requests over time by reducing the number of sites you scrape at once
2. Google Jobs scraping
   - For Google Jobs, the search query format is important
   - The actor automatically generates a Google-specific search term if one is not provided
3. Improving results
   - Use specific search terms and locations for better results
   - Setting linkedin_fetch_description to true provides more detailed job descriptions but is slower
   - For Glassdoor, setting the correct country parameter is important

## Example Usage

```json
{
  "easy_apply": false,
  "enforce_annual_salary": true,
  "is_remote": true,
  "location": "India",
  "results_wanted": 50,
  "search_term": "software engineer",
  "site_names": ["linkedin", "glassdoor", "google", "bayt", "naukri"],
  "hours_old": 72,
  "country": "USA",
  "distance": 50,
  "description_format": "markdown",
  "offset": 0
}
```

## Troubleshooting

If you encounter a 429 error (rate limiting) or a 403 error (forbidden), try:

1. Using proxies
2. Reducing results_wanted
3. Waiting some time between runs
4. Targeting fewer job sites at once
5. Increasing the max_retries value

## Limitations

- LinkedIn limits which parameters can be used together
- Google Jobs requires a specific search term format
- Rate limiting may occur when scraping a large number of jobs without proxies
- Some job sites may block scraping attempts even with proxies
- Search results may differ from manual searches due to personalization
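
The documentation mentions converting salaries to an annual format for easier comparison, using the `interval`, `min_amount`, and `max_amount` fields from the output. A minimal sketch of that kind of normalization follows; the `annualize` function and its multipliers (40-hour weeks, 52 weeks per year) are our own assumptions, not the actor's actual implementation.

```python
# Illustrative salary annualization over the documented output fields.
# Multipliers are common conventions, not values confirmed by the actor.

ANNUAL_MULTIPLIER = {
    "hourly": 40 * 52,   # 2080 working hours per year
    "daily": 5 * 52,     # 260 working days per year
    "weekly": 52,
    "monthly": 12,
    "yearly": 1,
}

def annualize(job):
    """Return (min, max) salary per year, or None if no usable salary data."""
    factor = ANNUAL_MULTIPLIER.get(job.get("interval"))
    if factor is None:
        return None
    lo, hi = job.get("min_amount"), job.get("max_amount")
    if lo is None and hi is None:
        return None
    return (lo * factor if lo is not None else None,
            hi * factor if hi is not None else None)

job = {"interval": "monthly", "min_amount": 100000, "max_amount": 150000}
annual = annualize(job)  # (1200000, 1800000)
```

Normalizing to one interval makes listings from different boards directly comparable, which is the point of the actor's enforce_annual_salary option.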

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Jobs Scraper 🔥 now on Apify. Free tier available with no credit card required.

Actor Information

Developer
webscrap18
Pricing
Paid
Total Runs
4,357
Active Users
74
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.
