Advanced LinkedIn Job Search API
by fantastic-jobs
About Advanced LinkedIn Job Search API
Stop wasting hours manually searching LinkedIn. This actor gives you direct, programmatic access to a live stream of LinkedIn job postings, updated with over 10 million new listings every month. Think of it as your own searchable database for the world's largest professional job board. I use it to build targeted lists for recruitment, market research, and lead generation.

You get the raw job data, plus the good stuff that comes with it: detailed company profiles, recruiter contact info, and AI-powered enrichments that add extra context. The real magic is in the filters. You can pinpoint exactly what you need by job title, full description text, location, company size, industry, seniority level, and even filter for 'Easy Apply' roles. This means no more sifting through irrelevant posts.

It's straightforward. You configure your search criteria—like "Senior Software Engineer in Berlin at companies with 200-500 employees"—and the API returns clean, structured JSON. This makes it perfect for feeding into your own dashboards, CRM systems, or recruitment pipelines. If you need fresh, accurate job data from LinkedIn without the manual hassle, this is how you get it.
What does this actor do?
Advanced LinkedIn Job Search API is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
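If you prefer to trigger runs from your own code rather than the Apify console, the official apify-client package works as well. The sketch below is illustrative only: the actor ID, token, and input field names are placeholders and assumptions, so copy the real actor ID from the actor's page and check the input schema before running.

```python
# pip install apify-client
from apify_client import ApifyClient

# Placeholders: use your own Apify API token and the actor ID shown
# on the actor's Apify page.
client = ApifyClient("<YOUR_APIFY_TOKEN>")
ACTOR_ID = "<ACTOR_ID>"

# Input field names below are assumptions; verify them against the
# actor's input schema in the Apify console.
run = client.actor(ACTOR_ID).call(
    run_input={
        "maximumJobs": 100,
        "titleSearch": ["Data Engineer"],
        "locationSearch": ["United States"],
    }
)

# Results land in the run's default dataset.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["title"], "-", item["organization"])
```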
Documentation
The perfect actor for applications requiring high-quality LinkedIn jobs every week, day, or hour! We aim to index all LinkedIn jobs worldwide, over 10 million roles per month! The maximum number of jobs per run is 5,000. If you wish to go over this number, please reach out to us!

- Do you love this Actor? Please leave a review!
- Any issues or feedback? Please create an issue!

## Technical Details

- This Actor calls a database that includes LinkedIn jobs posted during the last hour, day, or week. Our scrapers are continuously indexing new roles, several hundred thousand every day!
- You may choose a time range using the 'Time Range' parameter. Please note that there are slight differences between the ranges:
  - 1h: includes jobs that have been indexed by our systems during the last hour
  - 24h: includes jobs that have been indexed by our systems during the last 24 hours
  - 7d: includes jobs that have been posted during the last 7 days
- The job data is returned in our APIs with a one-hour delay. For example, if a job is posted at 06:00 UTC, it will appear between 07:00 and 08:00 UTC.
- All jobs in the database are unique based on their URL. However, organizations occasionally create duplicates themselves. More commonly, organizations sometimes create the same job listing for multiple cities or states. If you wish to create a rich and unique dataset, we recommend further deduplication on title + organization, or title + organization + locations.
- All jobs are checked for expiry once per day. You can use our companion Actor to retrieve a daily list of expired jobs. The cost of using the companion Actor is $20 per month.
- BETA feature: we extract useful job details from the description with an LLM. We are currently enriching over 99.9% of all Technology jobs. Please note that our enrichment is a simple one-shot prompt on each job description, so there might be some errors.
- There are a number of jobs on LinkedIn without JobPosting schema. We also index these, but they have slightly fewer features. They are marked with the field no_jb_schema=true.
- Where included, we provide the external apply URL.

## FAQ

**Wait, this isn't a scraper?**
Technically, no, or yes? We scrape all jobs in the backend, and you're accessing our database of scraped jobs with a small delay. This is a much more reliable system than scraping LinkedIn directly. It also allows us to enrich and derive data before sharing it with you, adding more value per job!

**Can I see how many jobs will be returned for my query?**
Not at the moment. Please test with the free plan, or create an issue and we'll have a look for you! Make sure to include all parameters.

**How can I retrieve an XML file with the jobs from my latest run?**
- Follow the documentation to create a saved task: https://docs.apify.com/platform/actors/running/tasks
- Create a schedule for the task: https://docs.apify.com/platform/schedules
- Use the following endpoint to access the latest successful run from your scheduled task:
  https://api.apify.com/v2/actor-tasks/*task-id*/runs/last/dataset/items?token=*apiKey*&format=xml&status=SUCCEEDED
- Replace task-id with the ID of your task, which is the last string of characters in the task's URL.
- Replace apiKey with your API key. You can find your API key at 'Settings' --> 'API & Integrations'.

You can export in several formats, not just XML. Please see the documentation for more information:
https://docs.apify.com/api/v2/actor-task-runs-last-get
https://docs.apify.com/api/v2/dataset-items-get
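The same endpoint can be called from code. Here is a minimal sketch using the URL shown above; the task ID and token are placeholders you replace with your own values.

```python
# Fetch the latest successful run of a saved task as XML.
import requests

TASK_ID = "<task-id>"      # last part of the task's URL in the Apify console
APIFY_TOKEN = "<apiKey>"   # found under Settings --> API & Integrations

url = f"https://api.apify.com/v2/actor-tasks/{TASK_ID}/runs/last/dataset/items"
params = {
    "token": APIFY_TOKEN,
    "format": "xml",        # other formats (e.g. json, csv) are supported per the Apify docs
    "status": "SUCCEEDED",
}

response = requests.get(url, params=params, timeout=60)
response.raise_for_status()

# Save the exported dataset to disk.
with open("latest_jobs.xml", "wb") as f:
    f.write(response.content)
```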
## Input Parameters

### Maximum Jobs

The maximum number of jobs that can be retrieved in a single run. Must be between 10 and 5,000. Please set the memory to 512 MB for runs above 2,000 jobs!

### Search Parameters

Our search parameters allow you to include or exclude jobs based on keywords. You may append : for prefix matching (e.g., 'Soft:' will match 'Software', 'Softball', etc.). Don't use abbreviations for location searches: NY should be New York, US should be United States.

WARNING: the description searches are VERY intensive and at risk of timing out. Please be very specific, limit your searches to a handful of keywords, and combine them with one of the other searches, preferably titleSearch. If you receive errors while using descriptionSearch or descriptionExclusionSearch, please reach out to us.

- titleSearch: Terms to search in job titles
- titleExclusionSearch: Terms to exclude from job titles
- locationSearch: Terms to search in job locations
- locationExclusionSearch: Terms to exclude from job locations
- descriptionSearch: Terms to search in job descriptions (includes title)
- descriptionExclusionSearch: Terms to exclude from job descriptions (includes title)
- organizationSearch: Terms to search in organization names
- organizationExclusionSearch: Terms to exclude from organization names
- organizationDescriptionSearch: Terms to search in organization descriptions
- organizationDescriptionExclusionSearch: Terms to exclude from organization descriptions
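To make the search syntax concrete, here is a hypothetical run input combining a few of these parameters. The parameter names come from the list above, but the value shapes (single string vs. list) are assumptions; check the actor's input schema before running.

```python
# Hypothetical input illustrating the search parameters above.
# Value shapes (string vs. list) are assumptions; verify against the
# actor's input schema in the Apify console.
run_input = {
    "maximumJobs": 500,
    # 'Engineer:' uses the prefix-matching colon, so it also matches 'Engineering'
    "titleSearch": ["Data Engineer:", "Analytics Engineer"],
    "titleExclusionSearch": ["Intern"],
    # Spell locations out in full: 'New York', not 'NY'
    "locationSearch": ["New York", "United States"],
    # Keep description searches narrow and always pair them with titleSearch
    "descriptionSearch": ["Snowflake"],
    "organizationExclusionSearch": ["staffing"],
}
```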
### Description Type

Type of description to fetch. Options:

- text: Plain text description

### Remote

Filter for remote jobs only. Set to false to include all jobs. This filter is very sensitive and will include jobs that have 'remote' in the title, description, or location.

### LinkedIn Filters

- seniorityFilter: Filter by seniority level. Available options: "Associate", "Director", "Executive", "Mid-Senior level", "Entry level", "Not Applicable", "Internship"
- external_apply_url: Filter for jobs that include an external apply URL (the opposite of Easy Apply)
- populateExternalApplyURL: When enabled, populates the external_apply_url field with the url field value if external_apply_url is null, which is the case when the job uses LinkedIn Easy Apply. This is especially useful for job boards. Default is false.
- directApply: Filter for jobs that can be applied to directly through LinkedIn Easy Apply
- organizationSlugFilter: Filter by LinkedIn organization slugs (exact match). The slug is the company-specific part of the URL. For example, the slug in the following URL is 'tesla-motors': https://www.linkedin.com/company/tesla-motors/
- organizationSlugExclusionFilter: Exclude jobs from specific LinkedIn organization slugs (exact match)
- industryFilter: Filter by LinkedIn industries. Use exact industry names. Industries containing commas will be automatically wrapped in quotes. You can find a list of industries on our website: https://fantastic.jobs/article/linkedin-industries
- organizationEmployeesLte: Maximum number of employees in the company
- organizationEmployeesGte: Minimum number of employees in the company
- removeAgency: Filter out recruitment agencies, job boards, and other low-quality sources
- EmploymentTypeFilter: Filter by employment type. Available options: FULL_TIME, PART_TIME, CONTRACTOR, TEMPORARY, INTERN, VOLUNTEER, PER_DIEM, OTHER

### AI Filters

- includeAi: BETA feature: Include AI-enriched fields. We enrich jobs with AI to retrieve relevant data from the job description. Please note that this is performed with a one-shot prompt, so there might be some errors. We are currently only enriching technology roles.
- aiWorkArrangementFilter: BETA feature: Filter by work arrangement. Remote OK = remote with an office available; Remote Solely = remote with no office available. Include both to include all remote jobs. Available options: On-site, Hybrid, Remote OK, Remote Solely
- aiHasSalary: BETA feature: Filter for jobs with salary information only. Set to false to include all jobs. Results include jobs that have either an AI-enriched salary or a raw salary (discovered in the job posting schema).
- aiExperienceLevelFilter: BETA feature: Filter by years of experience. Available options: 0-2, 2-5, 5-10, 10+
- aiVisaSponsorshipFilter: BETA feature: Filter for jobs offering visa sponsorship only. Set to false to include all jobs.
- aiTaxonomiesFilter: BETA feature: Filter by AI taxonomies. This filter is quite broad. Available options: Technology, Healthcare, Management & Leadership, Finance & Accounting, Human Resources, Sales, Marketing, Customer Service & Support, Education, Legal, Engineering, Science & Research, Trades, Construction, Manufacturing, Logistics, Creative & Media, Hospitality, Environmental & Sustainability, Retail, Data & Analytics, Software, Energy, Agriculture, Social Services, Administrative, Government & Public Sector, Art & Design, Food & Beverage, Transportation, Consulting, Sports & Recreation, Security & Safety
- aiTaxonomiesPrimaryFilter: BETA feature: Filter by primary AI taxonomy. This filter selects jobs based on their primary AI taxonomy.
- aiTaxonomiesExclusionFilter: BETA feature: Exclude jobs by AI taxonomies.
- populateAiRemoteLocation: If enabled, populates ai_remote_location with locations_derived when ai_remote_location is empty. Useful for normalizing location data.
- populateAiRemoteLocationDerived: If enabled, populates ai_remote_location_derived with locations_derived when ai_remote_location_derived is empty. Useful for normalizing location data.
- excludeATSDuplicate: Set this parameter to true to remove the majority of duplicate jobs between this API and the 'Career Site Job Listing API' actor.

We have created a system where every LinkedIn job is checked against the ATS dataset. This system performs 3 checks for every LinkedIn job:

- A (cleaned) URL match
- A match of job title + organization name
- A match of job title + LinkedIn company profile mapping

If any of these 3 have a hit, the LinkedIn job will be flagged as ats_duplicate=true in the API output. If none of these 3 have a hit, the LinkedIn job will be flagged as ats_duplicate=false. Some jobs are not checked; these are jobs that originate from agencies/job boards (linkedin_org_recruitment_agency_derived=true) or jobs with LinkedIn Easy Apply (directapply=true). These jobs will be flagged as ats_duplicate=null.

We are hoping to flag the majority of duplicates in the datasets, but we are looking for exact hits only, so a number of duplicates will still slip through the cracks. To fully eliminate duplicates between the two datasets, we recommend adding a layer of fuzzy deduplication, as sketched below.
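One possible fuzzy pass over downloaded items is sketched here. It only assumes the title and organization fields from the output schema; the normalization rules and the 0.9 similarity threshold are illustrative and should be tuned for your own data.

```python
# A minimal fuzzy deduplication sketch over downloaded job items.
# O(n^2) comparison; fine for a few thousand jobs, not for huge datasets.
import re
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for comparison."""
    return " ".join(re.sub(r"[^a-z0-9 ]+", " ", (text or "").lower()).split())

def dedupe(jobs: list[dict], threshold: float = 0.9) -> list[dict]:
    kept: list[dict] = []
    for job in jobs:
        key = normalize(f"{job.get('title', '')} {job.get('organization', '')}")
        is_dup = any(
            SequenceMatcher(
                None,
                key,
                normalize(f"{k.get('title', '')} {k.get('organization', '')}"),
            ).ratio() >= threshold
            for k in kept
        )
        if not is_dup:
            kept.append(job)
    return kept

# Example: near-identical postings collapse to one entry.
unique_jobs = dedupe([
    {"title": "Senior Data Engineer", "organization": "Acme Corp"},
    {"title": "Senior Data Engineer ", "organization": "ACME Corp."},
])
print(len(unique_jobs))  # 1
```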
## Output Schema

### Output Fields

| Name | Description | Type |
| -------- | ------- | ------- |
| id | The job's internal ID. Used for expiration | text |
| title | Job title | text |
| organization | Name of the hiring organization | text |
| organization_url | URL to the organization's page | text |
| organization_logo | URL to the organization's logo | text |
| date_posted | Date & time of posting | timestamp |
| date_created | Date & time of indexing in our systems | timestamp |
| date_validthrough | Date & time of expiration; null in most cases | timestamp |
| locations_raw | Raw location data, per the Google for Jobs requirements | json[] |
| locations_alt_raw | Complementary raw location field for ATS with limited location data, currently only in use for Workday | text[] |
| locations_derived | Derived location data, which is the raw data (locations_raw or location_requirements_raw) matched with a database. This is the field you search locations on. Entries follow {city, admin (state), country} | text[] |
| location_type | To identify remote jobs: 'TELECOMMUTE' per the Google for Jobs requirements | text |
| location_requirements_raw | Location requirement to accompany remote (TELECOMMUTE) jobs per the Google for Jobs requirements | json[] |
| salary_raw | Raw salary data per the Google for Jobs requirements | json |
| employment_type | Types like 'Full Time', 'Contract', 'Internship', etc. Is an array but most commonly just a single value | text[] |
| url | The URL of the job; can be used to direct traffic to apply for the job | text |
| source | The source ATS or career site | text |
| source_type | Either 'ats' or 'career-site' | text |
| source_domain | The domain of the career site | text |
| description_text | Plain text job description, if included | text |
| description_html | Raw HTML job description, if included | text |
| cities_derived | All cities from locations_derived | json[] |
| regions_derived | All regions/states/provinces from locations_derived | json[] |
| countries_derived | All countries from locations_derived | json[] |
| timezones_derived | Timezones derived from locations_derived | json[] |
| lats_derived | Latitudes derived from locations_derived | json[] |
| lngs_derived | Longitudes derived from locations_derived | json[] |
| remote_derived | Jobs flagged as remote by inclusion of the word 'remote' in title, description, raw location, or the official Google for Jobs 'TELECOMMUTE' schema | bool |
| seniority | Seniority level: Associate, Director, Executive, Mid-Senior level, Entry level, Not Applicable, Internship | text |
| directapply | True if the end user can apply directly on the job page (LinkedIn Easy Apply); false if the job contains a link to a 3rd party | bool |
| external_apply_url | The external application URL, where included. We don't clean this URL, so there might be trackers (src, utm_, etc.) | text |
| no_jb_schema | Set to true if the job was indexed from LinkedIn without JobPosting schema. These jobs have slightly different data from jobs with a schema: locations_raw has all location data under 'addressLocality' instead of being split up by region/country/locality (locations_derived still has the split); orglogo and jobimage both use the same company logo as seen on the page; the following fields are not included: locationtype, locationrequirements, salary_raw, datevalidthrough | bool |
| recruiter_name | Name of the recruiter (if present) | text |
| recruiter_title | Title of the recruiter (if present) | text |
| recruiter_url | URL to the LinkedIn profile of the recruiter (if present) | text |
| linkedin_org_employees | The number of employees within the job's company according to LinkedIn | int |
| linkedin_org_url | URL to the company page | text |
| linkedin_org_size | The number of employees within the job's company according to the company | text |
| linkedin_org_slogan | The company's slogan | text |
| linkedin_org_industry | The company's industry. This is a fixed list that the company can choose from, so it could be useful for classification. Keep in mind that this is in the language of the company's HQ | text |
| linkedin_org_followers | The company's followers on LinkedIn | int |
| linkedin_org_headquarters | The company's HQ location | text |
| linkedin_org_type | The company's type, like 'privately held', 'public', etc. | text |
| linkedin_org_foundeddate | The company's founding date | text |
| linkedin_org_specialties | A comma-delimited list of the company's specialties | text[] |
| linkedin_org_locations | The full address of the company's locations | text[] |
| linkedin_org_description | The description of the company's LinkedIn page | text |
| linkedin_org_recruitment_agency_derived | Whether the company is a recruitment agency, true or false. We identify this for each company using an LLM. The accuracy may vary and job boards might be flagged as false | bool |
| linkedin_org_slug | The slug is the company-specific part of the URL. For example, the slug in the following URL is 'tesla-motors': https://www.linkedin.com/company/tesla-motors/ | text |
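As an illustration of how these fields tend to be consumed downstream, here is a small sketch that flattens a dataset item into a row for a spreadsheet or CRM import. The field names follow the table above; the exact shape of locations_derived entries (strings vs. objects) is an assumption, so verify it against a real run.

```python
# Flatten a job item from the dataset into a simple row.
# Field names follow the output schema; missing/null values are expected.
from typing import Any

def to_row(job: dict[str, Any]) -> dict[str, Any]:
    locations = job.get("locations_derived") or []
    employment = job.get("employment_type") or []
    return {
        "title": job.get("title"),
        "company": job.get("organization"),
        "location": locations[0] if locations else None,
        "remote": job.get("remote_derived"),
        "seniority": job.get("seniority"),
        "employment_type": employment[0] if employment else None,
        # Fall back to the LinkedIn URL when there is no external apply URL
        "apply_url": job.get("external_apply_url") or job.get("url"),
        "recruiter": job.get("recruiter_name"),
        "posted_at": job.get("date_posted"),
    }

# Example with a trimmed-down, made-up item:
print(to_row({
    "title": "Senior Software Engineer",
    "organization": "Example GmbH",
    "locations_derived": ["Berlin, Berlin, Germany"],
    "remote_derived": False,
    "url": "https://www.linkedin.com/jobs/view/1234567890",
}))
```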
### AI Output Fields

BETA feature. We are currently only enriching technology roles. Set include_ai to true to include the fields in this table. These fields are derived from the text with an LLM and might contain mistakes.

| Name | Description | Type |
| -------- | ------- | ------- |
| ai_salary_currency | The salary currency | text |
| ai_salary_value | The salary value, if there's a single salary with no salary range | numeric |
| ai_salary_minvalue | The minimum salary in a range | numeric |
| ai_salary_maxvalue | The maximum salary in a range | numeric |
| ai_salary_unittext | Whether the salary is per HOUR/DAY/WEEK/MONTH/YEAR | text |
| ai_benefits | An array with other non-salary benefits mentioned in the job listing | text[] |
| ai_experience_level | Years of experience required, one of: 0-2, 2-5, 5-10, or 10+ | text |
| ai_work_arrangement | Remote Solely/Remote OK/Hybrid/On-site. Remote Solely is remote without an office available; Remote OK is remote with an optional office | text |
| ai_work_arrangement_office_days | When work_arrangement is Hybrid, returns the number of days per week in office | bigint |
| ai_remote_location | When remote but only in a certain location, returns the location | text[] |
| ai_remote_location_derived | Derived remote location data, which is the raw data (ai_remote_location) matched with a database of locations. This is the same database as the locations_derived field | text[] |
| ai_key_skills | An array of key skills mentioned in the job listing | text[] |
| ai_hiring_manager_name | If present, the hiring manager's name | text |
| ai_hiring_manager_email_address | If present, the hiring manager's email address | text |
| ai_core_responsibilities | A 2-sentence summary of the job's core responsibilities | text |
| ai_requirements_summary | A 2-sentence summary of the job's requirements | text |
| ai_working_hours | The number of required working hours. Defaults to 40 if not mentioned | bigint |
| ai_employment_type | One or more employment types as derived from the job description: FULL_TIME/PART_TIME/CONTRACTOR/TEMPORARY/INTERN/VOLUNTEER/PER_DIEM/OTHER | text[] |
| ai_job_language | The language of the job description | text |
| ai_visa_sponsorship | Returns true if the job description mentions visa sponsorship opportunities | boolean |
| ai_keywords | An array of AI-extracted keywords from the job description | text[] |
| ai_taxonomies_a | An array of AI-assigned taxonomies for the job | text[] |
| ai_education_requirements | An array of AI-extracted education requirements from the job description | text[] |
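Since ai_salary_unittext can be any of HOUR/DAY/WEEK/MONTH/YEAR, a common downstream step is normalizing salaries to an annual figure. The sketch below does that; the multipliers (40-hour weeks, 52 weeks per year) are assumptions of this example, not something the actor defines.

```python
# Rough annualization of the BETA ai_salary_* fields.
# Multipliers are assumptions (40h weeks, 52 weeks/year); adjust as needed.
ANNUAL_MULTIPLIER = {
    "HOUR": 2080,   # 40 hours * 52 weeks
    "DAY": 260,     # 5 days * 52 weeks
    "WEEK": 52,
    "MONTH": 12,
    "YEAR": 1,
}

def annual_salary(job: dict) -> float | None:
    """Return an approximate annual salary, or None if no AI salary data is present."""
    unit = job.get("ai_salary_unittext")
    value = job.get("ai_salary_value")
    # Fall back to the midpoint of a salary range.
    if value is None and job.get("ai_salary_minvalue") is not None and job.get("ai_salary_maxvalue") is not None:
        value = (job["ai_salary_minvalue"] + job["ai_salary_maxvalue"]) / 2
    if value is None or unit not in ANNUAL_MULTIPLIER:
        return None
    return value * ANNUAL_MULTIPLIER[unit]

# Example: a $40-60/hour range annualizes to ~104,000.
print(annual_salary({
    "ai_salary_minvalue": 40,
    "ai_salary_maxvalue": 60,
    "ai_salary_unittext": "HOUR",
}))
```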
Common Use Cases
Market Research
Gather competitive intelligence and market data
Lead Generation
Extract contact information for sales outreach
Price Monitoring
Track competitor pricing and product changes
Content Aggregation
Collect and organize content from multiple sources
Ready to Get Started?
Try Advanced LinkedIn Job Search API now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- fantastic-jobs
- Pricing
- Paid
- Total Runs
- 60,436
- Active Users
- 429
Related Actors
Company Employees Scraper
by build_matrix
🔥 LinkedIn Jobs Scraper
by bebity
Linkedin Company Detail (No Cookies)
by apimaestro
Linkedin Profile Details Batch Scraper + EMAIL (No Cookies)
by apimaestro
Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.