Google Jobs Scraper
by epctex
About Google Jobs Scraper
Stop manually checking Google Jobs for openings. This scraper pulls everything (salaries, direct apply links, social profiles, and full job descriptions) into a structured dataset or API feed you can actually use. I built this because I needed reliable job data for a project, and nothing else gave me the control or detail I wanted.

You can configure it to search by location, title, or company, and it handles the parsing so you don't have to. Need salary estimates? It grabs them. Want the direct link to apply or the company's LinkedIn? It's included. It's perfect for building job boards, conducting market research on compensation, or aggregating leads for recruitment.

The data comes back clean and ready for analysis in JSON, CSV, or via API, making integration straightforward. Whether you're a developer building an app or a recruiter sourcing candidates, this tool turns the fragmented listings on Google Jobs into a usable, automated pipeline. No daily limits or restrictive quotas. Just set your search parameters and run it.
What does this actor do?
Google Jobs Scraper is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
1. Click "Try This Actor" to open it on Apify
2. Create a free Apify account if you don't have one
3. Configure the input parameters as needed
4. Run the actor and download your results
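For programmatic use, the steps above can be sketched with Apify's official Python API client. This is a minimal sketch, assuming the actor ID `epctex/google-jobs-scraper` and the `apify-client` package; `build_run_input` is a helper of my own, not part of the actor, and the input values are illustrative:

```python
import os

# Assumed actor ID, matching the developer and actor name on this page.
ACTOR_ID = "epctex/google-jobs-scraper"

def build_run_input(queries, country_code="us", language_code="en",
                    max_items=100, end_page=5, csv_friendly=True):
    """Assemble an input object using the parameter names documented below."""
    if not queries:
        raise ValueError("queries is required when startUrls is not provided")
    return {
        "queries": list(queries),
        "countryCode": country_code,
        "languageCode": language_code,
        "maxItems": max_items,
        "endPage": end_page,
        "csvFriendlyOutput": csv_friendly,
        "proxy": {"useApifyProxy": True},  # proxy is the only required field
    }

if __name__ == "__main__" and os.environ.get("APIFY_TOKEN"):
    # Only runs when an Apify API token is available in the environment.
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor(ACTOR_ID).call(run_input=build_run_input(["teacher"]))
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(item.get("title"), "-", item.get("companyName"))
```

The same flow works from any language Apify's API supports; only the client call changes.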
Documentation
## Google Jobs scraper

The Google Jobs data scraper supports the following features:

- Search any keyword and job title - Search any keyword or job title on Google Jobs and retrieve an extensive set of results in seconds.
- Target your job results - Filter, sort, or target your results by country and language. You can even pinpoint a location and draw a radius around it.
- Extremely detailed and enriched output, better visibility - Very detailed output with all sections included: salary, qualifications, social links, apply links, and more.
- Highly configurable, easy to use - Flexible configuration lets you use the actor without hassle and focus only on your needs.
- Optional CSV-friendly output - Use the actor for data retrieval or API purposes. The optional CSV-friendly field returns all the results as one item or multiple items for easy integration.
- Daily maintenance, fast support - We offer daily maintenance and fast customer support. Your satisfaction is the number one priority.

## Bugs, fixes, updates, and changelog

This scraper is under active development. If you have any feature requests, you can create an issue from here.

## Input Parameters

The input of this scraper should be JSON containing the list of pages on Google Jobs that should be visited. The fields are:

- startUrls: (Optional) (Array) List of Google Jobs URLs to start with. Each should be a job-listing URL.
- maxItems: (Optional) (Number) Limit on the number of scraped items. Useful when you search through big lists or search results.
- endPage: (Optional) (Number) Final page number to scrape. The default is infinite. This applies to all search requests and to each startUrl individually.
- queries: (Optional) (Array) The keywords/queries you want to search on Google Jobs. This field is required whenever startUrls is not present.
- countryCode: (Optional) (String) Country code that determines the Google Jobs search domain (e.g. google.es for Spain). This setting only applies to search queries.
- languageCode: (Optional) (String) Language code for the search results (e.g. en).
- locationUule: (Optional) (String) Precise location for Google Jobs searches, passed as the 'uule' query parameter in Google Jobs search URLs. Generate the UULE code using this generator: https://padavvan.github.io/.
- radius: (Optional) (Number) Searches for jobs within the specified radius, in kilometers.
- includeUnfilteredResults: (Optional) (Boolean) If checked, the lower-quality results that Google normally filters out will be included. This usually adds a few hundred extra results.
- csvFriendlyOutput: (Optional) (Boolean) If checked, the crawler returns results in a structure suitable for CSV format. Only 'googleJobs' results are included. CSV headers include: title, companyName, location, description, and more.
- proxy: (Required) (Proxy Object) Proxy configuration. This actor requires proxy servers, either your own or Apify Proxy.
- extendOutputFunction: (Optional) (String) Function that takes a jQuery handle ($) as an argument and returns an object with data.
- customMapFunction: (Optional) (String) Function that takes each object's handle as an argument and returns the object after executing the function.

### Tip

To scrape a specific list URL, just copy and paste the link as one of the startUrls. To scrape only the first page of a list, provide the link for that page and set endPage to 1. With this approach you can also fetch any interval of pages: if you provide the 5th page of a list and set endPage to 6, you'll get the 5th and 6th pages only.

### Compute Unit Consumption

The actor is optimized to run fast and scrape as many items as possible, so it front-loads all the detailed requests. If the actor doesn't get blocked often, it scrapes about 100 jobs in 2.5 minutes using ~0.1-0.15 compute units.
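To make the page-interval tip and the compute-unit figures concrete, here is a small sketch. The helper names are my own, and the cost estimate simply extrapolates the ~0.1-0.15 CU per 100 jobs figure stated above:

```python
def pages_scraped(start_page: int, end_page: int) -> list:
    """Pages visited when a startUrl points at start_page and endPage is set.

    endPage applies to each startUrl individually, so a startUrl on page 5
    combined with endPage 6 yields pages 5 and 6 only.
    """
    return list(range(start_page, end_page + 1))

def estimate_compute_units(num_jobs: int, cu_per_100_jobs: float = 0.15) -> float:
    """Rough upper-bound CU estimate from the documented per-100-jobs rate."""
    return num_jobs / 100 * cu_per_100_jobs

# Example: the tip's interval, and a rough cost for a 1,000-job run.
print(pages_scraped(5, 6))           # [5, 6]
print(estimate_compute_units(1000))  # 1.5
```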
### Google Jobs Scraper Input Example

```json
{
  "startUrls": [
    "https://www.google.com/search?q=Software+Engineer+Jobs&uule=w+CAIQICIKY2FsaWZvcm5pYQ==&hl=en&gl=us&udm=8&jbr=sep:0"
  ],
  "queries": ["teacher"],
  "countryCode": "us",
  "languageCode": "en",
  "locationUule": "w+CAIQICIFdGV4YXM=",
  "radius": 300,
  "includeUnfilteredResults": false,
  "csvFriendlyOutput": true,
  "proxy": { "useApifyProxy": true },
  "endPage": 5,
  "maxItems": 100
}
```

## During the Run

During the run, the actor outputs messages letting you know what is going on. Each message contains a short label specifying which page from the provided list is currently being processed. When items are loaded from a page, you should see a message with the loaded item count and the total item count for that page. If you provide incorrect input, the actor immediately stops with a failure state and outputs an explanation of what is wrong.

## Google Jobs Export

During the run, the actor stores results in a dataset, with each result as a separate dataset item. You can manage the results in any language (Python, PHP, Node.js/NPM). See the FAQ or our API reference to learn more about getting results from this Google Jobs actor.

## Google Jobs Output

### CSV-Friendly

```json
{
  "title": "Software Developer",
  "companyName": "New York University",
  "location": "New York, NY",
  "via": "via ICIMS - NYU Jobs",
  "description": "Position Summary\n\nDevelop applications using current technologies to support business processes in an agile (scrum) development environment. Troubleshoot and debug existing applications. Participate in the design, development, testing, and deployment of upgrades and enhancements for the capital project management application. Prepare technical specifications; design, develop and test technical... solutions; and provide technical expertise upon request. Write documentation and online help systems for newly developed applications. Develop and implement innovative ways to improve quality and functionality of applications and share suggestions and knowledge capital with application owners and team members. Participate in peer code review processes. Mentor and develop student employees.\n\nQualifications\n\nRequired Education:Bachelor's Degree or equivalent in Computer Science or Computer Systems Engineering or equivalent combination of education and experience.Required Experience:3+ years of full-time related experience. Demonstrated programming experience in a scrum environment. 3+ years of full-time related experience in developing and maintaining complex enterprise applications in a construction management application with Oracle Unifier, Oracle Primavera, Kahua, eBuilder or ProCore.Required Skills, Knowledge and Abilities:C#, Javascript, Typescript, CSS, HTML, React.JS,, SQL,, and full stack .NET experience. Strong technical knowledge, written and verbal communication and good interpersonal skills. Strong analytical ability.Preferred Skills, Knowledge and Abilities:Experience with Oracle uDesigner, Kahua kBuilder, or similar. Experience building api integrations using an integration platform such as MuleSoft or similar. Knowledge of Database architecture and data manipulation techniques. Experience with developing microservice applications. Experience with big data analytics. Experience with Jelly.\n\nAdditional Information\n\nIn compliance with NYC's Pay Transparency Act, the annual base salary range for this position is USD $85,500.00 to USD $104,500.00. New York University considers factors such as (but not limited to) scope and responsibilities of the position, candidate's work experience, education/training, key skills, internal peer equity, as well as, market and organizational considerations when extending an offer. This pay range represents base pay only and excludes any additional items such as incentives, bonuses, clinical compensation, or other items. NYU aims to be among the greenest urban campuses in the country and carbon neutral by 2040. Learn more at nyu.edu/nyugreen.EOE/AA/Minorities/Females/Vet/Disabled/Sexual Orientation/Gender Identity",
  "jobHighlights": [
    {
      "title": "Qualifications",
      "items": [
        "Required Education:Bachelor's Degree or equivalent in Computer Science or Computer Systems Engineering or equivalent combination of education and experience",
        "Required Experience:3+ years of full-time related experience",
        "Demonstrated programming experience in a scrum environment",
        "3+ years of full-time related experience in developing and maintaining complex enterprise applications in a construction management application with Oracle Unifier, Oracle Primavera, Kahua, eBuilder or ProCore",
        "Required Skills, Knowledge and Abilities:C#, Javascript, Typescript, CSS, HTML, React.JS,, SQL,, and full stack .NET experience",
        "Strong technical knowledge, written and verbal communication and good interpersonal skills"
      ]
    },
    {
      "title": "Responsibilities",
      "items": [
        "Develop applications using current technologies to support business processes in an agile (scrum) development environment",
        "Troubleshoot and debug existing applications",
        "Participate in the design, development, testing, and deployment of upgrades and enhancements for the capital project management application",
        "Prepare technical specifications; design, develop and test technical solutions; and provide technical expertise upon request",
        "Write documentation and online help systems for newly developed applications",
        "Develop and implement innovative ways to improve quality and functionality of applications and share suggestions and knowledge capital with application owners and team members"
      ]
    },
    {
      "title": "Qualifications",
      "items": [
        "Required Education:Bachelor's Degree or equivalent in Computer Science or Computer Systems Engineering or equivalent combination of education and experience",
        "Required Experience:3+ years of full-time related experience",
        "Demonstrated programming experience in a scrum environment",
        "3+ years of full-time related experience in developing and maintaining complex enterprise applications in a construction management application with Oracle Unifier, Oracle Primavera, Kahua, eBuilder or ProCore",
        "Required Skills, Knowledge and Abilities:C#, Javascript, Typescript, CSS, HTML, React.JS,, SQL,, and full stack .NET experience",
        "Strong technical knowledge, written and verbal communication and good interpersonal skills",
        "Strong analytical ability",
        "Experience building api integrations using an integration platform such as MuleSoft or similar",
        "Knowledge of Database architecture and data manipulation techniques",
        "Experience with developing microservice applications",
        "Experience with big data analytics"
      ]
    },
    {
      "title": "Responsibilities",
      "items": [
        "Develop applications using current technologies to support business processes in an agile (scrum) development environment",
        "Troubleshoot and debug existing applications",
        "Participate in the design, development, testing, and deployment of upgrades and enhancements for the capital project management application",
        "Prepare technical specifications; design, develop and test technical solutions; and provide technical expertise upon request",
        "Write documentation and online help systems for newly developed applications",
        "Develop and implement innovative ways to improve quality and functionality of applications and share suggestions and knowledge capital with application owners and team members",
        "Participate in peer code review processes",
        "Mentor and develop student employees"
      ]
    }
  ],
  "applyLink": [
    { "title": "icims.com", "link": "https://uscareers-nyu.icims.com/jobs/12183/software-developer/job" },
    { "title": "higheredjobs.com", "link": "https://www.higheredjobs.com/admin/details.cfm?JobCode=178439960" },
    { "title": "hercjobs.org", "link": "https://main.hercjobs.org/jobs/19449282/software-developer" },
    { "title": "mendeley.com", "link": "https://www.mendeley.com/careers/job/software-developer-25672393" },
    { "title": "simplyhired.com", "link": "https://www.simplyhired.com/job/wE0SSrM3o-ZJ4mqRmgyl5sYYOAcN8FIF79yQHjQcpaVHKHgDamVrYg" },
    { "title": "magazine.org", "link": "https://jobs.magazine.org/jobs/rss/19449282/software-developer" },
    { "title": "getwork.com", "link": "https://getwork.com/details/a175ebab149668617a0480a36cc7e03b" },
    { "title": "adzuna.com", "link": "https://www.adzuna.com/details/4170656790" }
  ],
  "extras": ["health_insurance"],
  "metadata": {
    "postedAt": "1 day ago",
    "scheduleType": "Full-time",
    "salary": "10 an hour"
  },
  "logo": "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTAzHMaOaLeGJbAGcR8rkK4uTsEoBCdFgvPLDf0&s=0"
}
```

### Non-CSV-Friendly

```json
{
  "searchQuery": {
    "term": "Software Engineer jobs",
    "page": 1,
    "type": "SEARCH",
    "domain": "https://www.google.com",
    "countryCode": "US",
    "locationUule": null,
    "resultsPerPage": 10
  },
  "url": "https://www.google.com/search?q=Software+Engineer+Jobs&uule=w+CAIQICIKY2FsaWZvcm5pYQ==&hl=en&gl=us&udm=8&jbr=sep:0",
  "googleJobs": [...],
  "filters": [
    {
      "name": "Remote",
      "parameters": {
        "uds": "ADvngMjcH0KdF7qGWtwTBrP0nt...",
        "q": "software engineer remote"
      },
      "link": "https://www.google.com/search?sc..."
    }
  ]
}
```

## Contact

Please visit us at epctex.com to see all the products available to you. If you are looking for a custom integration, please reach out through the chat box on epctex.com. In need of support? business@epctex.com is at your service.
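As an illustration of consuming the CSV-friendly output, the sketch below flattens one dataset item into a summary row. The field names come from the example above; `summarize_job` is a helper of my own, not part of the actor:

```python
def summarize_job(item: dict) -> dict:
    """Reduce a CSV-friendly dataset item to a flat summary row."""
    apply_links = item.get("applyLink") or []
    meta = item.get("metadata") or {}
    return {
        "title": item.get("title"),
        "company": item.get("companyName"),
        "location": item.get("location"),
        "salary": meta.get("salary"),
        "posted": meta.get("postedAt"),
        # Take the first apply link as the primary one, if any exist.
        "first_apply_link": apply_links[0]["link"] if apply_links else None,
    }

# A trimmed-down version of the sample item shown above.
sample = {
    "title": "Software Developer",
    "companyName": "New York University",
    "location": "New York, NY",
    "applyLink": [
        {"title": "icims.com",
         "link": "https://uscareers-nyu.icims.com/jobs/12183/software-developer/job"}
    ],
    "metadata": {"postedAt": "1 day ago", "scheduleType": "Full-time",
                 "salary": "10 an hour"},
}
print(summarize_job(sample))
```

The same shape works for every item in the dataset, so the rows can be fed straight into a CSV writer or a DataFrame.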
Actor Information
- Developer
- epctex
- Pricing
- Paid
- Total Runs
- 295,159
- Active Users
- 1,214