NoFluffJobs.com | Search | Job Listings | $0.8 / 1K | Scraper

by memo23

556 runs
24 users
Try This Actor

About NoFluffJobs.com | Search | Job Listings | $0.8 / 1K | Scraper

Uncover hidden job opportunities across 5 European countries with one search! This NoFluffJobs scraper delivers comprehensive, real-time data on tech jobs, including salaries, skills, and company insights. Save time, expand your job search, and make informed career decisions with ease.

What does this actor do?

NoFluffJobs.com | Search | Job Listings | $0.8 / 1K | Scraper is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
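Beyond the web console, the actor can be run programmatically via the Apify API. The sketch below builds a run input matching the example in the Documentation section and shows (in comments) how the official `apify-client` Python package would start the run. The token and actor ID are placeholders, not values from this page; this is a minimal sketch, not the definitive integration.

```python
import json

def build_run_input(start_urls, max_items=100):
    """Assemble the actor's run input from a list of NoFluffJobs search URLs.

    Mirrors the "Input Data" example from the actor's documentation.
    """
    return {
        "startUrls": [{"url": u} for u in start_urls],
        "maxItems": max_items,
        "maxConcurrency": 100,
        "minConcurrency": 1,
        "maxRequestRetries": 8,
        "proxyConfiguration": {
            "useApifyProxy": True,
            "apifyProxyGroups": ["RESIDENTIAL"],
        },
    }

run_input = build_run_input(
    ["https://nofluffjobs.com/sk/backend?criteria=category%3Dfrontend,fullstack,mobile,embedded"]
)
print(json.dumps(run_input, indent=2))

# With the official client (pip install apify-client), the run would look like:
#   from apify_client import ApifyClient
#   client = ApifyClient("<YOUR_APIFY_TOKEN>")       # placeholder token
#   run = client.actor("<ACTOR_ID>").call(run_input=run_input)  # placeholder actor ID
#   items = client.dataset(run["defaultDatasetId"]).list_items().items
```

The same payload also works when starting a run through the Apify REST API or a scheduled task.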

Documentation

NoFluffJobs.com Job Listings Scraper

## How it works

This actor allows you to scrape job listings from NoFluffJobs.com and extract comprehensive details about each job posting, including job title, company information, salary, location, required skills, benefits, recruitment process, and various other metadata.

When you provide a search URL, the scraper automatically searches for matching jobs across all five regional versions of the site (Poland, Hungary, Czech Republic, Slovakia, and the Netherlands). This ensures that you get a complete view of all relevant job listings, as different countries may have different job opportunities.

## Features

- **Multiple Search Queries**: Supports scraping based on multiple search URLs (just copy and paste the link from the nofluffjobs.com site).
- **Cross-Region Search**: When you enter a URL from the site, the scraper searches for that job across all five versions (countries/regions) of the site, because each country can have different job listings.
- **Detailed Job Information**: Extracts comprehensive data about each job listing, including company details, job requirements, benefits, and more.
- **Multilingual Support**: Capable of handling job postings in multiple languages.

## How to Use

1. **Set Up**: Ensure you have an Apify account and access to the Apify platform.
2. **Configure input parameters**:
   - **Start URLs**: Paste the NoFluffJobs search URLs you want to scrape.
   - **Max Items** (optional): Limit the number of job listings to scrape.
   - **Max Concurrency** (optional): Set the maximum number of concurrent requests.
   - **Min Concurrency** (optional): Set the minimum number of concurrent requests.
   - **Max Request Retries** (optional): Set the maximum number of request retries.
3. (Optional) Configure proxy settings for enhanced reliability.
4. Run the actor and obtain the extracted data in your preferred format (JSON, CSV, Excel, etc.).

## Input Data

Here's an example of how to set up the input for the NoFluffJobs scraper:

```json
{
  "startUrls": [
    { "url": "https://nofluffjobs.com/sk/backend?criteria=category%3Dfrontend,fullstack,mobile,embedded" }
  ],
  "maxItems": 100,
  "maxConcurrency": 100,
  "minConcurrency": 1,
  "maxRequestRetries": 8,
  "proxyConfiguration": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```

Note: Even though you provide a URL for a specific region (e.g., `sk` for Slovakia in the example above), the scraper will search for matching jobs across all regions: Poland (`pl`), Hungary (`hu`), Czech Republic (`cz`), Slovakia (`sk`), and the Netherlands (`nl`).

## Output Structure

The output data is highly detailed and includes the following main sections:

1. Basic Job Information
2. Company Details
3. Job Description and Requirements
4. Location Information
5. Salary and Contract Details
6. Benefits
7. Application Process
8. Recruitment Details
9. Metadata and Analytics

Here's a comprehensive example of the output structure:

```json
{
  "id": "data-engineer-ework-group-remote-2",
  "title": "Data Engineer",
  "apply": {
    "option": "email",
    "leadCollection": false,
    "leadCollectionInfoClause": ""
  },
  "specs": {
    "details": { "custom": [] },
    "help4Ua": false,
    "dailyTasks": [
      "Collaborate with cross-functional teams to design, develop and maintain data pipelines and analytics solutions.",
      "Design and build a foundational platform for a modern data lake architecture, optimizing it for scalability, flexibility, and performance.",
      "Develop automated test to ensure data accuracy and quality.",
      "You will assist with planning and maintaining the Azure architectural runway and pipeline for multiple products, ensuring their stability and efficient operation.",
      "Continuously secure improvement that can make developers on the platform work even more efficiently and act as a sparring partner on use of Azure services for the organisation.",
      "Leverage your expertise in cloud development to design and implement innovative digital solutions focused on delivering business insights and patient care in real time.",
      "Overall, our goal is to improve the clinical experience for patients, doctors and nurses world-wide, and your role will support this journey."
    ],
    "referral": { "allowed": true }
  },
  "basics": {
    "category": "data",
    "seniority": ["Senior"],
    "technology": "Python"
  },
  "company": {
    "url": "www.eworkgroup.com",
    "logo": {
      "original": "companies/logos/original/ework_group_20210531_122823.png",
      "jobs_details": "companies/logos/jobs_details/ework_group_20210531_122823.png",
      "jobs_listing": "companies/logos/jobs_listing/ework_group_20210531_122823.png"
    },
    "name": "Ework Group",
    "size": "100+",
    "video": ""
  },
  "details": {
    "quote": "",
    "position": "",
    "description": "<p>For our client - a company from pharmaceutical area, we are looking for Data Engineer.</p>\n<p><strong>What you will be doing</strong></p>\n<p>You will be close to the heart of our client's clinical operations where you will play a key role in shaping the future of clinical trials and patient care, by building scalable solutions in the cloud.</p>\n<p><br></p>",
    "quoteAuthor": ""
  },
  "benefits": {
    "benefits": ["Sport subscription", "Private healthcare", "International projects"],
    "equipment": {
      "computer": "",
      "monitors": "",
      "operatingSystems": { "lin": false, "mac": false, "win": false }
    },
    "officePerks": []
  },
  "consents": {
    "infoClause": "The Controller of your personal data is Ework Group, with registered office at Plac Stanisława Małachowskiego 2, Warsaw. Your data is processed for the purpose of the current recruitment process. Providing data is voluntary but necessary for this purpose. Processing your data is lawful because it is necessary in order to take steps at the request of the data subject prior to entering into a contract (article 6 point 1b of Regulation EU 2016/679 - GDPR). Your personal data will be deleted when the current recruitment process is finished, unless a separate consent is provided below. You have the right to access, correct, modify, update, rectify, request for the transfer or deletion of data, withdrawal of consent or objection.",
    "personalDataRequestLink": "monika.jozwik@eworkgroup.com"
  },
  "location": {
    "places": [
      { "city": "Remote", "url": "data-engineer-ework-group-remote-2" },
      {
        "country": { "code": "POL", "name": "Poland" },
        "province": "opole",
        "url": "data-engineer-ework-group-opole-1",
        "provinceOnly": true
      }
    ],
    "remote": 5,
    "multicityCount": 100,
    "covidTimeRemotely": false,
    "remoteFlexible": false,
    "fieldwork": false,
    "defaultIndex": 1
  },
  "essentials": {
    "contract": { "start": "ASAP", "duration": {} },
    "originalSalary": {
      "currency": "PLN",
      "types": {
        "b2b": { "period": "Month", "range": [25716, 32146], "paidHoliday": false }
      },
      "disclosedAt": "VISIBLE"
    }
  },
  "methodology": [],
  "recruitment": {
    "languages": [{ "code": "pl" }, { "code": "en" }],
    "onlineInterviewAvailable": true
  },
  "requirements": {
    "musts": [
      { "value": "Python", "type": "main" },
      { "value": "Azure", "type": "main" },
      { "value": "Azure Data Factory", "type": "main" },
      { "value": "Azure Databricks", "type": "main" },
      { "value": "Spark", "type": "main" }
    ],
    "nices": [
      { "value": "SQL", "type": "main" },
      { "value": "CI", "type": "main" },
      { "value": "CD pipelines", "type": "main" },
      { "value": "Azure DevOps", "type": "main" }
    ],
    "description": "<p>We are seeking a candidate with an educational background in Computer Science and Software Development , as well as experience in some of the following areas:</p>\n<ul>\n<li>Strong proficiency in Python programming</li>\n<li>Extensive experience with Azure, including Azure Data Factory and Azure Databricks, and a deep understanding of Azure architecture and services</li>\n<li>Experience in using Spark, including Spark SQL and understanding of how to optimize Spark performance.</li>\n<li>Automated unit testing and code quality inspection</li>\n<li>CI/CD Pipelines using Azure DevOps (or similar)</li>\n<li>Working in pharma domain or other regulated area is considered an advantage</li>\n</ul>",
    "languages": [
      { "type": "MUST", "code": "en", "level": "C1" },
      { "type": "MUST", "code": "pl", "level": "C1" }
    ]
  },
  "posted": 1725032841570,
  "postedOrRenewedDaysAgo": 0,
  "status": "PUBLISHED",
  "postingUrl": "data-engineer-ework-group-remote-2",
  "metadata": {
    "sectionLanguages": {
      "daily-tasks": "en",
      "description": "en",
      "requirements.description": "en"
    }
  },
  "regions": ["pl"],
  "reference": "WZOXW66Z",
  "meta": { "videosInCompanyProfileVisible": true },
  "companyUrl": "/company/ework-group-rlrciwbo",
  "seo": {
    "title": "Data Engineer @ Ework Group",
    "description": "Data Engineer @ Ework Group Fully remote job 25.7k-32.1k (B2B) PLN / month"
  },
  "analytics": {
    "lastBump": 0,
    "lastBumpType": "SYSTEM",
    "previousBumpCount": 0,
    "nextBump": 1,
    "nextBumpType": "SYSTEM",
    "nextBumpCount": 6,
    "emissionDay": 0,
    "productType": "EXPERT",
    "emissionBumps": 6,
    "emissionLength": 30,
    "emission": "R1461A",
    "addons": {
      "bump": false,
      "publication": true,
      "offerOfTheDay": false,
      "topInSearch": false,
      "highlighted": false
    },
    "topInSearchConfig": { "pairs": [] }
  }
}
```

## Output Fields Explanation

- `id`: Unique identifier for the job listing
- `title`: Job title
- `apply`: Application method and related information
- `specs`: Job specifications, including daily tasks
- `basics`: Basic job information (category, seniority, main technology)
- `company`: Detailed company information
- `details`: Job description and position details
- `benefits`: List of benefits and perks offered
- `consents`: GDPR and data-processing information
- `location`: Detailed location information, including remote work options
- `essentials`: Contract and salary information
- `recruitment`: Recruitment process details, including required languages
- `requirements`: Required and nice-to-have skills, and language requirements
- `posted`: Timestamp of when the job was posted
- `status`: Current status of the job listing
- `postingUrl`: URL slug for the job posting
- `metadata`: Additional metadata, including language information for different sections
- `regions`: Regions where the job is available
- `reference`: Reference code for the job
- `seo`: SEO-related information for the job listing
- `analytics`: Analytics data related to the job posting on the platform

## Support

- For issues or feature requests, please use the Issues section of this actor.
- If you need customization or have questions, feel free to contact the author:
  - Author's website: https://muhamed-didovic.github.io/
  - Email: muhamed.didovic@gmail.com
  - My Apify Actors/Scrapers: https://apify.com/memo23

## Additional Services

- Request customization or the whole dataset: muhamed.didovic@gmail.com
- If you need anything else scraped, or this actor customized, email: muhamed.didovic@gmail.com
- For API services of this scraper (no Apify fee, just a usage fee for the API), contact: muhamed.didovic@gmail.com
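Once you have the JSON output, most workflows flatten a few nested fields for spreadsheets or dashboards. The sketch below shows one hypothetical helper (`format_salary` is not part of the actor) that pulls the B2B salary range out of `essentials.originalSalary` and renders it in the same style as the `seo.description` field, using a trimmed-down record shaped like the output example above.

```python
def format_salary(job):
    """Render the B2B monthly salary range from a scraped job record,
    matching the site's own "25.7k-32.1k (B2B) PLN / month" style.
    """
    salary = job["essentials"]["originalSalary"]
    b2b = salary["types"]["b2b"]
    low, high = b2b["range"]
    # Scale to thousands with one decimal place, e.g. 25716 -> "25.7k".
    return f"{low / 1000:.1f}k-{high / 1000:.1f}k (B2B) {salary['currency']} / {b2b['period'].lower()}"

# Trimmed-down record shaped like the "Output Structure" example.
job = {
    "title": "Data Engineer",
    "company": {"name": "Ework Group"},
    "essentials": {
        "originalSalary": {
            "currency": "PLN",
            "types": {"b2b": {"period": "Month", "range": [25716, 32146]}},
        }
    },
}
print(f'{job["title"]} @ {job["company"]["name"]}: {format_salary(job)}')
# → Data Engineer @ Ework Group: 25.7k-32.1k (B2B) PLN / month
```

Note that other contract types (e.g. permanent) appear under different keys in `originalSalary.types`, so a production helper would need to handle whichever types a given listing discloses.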

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try NoFluffJobs.com | Search | Job Listings | $0.8 / 1K | Scraper now on Apify. Free tier available with no credit card required.

Start Free Trial

Actor Information

Developer
memo23
Pricing
Paid
Total Runs
556
Active Users
24
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
