Jobat.be Scraper
by lexis-solutions
About Jobat.be Scraper
Jobat.be scraper for Belgium job data: extract structured listings, company info, salary, benefits and full descriptions from 100k+ vacancies across Flanders, Brussels and Wallonia for market research, ATS enrichment and dashboards.
What does this actor do?
Jobat.be Scraper is a web scraping and automation tool available on the Apify platform. It extracts structured job-listing data from Jobat.be and runs entirely in the cloud, so you can collect postings at scale without maintaining your own scraping infrastructure.
Key Features
- Cloud-based execution - no local setup required
- Scalable infrastructure for large-scale operations
- API access for integration with your applications
- Built-in proxy rotation and anti-blocking measures
- Scheduled runs and webhooks for automation
How to Use
- Click "Try This Actor" to open it on Apify
- Create a free Apify account if you don't have one
- Configure the input parameters as needed
- Run the actor and download your results
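Beyond the UI, the actor's input can be assembled programmatically before submitting it via the Apify API. The sketch below is illustrative: the helper function and its checks are not part of the actor, but they mirror the documented input rules (either `query` or `startUrls` required, `region` overriding `location`, and the `kmfrom` distance enum).

```python
# Sketch: assemble a run input for the actor and sanity-check it against the
# documented constraints. The helper is illustrative, not part of the actor.

ALLOWED_KMFROM = {0, 5, 10, 20, 30, 40, 50}  # distance enum from the docs

def build_run_input(query=None, start_urls=None, location=None,
                    region=None, kmfrom=None, max_items=None):
    if not query and not start_urls:
        raise ValueError("Either 'query' or 'startUrls' must be provided")
    run_input = {}
    if query:
        run_input["query"] = query
    if start_urls:
        run_input["startUrls"] = [{"url": u} for u in start_urls]
    if region:
        run_input["region"] = region          # region takes priority over location
    elif location:
        run_input["location"] = location
        if kmfrom is not None:
            if kmfrom not in ALLOWED_KMFROM:
                raise ValueError(f"kmfrom must be one of {sorted(ALLOWED_KMFROM)}")
            run_input["kmfrom"] = kmfrom      # radius only applies with a location
    if max_items is not None:
        run_input["maxItems"] = max_items
    return run_input

payload = build_run_input(query="developer", location="Brussels",
                          kmfrom=20, max_items=25)
print(payload)
# The payload can then be submitted with the official apify-client package, e.g.:
#   run = client.actor("<ACTOR_ID>").call(run_input=payload)
```

Note how `kmfrom` is silently dropped when a `region` is given, matching the precedence rules described in the documentation below.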
Documentation
Jobat.be Scraper
Welcome to the Jobat.be Scraper! Jobat.be is a leading Belgian job board with 100,000+ vacancies monthly across Wallonia, Brussels, and Flanders, serving Dutch, French, and English audiences with deep search and filtering, salary calculators, company reviews, and career resources. This actor collects listings from the platform (titles, company info, locations, addresses, benefits, and full descriptions) so you can tap its breadth of roles and metadata in your workflows.

## Introduction

The scraper starts from Jobat.be search results or individual job URLs, paginates through listings, and visits each detail page to extract structured data: company profile links, locations, salary/benefits, job requirements, and the full narrative description. It supports both query-built searches and explicit start URLs, letting you target broad searches or specific jobs.

## Use Cases

- Job market research: analyze roles, locations, and sectors across Belgium.
- ATS/CRM enrichment: feed fresh postings with structured metadata.
- Competitor monitoring: track hiring activity by company or region.
- Data aggregation: build dashboards by category, location, or posting date.

## Input

Supported fields:

- `startUrls` (array, optional): Jobat.be result URLs or detail URLs containing `job_<id>`. Off-domain or unknown patterns are skipped.
- `query` (string, optional): search keyword. When set, the scraper builds the search URL automatically.
- `location` (string, optional): free-text location; converted to a slug via Jobat.be autocomplete.
- `region` (string, optional): valid region slug (see enum in code). If set, `location` is ignored.
- `category` / `subCategory` (string, optional): must exist in the category/subCategory lists; combinations must be valid.
- `kmfrom` (number, optional): radius in km, applied only if `location` is resolved and `region` is empty; the value must be in the distance enum (0, 5, 10, 20, 30, 40, 50).
- `contract`, `workSchedule`, `sector`, `onlineSince`, `language` (optional): only enum values are applied.
  `language=en` removes the joblanguage filter (all languages).
- `maxItems` (integer, optional): limit on the number of items to fetch. If empty, the crawler paginates until results end.
- `proxyConfiguration` (object, optional): Apify proxy configuration, e.g. `{ "useApifyProxy": false }`.

## Notes

- Off-domain URLs or unknown patterns in `startUrls` are skipped.
- Either `startUrls` or `query` must be provided to run the scraper.
- If `category` and `subCategory` are both set but the subCategory doesn't belong to that category, the subCategory is ignored; if only `subCategory` is set, `category` is auto-filled from it.
- `region` takes priority over `location`; if a valid region is set, `location` is ignored. When `location` is used, the first autocomplete suggestion is applied; if there is no suggestion, `location` is skipped.
- `kmfrom` is processed only when `location` is provided and resolved (and no `region` is set).
- `language=en` disables language filtering; other language values use the site's numeric language filter.
- `maxItems` caps the total items across pagination; if omitted, pagination runs until the list ends.
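The category and subCategory rules above can be sketched as a small normalization step. The category map here is a made-up subset for illustration; the real enum lists live in the actor's source code.

```python
# Illustrative subset of the category -> subCategory mapping; the real enums
# live in the actor's source code.
CATEGORY_MAP = {
    "ict": {"development", "analysis"},
    "engineering": {"mechanical", "electrical"},
}

def normalize_categories(category=None, sub_category=None):
    """Apply the documented rules: a mismatched subCategory is dropped,
    and a lone subCategory auto-fills its parent category."""
    if category and sub_category:
        if sub_category not in CATEGORY_MAP.get(category, set()):
            sub_category = None               # mismatch: ignore subCategory
    elif sub_category and not category:
        for parent, subs in CATEGORY_MAP.items():
            if sub_category in subs:
                category = parent             # auto-fill the parent category
                break
    return category, sub_category

print(normalize_categories("ict", "development"))   # valid pair kept
print(normalize_categories("ict", "mechanical"))    # mismatch: subCategory dropped
print(normalize_categories(None, "development"))    # category auto-filled
```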
## Input Examples

Example with `query`:

```json
{
  "query": "developer",
  "location": "Brussels",
  "kmfrom": 20,
  "category": "ict",
  "subCategory": "development",
  "contract": "permanent",
  "maxItems": 25,
  "proxyConfiguration": { "useApifyProxy": false }
}
```

Example with `startUrls` (list + detail):

```json
{
  "startUrls": [
    { "url": "https://www.jobat.be/en/jobs/results/ict/development" },
    { "url": "https://www.jobat.be/en/jobs/front-end-developer/job_123456" }
  ],
  "maxItems": 15,
  "proxyConfiguration": { "useApifyProxy": false }
}
```

## Output

Each dataset item contains, for example:

```json
{
  "url": "https://www.jobat.be/en/jobs/results/developer/brussels/job_123456",
  "title": "Frontend Developer",
  "companyName": "Acme NV",
  "companyGroupName": "Acme Group",
  "companyUrl": "https://www.jobat.be/en/companies/acme-nv",
  "companyLogo": "https://.../logo.png",
  "location": "Brussels",
  "duration": "Permanent",
  "categorys": ["ICT", "Development"],
  "jobId": "123456",
  "jobCompanyType": "Direct",
  "productId": "7890",
  "region": "brussels",
  "regionZipCode": "1000",
  "applicationType": "external",
  "regime": "Full Time",
  "reqDegree": "Bachelor",
  "reqLanguage": ["NL", "EN"],
  "salary": "Competitive",
  "date": "2025-12-01",
  "updateTimestamp": "2025-12-03T10:00:00Z",
  "datePosted": "2025-12-01",
  "validThrough": "2026-01-05",
  "employmentType": "FULL_TIME",
  "addressLocality": "Brussels",
  "addressRegion": "Brussels",
  "addressCountry": "BE",
  "postalCode": "1000",
  "streetAddress": "Main Street 1",
  "responsibilities": "Build and maintain UI components...",
  "skillsQualifications": "React, TypeScript",
  "jobBenefits": ["Meal vouchers", "Hospitalization insurance"],
  "fullDescription": "Full job description text...",
  "functionType": ["Development"],
  "sector": ["ICT, Telecom and Internet"],
  "locations": ["Brussels"],
  "requirements": ["3+ years experience", "Fluent in English"]
}
```

The scraper paginates until `maxItems` is reached or listings end.

## Why use the Jobat.be Scraper?
- Fast: automatic pagination with a clear `maxItems` cap.
- Easy: start from result URLs or just provide `query` + filters.
- Flexible: location via region or autocomplete, optional radius, validated category/subCategory.
- Rich data: Google JSON metadata, structured address, benefits, and full description.
- Built on Apify/Crawlee: production-ready stability.

## FAQ

- How many jobs can it fetch? Set `maxItems`; the crawler stops at the limit or when no more results remain.
- Which URLs should I use? Use Jobat.be search/result URLs, or let `query` build the URL. Off-domain or unknown patterns are skipped.
- Proxy support? Yes, via `proxyConfiguration`.
- How does location filtering work? `region` wins; if empty, `location` is resolved via autocomplete. `kmfrom` applies only when `location` resolves and `region` is not set.

## Need to scrape other job platforms?

Check out our other job scrapers on Apify:

- Jobs.ch Scraper
- Jobs.cz Scraper
- VDAB.be Scraper
- Jobs Ireland Scraper
- Job Jobnet DK Scraper
- Tyomarkkinatori.fi Scraper
- AMS Austria Jobs Scraper

---

Need help or want a custom solution? Lexis Solutions is a certified Apify Partner. Contact us for help or custom builds.

Email: scraping@lexis.solutions
LinkedIn: Lexis Solutions

## Support Our Work

If this scraper helps you, please leave a company review and review the scrapers you use. Thank you!
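Once a run finishes, the dataset items shown in the Output section can be post-processed locally, for example to feed a dashboard. A minimal sketch that counts postings per region, using made-up items shaped like the example output:

```python
from collections import Counter

# Made-up items shaped like the actor's example output.
items = [
    {"title": "Frontend Developer", "region": "brussels", "employmentType": "FULL_TIME"},
    {"title": "Data Engineer", "region": "flanders", "employmentType": "FULL_TIME"},
    {"title": "Office Assistant", "region": "brussels", "employmentType": "PART_TIME"},
]

def postings_per_region(items):
    """Count job postings per region, e.g. for a simple hiring dashboard."""
    return Counter(item.get("region", "unknown") for item in items)

print(postings_per_region(items))  # Counter({'brussels': 2, 'flanders': 1})
```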
Ready to Get Started?
Try Jobat.be Scraper now on Apify. Free tier available with no credit card required.
Actor Information
- Developer
- lexis-solutions
- Pricing
- Paid
- Total Runs
- 7
- Active Users
- 2