Menicka Crawler

by janbuchar

Automatically scrape daily lunch menus from menicka.cz. Save time by getting structured menu data for dashboards, apps, or internal digests.

1,541 runs
9 users

About Menicka Crawler

Ever tried to plan a work lunch and spent way too long clicking through restaurant websites? I built the Menicka Crawler to solve exactly that. It's a simple, focused tool that pulls daily lunch menus from menicka.cz, the go-to site for Czech daily meal deals. Instead of manually checking each restaurant page, you can set it up to gather all that menu data (dish names, prices, the works) automatically and deliver it in a clean, structured format like JSON or CSV.

The main benefit is saving a ton of time, whether you're a developer, a team manager, or just someone who likes organized info. You can use the data to power an internal company lunch board, populate a weekly digest email, or analyze pricing trends. The crawler handles the tedious fetching so you can build something useful on top of it. I run it myself to feed a simple Slack bot for our office, and it just works without fuss. If you need reliable, automated access to Czech lunch menus, this is the tool.

What does this actor do?

Menicka Crawler is a web scraping and automation tool available on the Apify platform. It extracts structured lunch menu data from menicka.cz and runs entirely in the cloud, so you can schedule runs, call it through the API, and store the results without managing any infrastructure of your own.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation (a webhook sketch follows this list)
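
As an example of the automation features, a webhook can notify your own endpoint whenever a run finishes. This is a minimal sketch using the apify-client package for Node.js; the actor ID and the receiving URL are placeholders you would replace with real values.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Create a webhook that fires when a run of this actor succeeds.
// 'YOUR_ACTOR_ID' and the request URL are placeholders, not real values.
await client.webhooks().create({
    eventTypes: ['ACTOR.RUN.SUCCEEDED'],
    condition: { actorId: 'YOUR_ACTOR_ID' },
    requestUrl: 'https://example.com/menu-webhook',
});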

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results, or start it through the API as sketched below
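
If you'd rather integrate than click through the console, a run can also be started programmatically. The sketch below uses the apify-client package for Node.js; the actor identifier (janbuchar/menicka-crawler) and the input shape are assumptions based on this page, so check the actor's input schema before relying on them.

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Start the actor and wait for the run to finish.
// The actor identifier and input fields are assumptions - verify them in the Apify console.
const run = await client.actor('janbuchar/menicka-crawler').call({
    startUrls: ['https://www.menicka.cz/'],
});

console.log('Run finished with status:', run.status);
console.log('Results are stored in dataset:', run.defaultDatasetId);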

Documentation

Menicka Crawler

A web crawler built on Apify for extracting restaurant menu data. It uses PlaywrightCrawler to handle dynamic content and JavaScript-rendered pages.

Overview

This actor is designed to systematically crawl restaurant websites to collect structured menu information, including dish names, descriptions, prices, and categories. It's built with Crawlee and Playwright for reliable data extraction.

Key Features

  • Dynamic Content Handling: Uses Playwright to interact with and scrape modern JavaScript-heavy websites.
  • Recursive Crawling: Follows links within defined patterns to discover all relevant menu pages.
  • Structured Data Output: Returns clean, organized data in JSON format, ready for analysis or integration.
  • Apify Platform Integration: Runs on the Apify platform with built-in features for scaling, scheduling, and storage.

How to Use

You can run the actor on the Apify platform. Configure your run using the input schema, which typically includes:

  • Start URLs: The restaurant website URLs to begin crawling from.
  • Link Patterns: (Optional) Regex patterns to control which internal links the crawler should follow.
  • Max Depth/Pages: Limits to control the crawl scope.

For local development or customization:
1. Clone the actor's source code.
2. Install dependencies with npm install.
3. Configure your start URLs in the main.js file or via input.
4. Run it locally with npm start or build and push it to your Apify account.

Refer to the Crawlee tutorial and the PlaywrightCrawler API docs for detailed development guidance; more examples are available in the Crawlee documentation. A minimal sketch of an entry point follows below.
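
To give a sense of the structure, here is a minimal sketch of what an entry point built on the Apify SDK and Crawlee's PlaywrightCrawler might look like. It is not the actor's actual source: the input field names mirror the input example in the next section, and the CSS selector is a placeholder.

// main.js - minimal sketch of an Actor entry point, not the actor's real source.
import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';

await Actor.init();

const {
    startUrls = ['https://www.menicka.cz/'],
    maxCrawlPages = 100,
} = (await Actor.getInput()) ?? {};

const crawler = new PlaywrightCrawler({
    maxRequestsPerCrawl: maxCrawlPages,
    async requestHandler({ request, page, enqueueLinks }) {
        // Follow links within the site to discover individual menu pages.
        await enqueueLinks({ globs: ['https://www.menicka.cz/**'] });

        // Extract menu items; '.menu-item' is a placeholder selector.
        const dishes = await page.locator('.menu-item').allTextContents();
        for (const dish of dishes) {
            await Actor.pushData({ url: request.url, dish: dish.trim() });
        }
    },
});

await crawler.run(startUrls);
await Actor.exit();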

Input/Output

Input (JSON):

{
  "startUrls": ["https://example-restaurant.com/menu"],
  "maxCrawlDepth": 3,
  "maxCrawlPages": 100
}

Output:
The actor outputs a dataset of menu items. Each item is a structured object. The default dataset is available in Apify storage as JSON, with each item resembling:

{
  "restaurantName": "Example Restaurant",
  "dishName": "Sample Dish",
  "description": "Ingredients and description here.",
  "price": "15.50",
  "category": "Main Courses",
  "url": "https://example-restaurant.com/menu/main-courses"
}
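
Once a run has finished, the dataset can be read back and shaped into something useful, such as a per-restaurant lunch digest. A minimal sketch with apify-client, assuming the field names shown above and a placeholder dataset ID:

import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// 'YOUR_DATASET_ID' is a placeholder - use the defaultDatasetId of your run.
const { items } = await client.dataset('YOUR_DATASET_ID').listItems();

// Group dishes by restaurant, using the field names from the example item above.
const byRestaurant = {};
for (const item of items) {
    (byRestaurant[item.restaurantName] ??= []).push(`${item.dishName} (${item.price})`);
}

for (const [restaurant, dishes] of Object.entries(byRestaurant)) {
    console.log(`${restaurant}:\n  ${dishes.join('\n  ')}`);
}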

Common Use Cases

  • Market Research: gather competitive intelligence and market data
  • Lead Generation: extract contact information for sales outreach
  • Price Monitoring: track competitor pricing and product changes
  • Content Aggregation: collect and organize content from multiple sources

Ready to Get Started?

Try Menicka Crawler now on Apify. Free tier available with no credit card required.

Actor Information

  • Developer: janbuchar
  • Pricing: Paid
  • Total Runs: 1,541
  • Active Users: 9
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
