Wikipedia Summary (API-first) — EN/RU

by adaptable_initiative

Fetch stable Wikipedia summaries via the official API. Every run succeeds, returning a FOUND/NOT_FOUND status, the canonical URL, and a text extract for EN/RU content. Well suited for automation.

78 runs
6 users

About Wikipedia Summary (API-first) — EN/RU

Need a reliable way to pull clean, verified summaries from Wikipedia directly into your app? This actor taps into Wikipedia's official REST API, so you get structured, up-to-date data the right way, with no messy HTML parsing. I use it for projects where stability is non-negotiable: the run always finishes with SUCCEEDED and returns a status of FOUND or NOT_FOUND, the canonical page URL, and a text extract.

Whether you're building a research bot, enriching a dataset, or powering a fact-checking feature, it saves you from building and maintaining that API integration layer yourself. It handles both English (EN) and Russian (RU) content, making it a solid choice for multilingual applications. Think of it as a dedicated, fuss-free pipeline from Wikipedia to your code, letting you focus on what to do with the information rather than how to get it.

What does this actor do?

Wikipedia Summary (API-first) — EN/RU is a data extraction and automation tool available on the Apify platform. It fetches article summaries through Wikipedia's official API (no HTML scraping) and is designed to run efficiently in the cloud as part of automated workflows.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Official Wikipedia API only - no proxies or anti-blocking workarounds needed
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results

Documentation

Wikipedia Summary (API-first) Actor

Overview

This actor fetches structured summaries from Wikipedia using only its official public APIs. You provide a topic, and it returns the article's title, a short extract, and the canonical URL. It's designed for reliability in automated workflows: if a page isn't found, the run still succeeds with a NOT_FOUND status instead of failing.
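The direct lookup this actor wraps can be sketched as a call to the public summary endpoint. This is a minimal illustration, not the actor's actual source; `summary_url` and `to_record` are hypothetical helpers that mirror the output schema documented further down.

```python
import urllib.parse
from typing import Optional

SUMMARY_ENDPOINT = "https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"

def summary_url(topic: str, lang: str = "en") -> str:
    """Build the REST summary URL; Wikipedia itself normalizes the title
    and follows redirects, so spaces just become underscores here."""
    title = urllib.parse.quote(topic.replace(" ", "_"), safe="")
    return SUMMARY_ENDPOINT.format(lang=lang, title=title)

def to_record(topic: str, payload: Optional[dict], lang: str = "en") -> dict:
    """Map a summary payload (or None for an HTTP 404) to a FOUND/NOT_FOUND
    record shaped like the actor's documented output."""
    if payload is None:
        return {"input_topic": topic, "status": "NOT_FOUND",
                "resolved_title": None, "url": None, "extract": None, "lang": lang}
    return {"input_topic": topic, "status": "FOUND",
            "resolved_title": payload.get("title"),
            "url": payload.get("content_urls", {}).get("desktop", {}).get("page"),
            "extract": payload.get("extract"), "lang": lang}

# Live use (needs network):
# json.load(urllib.request.urlopen(summary_url("Web scraping")))
```

Letting the API do title normalization is what keeps the pipeline simple: a miss is an ordinary 404, which maps cleanly onto NOT_FOUND instead of a failed run.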

Key Features

  • API-Only & Reliable: Uses Wikipedia's official REST API (/api/rest_v1/page/summary/) and search API. No HTML scraping or bypassing of protections.
  • Deterministic Output: Every run finishes with SUCCEEDED. Missing pages return a clear NOT_FOUND status.
  • Fast & Lightweight: Typical runtime is 0.2–1.2 seconds with minimal memory (256–512 MB).
  • Safe for Concurrency: Can be run in parallel; it doesn't use sessions or cookies.
  • Input Flexibility: Handles topic redirects and performs a search fallback if the exact page isn't found.
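The search fallback from the last bullet can be sketched as follows. The exact order of operations inside the actor is an assumption; the fetcher functions are injected here so the logic stays testable without a network.

```python
from typing import Callable, List, Optional

def resolve_topic(topic: str,
                  fetch_summary: Callable[[str], Optional[dict]],
                  search_titles: Callable[[str], List[str]]) -> Optional[dict]:
    """Try the exact title first; if the summary API misses, ask the
    search API for candidate titles and retry once with the top hit."""
    payload = fetch_summary(topic)
    if payload is not None:
        return payload
    hits = search_titles(topic)
    if hits:
        return fetch_summary(hits[0])
    return None

# Stub data standing in for the two Wikipedia APIs (no network needed here):
PAGES = {"Web scraping": {"title": "Web scraping", "extract": "..."}}
demo = resolve_topic("web scrapping",            # misspelled on purpose
                     fetch_summary=PAGES.get,
                     search_titles=lambda q: ["Web scraping"])
```

In production the two callables would hit `/api/rest_v1/page/summary/` and the search API respectively; the single-retry shape keeps runtime bounded.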

How to Use

  1. Run the actor and provide a JSON input object.
  2. The main results are in:
    • The Dataset as a list of records.
    • The Key-Value Store as OUTPUT.json.
  3. Check the logs for resolution details or warnings.
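For programmatic runs, the steps above map onto Apify's standard run-sync endpoint, which starts a run and returns the dataset records in one call. The actor ID and token below are hypothetical placeholders; substitute your own.

```python
import urllib.parse

API_BASE = "https://api.apify.com/v2"

def run_sync_url(actor_id: str, token: str) -> str:
    """Apify's run-sync-get-dataset-items endpoint; the slash in a
    'user/actor-name' ID is written as '~' in the URL path."""
    return (f"{API_BASE}/acts/{actor_id.replace('/', '~')}"
            f"/run-sync-get-dataset-items?token={urllib.parse.quote(token)}")

# Hypothetical actor ID and token -- substitute real values:
url = run_sync_url("adaptable_initiative/wikipedia-summary", "MY_TOKEN")
# POST the input JSON to run the actor and receive the dataset (needs network):
# req = urllib.request.Request(url, data=b'{"topic": "Web scraping"}',
#                              headers={"Content-Type": "application/json"})
# records = json.load(urllib.request.urlopen(req))
```

Because every run finishes with SUCCEEDED, the caller only needs to branch on the `status` field of each returned record.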

Input

The actor requires a single topic parameter.

{
  "topic": "Web scraping"
}

  • topic (string, required): The article subject or title. Wikipedia handles redirects and title normalization.

Output

The actor returns a consistent JSON object. Key fields are status, resolved_title, url, title, extract, and lang.

Example Output (FOUND):

{
  "input_topic": "Web scraping",
  "status": "FOUND",
  "resolved_title": "Web scraping",
  "url": "https://en.wikipedia.org/wiki/Web_scraping",
  "title": "Web scraping",
  "extract": "Web scraping is data scraping used for extracting data from websites...",
  "lang": "en",
  "timestamp": "2025-09-11T03:00:00.000Z"
}

Example Output (NOT_FOUND):

{
  "input_topic": "Apify",
  "status": "NOT_FOUND",
  "resolved_title": null,
  "url": null,
  "title": null,
  "extract": null,
  "lang": "en",
  "timestamp": "2025-09-11T03:00:00.000Z"
}
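A consumer of these records only has to branch on `status`; a minimal sketch, using the two example records above:

```python
def describe(record: dict) -> str:
    """Turn one actor output record into a one-line description,
    degrading gracefully for NOT_FOUND results."""
    if record["status"] == "FOUND":
        return f'{record["title"]}: {record["extract"]} ({record["url"]})'
    return f'{record["input_topic"]}: no Wikipedia article found'

found = {"input_topic": "Web scraping", "status": "FOUND",
         "title": "Web scraping",
         "url": "https://en.wikipedia.org/wiki/Web_scraping",
         "extract": "Web scraping is data scraping used for "
                    "extracting data from websites..."}
missing = {"input_topic": "Apify", "status": "NOT_FOUND",
           "title": None, "url": None, "extract": None}
```

Because the schema is identical in both cases, no existence checks or try/except blocks are needed downstream.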

Use Cases

  • Enriching data with short descriptions (e.g., for technologies, cities, companies).
  • Generating preview cards or snippets for search UIs.
  • Normalizing topic names before further processing.

Notes

  • Fair Use: Only public endpoints are used. It does not bypass authentication, captchas, or paywalls.
  • Data Licensing: Summaries are from Wikipedia; comply with its licensing terms for downstream use.
  • Support: Provide feedback via comments on finished runs.


Ready to Get Started?

Try Wikipedia Summary (API-first) — EN/RU now on Apify. Free tier available with no credit card required.


Actor Information

Developer: adaptable_initiative
Pricing: Paid
Total Runs: 78
Active Users: 6
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
