Medium API

by craftheon

40 runs
5 users

About Medium API

The Medium API Actor automates the extraction and management of content from Medium.com, allowing users to programmatically access articles, author profiles, publications, and engagement metrics without manual browsing or copy-pasting.

What does this actor do?

Medium API is a web scraping and automation tool on the Apify platform. It runs in the cloud and extracts articles, author and publication data, and engagement metrics from Medium.com, with no local setup required.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results
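Once you have an Apify account and API token, the steps above can also be scripted against Apify's REST API. A minimal stdlib-only sketch follows; the actor ID `craftheon~medium-api` is an assumption based on this listing (check the actor's page for its real ID), and the input fields come from the parameter table in the documentation below:

```python
import json
import urllib.request

APIFY_API = "https://api.apify.com/v2"


def build_input(urls, scrape_type="articles", max_articles=10):
    """Build the actor's run input from a list of Medium URLs."""
    return {
        "startUrls": [{"url": u} for u in urls],
        "scrapeType": scrape_type,
        "maxArticles": max_articles,
        "proxyConfiguration": {"useApifyProxy": True},
    }


def run_actor(actor_id, token, run_input):
    """Run the actor synchronously and return its dataset items.

    Uses Apify's run-sync-get-dataset-items endpoint. `actor_id` is in
    "username~actor-name" form, e.g. "craftheon~medium-api" (assumed here).
    """
    url = f"{APIFY_API}/acts/{actor_id}/run-sync-get-dataset-items?token={token}"
    req = urllib.request.Request(
        url,
        data=json.dumps(run_input).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires a valid token, so not run here):
# items = run_actor("craftheon~medium-api", "<APIFY_TOKEN>",
#                   build_input(["https://medium.com/topic/technology"]))
```

The synchronous endpoint is convenient for small runs; for long crawls, start the run asynchronously and fetch the dataset once the run finishes.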

Documentation

Medium API extracts articles, author profiles, and publication data from Medium.com. The scraper handles various Medium page types, including topic pages, author profiles, and individual articles.

## What You Can Extract

### Article Data

- Title and full content
- Author information (name, username, profile URL)
- Publication details (name, URL, description)
- Publication date and reading time
- Engagement metrics (claps, responses)
- Tags and categories
- Featured images

### Author Profiles

- Name and bio
- Follower count
- Profile URL
- Recent articles

### Publications

- Name and description
- URL and follower count
- Published articles

## Input Parameters

| Parameter              | Type    | Default               | Description |
| ---------------------- | ------- | --------------------- | ----------- |
| startUrls              | Array   | -                     | **Required.** URLs to start scraping from (articles, author profiles, publications, or topic pages) |
| scrapeType             | String  | "articles"            | Type of content to scrape ("articles", "authors", "publications", "mixed") |
| maxArticles            | Number  | 100                   | Maximum number of articles to scrape (0 = unlimited) |
| includeAuthorInfo      | Boolean | true                  | Whether to scrape detailed author profile information |
| includePublicationInfo | Boolean | true                  | Whether to scrape publication details when available |
| includeComments        | Boolean | false                 | Whether to scrape article comments |
| maxRequestsPerCrawl    | Number  | 1000                  | Maximum number of pages to process (0 = unlimited) |
| proxyConfiguration     | Object  | {useApifyProxy: true} | Proxy settings for anti-bot protection |

### Input Example

```json
{
  "startUrls": [
    { "url": "https://medium.com/topic/technology" },
    { "url": "https://medium.com/topic/data-science" }
  ],
  "scrapeType": "articles",
  "maxArticles": 10,
  "includeAuthorInfo": true,
  "includePublicationInfo": true,
  "includeComments": false,
  "maxRequestsPerCrawl": 50,
  "proxyConfiguration": { "useApifyProxy": false }
}
```

## Output

The scraper saves results in two locations:

### 1. Dataset Items

Each scraped item is saved to the dataset:

```json
{
  "url": "https://medium.com/topic/data-science",
  "title": "The most insightful stories about Data Science - Medium",
  "author": {
    "name": "Eivind Kjosbakken",
    "username": "oieivind",
    "profileUrl": "https://medium.com/@oieivind",
    "bio": "",
    "followers": 0
  },
  "publication": {
    "name": "tag",
    "url": "https://medium.com/tag",
    "description": ""
  },
  "content": "Data Science Collective\n\nData Science Collective\n\nEmmanuel O. Irekponor...",
  "excerpt": "Read stories about Data Science on Medium...",
  "publishedAt": "",
  "imageUrl": "https://miro.medium.com/v2/1*0L5w2b6T1yEVI3_ZYUWONw.png",
  "tags": ["Data Science", "Technology", "Programming", "AI"],
  "scrapedAt": "2025-11-08T09:59:25.186Z",
  "pageType": "article",
  "claps": 0,
  "responses": 0,
  "readingTime": 0
}
```

### 2. Statistics

Comprehensive run statistics are stored for tracking:

```json
{
  "maxArticles": 5,
  "articlesScraped": 1,
  "includeAuthorInfo": true,
  "includePublicationInfo": true,
  "includeComments": false,
  "startTime": "2025-11-08T09:59:03.729Z",
  "completedAt": "2025-11-08T09:59:25.918Z",
  "errors": [],
  "urlsProcessed": ["https://medium.com/topic/data-science"]
}
```

## Usage Examples

### Basic Topic Scraping

```json
{
  "startUrls": [{ "url": "https://medium.com/topic/technology" }],
  "maxArticles": 20
}
```

### Author Profile Scraping

```json
{
  "startUrls": [{ "url": "https://medium.com/@username" }],
  "scrapeType": "authors",
  "includeAuthorInfo": true
}
```

### Publication Scraping

```json
{
  "startUrls": [{ "url": "https://medium.com/publication-name" }],
  "scrapeType": "publications",
  "includePublicationInfo": true
}
```

### Mixed Scraping

```json
{
  "startUrls": [
    { "url": "https://medium.com/topic/technology" },
    { "url": "https://medium.com/@username" },
    { "url": "https://medium.com/publication-name" }
  ],
  "scrapeType": "mixed",
  "maxArticles": 50
}
```

## Use Cases

### 📊 Content Analysis

- Analyze trending topics and publications
- Track author performance and engagement
- Extract publication statistics

### 🎯 Research & Journalism

- Gather data for media analysis projects
- Monitor content trends across topics
- Research competitor content strategies

### 🚀 Content Curation

- Build content aggregation systems
- Create recommendation engines
- Monitor specific authors or publications

### 📈 SEO & Marketing

- Analyze content performance metrics
- Track brand mentions and coverage
- Research keyword trends and topics

## How to Use

1. Configure input: set your desired URLs and scraping parameters
2. Run the scraper: the actor will process all provided URLs
3. Access results: download the scraped data from the dataset
4. Check statistics: review the run's scraping statistics

## Important Notes

- Rate limiting: Medium may rate-limit requests; use proxies for large-scale scraping
- Content access: some content may require login for full access
- Terms of Service: ensure compliance with Medium's Terms of Service
- Data quality: content extraction depends on page structure and availability
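Downloaded dataset items can be post-processed locally for the analysis use cases above. A small sketch follows; the field names (`title`, `claps`, `tags`) come from the sample output above, while the records themselves are purely illustrative:

```python
from collections import Counter

# Illustrative records following the dataset item schema shown above.
items = [
    {"title": "Intro to Pandas", "claps": 120, "tags": ["Data Science", "Python"]},
    {"title": "LLM Benchmarks", "claps": 340, "tags": ["AI", "Data Science"]},
    {"title": "Go Concurrency", "claps": 90, "tags": ["Programming"]},
]


def top_articles(items, n=2):
    """Rank scraped articles by clap count, highest first."""
    return sorted(items, key=lambda it: it.get("claps", 0), reverse=True)[:n]


def tag_frequencies(items):
    """Count how often each tag appears across the scraped articles."""
    return Counter(tag for it in items for tag in it.get("tags", []))


print([it["title"] for it in top_articles(items)])
print(tag_frequencies(items).most_common(3))
```

The same pattern scales to full dataset exports: load the JSON the actor produced, then rank by `claps` or aggregate `tags` to spot trends.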

Common Use Cases

Market Research

Gather competitive intelligence and market data

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Medium API now on Apify. Free tier available with no credit card required.

Actor Information

Developer
craftheon
Pricing
Paid
Total Runs
40
Active Users
5
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
