F1 API

by adriigarr

Get clean, reliable Formula 1 data for apps and analysis. Access real-time results, historical standings, driver stats, and team info in one API.

591 runs
20 users
Try This Actor (opens on Apify.com)

About F1 API

Ever tried to find a reliable, up-to-date source for Formula 1 data? It's tougher than it should be. That's why I built the F1 API. It pulls together real-time race results, historical standings, driver stats, and team info into one straightforward feed. Whether you're checking last season's constructor points or tracking live lap times, it's all here and ready to use.

I use it for two main things: building apps and digging into data. If you're a developer, you can plug it right into your project to create fan dashboards, fantasy league tools, or race weekend trackers without scraping sketchy websites. For analysts and enthusiasts, it's a goldmine for exploring trends: compare driver performances across eras or model championship probabilities.

The data is structured clearly, so you spend less time cleaning and more time building or analyzing. It just works. You get access to a comprehensive motorsport dataset that's maintained regularly, so you're not stuck with outdated info. It's the tool I wanted when I first started working with sports data.

What does this actor do?

F1 API is a web scraping and automation tool available on the Apify platform. It's designed to help you extract data and automate tasks efficiently in the cloud.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation
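Because the Actor is exposed through the Apify API, you can trigger it from your own code. As a minimal sketch, the snippet below builds (but does not send) a request against Apify's run-sync endpoint; the actor ID `adriigarr~f1-api` and the token are placeholder assumptions you would replace with your own values.

```python
import json
import urllib.request

API_BASE = "https://api.apify.com/v2"

def build_run_request(actor_id: str, token: str, run_input: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request that runs an Actor and
    returns its dataset items in a single call."""
    url = f"{API_BASE}/acts/{actor_id}/run-sync-get-dataset-items?token={token}"
    body = json.dumps(run_input).encode("utf-8")
    return urllib.request.Request(url, data=body, headers={"Content-Type": "application/json"})

# Placeholder actor ID and token -- substitute your own.
req = build_run_request("adriigarr~f1-api", "<YOUR_API_TOKEN>", {"url": "https://example.com"})
print(req.get_method())  # POST, because a request body is attached
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) would start the run and return the scraped items once it finishes.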

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results

Documentation

F1 API Actor

A Python-based Apify Actor template for scraping data from a single web page. It takes a target URL as input, fetches the page using HTTPX, parses its HTML with Beautiful Soup, and outputs structured data to an Apify dataset. The template is pre-configured to extract page headings (H1-H6), but you can modify the code to scrape any elements.

Key Features

  • Apify SDK for Python: Provides the core framework for building and running the Actor.
  • Input Schema: Validates the input, which must include the target page URL.
  • HTTPX: Handles fast, asynchronous HTTP requests to fetch page HTML.
  • Beautiful Soup 4: Parses HTML and extracts data using Pythonic methods.
  • Dataset Storage: Automatically stores scraped results in a structured Apify dataset for easy access and export.
  • Request Queue: Includes infrastructure for managing URLs to scrape, useful for extending to multi-page scraping.

Input/Output

Input (via Actor.get_input())
The Actor expects a JSON input containing the URL to scrape.

{
  "url": "https://example.com"
}
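Before launching a run, it can help to check the input locally. The helper below is hypothetical (not part of the template); it mirrors what the input schema enforces, using only the standard library:

```python
from urllib.parse import urlparse

def is_valid_input(actor_input: dict) -> bool:
    """Minimal pre-flight check: the input must carry an
    absolute http(s) URL under the 'url' key."""
    url = actor_input.get("url", "")
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_valid_input({"url": "https://example.com"}))  # True
print(is_valid_input({"url": "not-a-url"}))            # False
```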

Output
The scraped data is pushed to an Apify dataset. By default, each item is an object containing a page heading.

{
  "heading": "Example Domain",
  "tag": "h1"
}

You can change the output structure by editing the parsing logic in the main script.
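Once exported, the dataset items are plain dictionaries and easy to post-process. As an illustration with made-up sample items in the default output shape, you could count how many headings of each level were scraped:

```python
from collections import Counter

# Hypothetical sample of dataset items in the default output shape.
items = [
    {"heading": "Example Domain", "tag": "h1"},
    {"heading": "About", "tag": "h2"},
    {"heading": "Contact", "tag": "h2"},
]

# Tally headings per tag level.
counts = Counter(item["tag"] for item in items)
print(dict(counts))  # {'h1': 1, 'h2': 2}
```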

How to Use

  1. Configure Input: In the Apify Console, set the input to a JSON object with the url of the page you want to scrape.
  2. Run the Actor: Start the run. The Actor will fetch the page, parse it, and save the results.
  3. Access Data: After the run finishes, you can view, export (JSON, CSV, etc.), or access the dataset via API.
  4. Customize: To scrape different data, edit the Python code, specifically the Beautiful Soup parsing logic inside the main function.

The core scraping flow in the script is:

from apify import Actor
import httpx
from bs4 import BeautifulSoup

async def main() -> None:
    async with Actor:
        # 1. Get input URL (named to avoid shadowing the built-in `input`)
        actor_input = await Actor.get_input() or {}
        # 2. Fetch page HTML
        async with httpx.AsyncClient() as client:
            response = await client.get(actor_input["url"])
        # 3. Parse with Beautiful Soup
        soup = BeautifulSoup(response.content, "lxml")
        # 4. Extract data (example: headings)
        for heading in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"]):
            # 5. Store in dataset
            await Actor.push_data({"heading": heading.text, "tag": heading.name})
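To experiment with the parsing step offline, here is a dependency-free sketch using only the standard library's html.parser. It mirrors steps 4-5 (without HTTPX or Beautiful Soup) and produces records in the same shape the Actor pushes to its dataset:

```python
from html.parser import HTMLParser

HEADING_TAGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

class HeadingExtractor(HTMLParser):
    """Stand-in for the Beautiful Soup step: collects
    {'heading': text, 'tag': name} records from heading elements."""
    def __init__(self):
        super().__init__()
        self._current = None
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag in HEADING_TAGS:
            self._current = tag

    def handle_data(self, data):
        if self._current and data.strip():
            self.items.append({"heading": data.strip(), "tag": self._current})

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

parser = HeadingExtractor()
parser.feed("<html><body><h1>Example Domain</h1><h2>About</h2></body></html>")
print(parser.items)
```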

Local Development

To develop locally, use the Apify CLI to pull the Actor code:

  1. Install the CLI:
     npm install -g apify-cli   # or: brew install apify-cli
  2. Pull the Actor using its unique name or ID (found in the Apify Console):
     apify pull <ActorId>


Common Use Cases

  • Market Research - gather competitive intelligence and market data
  • Lead Generation - extract contact information for sales outreach
  • Price Monitoring - track competitor pricing and product changes
  • Content Aggregation - collect and organize content from multiple sources

Ready to Get Started?

Try F1 API now on Apify. Free tier available with no credit card required.

Start Free Trial

Actor Information

  • Developer: adriigarr
  • Pricing: Paid
  • Total Runs: 591
  • Active Users: 20
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
