Deal Scraper PRNewswire

by defensible_radish

Scrapes the last 10 articles published in the M&A section of PR Newswire and provides relevant deal info

11 runs
2 users

What does this actor do?

Deal Scraper PRNewswire is a web scraping and automation tool that runs on the Apify platform. It fetches the newest press releases from PR Newswire's mergers-and-acquisitions section and returns structured deal information — buyers, sellers, funds, advisors, deal values, and dates — extracted from each article.

Key Features

  • Cloud-based execution - no local setup required
  • Scalable infrastructure for large-scale operations
  • API access for integration with your applications
  • Built-in proxy rotation and anti-blocking measures
  • Scheduled runs and webhooks for automation

How to Use

  1. Click "Try This Actor" to open it on Apify
  2. Create a free Apify account if you don't have one
  3. Configure the input parameters as needed
  4. Run the actor and download your results (or run it via the API, as sketched below)
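
If you prefer to run the actor programmatically, the Apify Python client can start a run and read its dataset. A minimal sketch, assuming the actor ID matches this listing (verify the exact ID on Apify) and that an empty input falls back to the actor's defaults:

```python
# Sketch: run the actor via the Apify API and read its results.
# The actor ID below is a guess based on this listing; confirm it on Apify.
from apify_client import ApifyClient

client = ApifyClient("<YOUR_APIFY_TOKEN>")

# An empty input assumes the actor's defaults (the input schema is not shown here).
run = client.actor("defensible_radish/deal-scraper-prnewswire").call(run_input={})

# Each dataset item should correspond to one scraped article.
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```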

Documentation

# M&A x402 Service

A FastAPI service for Coinbase AgentKit that scrapes PR Newswire articles and extracts M&A (Mergers & Acquisitions) entities using OpenAI's GPT models.

## Features

- 🔍 Scrapes PR Newswire article links from a given URL
- 📄 Extracts full article text content
- 🤖 Uses the OpenAI API to extract structured M&A entities:
  - BUYER: Acquiring companies/entities
  - SELLER: Companies being sold/divested
  - FUND: Investment funds, PE, VC firms
  - LAW_FIRM: Legal firms involved
  - INTERMEDIARY: Investment banks, advisors
  - PROFESSIONAL: Individual professionals
  - MONEY: Deal values and financial figures
  - DATE: Transaction dates
  - DEAL_TYPE: Type of transaction

## Tech Stack

- Python 3.11
- FastAPI: modern web framework
- Uvicorn: ASGI server
- BeautifulSoup4: HTML parsing
- Newspaper3k: article extraction
- OpenAI API: entity extraction
- Pydantic: data validation

## Setup

### 1. Clone and Install Dependencies

```bash
cd mna-x402-service
pip install -r requirements.txt
```

### 2. Environment Variables

Copy `.env.example` to `.env` and add your OpenAI API key:

```bash
cp .env.example .env
```

Edit `.env`:

```
OPENAI_API_KEY=sk-your-actual-key-here
```

### 3. Run Locally

```bash
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

The API will be available at http://localhost:8000.

## API Endpoints

### Health Check

```
GET /
GET /health
```

### Scrape Articles

```
POST /x402/scrape
Content-Type: application/json

{
  "site_url": "https://www.prnewswire.com/news-releases/financial-services-latest-news/acquisitions-mergers-and-takeovers-list/",
  "max_articles": 10
}
```

Parameters:

- `site_url` (required): URL of the PR Newswire page to scrape
- `max_articles` (optional): Maximum number of newest articles to process. Defaults to 10, maximum 100. Articles are returned newest first; the limit helps prevent timeouts and control processing time.

Response:

```json
{
  "site_url": "https://...",
  "articles": [
    {
      "url": "https://...",
      "content": "Full article text...",
      "entities": {
        "buyer": ["Company A"],
        "seller": ["Company B"],
        "fund": ["PE Fund XYZ"],
        "law_firm": ["Law Firm ABC"],
        "intermediary": ["Investment Bank DEF"],
        "professional": ["John Doe"],
        "money": ["$100M", "$50 million"],
        "date": ["2024-01-15", "Q1 2024"],
        "deal_type": "Acquisition"
      }
    }
  ],
  "count": 10,
  "total_found": 25,
  "processed": 10,
  "limit": 10
}
```

Response fields:

- `site_url`: The URL that was scraped
- `articles`: Array of article results with extracted entities
- `count`: Number of articles in the response
- `total_found`: Total number of articles found on the page
- `processed`: Number of articles actually processed
- `limit`: The limit that was applied (the `max_articles` parameter)

## Docker

### Build

```bash
docker build -t mna-x402-service .
```

### Run

```bash
docker run -p 8000:8000 --env-file .env mna-x402-service
```

## API Documentation

Once running, visit:

- **Swagger UI**: `http://localhost:8000/docs`
- **ReDoc**: `http://localhost:8000/redoc`

## Project Structure

```
mna-x402-service/
├── app/
│   ├── __init__.py
│   ├── main.py       # FastAPI application
│   ├── models.py     # Pydantic models
│   ├── scraper.py    # PR Newswire scraping
│   └── extractor.py  # OpenAI entity extraction
├── requirements.txt
├── Dockerfile
├── .env.example
└── README.md
```

## Error Handling

The service includes comprehensive error handling for:

- Network errors during scraping
- Article extraction failures
- OpenAI API errors
- JSON parsing errors

Failed articles are included in the response with an `error` field.
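
The README doesn't show `models.py`, but the documented response schema maps naturally onto Pydantic. A sketch of what those models could look like — field names are taken from the response example above, and the actual file may differ:

```python
# Sketch of models.py derived from the documented response schema.
from typing import List, Optional
from pydantic import BaseModel

class MnAEntities(BaseModel):
    buyer: List[str] = []
    seller: List[str] = []
    fund: List[str] = []
    law_firm: List[str] = []
    intermediary: List[str] = []
    professional: List[str] = []
    money: List[str] = []
    date: List[str] = []
    deal_type: Optional[str] = None

class ArticleResult(BaseModel):
    url: str
    content: str
    entities: Optional[MnAEntities] = None
    error: Optional[str] = None  # populated when an article fails

class ScrapeResponse(BaseModel):
    site_url: str
    articles: List[ArticleResult]
    count: int
    total_found: int
    processed: int
    limit: int
```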
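Likewise, `extractor.py` is not shown. One plausible shape for the OpenAI entity-extraction step, using the `openai` package's chat completions with JSON output — the model name, prompt, and truncation are assumptions, not the actor's actual code:

```python
# Hypothetical sketch of extractor.py: ask the model for the entity JSON.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ENTITY_KEYS = ["buyer", "seller", "fund", "law_firm", "intermediary",
               "professional", "money", "date", "deal_type"]

def extract_entities(article_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the README only says "GPT models"
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract M&A entities from the article. "
                        f"Return a JSON object with keys: {', '.join(ENTITY_KEYS)}. "
                        "Use lists of strings for every key except deal_type."},
            {"role": "user", "content": article_text[:12000]},  # crude truncation
        ],
    )
    return json.loads(response.choices[0].message.content)
```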
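To exercise the `/x402/scrape` endpoint once the service is running locally, a plain `requests` call works (port 8000 and the generous timeout are assumptions based on the local-run instructions above):

```python
# Call the scrape endpoint on a locally running instance.
import requests

payload = {
    "site_url": "https://www.prnewswire.com/news-releases/financial-services-latest-news/acquisitions-mergers-and-takeovers-list/",
    "max_articles": 5,
}
resp = requests.post("http://localhost:8000/x402/scrape", json=payload, timeout=300)
resp.raise_for_status()
data = resp.json()

print(f"Processed {data['processed']} of {data['total_found']} articles")
for article in data["articles"]:
    entities = article.get("entities") or {}  # failed articles carry an "error" field instead
    print(article["url"], "->", entities.get("deal_type"))
```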
## Production Considerations

- Add rate limiting
- Implement caching for repeated requests
- Add authentication/API keys (a minimal sketch follows after the license note)
- Set up monitoring and logging
- Configure proper CORS origins
- Use environment-specific configurations

## License

MIT
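
On the authentication item above: FastAPI's `APIKeyHeader` dependency is one lightweight way to prototype it. A sketch — the header name and environment variable are illustrative choices, not part of the service:

```python
# Sketch: protect all routes with a static API key via a FastAPI dependency.
# "X-API-Key" and SERVICE_API_KEY are illustrative names.
import os

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

api_key_header = APIKeyHeader(name="X-API-Key")

def require_api_key(key: str = Depends(api_key_header)) -> None:
    if key != os.environ.get("SERVICE_API_KEY"):
        raise HTTPException(status_code=403, detail="Invalid API key")

# Applying the dependency app-wide guards every endpoint, including /x402/scrape.
app = FastAPI(dependencies=[Depends(require_api_key)])
```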

Common Use Cases

Market Research

Gather competitive intelligence and market data

Lead Generation

Extract contact information for sales outreach

Price Monitoring

Track competitor pricing and product changes

Content Aggregation

Collect and organize content from multiple sources

Ready to Get Started?

Try Deal Scraper PRNewswire now on Apify. Free tier available with no credit card required.


Actor Information

  • Developer: defensible_radish
  • Pricing: Paid
  • Total Runs: 11
  • Active Users: 2
Apify Platform

Apify provides a cloud platform for web scraping, data extraction, and automation. Build and run web scrapers in the cloud.

Learn more about Apify
